mirror of https://github.com/istio/istio.io.git
[Release-1.5] Update VM expansion doc (#6743)
* Early draft for VM updates
* Update the second doc: instead of duplication we should just link
* Remove some details; we should document ACME certs in 1.6
* Update VM doc for 1.5
* Fix lint errors
* Fix broken links

Co-authored-by: Adam Miller <1402860+adammil2000@users.noreply.github.com>
This commit is contained in:
parent
3f9e5a0e6c
commit
479b4b161c
@ -24,11 +24,11 @@ bare metal and the clusters.
- Virtual machines (VMs) must have IP connectivity to the Ingress gateways in the mesh.
- Services in the cluster must be accessible through the Ingress gateway.

## Installation steps

Setup consists of preparing the mesh for expansion and installing and configuring each VM.

### Preparing the Kubernetes cluster for VMs

The first step when adding non-Kubernetes services to an Istio mesh is to
configure the Istio installation itself, and generate the configuration files
@ -37,112 +37,44 @@ following commands on a machine with cluster admin privileges:
1. Follow the same steps as the [setting up single-network](/docs/examples/virtual-machines/single-network) configuration for the initial setup of the
   cluster and certificates, with the change of how you deploy the Istio control plane:

    {{< text bash >}}
    $ istioctl manifest apply \
        -f install/kubernetes/operator/examples/vm/values-istio-meshexpansion-gateways.yaml \
        --set coreDNS.enabled=true
    {{< /text >}}
    For further details and customization options, refer to the
    [installation instructions](/docs/setup/install/istioctl/).

1. Create a `vm` namespace for the VM services.

    {{< text bash >}}
    $ kubectl create ns vm
    {{< /text >}}
1. Define the namespace the VM joins. This example uses the `SERVICE_NAMESPACE` environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.

    {{< text bash >}}
    $ export SERVICE_NAMESPACE="vm"
    {{< /text >}}

1. Extract the initial keys the service account needs to use on the VMs.

    {{< text bash >}}
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
    {{< /text >}}
1. Determine and store the IP address of the Istio ingress gateway, since the
   VMs access [Citadel](/docs/concepts/security/) and
   [Pilot](/docs/ops/deployment/architecture/#pilot) and workloads on the cluster through
   this IP address.

    {{< text bash >}}
    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo $GWIP
    35.232.112.158
    {{< /text >}}
1. Generate a `cluster.env` configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges
   to intercept and redirect via Envoy.

    {{< text bash >}}
    $ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
    {{< /text >}}

1. Check the contents of the generated `cluster.env` file. It should be similar to the following example:

    {{< text bash >}}
    $ cat cluster.env
    ISTIO_CP_AUTH=MUTUAL_TLS
    ISTIO_SERVICE_CIDR=172.21.0.0/16
    {{< /text >}}

1. If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes
   to the `cluster.env` file with the following command. You can change the ports later if necessary.

    {{< text bash >}}
    $ echo "ISTIO_INBOUND_PORTS=8888" >> cluster.env
    {{< /text >}}
### Setup DNS

Refer to [Setup DNS](/docs/setup/install/multicluster/gateways/#setup-dns) to set up DNS for the cluster.

### Setting up the VM

Next, run the following commands on each machine that you want to add to the mesh:

1. Copy the previously created `cluster.env` and `*.pem` files to the VM. For example:

    {{< text bash >}}
    $ export GCE_NAME="your-gce-instance"
    $ gcloud compute scp --project=${MY_PROJECT} --zone=${MY_ZONE} {key.pem,cert-chain.pem,cluster.env,root-cert.pem} ${GCE_NAME}:~
    {{< /text >}}
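    If the target machine is not a GCE instance, plain `scp` works as well. A minimal sketch, assuming SSH access as `user` to a hypothetical host `vm.example.com`:

    {{< text bash >}}
    $ scp key.pem cert-chain.pem cluster.env root-cert.pem user@vm.example.com:~
    {{< /text >}}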
1. Install the Debian package with the Envoy sidecar.

    {{< text bash >}}
    $ gcloud compute ssh --project=${MY_PROJECT} --zone=${MY_ZONE} "${GCE_NAME}"
    $ curl -L https://storage.googleapis.com/istio-release/releases/{{< istio_full_version >}}/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
    {{< /text >}}
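    To confirm the package installed cleanly, you can query `dpkg`; that the package inside `istio-sidecar.deb` is named `istio-sidecar` is an assumption to verify:

    {{< text bash >}}
    $ dpkg -s istio-sidecar
    {{< /text >}}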
1. Add the IP address of the Istio gateway to `/etc/hosts`. Revisit the [preparing the cluster](#preparing-the-kubernetes-cluster-for-vms) section to learn how to obtain the IP address.
   The following example updates the `/etc/hosts` file with the Istiod address:

    {{< text bash >}}
    $ echo "${GWIP} istiod.istio-system.svc" | sudo tee -a /etc/hosts
    {{< /text >}}

    A better option is to configure the DNS resolver of the VM to resolve the address, using a split-DNS server. Using
    `/etc/hosts` is an easy-to-use example. It is also possible to use a real DNS and certificate for Istiod; this is beyond
    the scope of this document.
1. Install `root-cert.pem`, `key.pem` and `cert-chain.pem` under `/etc/certs/`.

    {{< text bash >}}
@ -150,169 +82,68 @@ The following example updates the `/etc/hosts` file with the Istio gateway addre
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    {{< /text >}}
1. Install `root-cert.pem` under `/var/run/secrets/istio/`.
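    A minimal sketch, assuming `root-cert.pem` is still in the home directory you copied it to and that the target directory does not yet exist:

    {{< text bash >}}
    $ sudo mkdir -p /var/run/secrets/istio
    $ sudo cp root-cert.pem /var/run/secrets/istio
    {{< /text >}}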
1. Install `cluster.env` under `/var/lib/istio/envoy/`.

    {{< text bash >}}
    $ sudo cp cluster.env /var/lib/istio/envoy
    {{< /text >}}

1. Transfer ownership of the files in `/etc/certs/`, `/var/lib/istio/envoy/`, and `/var/run/secrets/istio/` to the Istio proxy.

    {{< text bash >}}
    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy /var/run/secrets/istio/
    {{< /text >}}
1. Start Istio using `systemctl`.

    {{< text bash >}}
    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio
    {{< /text >}}
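    To have the services come back after a reboot, you can also enable them; this is a minimal sketch, assuming the Debian package installs systemd units named `istio-auth-node-agent` and `istio`:

    {{< text bash >}}
    $ sudo systemctl enable istio-auth-node-agent istio
    {{< /text >}}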
## Expose service running on cluster to VMs
Every service in the cluster that needs to be accessed from the VM requires a service entry configuration in the cluster. The host used in the service entry should be of the form `<name>.<namespace>.global`, where name and namespace correspond to the service's name and namespace respectively.

To demonstrate access from the VM to cluster services, configure
the [httpbin service]({{< github_tree >}}/samples/httpbin)
in the cluster.
1. Deploy the `httpbin` service in the cluster

    {{< text bash >}}
    $ kubectl create namespace bar
    $ kubectl label namespace bar istio-injection=enabled
    $ kubectl apply -n bar -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}
1. Create a service entry for the `httpbin` service in the cluster.

    To allow services in the VM to access `httpbin` in the cluster, we need to create
    a service entry for it. The host name of the service entry should be of the form
    `<name>.<namespace>.global`, where name and namespace correspond to the
    remote service's name and namespace respectively.

    For DNS resolution for services under the `*.global` domain, you need to assign these
    services an IP address.

    {{< tip >}}
    Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
    {{< /tip >}}

    If the global services have actual VIPs, you can use those, but otherwise we suggest
    using IPs from the loopback range `127.0.0.0/8` that are not already allocated.
    These IPs are non-routable outside of a pod.
    In this example we'll use IPs in `127.255.0.0/16`, which avoids conflicting with
    well-known IPs such as `127.0.0.1` (`localhost`).
    Application traffic for these IPs will be captured by the sidecar and routed to the
    appropriate remote service.
    {{< text bash >}}
    $ kubectl apply -n bar -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin.bar.forvms
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve;
      # must be unique for each service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      # This address will also be added to the VM's /etc/hosts file.
      - 127.255.0.3
      endpoints:
      # This is the routable address of the ingress gateway in the cluster.
      # Traffic from the VMs will be routed to this address.
      - address: ${CLUSTER_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}
    The configuration above will result in all traffic from VMs for
    `httpbin.bar.global` on *any port* being routed to the endpoint
    `<IPofClusterIngressGateway>:15443` over a mutual TLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy,
    preconfigured and installed as part of the mesh expansion with gateways installation step
    in the [preparing the cluster](#preparing-the-kubernetes-cluster-for-vms) section. Traffic entering port 15443 will be
    load balanced among pods of the appropriate internal service of the target
    cluster (in this case, `httpbin.bar` in the cluster).

    {{< warning >}}
    Do not create a `Gateway` configuration for port 15443.
    {{< /warning >}}
## Send requests from VM workloads to Kubernetes services

After setup, the machine can access services running in the Kubernetes cluster
or on other VMs.

The following example shows accessing a service running in the Kubernetes
cluster from a VM using `/etc/hosts/`, in this case using a
service from the [httpbin service]({{< github_tree >}}/samples/httpbin).

1. On the added VM, add the service name and address to its `/etc/hosts` file.
   You can then connect to the cluster service from the VM, as in the example
   below:

    {{< text bash >}}
    $ echo "127.255.0.3 httpbin.bar.global" | sudo tee -a /etc/hosts
    $ curl -v httpbin.bar.global:8000
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 1836
    < server: istio-envoy
    ... html content ...
    {{< /text >}}

    The `server: istio-envoy` header indicates that the sidecar intercepted the traffic.
## Running services on the added VM
1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:

    {{< text bash >}}
    $ gcloud compute ssh ${GCE_NAME}
    $ python -m SimpleHTTPServer 8080
    {{< /text >}}
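    `SimpleHTTPServer` exists only on Python 2. On a VM that only has Python 3, the equivalent built-in module is `http.server`:

    {{< text bash >}}
    $ python3 -m http.server 8080
    {{< /text >}}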
1. Determine the VM instance's IP address. For example, find the IP address
   of the GCE instance with the following commands:

    {{< text bash >}}
    $ export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_NAME})
    $ echo ${GCE_IP}
    {{< /text >}}
1. Add VM services to the mesh

    {{< text bash >}}
    $ istioctl experimental add-to-mesh external-service vmhttp ${VM_IP} http:8080 -n ${SERVICE_NAMESPACE}
    {{< /text >}}
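    To confirm the VM service was registered, you can list the service entries in the namespace and look for an entry for `vmhttp`; that the command creates such a `ServiceEntry` is an assumption to verify:

    {{< text bash >}}
    $ kubectl -n ${SERVICE_NAMESPACE} get serviceentries
    {{< /text >}}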
{{< tip >}}
@ -332,10 +163,10 @@ The `server: envoy` header indicates that the sidecar intercepted the traffic.
1. Send a request from the `sleep` service on the pod to the VM's HTTP service:
    {{< text bash >}}
    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8080
    {{< /text >}}

    You should see something similar to the output below.

    {{< text html >}}
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
@ -65,32 +65,37 @@ following commands on a machine with cluster admin privileges:
    --from-file=@samples/certs/cert-chain.pem@
    {{< /text >}}

1. For a simple setup, deploy the Istio control plane into the cluster:

    {{< text bash >}}
    $ istioctl manifest apply
    {{< /text >}}

    For further details and customization options, refer to the
    [installation instructions](/docs/setup/install/istioctl/).

    Alternatively, you can create an explicit service of type `LoadBalancer`, using the
    [internal load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)
    type. You can also deploy a separate ingress gateway, with the internal load balancer type, for both mesh expansion and
    multicluster. The main requirement is for the exposed address to do TCP load balancing to the Istiod deployment,
    and for the DNS name associated with the assigned load balancer address to match the certificate provisioned
    into the istiod deployment, defaulting to `istiod.istio-system.svc`, as sketched below.
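    As an illustration only, a minimal sketch of such a service on GKE might look like the following; the annotation, the `istio: pilot` selector, and port `15012` are all environment-specific assumptions to check against your installation:

    {{< text bash >}}
    $ kubectl apply -n istio-system -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: istiod-ilb
      annotations:
        # GKE-specific annotation; other platforms use different ones.
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        # assumed label on the istiod pods; verify with kubectl get pods --show-labels
        istio: pilot
      ports:
      - name: tcp-istiod
        port: 15012
        targetPort: 15012
    EOF
    {{< /text >}}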
1. Define the namespace the VM joins. This example uses the `SERVICE_NAMESPACE`
   environment variable to store the namespace. The value of this variable must
   match the namespace you use in the configuration files later on, and the identity encoded in the certificates.

    {{< text bash >}}
    $ export SERVICE_NAMESPACE="vm"
    {{< /text >}}
1. Determine and store the IP address of Istiod, since the VMs
   access [Istiod](/docs/ops/deployment/architecture/#pilot) through this IP address.

    {{< text bash >}}
    $ export IstiodIP=$(kubectl get -n istio-system service istiod -o jsonpath='{.spec.clusterIP}')
    $ echo $IstiodIP
    10.55.240.12
    {{< /text >}}
1. Generate a `cluster.env` configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges
@ -100,14 +105,17 @@ following commands on a machine with cluster admin privileges:
    {{< text bash >}}
    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo -e "ISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
    {{< /text >}}

    It is also possible to intercept all traffic, as is done for pods. Depending on the vendor and installation mechanism,
    you may use different commands to determine the IP range used for services and pods. Multiple ranges can be
    specified if the VM is making requests to multiple Kubernetes clusters, as sketched below.
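    For example, a minimal sketch with a second, hypothetical service CIDR, assuming `ISTIO_SERVICE_CIDR` accepts a comma-separated list:

    {{< text bash >}}
    $ echo -e "ISTIO_SERVICE_CIDR=10.55.240.0/20,10.60.0.0/20\n" > cluster.env
    {{< /text >}}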
1. Check the contents of the generated `cluster.env` file. It should be similar to the following example:

    {{< text bash >}}
    $ cat cluster.env
    ISTIO_SERVICE_CIDR=10.55.240.0/20
    {{< /text >}}
@ -118,15 +126,20 @@ following commands on a machine with cluster admin privileges:
    $ echo "ISTIO_INBOUND_PORTS=3306,8080" >> cluster.env
    {{< /text >}}
1. In order to use mesh expansion, the VM must be provisioned with certificates signed by the same root CA as
   the rest of the mesh.

    It is recommended to follow the instructions for "Plugging in External CA Key and Certificates", and use a
    separate intermediate CA for provisioning the VM. There are many tools and procedures for managing
    certificates for VMs; Istio's requirement is that the VM gets a certificate with an Istio-compatible
    SPIFFE SAN, with the correct trust domain, namespace and service account.

    As an example, for very simple demo setups, you can also use:

    {{< text bash >}}
    $ go run istio.io/istio/security/tools/generate_cert \
        -client -host spiffe://cluster.local/vm/vmname --out-priv key.pem --out-cert cert-chain.pem -mode citadel
    $ kubectl -n istio-system get cm istio-ca-root-cert -o jsonpath='{.data.root-cert\.pem}' > root-cert.pem
    {{< /text >}}
### Setting up the VM
@ -148,13 +161,17 @@ Next, run the following commands on each machine that you want to add to the mes
    $ sudo dpkg -i istio-sidecar.deb
    {{< /text >}}
1. Add the IP address of Istiod to `/etc/hosts`. Revisit the [preparing the cluster](#preparing-the-kubernetes-cluster-for-vms) section to learn how to obtain the IP address.
   The following example updates the `/etc/hosts` file with the Istiod address:

    {{< text bash >}}
    $ echo "${IstiodIP} istiod.istio-system.svc" | sudo tee -a /etc/hosts
    {{< /text >}}

    A better option is to configure the DNS resolver of the VM to resolve the address, using a split-DNS server. Using
    `/etc/hosts` is an easy-to-use example. It is also possible to use a real DNS and certificate for Istiod; this is beyond
    the scope of this document.
1. Install `root-cert.pem`, `key.pem` and `cert-chain.pem` under `/etc/certs/`.

    {{< text bash >}}
@ -162,30 +179,23 @@ The following example updates the `/etc/hosts` file with the Istio gateway addre
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    {{< /text >}}

1. Install `root-cert.pem` under `/var/run/secrets/istio/`.
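    A minimal sketch, assuming `root-cert.pem` is still in the home directory you copied it to and that the target directory does not yet exist:

    {{< text bash >}}
    $ sudo mkdir -p /var/run/secrets/istio
    $ sudo cp root-cert.pem /var/run/secrets/istio
    {{< /text >}}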
1. Install `cluster.env` under `/var/lib/istio/envoy/`.

    {{< text bash >}}
    $ sudo cp cluster.env /var/lib/istio/envoy
    {{< /text >}}

1. Transfer ownership of the files in `/etc/certs/`, `/var/lib/istio/envoy/`, and `/var/run/secrets/istio/` to the Istio proxy.

    {{< text bash >}}
    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy /var/run/secrets/istio/
    {{< /text >}}
1. Start Istio using `systemctl`.

    {{< text bash >}}
    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio
    {{< /text >}}
@ -315,7 +325,6 @@ The following are some basic troubleshooting steps for common VM-related issues.
- Check the status of the Istio Agent and sidecar:

    {{< text bash >}}
    $ sudo systemctl status istio-auth-node-agent
    $ sudo systemctl status istio
    {{< /text >}}
@ -324,10 +333,9 @@ The following are some basic troubleshooting steps for common VM-related issues.
    {{< text bash >}}
    $ ps aux | grep istio
    root      6955  0.0  0.0  49344  3048 ?  Ss   21:32  0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=vm exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
    istio-p+  7016  0.0  0.1 215172 12096 ?  Ssl  21:32  0:00 /usr/local/bin/pilot-agent proxy
    istio-p+  7094  4.0  0.3  69540 24800 ?  Sl   21:32  0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.vm~vm.svc.cluster.local
    {{< /text >}}

- Check the Envoy access and error logs: