VMs: add doc test, re-org examples (#8892)

* Add documentation test for VMs

* Remove single and multinetwork docs, move up a level

* Fixes

* fix tests

* fix test
John Howard 2021-02-04 21:31:44 -08:00 committed by GitHub
parent 09e3f8e17e
commit 51650924c9
12 changed files with 360 additions and 477 deletions

View File

@@ -339,7 +339,7 @@ bypassing the sidecar proxies.
## Relation to virtual machines support
Note that the scenario described in this post is different from the
[Bookinfo with Virtual Machines](/docs/examples/virtual-machines/bookinfo/) example. In that scenario, a MySQL instance runs on an
[Bookinfo with Virtual Machines](/docs/examples/virtual-machines/) example. In that scenario, a MySQL instance runs on an
external
(outside the cluster) machine (a bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes
a first-class citizen of the mesh with all the beneficial features of Istio applicable. Among other things, the service

View File

@@ -1,14 +0,0 @@
---
title: Virtual Machines
description: Examples that add workloads running on virtual machines to an Istio mesh.
weight: 30
aliases:
- /docs/examples/mesh-expansion/
- /docs/examples/mesh-expansion
- /docs/tasks/virtual-machines
keywords:
- kubernetes
- vms
- virtual-machine
test: n/a
---

View File

@@ -9,21 +9,16 @@ keywords:
aliases:
- /docs/examples/integrating-vms/
- /docs/examples/mesh-expansion/bookinfo-expanded
- /docs/examples/virtual-machines/bookinfo/
- /docs/examples/vm-bookinfo
owner: istio/wg-environments-maintainers
test: no
test: yes
---
This example deploys the Bookinfo application across Kubernetes with one
service running on a virtual machine (VM), and illustrates how to control
this infrastructure as a single mesh.
{{< warning >}}
This example is still under development and has only been tested on Google Cloud Platform.
On IBM Cloud and other platforms where the Pod overlay network is isolated from the VM network,
VMs cannot initiate any direct communication with Kubernetes Pods, even when using Istio.
{{< /warning >}}
## Overview
{{< image width="80%" link="./vm-bookinfo.svg" caption="Bookinfo running on VMs" >}}
@@ -35,22 +30,29 @@ https://docs.google.com/drawings/d/1G1592HlOVgtbsIqxJnmMzvy6ejIdhajCosxF1LbvspI/
## Before you begin
- Setup Istio by following the instructions in the
[Installation guide](/docs/setup/getting-started/).
[Virtual Machine Installation guide](/docs/setup/install/virtual-machine/).
- Deploy the [Bookinfo](/docs/examples/bookinfo/) sample application (in the `bookinfo` namespace).
- Create a VM named 'vm-1' in the same project as the Istio cluster, and [join the mesh](/docs/examples/virtual-machines/single-network/).
- Create a VM and add it to the `vm` namespace, following the steps in
[Configure the virtual machine](/docs/setup/install/virtual-machine/#configure-the-virtual-machine).
## Running MySQL on the VM
We will first install MySQL on the VM, and configure it as a backend for the ratings service.
All commands below should be run on the VM.
On the VM:
Install `mariadb`:
{{< text bash >}}
$ sudo apt-get update && sudo apt-get install -y mariadb-server
$ sudo sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf
$ sudo mysql
{{< /text >}}
Set up authentication:
{{< text bash >}}
$ cat <<EOF | sudo mysql
# Grant access to root
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
# Grant root access to other IPs
@@ -58,9 +60,7 @@ CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
quit;
{{< /text >}}
{{< text bash >}}
EOF
$ sudo systemctl restart mysql
{{< /text >}}
@@ -69,7 +69,8 @@ You can find details of configuring MySQL at [Mysql](https://mariadb.com/kb/en/l
On the VM, add the ratings database to mysql:
{{< text bash >}}
$ curl -q {{< github_file >}}/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -ppassword
$ curl -LO {{< github_file >}}/samples/bookinfo/src/mysql/mysqldb-init.sql
$ mysql -u root -ppassword < mysqldb-init.sql
{{< /text >}}
To make it easy to visually inspect the difference in the output of the Bookinfo application, you can change the ratings that are generated by using the
@@ -97,36 +98,34 @@ $ mysql -u root -ppassword test -e "update ratings set rating=1 where reviewid=
+----------+--------+
{{< /text >}}
## Find out the IP address of the VM that will be used to add it to the mesh
## Expose the mysql service to the mesh
On the VM:
When the virtual machine is started, it will automatically be registered into the mesh.
However, just like when creating a Pod, we still need to create a Service before we can easily access it.
{{< text bash >}}
$ hostname -I
$ cat <<EOF | kubectl apply -f - -n vm
apiVersion: v1
kind: Service
metadata:
name: mysqldb
labels:
app: mysqldb
spec:
ports:
- port: 3306
name: tcp
selector:
app: mysqldb
EOF
{{< /text >}}
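As a quick sanity check (not part of the original flow, and assuming automatic registration created a `WorkloadEntry` for the VM), you can confirm that both the VM workload and the new `Service` are in place; the `app: mysqldb` label is what ties them together:
{{< text bash >}}
$ kubectl get workloadentry -n vm
$ kubectl get service mysqldb -n vm
{{< /text >}}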
## Registering the mysql service with the mesh
On a host with access to [`istioctl`](/docs/reference/commands/istioctl) commands, register the VM and mysql db service
{{< text bash >}}
$ istioctl register -n vm mysqldb <ip-address-of-vm> 3306
I1108 20:17:54.256699 40419 register.go:43] Registering for service 'mysqldb' ip '10.150.0.5', ports list [{3306 mysql}]
I1108 20:17:54.256815 40419 register.go:48] 0 labels ([]) and 1 annotations ([alpha.istio.io/kubernetes-serviceaccounts=default])
W1108 20:17:54.573068 40419 register.go:123] Got 'services "mysqldb" not found' looking up svc 'mysqldb' in namespace 'vm', attempting to create it
W1108 20:17:54.816122 40419 register.go:138] Got 'endpoints "mysqldb" not found' looking up endpoints for 'mysqldb' in namespace 'vm', attempting to create them
I1108 20:17:54.886657 40419 register.go:180] No pre existing exact matching ports list found, created new subset {[{10.150.0.5 <nil> nil}] [] [{mysql 3306 }]}
I1108 20:17:54.959744 40419 register.go:191] Successfully updated mysqldb, now with 1 endpoints
{{< /text >}}
Note that the 'mysqldb' virtual machine does not need and should not have special Kubernetes privileges.
## Using the mysql service
The ratings service in Bookinfo will use the database on the VM. To verify that it works, create version 2 of the ratings service, which uses the MySQL database on the VM. Then specify route rules that force the reviews service to use version 2 of the ratings service.
{{< text bash >}}
$ istioctl kube-inject -n bookinfo -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml@ | kubectl apply -n bookinfo -f -
$ kubectl apply -n bookinfo -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml@
{{< /text >}}
Create route rules that will force Bookinfo to use the ratings back end:
@@ -138,4 +137,16 @@ $ kubectl apply -n bookinfo -f @samples/bookinfo/networking/virtual-service-rati
You can verify that the output of the Bookinfo application shows 1 star from Reviewer1 and 4 stars from Reviewer2, or change the ratings on your VM and observe the results.
You can also find troubleshooting and other information in the [RawVM MySQL]({{< github_blob >}}/samples/rawvm/README.md) document.
## Reaching Kubernetes services from the virtual machine
In the above example, we treated our virtual machine only as a server.
We can also seamlessly call Kubernetes services from our virtual machine:
{{< text bash >}}
$ curl productpage.bookinfo:9080
...
<title>Simple Bookstore App</title>
...
{{< /text >}}
Istio's [DNS proxying](/docs/ops/configuration/traffic-management/dns-proxy/) automatically configures DNS for the virtual machine, allowing us to make calls to Kubernetes hostnames.
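As an illustrative check (assuming `dig` is available on the VM), you can query a Kubernetes hostname directly and watch the DNS proxy answer, even though the name exists only inside the cluster:
{{< text bash >}}
$ dig +short productpage.bookinfo.svc.cluster.local
{{< /text >}}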

View File

@@ -1,153 +0,0 @@
---
title: Virtual Machines in Multi-Network Meshes
description: Learn how to add a service running on a virtual machine to your multi-network
Istio mesh.
weight: 30
keywords:
- kubernetes
- virtual-machine
- gateways
- vms
aliases:
- /docs/examples/mesh-expansion/multi-network
- /docs/tasks/virtual-machines/multi-network
owner: istio/wg-environments-maintainers
test: no
---
This example provides instructions to integrate a VM or a bare metal host into a
multi-network Istio mesh deployed on Kubernetes, using gateways. This approach
doesn't require VPN connectivity or direct network access between the VM or bare
metal host and the clusters.
## Prerequisites
- One or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.
- Virtual machines (VMs) must have IP connectivity to the east-west gateways in the mesh.
- Services in the cluster must be accessible through the east-west gateway.
- Installation must be completed using [virtual machine installation](/docs/setup/install/virtual-machine) instructions, following steps for Multi-Network.
## Verify setup
After setup, the machine can access services running in the Kubernetes cluster
or on other VMs. When a service on the VM tries to access a service in the mesh running on Kubernetes, the endpoints (that is, the IPs) for those services will be the ingress gateway on the Kubernetes cluster. To verify this, run the following command on the VM (assuming you have a service named `httpbin` on the Kubernetes cluster):
{{< text bash >}}
$ curl -v localhost:15000/clusters | grep httpbin
{{< /text >}}
This should show endpoints for `httpbin` that point to the ingress gateway, similar to this:
{{< text text >}}
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_active::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_connect_fail::0
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_total::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::rq_active::0
{{< /text >}}
The IP `34.72.46.113` in this case is the ingress gateway public endpoint.
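To cross-check that address (a hedged aside, assuming the default `istio-eastwestgateway` service name from the multi-network installation), compare it against the gateway's external IP:
{{< text bash >}}
$ kubectl get svc istio-eastwestgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
{{< /text >}}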
### Send requests from VM workloads to Kubernetes services
At this point, we should be able to send traffic to `httpbin.default.svc.cluster.local` and get a response from the server. You may have to set up DNS in `/etc/hosts` to map the `httpbin.default.svc.cluster.local` domain name to an IP, since the name will not resolve otherwise. In this case, the IP should be one that gets routed to the local Istio proxy sidecar. You can use an IP from the `ISTIO_SERVICE_CIDR` variable in the `cluster.env` file you created in the [virtual machine installation](/docs/setup/install/virtual-machine/) documentation.
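For example, a minimal sketch of such an entry (the address `172.21.0.1` is purely illustrative; pick one from the `ISTIO_SERVICE_CIDR` range in your `cluster.env`):
{{< text bash >}}
$ echo "172.21.0.1 httpbin.default.svc.cluster.local" | sudo tee -a /etc/hosts
{{< /text >}}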
{{< text bash >}}
$ curl -v httpbin.default.svc.cluster.local:8000/headers
{{< /text >}}
### Running services on the added VM
1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:
{{< text bash >}}
$ python -m SimpleHTTPServer 8080
{{< /text >}}
{{< idea >}}
You may have to open your firewall to allow access to port 8080 on your VM.
{{< /idea >}}
1. Add VM services to the mesh
Create a service in the Kubernetes cluster, in the namespace (in this example, `<vm-namespace>`) where you prefer to keep the resources (like `Service`, `ServiceEntry`, `WorkloadEntry`, `ServiceAccount`) associated with the VM services:
{{< text bash >}}
$ cat <<EOF | kubectl -n <vm-namespace> apply -f -
apiVersion: v1
kind: Service
metadata:
name: cloud-vm
labels:
app: cloud-vm
spec:
ports:
- port: 8080
name: http-vm
targetPort: 8080
selector:
app: cloud-vm
EOF
{{< /text >}}
Lastly, create a `WorkloadEntry` with the external IP of the VM (substitute `VM_IP` with the IP of your VM):
{{< tip >}}
You can skip this step if using Automated WorkloadEntry Creation.
{{< /tip >}}
{{< text bash >}}
$ cat <<EOF | kubectl -n <vm-namespace> apply -f -
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
name: "cloud-vm"
namespace: "<vm-namespace>"
spec:
address: "${VM_IP}"
labels:
app: cloud-vm
serviceAccount: "<service-account>"
EOF
{{< /text >}}
1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:
{{< text bash >}}
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
sleep-88ddbcfdd-rm42k 2/2 Running 0 1s
...
{{< /text >}}
1. Send a request from the `sleep` service on the pod to the VM's HTTP service:
{{< text bash >}}
$ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl -sS cloud-vm.${VM_NAMESPACE}.svc.cluster.local:8080
{{< /text >}}
You should see something similar to the output below.
{{< text html >}}
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bashrc">.bashrc</a></li>
<li><a href=".ssh/">.ssh/</a></li>
...
</body>
{{< /text >}}
**Congratulations!** You successfully configured a service running in a pod within the cluster to
send traffic to a service running on a VM outside of the cluster and tested that
the configuration worked.
## Cleanup
At this point, you can remove the VM resources from the Kubernetes cluster in the `<vm-namespace>` namespace.

View File

@@ -1,197 +0,0 @@
---
title: Example Application using Virtual Machines in a Single Network Mesh
description: Learn how to add a service running on a virtual machine to your single-network
Istio mesh.
weight: 20
keywords:
- kubernetes
- vms
- virtual-machines
aliases:
- /docs/setup/kubernetes/additional-setup/mesh-expansion/
- /docs/examples/mesh-expansion/single-network
- /docs/tasks/virtual-machines/single-network
owner: istio/wg-environments-maintainers
test: no
---
This example provides instructions to integrate a virtual machine or a bare metal host into a
single network Istio mesh deployed on Kubernetes. This approach requires L3 connectivity
between the virtual machine and the Kubernetes cluster.
## Prerequisites
- One or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.
- Virtual machines must have L3 IP connectivity to the endpoints in the mesh.
This typically requires a VPC or a VPN, as well as a container network that
provides direct (without NAT or firewall deny) routing to the endpoints. The
machine is not required to have access to the cluster IP addresses assigned by
Kubernetes.
- Installation must be completed using [virtual machine installation](/docs/setup/install/virtual-machine) instructions.
## Verify installation
After installation, the virtual machine can access services running in the Kubernetes cluster or in
other virtual machines. To verify connectivity from the virtual machine, run the following command
(assuming you have a service named `httpbin` on the Kubernetes cluster):
{{< text bash >}}
$ curl -v localhost:15000/clusters | grep httpbin
{{< /text >}}
This shows endpoints for `httpbin`:
{{< text text >}}
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_active::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_connect_fail::0
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_total::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::rq_active::0
{{< /text >}}
The IP `34.72.46.113` in this case is the pod IP address of the httpbin endpoint.
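To confirm which pod that address belongs to (assuming the `httpbin` pod carries the standard `app=httpbin` label), you can compare it with:
{{< text bash >}}
$ kubectl get pod -l app=httpbin -o jsonpath='{.items[0].status.podIP}'
{{< /text >}}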
### Send requests from virtual machine workloads to Kubernetes services
You can send traffic to `httpbin.default.svc.cluster.local` and get a response from the server. You must configure DNS in `/etc/hosts` to map the `httpbin.default.svc.cluster.local` domain name to an IP, or the name will not resolve. In this case, the IP should be one that is routed over the single network using L3 connectivity; use the cluster IP of the `httpbin` service in the Kubernetes cluster.
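A minimal sketch of that mapping, using the service's cluster IP (the address shown is example output, not a value to copy):
{{< text bash >}}
$ kubectl get svc httpbin -o jsonpath='{.spec.clusterIP}'
10.96.12.34
$ echo "10.96.12.34 httpbin.default.svc.cluster.local" | sudo tee -a /etc/hosts
{{< /text >}}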
{{< text bash >}}
$ curl -v httpbin.default.svc.cluster.local:8000/headers
{{< /text >}}
### Running services on the virtual machine
1. Set up an HTTP server on the virtual machine to serve HTTP traffic on port 8080:
{{< text bash >}}
$ python -m SimpleHTTPServer 8080
{{< /text >}}
{{< warning >}}
You may have to open your firewall to allow access to port 8080 on your virtual machine.
{{< /warning >}}
1. Add virtual machine services to the mesh
Create a service in the Kubernetes cluster, in the namespace (in this example, `<vm-namespace>`) where you prefer to keep the resources (like `Service`, `ServiceEntry`, `WorkloadEntry`, `ServiceAccount`) associated with the virtual machine services:
{{< text bash >}}
$ cat <<EOF | kubectl -n <vm-namespace> apply -f -
apiVersion: v1
kind: Service
metadata:
name: cloud-vm
labels:
app: cloud-vm
spec:
ports:
- port: 8080
name: http-vm
targetPort: 8080
selector:
app: cloud-vm
EOF
{{< /text >}}
Create a `WorkloadEntry` with the external IP of the virtual machine. Substitute `VM_IP` with the IP of your virtual machine:
{{< tip >}}
This step can be skipped if you followed the VM auto-registration steps during install.
{{< /tip >}}
{{< text bash >}}
$ cat <<EOF | kubectl -n <vm-namespace> apply -f -
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
name: "cloud-vm"
namespace: "<vm-namespace>"
spec:
address: "${VM_IP}"
labels:
app: cloud-vm
serviceAccount: "<service-account>"
EOF
{{< /text >}}
1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:
{{< text bash >}}
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
sleep-88ddbcfdd-rm42k 2/2 Running 0 1s
...
{{< /text >}}
1. Send a request from the `sleep` service on the pod to the virtual machine HTTP service:
{{< text bash >}}
$ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl -sS cloud-vm.${VM_NAMESPACE}.svc.cluster.local:8080
{{< /text >}}
You will see output similar to this:
{{< text html >}}
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bashrc">.bashrc</a></li>
<li><a href=".ssh/">.ssh/</a></li>
...
</body>
{{< /text >}}
**Congratulations!** You successfully configured a service running in a pod within the cluster to
send traffic to a service running on a VM outside of the cluster and tested that
the configuration worked.
## Cleanup
At this point, you can remove the virtual machine resources from the Kubernetes cluster in the `<vm-namespace>` namespace.
## Troubleshooting
The following are some basic troubleshooting steps for common VM-related issues.
- When making requests from a VM to the cluster, ensure you don't run the requests as the `root` or
`istio-proxy` user. By default, Istio excludes both users from traffic interception (see the example after this list).
- Verify the machine can reach the IPs of all workloads running in the cluster. For example:
{{< text bash >}}
$ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
10.52.39.13
{{< /text >}}
{{< text bash >}}
$ curl 10.52.39.13:9080
html output
{{< /text >}}
- Check the status of the Istio Agent and sidecar:
{{< text bash >}}
$ sudo systemctl status istio
{{< /text >}}
- Check that the processes are running. The following is an example of the processes you should see on the VM if you run
`ps`, filtered for `istio`:
{{< text bash >}}
$ ps aux | grep istio
root 6955 0.0 0.0 49344 3048 ? Ss 21:32 0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=vm exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
istio-p+ 7016 0.0 0.1 215172 12096 ? Ssl 21:32 0:00 /usr/local/bin/pilot-agent proxy
istio-p+ 7094 4.0 0.3 69540 24800 ? Sl 21:32 0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.vm-vm.svc.cluster.local
{{< /text >}}
- Check the Envoy access and error logs for failures:
{{< text bash >}}
$ tail /var/log/istio/istio.log
$ tail /var/log/istio/istio.err.log
{{< /text >}}
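For the first item in this list, a hypothetical check (not from the original document) is to issue the request as an ordinary unprivileged user, whose traffic the sidecar does intercept:
{{< text bash >}}
$ sudo -u nobody curl -sS httpbin.default.svc.cluster.local:8000/headers
{{< /text >}}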

View File

@@ -0,0 +1,105 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/examples/virtual-machines/index.md
####################################################################################################
snip_running_mysql_on_the_vm_1() {
sudo apt-get update && sudo apt-get install -y mariadb-server
sudo sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf
}
snip_running_mysql_on_the_vm_2() {
cat <<EOF | sudo mysql
# Grant access to root
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
# Grant root access to other IPs
CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
quit;
EOF
sudo systemctl restart mysql
}
snip_running_mysql_on_the_vm_3() {
curl -LO https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/src/mysql/mysqldb-init.sql
mysql -u root -ppassword < mysqldb-init.sql
}
snip_running_mysql_on_the_vm_4() {
mysql -u root -ppassword test -e "select * from ratings;"
}
! read -r -d '' snip_running_mysql_on_the_vm_4_out <<\ENDSNIP
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 5 |
| 2 | 4 |
+----------+--------+
ENDSNIP
snip_running_mysql_on_the_vm_5() {
mysql -u root -ppassword test -e "update ratings set rating=1 where reviewid=1;select * from ratings;"
}
! read -r -d '' snip_running_mysql_on_the_vm_5_out <<\ENDSNIP
+----------+--------+
| ReviewID | Rating |
+----------+--------+
| 1 | 1 |
| 2 | 4 |
+----------+--------+
ENDSNIP
snip_expose_the_mysql_service_to_the_mesh_1() {
cat <<EOF | kubectl apply -f - -n vm
apiVersion: v1
kind: Service
metadata:
name: mysqldb
labels:
app: mysqldb
spec:
ports:
- port: 3306
name: tcp
selector:
app: mysqldb
EOF
}
snip_using_the_mysql_service_1() {
kubectl apply -n bookinfo -f samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml
}
snip_using_the_mysql_service_2() {
kubectl apply -n bookinfo -f samples/bookinfo/networking/virtual-service-ratings-mysql-vm.yaml
}
snip_reaching_kubernetes_services_from_the_virtual_machine_1() {
curl productpage.bookinfo:9080
}
! read -r -d '' snip_reaching_kubernetes_services_from_the_virtual_machine_1_out <<\ENDSNIP
...
<title>Simple Bookstore App</title>
...
ENDSNIP

View File

@@ -0,0 +1,109 @@
#!/usr/bin/env bash
# shellcheck disable=SC2034,SC2155,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
set -u
set -o pipefail
set -x
export VM_APP="mysqldb"
export VM_NAMESPACE="vm"
export WORK_DIR="$(mktemp -d)"
export SERVICE_ACCOUNT="default"
export CLUSTER_NETWORK=""
export VM_NETWORK=""
export CLUSTER="Kubernetes"
source "tests/util/samples.sh"
source "content/en/docs/setup/install/virtual-machine/snips.sh"
source "content/en/docs/setup/install/virtual-machine/common.sh"
function run_in_vm() {
script="${1:?script}"
docker exec --privileged vm bash -c "set -x; source /examples/snips.sh;
${script}
"
}
function run_in_vm_interactive() {
script="${1:?script}"
docker exec -t --privileged vm bash -c "set -x ;source /examples/snips.sh;
${script}
"
}
# @setup profile=none
setup_cluster_for_vms
EXTRA_VM_ARGS="-v ${PWD}/content/en/docs/examples/virtual-machines:/examples" setup_vm
start_vm
echo "VM STARTED"
run_in_vm "while ! curl localhost:15021/healthz/ready -s; do sleep 1; done"
run_in_vm "while ! curl archive.ubuntu.com -s; do sleep 1; done"
run_in_vm "
snip_running_mysql_on_the_vm_1
mkdir -p /var/lib/mysql /var/run/mysqld
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld;
chmod 777 /var/run/mysqld
"
# We do not have systemd, need to start mysql manually
docker exec --privileged -d vm mysqld --skip-grant-tables
# Wait for mysql to be ready
run_in_vm "while ! sudo mysql 2> /dev/null; do echo retrying mysql...; sleep 5; done"
run_in_vm snip_running_mysql_on_the_vm_3
check_table4() { run_in_vm_interactive snip_running_mysql_on_the_vm_4; }
_verify_contains check_table4 "${snip_running_mysql_on_the_vm_4_out}"
check_table5() { run_in_vm_interactive snip_running_mysql_on_the_vm_5; }
_verify_contains check_table5 "${snip_running_mysql_on_the_vm_5_out}"
snip_expose_the_mysql_service_to_the_mesh_1
# Setup test applications. Doc assumes these are present
kubectl create namespace bookinfo || true
kubectl label namespace bookinfo istio-injection=enabled --overwrite
kubectl apply -n bookinfo -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -n bookinfo -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -n bookinfo -f samples/bookinfo/networking/destination-rule-all.yaml
startup_sleep_sample
for deploy in "productpage-v1" "details-v1" "ratings-v1" "reviews-v1" "reviews-v2" "reviews-v3"; do
_wait_for_deployment bookinfo "$deploy"
done
# Switch bookinfo to point to mysql
snip_using_the_mysql_service_1
snip_using_the_mysql_service_2
# Send traffic, ensure we get ratings
get_bookinfo_productpage() {
sample_http_request "/productpage"
}
_verify_contains get_bookinfo_productpage "glyphicon glyphicon-star"
run_curl() { run_in_vm_interactive snip_reaching_kubernetes_services_from_the_virtual_machine_1; }
_verify_elided run_curl "${snip_reaching_kubernetes_services_from_the_virtual_machine_1_out}"
# @cleanup
docker stop vm
kubectl delete -f samples/multicluster/expose-istiod.yaml --ignore-not-found=true
istioctl manifest generate | kubectl delete -f - --ignore-not-found=true
cleanup_sleep_sample
kubectl delete namespace istio-system vm bookinfo --ignore-not-found=true

View File

Binary image file: 218 KiB before and after.

View File

@@ -0,0 +1,84 @@
#!/usr/bin/env bash
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
set -u
set -o pipefail
set -x
function setup_cluster_for_vms() {
snip_setup_wd
snip_setup_iop
echo y | snip_install_istio
snip_install_eastwest
snip_expose_istio
}
function setup_vm() {
snip_install_namespace || true
snip_install_sa || true
snip_create_wg
snip_apply_wg
snip_configure_wg
# Rather than spinning up a real VM, we will use a docker container
# We perform the implicit "securely transfer the files from ${WORK_DIR} to the virtual machine"
# step via a volume mount
# shellcheck disable=SC2086
docker run --rm -it --init -d --network=kind --name vm \
-v "${WORK_DIR}:/root" -v "${PWD}/content/en/docs/setup/install/virtual-machine:/test" \
${EXTRA_VM_ARGS:-} \
-w "/root" \
gcr.io/istio-release/base:1.9-dev.2
POD_CIDR=$(kubectl get node -ojsonpath='{.items[0].spec.podCIDR}')
DOCKER_IP=$(docker inspect -f "{{ .NetworkSettings.Networks.kind.IPAddress }}" istio-testing-control-plane)
# Here, we run the snippets *inside* the docker VM. This mirrors the docs telling to run the commands
# on the VM
docker exec --privileged vm bash -c "
# Setup connectivity
ip route add ${POD_CIDR} via ${DOCKER_IP}
# Docker sets up a bunch of rules for DNS which messes with things. Just remove all of them
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -F
sudo iptables -X
echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf
source /test/snips.sh
snip_configure_the_virtual_machine_1
snip_configure_the_virtual_machine_2
# TODO: we should probably have a better way to get the debian package
curl -LO https://storage.googleapis.com/istio-build/dev/1.9-alpha.cdae086ca8cae8be174c8feee509841f89792e43/deb/istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
snip_configure_the_virtual_machine_5
snip_configure_the_virtual_machine_6
snip_configure_the_virtual_machine_7
snip_configure_the_virtual_machine_8
"
}
function start_vm() {
# We cannot use systemd inside docker (since it's a pain). Just run it directly.
docker exec --privileged -w / -e ISTIO_AGENT_FLAGS="--log_output_level=dns:debug" -d vm /usr/local/bin/istio-start.sh
}

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# shellcheck disable=SC2034
# shellcheck disable=SC2034,SC2155
# Copyright Istio Authors
#
@@ -21,64 +21,21 @@ set -o pipefail
set -x
source "tests/util/samples.sh"
source "content/en/docs/setup/install/virtual-machine/common.sh"
VM_APP="vm-app"
VM_NAMESPACE="vm-namespace"
WORK_DIR="$(mktemp -d)"
SERVICE_ACCOUNT="default"
CLUSTER_NETWORK=""
VM_NETWORK=""
CLUSTER="Kubernetes"
export VM_APP="vm-app"
export VM_NAMESPACE="vm-namespace"
export WORK_DIR="$(mktemp -d)"
export SERVICE_ACCOUNT="default"
export CLUSTER_NETWORK=""
export VM_NETWORK=""
export CLUSTER="Kubernetes"
# @setup profile=none
snip_setup_wd
snip_setup_iop
echo y | snip_install_istio
snip_install_eastwest
snip_expose_istio
snip_install_namespace || true
snip_install_sa || true
snip_create_wg
snip_apply_wg
snip_configure_wg
# Rather than spinning up a real VM, we will use a docker container
# We perform the implicit "securely transfer the files from ${WORK_DIR} to the virtual machine"
# step via a volume mount
docker run --rm -it --init -d --network=kind --name vm \
-v "${WORK_DIR}:/root" -v "${PWD}/content/en/docs/setup/install/virtual-machine:/test" -w "/root" \
gcr.io/istio-release/base:1.9-dev.2
POD_CIDR=$(kubectl get node -ojsonpath='{.items[0].spec.podCIDR}')
DOCKER_IP=$(docker inspect -f "{{ .NetworkSettings.Networks.kind.IPAddress }}" istio-testing-control-plane)
# Here, we run the snippets *inside* the docker VM. This mirrors the docs telling to run the commands
# on the VM
docker exec --privileged vm bash -c "
# Setup connectivity
ip route add ${POD_CIDR} via ${DOCKER_IP}
source /test/snips.sh
snip_configure_the_virtual_machine_1
snip_configure_the_virtual_machine_2
# TODO: we should probably have a better way to get the debian package
curl -LO https://storage.googleapis.com/istio-build/dev/1.10-alpha.e0558027c9915da4d966bad51a649abfa1bc17b6/deb/istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
snip_configure_the_virtual_machine_5
snip_configure_the_virtual_machine_6
snip_configure_the_virtual_machine_7
snip_configure_the_virtual_machine_8
"
# We cannot use systemd inside docker (since it's a pain). Just run it directly.
docker exec --privileged -w / -d vm /usr/local/bin/istio-start.sh
setup_cluster_for_vms
setup_vm
start_vm
snip_verify_istio_works_successfully_2 || true
snip_verify_istio_works_successfully_3

View File

@@ -40,22 +40,3 @@ spec:
For the workloads running on VMs and bare metal hosts, the lifetime of their Istio certificates is specified by the
`workload-cert-ttl` flag on each Istio Agent. The default value is also 90 days. This value should be no greater than
`max-workload-cert-ttl` of Citadel.
To customize this configuration, the argument for the Istio Agent service should be modified.
After [setting up the machines](/docs/examples/virtual-machines/single-network/#setting-up-the-vm) for Istio
mesh expansion, modify the file `/lib/systemd/system/istio-auth-node-agent.service` on the VMs or bare metal hosts:
{{< text plain >}}
...
[Service]
ExecStart=/usr/local/bin/node_agent --workload-cert-ttl=24h # Specify certificate lifetime for workloads on this machine.
Restart=always
StartLimitInterval=0
RestartSec=10
...
{{< /text >}}
The above configuration specifies that the Istio certificates for workloads running on this VM or bare metal host
will have 24 hours lifetime.
After configuring the service, run `systemctl daemon-reload` and then restart the Istio Agent.

View File

@@ -37,12 +37,12 @@ startup_sleep_sample() {
kubectl delete pods -l app=sleep --force
set -e
kubectl apply -f samples/sleep/sleep.yaml
kubectl apply -f samples/sleep/sleep.yaml -n default
_wait_for_deployment default sleep
}
cleanup_sleep_sample() {
kubectl delete -f samples/sleep/sleep.yaml || true
kubectl delete -f samples/sleep/sleep.yaml -n default || true
}
startup_httpbin_sample() {