Finish fixing remaining lint errors

mtail 2018-04-05 07:47:20 -07:00
parent a6fae1b368
commit 7acb463af4
33 changed files with 575 additions and 495 deletions


@ -118,7 +118,7 @@ ServiceRole
Servicegraph
Sharding
SolarWinds
StatefulSets
TCP-level
TLS-secured
Tcpdump


@ -181,7 +181,7 @@ Note that the port is derived by the `URI.parse` from the URI's schema (https://
When the `WITH_ISTIO` environment variable is defined, the request is performed without SSL (plain HTTP).
We set the `WITH_ISTIO` environment variable to _"true"_ in the
[Kubernetes deployment spec of details v2](https://github.com/istio/istio/blob/master/samples/bookinfo/kube/bookinfo-details-v2.yaml),
the `container` section:

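A minimal sketch of such a `container` entry (illustrative only; the container name, image tag, and port below are placeholders, not the exact file contents):

```yaml
spec:
  containers:
  - name: details
    image: istio/examples-bookinfo-details-v2:placeholder  # placeholder tag
    env:
    - name: WITH_ISTIO        # read by the Ruby service to use plain HTTP
      value: "true"
    ports:
    - containerPort: 9080     # illustrative service port
```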

@ -45,15 +45,15 @@ Istio Auth uses [Kubernetes service accounts](https://kubernetes.io/docs/tasks/c
* A service account in Istio has the format "spiffe://\<_domain_\>/ns/\<_namespace_\>/sa/\<_serviceaccount_\>".
* _domain_ is currently _cluster.local_. We will support customization of domain in the near future.
* _namespace_ is the namespace of the Kubernetes service account.
* _serviceaccount_ is the Kubernetes service account name.
* A service account is **the identity (or role) a workload runs as**, which represents that workload's privileges. For systems requiring strong security, the
amount of privilege for a workload should not be identified by a random string (e.g., service name, label, etc.), or by the binary that is deployed.
* For example, let's say we have a workload pulling data from a multi-tenant database. If Alice ran this workload, she will be able to pull
a different set of data than if Bob ran this workload.
* Service accounts enable strong security policies by offering the flexibility to identify a machine, a user, a workload, or a group of workloads (different
workloads can run as the same service account).
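As a quick sketch, the identity string can be assembled from the three parts described above (the namespace and service account names below are illustrative):

```bash
# Compose a SPIFFE identity from domain, namespace, and service account.
# All three values here are illustrative examples.
domain="cluster.local"
namespace="default"
serviceaccount="bookinfo-details"
spiffe_id="spiffe://${domain}/ns/${namespace}/sa/${serviceaccount}"
echo "${spiffe_id}"   # prints spiffe://cluster.local/ns/default/sa/bookinfo-details
```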


@ -50,13 +50,12 @@ service/version. However, consumers of a service can also override
and
[retry]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#HTTPRetry)
defaults by providing request-level overrides through special HTTP headers.
With the Envoy proxy implementation, the headers are `x-envoy-upstream-rq-timeout-ms` and
`x-envoy-max-retries`, respectively.
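For instance, a client could cap a single request at 5 seconds and allow up to 3 retries. This is a sketch, not a command from this page: the URL, the timeout, and the retry count are assumptions, and the `|| true` guard only keeps the example non-fatal when no mesh is running.

```bash
# Envoy per-request override headers; values and target URL are illustrative.
timeout_header="x-envoy-upstream-rq-timeout-ms: 5000"
retries_header="x-envoy-max-retries: 3"
curl -s -H "$timeout_header" -H "$retries_header" \
  "http://${GATEWAY_URL}/productpage" >/dev/null 2>&1 || true
```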
## FAQ
Q: *Do applications still handle failures when running in Istio?*
Yes. Istio improves the reliability and availability of services in the
mesh. However, **applications need to handle the failure (errors)
@ -65,15 +64,15 @@ a load balancing pool have failed, Envoy will return HTTP 503. It is the
responsibility of the application to implement any fallback logic that is
needed to handle the HTTP 503 error code from an upstream service.
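As a hedged sketch of what such fallback logic might look like (the function name and the "fallback" behavior are invented for illustration, not prescribed by Istio):

```bash
# Decide what to serve for a given upstream HTTP status. A 503 from Envoy
# means the upstream pool is unavailable, so serve degraded content instead.
handle_status() {
  case "$1" in
    503) echo "fallback" ;;   # e.g. a cached or default response
    *)   echo "upstream" ;;   # pass the upstream response through
  esac
}
handle_status 503   # prints fallback
```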
Q: *Will Envoy's failure recovery features break applications that already
use fault tolerance libraries (e.g. [Hystrix](https://github.com/Netflix/Hystrix))?*
No. Envoy is completely transparent to the application. A failure response
returned by Envoy would not be distinguishable from a failure response
returned by the upstream service to which the call was made.
Q: *How will failures be handled when using application-level libraries and
Envoy at the same time?*
Given two failure recovery policies for the same destination service (e.g.,
two timeouts -- one set in Envoy and another in application's library), **the


@ -103,7 +103,7 @@ To start the application, follow the instructions below corresponding to your Is
ingress resource as illustrated in the above diagram.
All 3 versions of the reviews service, v1, v2, and v3, are started.
> In a realistic deployment, new versions of a microservice are deployed
over time instead of deploying all versions simultaneously.
1. Confirm all services and pods are correctly defined and running:
@ -194,16 +194,20 @@ To start the application, follow the instructions below corresponding to your Is
1. Bring up the application containers.
To test with Consul, run the following commands:
```bash
docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d
docker-compose -f samples/bookinfo/consul/bookinfo.sidecars.yaml up -d
```
To test with Eureka, run the following commands:
```bash
docker-compose -f samples/bookinfo/eureka/bookinfo.yaml up -d
docker-compose -f samples/bookinfo/eureka/bookinfo.sidecars.yaml up -d
```
1. Confirm that all docker containers are running:
```bash
@ -225,6 +229,7 @@ To confirm that the Bookinfo application is running, run the following `curl` co
```bash
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
```
```xxx
200
```
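If the application needs a moment to come up, a small polling helper can wrap the same `curl` check. This is a sketch, not part of the guide; the 30-attempt budget and 2-second interval are arbitrary assumptions.

```bash
# Poll a URL until it returns HTTP 200 or the attempt budget is exhausted.
wait_for_ok() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -o /dev/null -s -w "%{http_code}" "$url" || true)
    [ "$code" = "200" ] && return 0
    i=$((i + 1)); sleep 2
  done
  return 1
}
# Usage: wait_for_ok "http://${GATEWAY_URL}/productpage" || echo "not ready"
```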
@ -265,13 +270,13 @@ uninstall and clean it up using the following instructions.
1. Delete the routing rules and application containers
In a Consul setup, run the following command:
```bash
samples/bookinfo/consul/cleanup.sh
```
In a Eureka setup, run the following command:
```bash
samples/bookinfo/eureka/cleanup.sh
```


@ -18,6 +18,7 @@ features are important, and so on. This is not a task, but a feature of
Istio.
## Before you begin
* Describe installation options.
* Install Istio control plane in a Kubernetes cluster by following the quick start instructions in the


@ -16,6 +16,7 @@ This sample demonstrates how to obtain uniform metrics, logs, traces across diff
Placeholder.
## Before you begin
* Describe installation options.
* Install Istio control plane in a Kubernetes cluster by following the quick start instructions in the


@ -12,7 +12,6 @@ type: markdown
Quick Start instructions to install and configure Istio in a Docker Compose setup.
## Prerequisites
* [Docker](https://docs.docker.com/engine/installation/#cloud)
@ -23,18 +22,20 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
1. Go to the [Istio release](https://github.com/istio/istio/releases) page to download the
installation file corresponding to your OS. If you are using a MacOS or Linux system, you can also
run the following command to download and extract the latest release automatically:
```bash
curl -L https://git.io/getLatestIstio | sh -
```
1. Extract the installation file and change the directory to the file location. The
installation directory contains:
* Sample applications in `samples/`
* The `istioctl` client binary in the `bin/` directory. `istioctl` is used for creating routing rules and policies.
* The `istio.VERSION` configuration file
1. Add the `istioctl` client to your PATH.
For example, run the following command on a MacOS or Linux system:
```bash
export PATH=$PWD/bin:$PATH
@ -73,13 +74,13 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
You can now deploy your own application or one of the sample applications provided with the
installation like [Bookinfo]({{home}}/docs/guides/bookinfo.html).
> Since there is no concept of pods in a Docker setup, the Istio
> sidecar runs in the same container as the application. We will
> use [Registrator](https://gliderlabs.github.io/registrator/latest/) to
> automatically register instances of services in the Consul service
> registry.
>
> The application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because HTTP/1.0 is not supported.
```bash
docker-compose -f <your-app-spec>.yaml up -d
@ -87,7 +88,7 @@ docker-compose -f <your-app-spec>.yaml up -d
## Uninstalling
Uninstall Istio core components by removing the docker containers:
```bash
docker-compose -f install/consul/istio.yaml down
```


@ -11,8 +11,8 @@ type: markdown
Using Istio in a non-Kubernetes environment involves a few key tasks:
1. Setting up the Istio control plane with the Istio API server
1. Adding the Istio sidecar to every instance of a service
1. Ensuring requests are routed through the sidecars
## Setting up the control plane
@ -77,7 +77,6 @@ services:
]
```
### Other Istio components
Debian packages for Istio Pilot, Mixer, and CA are available through the
@ -87,7 +86,6 @@ docker.io/istio/istio-ca). Note that these components are stateless and can
be scaled horizontally. Each of these components depends on the Istio API
server, which in turn depends on the etcd cluster for persistence.
## Adding sidecars to service instances
Each instance of a service in an application must be accompanied by the
@ -97,7 +95,7 @@ into these components. For example, if your infrastructure uses VMs, the
Istio sidecar process must be run on each VM that needs to be part of the
service mesh.
## Routing traffic through the Istio sidecar
Part of the sidecar installation should involve setting up appropriate iptables
rules to transparently route the application's network traffic through
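One common shape for such rules, sketched here as an assumption rather than Istio's exact setup script: redirect outbound TCP traffic to the sidecar's listening port, and exempt the proxy's own traffic so it doesn't loop back into itself. The port and UID values are conventional Istio defaults but should be treated as assumptions for your environment.

```bash
# Illustrative only: redirect all outbound TCP to the sidecar's port, and
# exempt traffic generated by the proxy's own user to avoid a redirect loop.
ISTIO_PROXY_PORT=15001   # assumed Envoy listener port
ISTIO_PROXY_UID=1337     # assumed UID of the istio-proxy user
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner "${ISTIO_PROXY_UID}" -j RETURN
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports "${ISTIO_PROXY_PORT}"
```

Running these requires root and will affect live traffic, so they belong in the sidecar's provisioning step, not an interactive shell.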


@ -12,7 +12,6 @@ type: markdown
Quick Start instructions to install and configure Istio in a Docker Compose setup.
## Prerequisites
* [Docker](https://docs.docker.com/engine/installation/#cloud)
@ -21,20 +20,22 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
## Installation steps
1. Go to the [Istio release](https://github.com/istio/istio/releases) page to download the
installation file corresponding to your OS. If you are using a MacOS or Linux system, you can also
run the following command to download and extract the latest release automatically:
```bash
curl -L https://git.io/getLatestIstio | sh -
```
1. Extract the installation file and change the directory to the file location. The
installation directory contains:
* Sample applications in `samples/`
* The `istioctl` client binary in the `bin/` directory. `istioctl` is used for creating routing rules and policies.
* The `istio.VERSION` configuration file
1. Add the `istioctl` client to your PATH.
For example, run the following command on a MacOS or Linux system:
```bash
export PATH=$PWD/bin:$PATH
@ -53,6 +54,7 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
```bash
docker ps -a
```
> If the Istio Pilot container terminates, ensure that you run the `istioctl context-create` command and re-run the command from the previous step.
1. Configure `istioctl` to use mapped local port for the Istio API server:
@ -66,13 +68,13 @@ Quick Start instructions to install and configure Istio in a Docker Compose setu
You can now deploy your own application or one of the sample applications provided with the
installation like [Bookinfo]({{home}}/docs/guides/bookinfo.html).
> Since there is no concept of pods in a Docker setup, the Istio
> sidecar runs in the same container as the application. We will
> use [Registrator](https://gliderlabs.github.io/registrator/latest/) to
> automatically register instances of services in the Eureka service
> registry.
>
> The application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because HTTP/1.0 is not supported.
```bash
docker-compose -f <your-app-spec>.yaml up -d
@ -80,7 +82,7 @@ docker-compose -f <your-app-spec>.yaml up -d
## Uninstalling
Uninstall Istio core components by removing the docker containers:
```bash
docker-compose -f install/eureka/istio.yaml down
```


@ -97,7 +97,7 @@ Operator installs Istio on OpenShift and additionally wants to deploy some of th
ansible-playbook main.yml -e '{"istio": {"samples": ["helloworld", "bookinfo"]}}'
```
> When Jaeger is enabled, Zipkin is disabled even when Zipkin is selected in the addons.
## Uninstalling
@ -106,4 +106,4 @@ In this case, the `istio.delete_resources` flag does not need to be set.
Setting `istio.delete_resources` to true will delete the Istio control plane from the cluster.
> In order to avoid any inconsistencies, this flag should only be used to reinstall the same version of Istio on a cluster.


@ -14,7 +14,7 @@ type: markdown
Quick start instructions for the setup and configuration of Istio using the Helm package manager.
> Installation with Helm prior to Istio 0.7 is unstable and not recommended.
## Prerequisites


@ -1,5 +1,5 @@
---
title: Mesh Expansion
overview: Instructions for integrating VMs and bare metal hosts into an Istio mesh deployed on Kubernetes.
order: 60
@ -42,51 +42,51 @@ You should customize it based on your provisioning tools and DNS requirements.
* Setup Internal Load Balancers (ILBs) for Kube DNS, Pilot, Mixer and CA. This step is specific to
each cloud provider, so you may need to edit annotations.
```bash
kubectl apply -f install/kubernetes/mesh-expansion.yaml
```
* Generate the Istio 'cluster.env' configuration to be deployed in the VMs. This file contains
the cluster IP address ranges to intercept.
```bash
export GCP_OPTS="--zone MY_ZONE --project MY_PROJECT"
```
```bash
install/tools/setupMeshEx.sh generateClusterEnv MY_CLUSTER_NAME
```
Here's an example generated file:
```bash
cat cluster.env
```
```xxx
ISTIO_SERVICE_CIDR=10.63.240.0/20
```
* Generate the DNS configuration file to be used in the VMs. This will allow apps on the VM to resolve
cluster service names, which will be intercepted by the sidecar and forwarded.
```bash
# Make sure your kubectl context is set to your cluster
install/tools/setupMeshEx.sh generateDnsmasq
```
Here's an example generated file:
```bash
cat kubedns
```
```xxx
server=/svc.cluster.local/10.150.0.7
address=/istio-mixer/10.150.0.8
address=/istio-pilot/10.150.0.6
address=/istio-ca/10.150.0.9
address=/istio-mixer.istio-system/10.150.0.8
address=/istio-pilot.istio-system/10.150.0.6
address=/istio-ca.istio-system/10.150.0.9
```
### Setting up the machines
@ -107,6 +107,7 @@ install/tools/setupMeshEx.sh gceMachineSetup VM_NAME
```
Otherwise, run
```bash
install/tools/setupMeshEx.sh machineSetup VM_NAME
```
@ -128,51 +129,64 @@ Save the files as `/etc/dnsmasq.d/kubedns` and `/var/lib/istio/envoy/cluster.env
adding it to `/etc/resolv.conf` directly or via DHCP scripts. To verify, check that the VM can resolve
names and connect to pilot, for example:
On the VM/external host:
```bash
host istio-pilot.istio-system
```
Example generated message:
```xxx
# Verify you get the same address as shown as "EXTERNAL-IP" in 'kubectl get svc -n istio-system istio-pilot-ilb'
istio-pilot.istio-system has address 10.150.0.6
```
Check that you can resolve cluster IPs. The actual address will depend on your deployment.
```bash
host istio-pilot.istio-system.svc.cluster.local.
```
Example generated message:
```xxx
istio-pilot.istio-system.svc.cluster.local has address 10.63.247.248
```
Check istio-ingress similarly:
```bash
host istio-ingress.istio-system.svc.cluster.local.
```
Example generated message:
```xxx
istio-ingress.istio-system.svc.cluster.local has address 10.63.243.30
```
* Verify connectivity by checking whether the VM can connect to Pilot and to an endpoint.
```bash
curl 'http://istio-pilot.istio-system:8080/v1/registration/istio-pilot.istio-system.svc.cluster.local|http-discovery'
```
```json
{
  "hosts": [
    {
      "ip_address": "10.60.1.4",
      "port": 8080
    }
  ]
}
```
```bash
# On the VM, use the address above. It will directly connect to the pod running istio-pilot.
curl 'http://10.60.1.4:8080/v1/registration/istio-pilot.istio-system.svc.cluster.local|http-discovery'
```
* Extract the initial Istio authentication secrets and copy them to the machine. The default
installation of Istio includes Istio CA and will generate Istio secrets even if
@ -182,15 +196,15 @@ is named as `istio.<serviceaccount>`). It is recommended that you perform this
step to make it easy to enable mTLS in the future and to upgrade to a future version
that will have mTLS enabled by default.
```bash
# ACCOUNT defaults to 'default', or SERVICE_ACCOUNT environment variable
# NAMESPACE defaults to current namespace, or SERVICE_NAMESPACE environment variable
# (this step is done by machineSetup)
# On a mac either brew install base64 or set BASE64_DECODE="/usr/bin/base64 -D"
install/tools/setupMeshEx.sh machineCerts ACCOUNT NAMESPACE
```
The generated files (`key.pem`, `root-cert.pem`, `cert-chain.pem`) must be copied to /etc/certs on each machine, readable by istio-proxy.
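A hedged sketch of that copy step (the `VM_HOST` variable, the `user` account, and the use of `scp`/`ssh` are assumptions about your environment, not commands from this guide):

```bash
# Copy the generated secrets to the machine and install them for istio-proxy.
# VM_HOST, the remote user, and istio-proxy ownership are illustrative.
scp key.pem root-cert.pem cert-chain.pem "user@${VM_HOST}:/tmp/"
ssh "user@${VM_HOST}" 'sudo mkdir -p /etc/certs \
  && sudo mv /tmp/key.pem /tmp/root-cert.pem /tmp/cert-chain.pem /etc/certs/ \
  && sudo chown -R istio-proxy /etc/certs'
```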
* Install Istio Debian files and start 'istio' and 'istio-auth-node-agent' services.
Get the debian packages from [GitHub releases](https://github.com/istio/istio/releases) or:
@ -216,11 +230,12 @@ or other mesh expansion machines.
# Assuming you install bookinfo in 'bookinfo' namespace
curl productpage.bookinfo.svc.cluster.local:9080
```
```xxx
... html content ...
```
Check that the processes are running:
```bash
ps aux |grep istio
```
@ -230,7 +245,9 @@ root 6955 0.0 0.0 49344 3048 ? Ss 21:32 0:00 su -s /bin/bash
istio-p+ 7016 0.0 0.1 215172 12096 ? Ssl 21:32 0:00 /usr/local/bin/pilot-agent proxy
istio-p+ 7094 4.0 0.3 69540 24800 ? Sl 21:32 0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.default~default.svc.cluster.local
```
Check that the Istio auth node agent is healthy:
```bash
sudo systemctl status istio-auth-node-agent
```
@ -258,26 +275,27 @@ Oct 13 21:32:29 demo-vm-1 node_agent[6941]: I1013 21:32:29.862575 6941 nodeag
* Configure the sidecar to intercept the port. This is configured in `/var/lib/istio/envoy/sidecar.env`,
using the ISTIO_INBOUND_PORTS environment variable.
Example (on the VM running the service):
```bash
echo "ISTIO_INBOUND_PORTS=27017,3306,8080" > /var/lib/istio/envoy/sidecar.env
systemctl restart istio
```
* Manually configure a selector-less service and endpoints. The 'selector-less' service is used for
services that are not backed by Kubernetes pods.
For example, on a machine with permissions to modify Kubernetes services:
```bash
# istioctl register servicename machine-ip portname:port
istioctl -n onprem register mysql 1.2.3.4 3306
istioctl -n onprem register svc1 1.2.3.4 http:7000
```
After the setup, Kubernetes pods and other mesh expansions should be able to access the
services running on the machine.
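To sanity-check the registration, one option is to inspect the selector-less service and its endpoints with `kubectl` (the namespace and names follow the example above):

```bash
# Both objects should exist in the 'onprem' namespace after registration,
# with the endpoints pointing at the machine IP that was registered.
kubectl -n onprem get svc mysql svc1
kubectl -n onprem get endpoints mysql svc1
```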
## What's next
* See the [Bookinfo Mesh Expansion]({{home}}/docs/guides/integrating-vms.html) guide.


@ -42,7 +42,7 @@ application. It uses Deployment Manager to automate the steps detailed in the [
1. Once you have an account and project enabled, click the following link to open the Deployment Manager.
[Istio GKE Deployment Manager](https://accounts.google.com/signin/v2/identifier?service=cloudconsole&continue=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&followup=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&flowName=GlifWebSignIn&flowEntry=ServiceLogin)
We recommend that you leave the default settings as the rest of this tutorial shows how to access the installed features. By default the tool creates a
GKE alpha cluster with the specified settings, then installs the Istio [control plane]({{home}}/docs/concepts/what-is-istio/overview.html#architecture), the
@ -69,11 +69,12 @@ application. It uses Deployment Manager to automate the steps detailed in the [
Once deployment is complete, do the following on the workstation where you've installed `gcloud`:
1. Bootstrap `kubectl` for the cluster you just created and confirm the cluster is
running and Istio is enabled
```bash
gcloud container clusters list
```
```xxx
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
istio-cluster us-central1-a v1.9.2-gke.1 130.211.216.64 n1-standard-2 v1.9.2-gke.1 3 RUNNING
@ -94,6 +95,7 @@ Verify Istio is installed in its own namespace
```bash
kubectl get deployments,ing -n istio-system
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/grafana 1 1 1 1 3m
@ -106,11 +108,13 @@ deploy/prometheus 1 1 1 1 3m
deploy/servicegraph 1 1 1 1 3m
deploy/zipkin 1 1 1 1 3m
```
Now confirm that the Bookinfo sample application is also installed:
```bash
kubectl get deployments,ing
```
```xxx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/details-v1 1 1 1 1 3m
@ -198,7 +202,7 @@ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=pr
View the console at:
```xxx
http://localhost:9090/graph
```


@ -1,6 +1,6 @@
---
title: Quick Start
overview: Quick start instructions to set up the Istio service mesh in a Kubernetes cluster.
order: 10
@ -10,8 +10,7 @@ type: markdown
{% include home.html %}
Quick start instructions to install and configure Istio in a Kubernetes cluster.
## Prerequisites
@ -30,19 +29,17 @@ If you wish to enable [automatic sidecar injection]({{home}}/docs/setup/kubernet
match the version supported by your cluster (version 1.7 or later for CRD
support).
* Depending on your Kubernetes provider:
### [Minikube](https://github.com/kubernetes/minikube/releases)
To install Istio locally, install the latest version of
[Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) (version 0.25.0 or later).
```bash
minikube start \
--extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--kubernetes-version=v1.9.0
```
### [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
@ -64,7 +61,7 @@ gcloud container clusters get-credentials <cluster-name> \
--project <project-name>
```
Grant cluster admin permissions to the current user (admin permissions are required to create the necessary RBAC rules for Istio).
```bash
kubectl create clusterrolebinding cluster-admin-binding \
@ -103,15 +100,17 @@ Configure `kubectl` CLI based on steps [here](https://www.ibm.com/support/knowle
OpenShift by default does not allow containers running with UID 0. Enable containers running
with UID 0 for Istio's service accounts for ingress as well as the Prometheus and Grafana addons:
```bash
oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-grafana-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z istio-prometheus-service-account -n istio-system
```
The service account that runs application pods needs privileged security context constraints as part of sidecar injection.
```bash
oc adm policy add-scc-to-user privileged -z default -n <target-namespace>
```
### AWS (w/Kops)
@ -125,7 +124,7 @@ kops edit cluster $YOURCLUSTER
Add the following to the configuration file just opened:
```xxx
kubeAPIServer:
admissionControl:
- NamespaceLifecycle
@ -162,7 +161,8 @@ for i in `kubectl get pods -nkube-system | grep api | awk '{print $1}'` ; do ku
```
Output should be:
```xxx
[...] --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority [...]
```
@ -174,49 +174,54 @@ namespace, and can manage services from all other namespaces.
1. Go to the [Istio release](https://github.com/istio/istio/releases) page to download the
installation file corresponding to your OS. If you are using a MacOS or Linux system, you can also
run the following command to download and extract the latest release automatically:
```bash
curl -L https://git.io/getLatestIstio | sh -
```
1. Extract the installation file and change the directory to the file location. The
1. Change directory to the Istio package. For example, if the package is istio-{{site.data.istio.version}}
```bash
cd istio-{{site.data.istio.version}}
```
1. Add the `istioctl` client to your PATH.
For example, run the following command on a MacOS or Linux system:
```bash
export PATH=$PWD/bin:$PATH
```
1. Install Istio's core components. Choose one of the two _**mutually exclusive**_ options below or alternately install
with the [Helm Chart]({{home}}/docs/setup/kubernetes/helm-install.html):
a) Install Istio without enabling [mutual TLS authentication]({{home}}/docs/concepts/security/mutual-tls.html) between sidecars.
Choose this option for clusters with existing applications, applications where services with an
Istio sidecar need to be able to communicate with other non-Istio Kubernetes services, and
applications that use [liveliness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/),
headless services, or StatefulSets.
```bash
kubectl apply -f install/kubernetes/istio.yaml
```
_**OR**_
b) Install Istio and enable [mutual TLS authentication]({{home}}/docs/concepts/security/mutual-tls.html) between sidecars:
```bash
kubectl apply -f install/kubernetes/istio-auth.yaml
```
Both options create the `istio-system` namespace along with the required RBAC permissions,
and deploy Istio-Pilot, Istio-Mixer, Istio-Ingress, and Istio-CA (Certificate Authority).
1. *Optional:* If your cluster has Kubernetes version 1.9 or greater, and you wish to enable automatic proxy injection,
install the [sidecar injector webhook]({{home}}/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection).
@ -224,34 +229,37 @@ install the [sidecar injector webhook]({{home}}/docs/setup/kubernetes/sidecar-in
## Verifying the installation
1. Ensure the following Kubernetes services are deployed: `istio-pilot`, `istio-mixer`,
`istio-ingress`.
```bash
kubectl get svc -n istio-system
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.83.245.171 35.184.245.62 80:32730/TCP,443:30574/TCP 5h
istio-pilot 10.83.251.173 <none> 8080/TCP,8081/TCP 5h
istio-mixer 10.83.244.253 <none> 9091/TCP,9094/TCP,42422/TCP 5h
```
> If your cluster is running in an environment that does not support an external load balancer
(e.g., minikube), the `EXTERNAL-IP` of `istio-ingress` says `<pending>`. You must access the
application using the service NodePort, or use port-forwarding instead.
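On clusters without an external load balancer, the gateway address can be assembled from the node IP and the NodePort. The following sketch assumes minikube and that the HTTP port of the `istio-ingress` service is named `http`:

```shell
# Sketch: build a gateway address from the NodePort when EXTERNAL-IP stays <pending>.
# Assumes minikube and a service port named "http" on istio-ingress.
export GATEWAY_URL=$(minikube ip):$(kubectl -n istio-system get svc istio-ingress \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
echo "$GATEWAY_URL"
```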
1. Ensure the corresponding Kubernetes pods are deployed and all containers are up and running:
`istio-pilot-*`, `istio-mixer-*`, `istio-ingress-*`, `istio-ca-*`,
and, optionally, `istio-sidecar-injector-*`.
```bash
kubectl get pods -n istio-system
```
```xxx
istio-ca-3657790228-j21b9 1/1 Running 0 5h
istio-ingress-1842462111-j3vcs 1/1 Running 0 5h
istio-sidecar-injector-184129454-zdgf5 1/1 Running 0 5h
istio-pilot-2275554717-93c43 1/1 Running 0 5h
istio-mixer-2104784889-20rm8 2/2 Running 0 5h
```
## Deploy your application
@ -281,29 +289,32 @@ kubectl create -f <(istioctl kube-inject -f <your-app-spec>.yaml)
* Uninstall Istio sidecar injector:
If you installed Istio with sidecar injector enabled, uninstall it:
```bash
kubectl delete -f install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml
```
* Uninstall Istio core components. For the {{site.data.istio.version}} release, the uninstall
deletes the RBAC permissions, the `istio-system` namespace, and hierarchically all resources under it.
It is safe to ignore errors for non-existent resources because they may have been deleted hierarchically.
a) If you installed Istio with mutual TLS authentication disabled:
```bash
kubectl delete -f install/kubernetes/istio.yaml
```
_**OR**_
b) If you installed Istio with mutual TLS authentication enabled:
```bash
kubectl delete -f install/kubernetes/istio-auth.yaml
```
## What's next
* See the sample [Bookinfo]({{home}}/docs/guides/bookinfo.html) application.
* See how to [test mutual TLS authentication]({{home}}/docs/tasks/security/mutual-tls.html).
View File
@ -37,7 +37,7 @@ yaml file directly, e.g.
kubectl apply -f istio.yaml (or istio-auth.yaml)
```
> If you have used [Helm](https://istio.io/docs/setup/kubernetes/helm.html)
to generate a customized Istio deployment, please use the customized yaml files
generated by Helm instead of the standard installation yamls.
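For example, a customized deployment can be re-rendered and applied as follows; the chart path and release name below are assumptions, so substitute the values you used originally:

```shell
# Sketch: re-render the customized Istio manifest with Helm and apply it.
# Chart path and release name are assumptions; reuse your original values.
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  > istio-custom.yaml
kubectl apply -f istio-custom.yaml
```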
@ -52,27 +52,28 @@ of sidecar proxy. There are two cases: Manual injection and Automatic injection.
1. Manual injection:
If automatic sidecar injection is not enabled, you can upgrade the
sidecar manually by running the following command:
```bash
kubectl apply -f <(istioctl kube-inject -i $ISTIO_NAMESPACE -f $ORIGINAL_DEPLOYMENT_YAML)
```
If the sidecar was previously injected with some customized inject config
files, you will need to change the version tag in the config files to the new
version and reinject the sidecar as follows:
```bash
kubectl apply -f <(istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--filename $ORIGINAL_DEPLOYMENT_YAML)
```
2. Automatic injection:
If automatic sidecar injection is enabled, you can upgrade the sidecar
by doing a rolling update for all the pods, so that the new version of
sidecar will be automatically re-injected.
There are some tricks to reload all pods. E.g. There is a [bash script](https://gist.github.com/jmound/ff6fa539385d1a057c82fa9fa739492e)
which triggers the rolling update by patching the grace termination period.
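One such trick can be sketched as follows: patching a harmless field of each deployment changes the pod template, which makes Kubernetes roll the pods and re-inject the sidecar (the namespace and the patched field below are assumptions):

```shell
# Sketch: force a rolling restart of every deployment in a namespace by patching
# terminationGracePeriodSeconds; the changed pod template triggers re-injection.
NS=default
for dep in $(kubectl -n "$NS" get deploy -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n "$NS" patch deployment "$dep" -p \
    '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
done
```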

View File

@ -27,8 +27,8 @@ This task shows you how to use Istio to dynamically limit the traffic to a servi
istioctl create -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
```
> If you have conflicting rules that you set in previous tasks,
use `istioctl replace` instead of `istioctl create`.
## Rate limits
@ -187,7 +187,7 @@ If you would like the above policies enforced for a given namespace instead of t
* Remove the application routing rules:
```bash
istioctl delete -f samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
istioctl delete -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
```

View File

@ -27,10 +27,10 @@ This task shows how to control access to a service using the Kubernetes labels.
```
> If you have conflicting rules that you set in previous tasks,
> use `istioctl replace` instead of `istioctl create`.
>
> If you are using a namespace other than `default`,
> use `istioctl -n namespace ...` to specify the namespace.
## Access control using _denials_
@ -51,19 +51,25 @@ of the `reviews` service. We would like to cut off access to version `v3` of the
1. Explicitly deny access to version `v3` of the `reviews` service.
Run the following command to set up the deny rule along with a handler and an instance.
```bash
istioctl create -f samples/bookinfo/kube/mixer-rule-deny-label.yaml
```
You can expect to see output similar to the following:
```xxx
Created config denier/default/denyreviewsv3handler at revision 2882105
Created config checknothing/default/denyreviewsv3request at revision 2882106
Created config rule/default/denyreviewsv3 at revision 2882107
```
Notice the following in the `denyreviewsv3` rule:
```xxx
match: destination.labels["app"] == "ratings" && source.labels["app"]=="reviews" && source.labels["version"] == "v3"
```
It matches requests coming from the service `reviews` with label `v3` to the service `ratings`.
This rule uses the `denier` adapter to deny requests coming from version `v3` of the reviews service.
@ -84,6 +90,7 @@ Istio also supports attribute-based whitelists and blacklists. The following whi
`denier` configuration in the previous section. The rule effectively rejects requests from version `v3` of the `reviews` service.
1. Remove the denier configuration that you added in the previous section.
```bash
istioctl delete -f samples/bookinfo/kube/mixer-rule-deny-label.yaml
```
@ -106,7 +113,8 @@ Istio also supports attribute-based whitelists and blacklists. The following whi
overrides: ["v1", "v2"] # overrides provide a static list
blacklist: false
```
and then run the following command:
```bash
istioctl create -f whitelist-handler.yaml
@ -123,7 +131,8 @@ Save the following YAML snippet as `appversion-instance.yaml`:
spec:
value: source.labels["version"]
```
and then run the following command:
```bash
istioctl create -f appversion-instance.yaml
@ -132,7 +141,6 @@ Save the following YAML snippet as `appversion-instance.yaml`:
1. Enable `whitelist` checking for the ratings service.
Save the following YAML snippet as `checkversion-rule.yaml`:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
@ -145,7 +153,8 @@ Save the following YAML snippet as `checkversion-rule.yaml`:
instances:
- appversion.listentry
```
and then run the following command:
```bash
istioctl create -f checkversion-rule.yaml
@ -166,7 +175,7 @@ Verify that after logging in as "jason" you see black stars.
* Remove the application routing rules:
```bash
istioctl delete -f samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
istioctl delete -f samples/bookinfo/kube/route-rule-reviews-v3.yaml
```
@ -184,5 +193,5 @@ Verify that after logging in as "jason" you see black stars.
* Discover the full [Attribute Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Understand the differences between Kubernetes network policies and Istio
access control policies from this
[blog]({{home}}/blog/using-network-policy-in-concert-with-istio.html).
View File
@ -20,28 +20,28 @@ original HTTPS traffic. And this is the reason Istio can work on HTTPS services.
## Before you begin
* Set up Istio by following the instructions in the
[quick start]({{home}}/docs/setup/kubernetes/quick-start.html).
Note that authentication should be **disabled** at step 5 in the
[installation steps]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps).
### Generate certificates and configmap
You need to have openssl installed to run the following command:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
```
```bash
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
secret "nginxsecret" created
```
Create a configmap used for the HTTPS service
```bash
kubectl create configmap nginxconfigmap --from-file=samples/https/default.conf
configmap "nginxconfigmap" created
```
@ -50,8 +50,10 @@ configmap "nginxconfigmap" created
This section creates an NGINX-based HTTPS service.
```bash
kubectl apply -f samples/https/nginx-app.yaml
```
```xxx
service "my-nginx" created
replicationcontroller "my-nginx" created
```
@ -67,6 +69,7 @@ Get the pods
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
my-nginx-jwwck 2/2 Running 0 1h
@ -74,14 +77,17 @@ sleep-847544bbfc-d27jg 2/2 Running 0 18h
```
Exec into the istio-proxy container of the sleep pod.
```bash
kubectl exec -it sleep-847544bbfc-d27jg -c istio-proxy /bin/bash
```
Call my-nginx
```bash
curl https://my-nginx -k
```
```xxx
...
<h1>Welcome to nginx!</h1>
@ -93,6 +99,7 @@ You can actually combine the above three command into one:
```bash
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k
```
```xxx
...
<h1>Welcome to nginx!</h1>
@ -121,6 +128,7 @@ Make sure the pod is up and running
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
my-nginx-6svcc 2/2 Running 0 1h
@ -128,9 +136,11 @@ sleep-847544bbfc-d27jg 2/2 Running 0 18h
```
And run
```bash
kubectl exec sleep-847544bbfc-d27jg -c sleep -- curl https://my-nginx -k
```
```xxx
...
<h1>Welcome to nginx!</h1>
@ -138,16 +148,18 @@ kubectl exec sleep-847544bbfc-d27jg -c sleep -- curl https://my-nginx -k
```
If you run the command from the istio-proxy container, it should work as well
```bash
kubectl exec sleep-847544bbfc-d27jg -c istio-proxy -- curl https://my-nginx -k
```
```xxx
...
<h1>Welcome to nginx!</h1>
...
```
> This example is borrowed from [kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).
### Create an HTTPS service with Istio sidecar with mTLS enabled
@ -163,6 +175,7 @@ And wait for everything is down, i.e., there is no pod in control plane namespac
```bash
kubectl get pod -n istio-system
```
```xxx
No resources found.
```
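The wait can also be scripted; a minimal sketch that polls until the namespace is empty:

```shell
# Sketch: block until no pods remain in the istio-system namespace.
while [ "$(kubectl get pod -n istio-system --no-headers 2>/dev/null | wc -l)" -ne 0 ]; do
  sleep 2
done
```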
@ -174,9 +187,11 @@ kubectl apply -f install/kubernetes/istio-auth.yaml
```
Make sure everything is up and running:
```bash
kubectl get po -n istio-system
```
```xxx
NAME READY STATUS RESTARTS AGE
istio-ca-58c5856966-k6nm4 1/1 Running 0 2m
@ -199,21 +214,25 @@ Make sure the pod is up and running
```bash
kubectl get pod
```
```xxx
NAME READY STATUS RESTARTS AGE
my-nginx-9dvet 2/2 Running 0 1h
sleep-77f457bfdd-hdknx 2/2 Running 0 18h
```
And run
```bash
kubectl exec sleep-77f457bfdd-hdknx -c sleep -- curl https://my-nginx -k
```
```xxx
...
<h1>Welcome to nginx!</h1>
...
```
The reason is that for the workflow "sleep -> sleep-proxy -> nginx-proxy -> nginx",
the whole flow is L7 traffic, and there is an L4 mTLS encryption between sleep-proxy
and nginx-proxy. In this case, everything works fine.
@ -223,6 +242,7 @@ However, if you run this command from istio-proxy container, it will not work.
```bash
kubectl exec sleep-77f457bfdd-hdknx -c istio-proxy -- curl https://my-nginx -k
```
```xxx
curl: (35) gnutls_handshake() failed: Handshake failed
command terminated with exit code 35
View File
@ -1,5 +1,5 @@
---
title: Per-service mutual TLS authentication control
overview: This task shows how to change mutual TLS authentication for a single service.
order: 50
@ -11,7 +11,7 @@ type: markdown
In the [Installation guide]({{home}}/docs/setup/kubernetes/quick-start.html#installation-steps), we show how to enable [mutual TLS authentication]({{home}}/docs/concepts/security/mutual-tls.html) between sidecars. The settings will be applied to all sidecars in the mesh.
In this task, you will learn how to:
* Annotate a Kubernetes service to disable (or enable) mutual TLS authentication for selected services.
* Modify Istio mesh config to exclude mutual TLS authentication for control services.
@ -39,7 +39,7 @@ In this initial setup, we expect the sleep instance in default namespace can tal
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl http://httpbin.default:8000/ip -s
```
```json
{
"origin": "127.0.0.1"
}
@ -49,16 +49,16 @@ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name} -n legacy) -n legacy -- curl http://httpbin.default:8000/ip -s
```
```xxx
command terminated with exit code 56
```
## Disable mutual TLS authentication for httpbin
If we want to disable mTLS only for httpbin (on port 8000), without changing the mesh authentication settings,
we can do that by adding this annotation to the httpbin service definition.
```xxx
annotations:
auth.istio.io/8000: NONE
```
@ -72,7 +72,7 @@ Note:
* Annotations can also be used for a (server) service that *does not have a sidecar*, to instruct Istio not to apply mTLS when a client calls that service. In fact, if a system has some services that are not managed by Istio (i.e., without a sidecar), this is the recommended solution to fix communication problems with those services.
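For an existing service, the same annotation can be applied in place with `kubectl annotate` instead of editing the YAML; the service name and port below follow the httpbin example:

```shell
# Sketch: disable mTLS for port 8000 of the httpbin service in place.
kubectl annotate service httpbin auth.istio.io/8000=NONE --overwrite
```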
## Disable mutual TLS authentication for control services
As we cannot annotate control services, such as the API server, in Istio 0.3, we introduced [mtls_excluded_services](https://github.com/istio/api/blob/master/mesh/v1alpha1/config.proto#L200:19) to the mesh configuration to specify the list of services for which mTLS should not be used. If your application needs to communicate with any control service, its fully-qualified domain name should be listed there.
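In practice this means editing the mesh configuration stored in the `istio` ConfigMap of the `istio-system` namespace; a sketch follows (the exact field name is an assumption and should be checked against your Istio version):

```shell
# Sketch: open the mesh config and list the control service to exclude from mTLS.
kubectl -n istio-system edit configmap istio
# In the editor, add under the mesh config (field name assumed):
#   mtlsExcludedServices: ["kubernetes.default.svc.cluster.local"]
```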
@ -94,7 +94,7 @@ It's then expected that request to kubernetes.default service should be possible
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl https://kubernetes.default:443/api/ -k -s
```
```json
{
"kind": "APIVersions",
"versions": [
@ -121,6 +121,6 @@ The same test request above now fail with code 35, as sleep's sidecar starts usi
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl https://kubernetes.default:443/api/ -k -s
```
```xxx
command terminated with exit code 35
```
View File
@ -18,6 +18,7 @@ The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio]({{home}}/docs/setup/) in your cluster and deploy an
application. This task assumes that Mixer is setup in a default configuration
(`--configDefaultNamespace=istio-system`). If you use a different
@ -25,9 +26,11 @@ as the example application throughout this task.
* Install the Prometheus add-on. Prometheus
will be used to verify task success.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
See [Prometheus](https://prometheus.io) for details.
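To reach the Prometheus UI later for verification, forward its port locally; the pod label below is an assumption taken from the add-on manifest:

```shell
# Sketch: forward the Prometheus UI to http://localhost:9090 (pod label assumed).
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus \
    -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
```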
## Collecting new telemetry data
@ -36,6 +39,7 @@ as the example application throughout this task.
stream that Istio will generate and collect automatically.
Save the following as `new_telemetry.yaml`:
```yaml
# Configuration for metric instances
apiVersion: "config.istio.io/v1alpha2"
@ -130,7 +134,8 @@ as the example application throughout this task.
```
The expected output is similar to:
```xxx
Created config metric/istio-system/doublerequestcount at revision 1973035
Created config prometheus/istio-system/doublehandler at revision 1973036
Created config rule/istio-system/doubleprom at revision 1973037
View File
@ -18,8 +18,9 @@ The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application is used as
the example application throughout this task.
## Before you begin
* [Install Istio]({{home}}/docs/setup/) in your cluster and deploy an
application.
## Querying Istio Metrics
@ -41,7 +42,7 @@ the example application throughout this task.
The output will be similar to:
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus 10.59.241.54 <none> 9090/TCP 2m
```
@ -55,8 +56,7 @@ the example application throughout this task.
curl http://$GATEWAY_URL/productpage
```
> `$GATEWAY_URL` is the value set in the [Bookinfo]({{home}}/docs/guides/bookinfo.html) guide.
1. Open the Prometheus UI.
@ -70,10 +70,10 @@ the example application throughout this task.
1. Execute a Prometheus query.
In the "Expression" input box at the top of the web page, enter the text:
`istio_request_count`. Then, click the **Execute** button.
The results will be similar to:
{% include figure.html width='100%' ratio='39.36%'
img='./img/prometheus_query_result.png'
@ -104,7 +104,7 @@ the example application throughout this task.
rate(istio_request_count{destination_service=~"productpage.*", response_code="200"}[5m])
```
### About the Prometheus add-on
Mixer comes with a built-in [Prometheus](https://prometheus.io) adapter that
exposes an endpoint serving generated metric values. The Prometheus add-on is a
@ -128,15 +128,15 @@ docs](https://prometheus.io/docs/querying/basics/).
* In Kubernetes environments, execute the following command to remove the
Prometheus add-on:
```bash
kubectl delete -f install/kubernetes/addons/prometheus.yaml
```
* Remove any `kubectl port-forward` processes that may still be running:
```bash
killall kubectl
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
View File
@ -66,8 +66,7 @@ the example application throughout this task.
Refresh the page a few times (or send the command a few times) to generate a
small amount of traffic.
> `$GATEWAY_URL` is the value set in the [Bookinfo]({{home}}/docs/guides/bookinfo.html) guide.
1. Open the Servicegraph UI.
@ -134,12 +133,12 @@ depends on the standard Istio metric configuration.
## Cleanup
* In Kubernetes environments, execute the following command to remove the
Servicegraph add-on:
```bash
kubectl delete -f install/kubernetes/addons/servicegraph.yaml
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
View File
@ -19,24 +19,27 @@ The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio]({{home}}/docs/setup/) in your cluster and deploy an
application.
* This task assumes that the Bookinfo sample will be deployed in the `default`
namespace. If you use a different namespace, you will need to update the
example configuration and commands.
* Install the Prometheus add-on. Prometheus
will be used to verify task success.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
See [Prometheus](https://prometheus.io) for details.
## Collecting new telemetry data
1. Create a new YAML file to hold configuration for the new metrics that Istio
will generate and collect automatically.
Save the following as `tcp_telemetry.yaml`:
@ -117,7 +120,8 @@ as the example application throughout this task.
```
The expected output is similar to:
```xxx
Created config metric/default/mongosentbytes at revision 3852843
Created config metric/default/mongoreceivedbytes at revision 3852844
Created config prometheus/default/mongohandler at revision 3852845
@ -131,19 +135,19 @@ as the example application throughout this task.
If you are using a cluster with automatic sidecar injection enabled,
simply deploy the services using `kubectl`:
```bash
kubectl apply -f samples/bookinfo/kube/bookinfo-ratings-v2.yaml
```
If you are using manual sidecar injection, use the following command instead:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-ratings-v2.yaml)
```
Expected output:
```xxx
deployment "ratings-v2" configured
```
@ -152,32 +156,32 @@ as the example application throughout this task.
If you are using a cluster with automatic sidecar injection enabled,
simply deploy the services using `kubectl`:
```bash
kubectl apply -f samples/bookinfo/kube/bookinfo-db.yaml
```
If you are using manual sidecar injection, use the following command instead:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo-db.yaml)
```
Expected output:
```xxx
service "mongodb" configured
deployment "mongodb-v1" configured
```
1. Add routing rules to send traffic to `v2` of the `ratings` service:
```bash
istioctl create -f samples/bookinfo/kube/route-rule-ratings-db.yaml
```
Expected output:
```xxx
Created config route-rule//ratings-test-v2 at revision 7216403
Created config route-rule//reviews-test-ratings-v2 at revision 7216404
```
@ -206,16 +210,16 @@ as the example application throughout this task.
the `istio_mongo_received_bytes` metric. The table displayed in the
**Console** tab includes entries similar to:
```xxx
istio_mongo_received_bytes{destination_version="v1",instance="istio-mixer.istio-system:42422",job="istio-mesh",source_service="ratings.default.svc.cluster.local",source_version="v2"} 2317
```
> Istio also collects protocol-specific statistics for MongoDB. For
> example, the value of total OP_QUERY messages sent from the `ratings` service
> is collected in the following metric:
> `envoy_mongo_mongo_collection_ratings_query_total`
> (click [here](http://localhost:9090/graph#%5B%7B%22range_input%22%3A%221h%22%2C%22expr%22%3A%22envoy_mongo_mongo_collection_ratings_query_total%22%2C%22tab%22%3A1%7D%5D)
> to execute the query).
## Understanding TCP telemetry collection
@ -250,15 +254,15 @@ protocols within policies.
* Remove the new telemetry configuration:
```bash
istioctl delete -f tcp_telemetry.yaml
```
* Remove the `port-forward` process:
```bash
killall kubectl
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
@ -267,17 +271,15 @@ protocols within policies.
## What's next
* Learn more about [Mixer]({{home}}/docs/concepts/policy-and-control/mixer.html)
and [Mixer Config]({{home}}/docs/concepts/policy-and-control/mixer-config.html).
* Discover the full [Attribute
Vocabulary]({{home}}/docs/reference/config/mixer/attribute-vocabulary.html).
* Refer to the [In-Depth Telemetry]({{home}}/docs/guides/telemetry.html) guide.
* Learn more about [Querying Istio
Metrics]({{home}}/docs/tasks/telemetry/querying-metrics.html).
* Learn more about the [MongoDB-specific statistics generated by
Envoy](https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/mongo_proxy_filter#statistics).
@@ -18,16 +18,17 @@ The [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application is used as
the example application throughout this task.
## Before you begin
* [Install Istio]({{home}}/docs/setup/) in your cluster and deploy an
application.
* Install the Prometheus add-on.
```bash
kubectl apply -f install/kubernetes/addons/prometheus.yaml
```
Use of the Prometheus add-on is _required_ for the Istio Dashboard.
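Before opening the dashboard it can help to confirm both add-ons are running. A minimal sketch that prints the check commands to run against your cluster (the `app=prometheus` and `app=grafana` labels are assumptions based on the add-on manifests):

```shell
# Print a readiness check for each monitoring add-on; run the printed
# commands against your cluster and verify the pods are Running.
for APP in prometheus grafana; do
  CHECK="kubectl -n istio-system get pods -l app=${APP}"
  echo "${CHECK}"
done
```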
## Viewing the Istio Dashboard
@@ -49,7 +50,7 @@ the example application throughout this task.
The output will be similar to:
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana 10.59.247.103 <none> 3000/TCP 2m
```
@@ -95,8 +96,7 @@ the example application throughout this task.
caption='Istio Dashboard With Traffic'
%}
> `$GATEWAY_URL` is the value set in the [Bookinfo]({{home}}/docs/guides/bookinfo.html) guide.
### About the Grafana add-on
@@ -122,18 +122,18 @@ For more on how to create, configure, and edit dashboards, please see the
## Cleanup
* In Kubernetes environments, execute the following command to remove the Grafana
add-on:
```bash
kubectl delete -f install/kubernetes/addons/grafana.yaml
```
* Remove any `kubectl port-forward` processes that may be running:
```bash
killall kubectl
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
@@ -30,6 +30,7 @@ This task shows how to inject delays and test the resiliency of your application
# Fault injection
## Fault injection using HTTP delay
To test our Bookinfo application microservices for resiliency, we will _inject a 7s delay_
between the reviews:v2 and ratings microservices, for user "jason". Since the _reviews:v2_ service has a
10s timeout for its calls to the ratings service, we expect the end-to-end flow to
@@ -46,6 +47,7 @@ continue without any errors.
```bash
istioctl get virtualservice ratings -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
@@ -88,30 +90,31 @@ continue without any errors.
## Understanding what happened
The entire reviews service failed because our Bookinfo application
has a bug: the timeout between the productpage and reviews service (3s + 1 retry = 6s total)
is shorter than the timeout between the reviews and ratings service (10s). These kinds of bugs can occur in
typical enterprise applications where different teams develop different microservices
independently. Istio's fault injection rules help you identify such anomalies without
impacting end users.
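The timeout arithmetic above can be sketched in a few lines of shell (the 3s timeout, 1 retry, and 7s injected delay are the values used in this task):

```shell
# productpage gives each reviews call 3s and retries once, so its total
# budget is 3 * (1 + 1) = 6s -- less than the 7s delay injected into
# ratings, which is why productpage fails even though reviews:v2 itself
# allows ratings a 10s timeout.
TIMEOUT=3
RETRIES=1
INJECTED_DELAY=7
BUDGET=$(( TIMEOUT * (1 + RETRIES) ))
echo "budget=${BUDGET}s delay=${INJECTED_DELAY}s"
if [ "${BUDGET}" -lt "${INJECTED_DELAY}" ]; then
  echo "productpage times out before ratings responds"
fi
```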
> Notice that we are restricting the failure impact to user "jason" only. If you log in
> as any other user, you would not experience any delays.
**Fixing the bug:** At this point we would normally fix the problem by either increasing the
productpage timeout or decreasing the reviews-to-ratings service timeout,
then terminate and restart the fixed microservice, and confirm that the `productpage`
returns its response without any errors.
However, we already have this fix running in v3 of the reviews service, so we can simply
fix the problem by migrating all
traffic to `reviews:v3` as described in the
[traffic shifting]({{home}}/docs/tasks/traffic-management/traffic-shifting.html) task.
(Left as an exercise for the reader - change the delay rule to
use a 2.8 second delay and then run it against the v3 version of reviews.)
## Fault injection using HTTP Abort
As another test of resiliency, we will introduce an HTTP abort to the ratings microservice for the user "jason".
We expect the page to load immediately, unlike in the delay example, and display the "product ratings not available"
message.
@@ -127,6 +130,7 @@ message.
```bash
istioctl get virtualservice ratings -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
@@ -280,46 +280,45 @@ service declaration.
1. Determine the ingress URL:
* If your cluster is running in an environment that supports external load balancers, use the ingress' external address:
```bash
kubectl get ingress simple-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
simple-ingress * 130.211.10.121 80 1d
```
```bash
export INGRESS_HOST=130.211.10.121
```
* If load balancers are not supported, use the ingress controller pod's hostIP:
```bash
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
along with the istio-ingress service's nodePort for port 80:
```bash
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
```bash
export INGRESS_HOST=169.47.243.100:31486
```
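The hostIP and nodePort lookups above can also be combined mechanically. The following sketch extracts the port-80 nodePort from the `PORT(S)` column value shown in the sample output; the host IP and ports are the sample values from this task, so substitute your own cluster's output.

```shell
# Pull the nodePort mapped to service port 80 out of the PORT(S) string,
# then assemble INGRESS_HOST as hostIP:nodePort (sample values from the
# output above).
PORTS="80:31486/TCP,443:32254/TCP"
HOST_IP="169.47.243.100"
NODE_PORT=$(echo "${PORTS}" | sed -n 's/.*80:\([0-9]*\)\/TCP.*/\1/p')
export INGRESS_HOST="${HOST_IP}:${NODE_PORT}"
echo "${INGRESS_HOST}"
```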
1. Access the httpbin service using _curl_:
@@ -405,45 +404,45 @@ service declaration.
1. Determine the ingress URL:
* If your cluster is running in an environment that supports external load balancers,
use the ingress' external address:
```bash
kubectl get ingress secure-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
secure-ingress * 130.211.10.121 80 1d
```
```bash
export SECURE_INGRESS_HOST=130.211.10.121
```
* If load balancers are not supported, use the ingress controller pod's hostIP:
```bash
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
along with the istio-ingress service's nodePort for port 443:
```bash
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
```bash
export SECURE_INGRESS_HOST=169.47.243.100:32254
```
1. Access the httpbin service using _curl_:
@@ -14,7 +14,7 @@ This task shows you how to configure dynamic request routing based on weights an
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
@@ -28,7 +28,7 @@ This is because without an explicit default version set, Istio will
route requests to all available versions of a service in a random fashion.
> This task assumes you don't have any routes set yet. If you've already created conflicting route rules for the sample,
you'll need to use `replace` rather than `create` in the following command.
1. Set the default version for all microservices to v1.
@@ -45,6 +45,7 @@ route requests to all available versions of a service in a random fashion.
```bash
istioctl get virtualservices -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
@@ -103,7 +103,6 @@ to the `ratings` service.
You should now see that it returns in 1 second (instead of 2), but the reviews are unavailable.
## Understanding what happened
In this task, you used Istio to set the request timeout for calls to the `reviews`
@@ -82,50 +82,49 @@ The following are known limitations of Istio Ingress:
[rule match configuration]({{home}}/docs/reference/config/istio.routing.v1alpha1.html#matchcondition)
of the form (`prefix: /`).
### Verifying HTTP ingress
1. Determine the ingress URL:
* If your cluster is running in an environment that supports external load balancers, use the ingress' external address:
```bash
kubectl get ingress simple-ingress -o wide
```
```xxx
NAME HOSTS ADDRESS PORTS AGE
simple-ingress * 130.211.10.121 80 1d
```
```bash
export INGRESS_HOST=130.211.10.121
```
* If load balancers are not supported, use the ingress controller pod's hostIP:
```bash
kubectl -n istio-system get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'
```
```xxx
169.47.243.100
```
along with the istio-ingress service's nodePort for port 80:
```bash
kubectl -n istio-system get svc istio-ingress
```
```xxx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress 10.10.10.155 <pending> 80:31486/TCP,443:32254/TCP 32m
```
```bash
export INGRESS_HOST=169.47.243.100:31486
```
1. Access the httpbin service using _curl_:
@@ -209,7 +208,7 @@ The following are known limitations of Istio Ingress:
> Because SNI is not yet supported, Envoy currently only allows a single TLS secret in the ingress.
> That means the secretName field in ingress resource is not used.
### Verifying HTTPS ingress
1. Determine the ingress URL:
@@ -4,7 +4,7 @@ layout: default
<div class="container-fluid">
<div class="row row-offcanvas row-offcanvas-left">
<div class="col-6 col-md-3 col-xl-2 sidebar-offcanvas">
{% include sidebar.html docs=site.blog %}
</div>
{% assign needTOC = true %}
@@ -7,3 +7,5 @@ exclude_rule 'MD041'
exclude_rule 'MD031'
exclude_rule 'MD033'
exclude_rule 'MD013'
exclude_rule 'MD007'
exclude_rule 'MD034'
@@ -1,3 +1,3 @@
mdspell --en-us --ignore-acronyms --ignore-numbers --no-suggestions --report *.md */*.md */*/*.md */*/*/*.md */*/*/*/*.md
mdl --ignore-front-matter --style mdl_style.rb .
rake test