---
title: Control Egress Traffic
description: Describes how to configure Istio to route traffic from services in the mesh to external services.
weight: 40
---
This task uses the new v1alpha3 traffic management API. The old API has been deprecated and will be removed in the next Istio release. If you need to use the old version, follow the docs here.
By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations.
This task describes how to configure Istio to expose external services to Istio-enabled clients. You'll learn how to enable access to external services by defining ServiceEntry configurations, or alternatively, to bypass the Istio proxy for a specific range of IPs.
## Before you begin

*   Set up Istio by following the instructions in the Installation guide.

*   Start the [sleep]({{< github_tree >}}/samples/sleep) sample which you will use as a test source for external calls.

    If you have enabled automatic sidecar injection, deploy the `sleep` application:

    {{< text bash >}}
    $ kubectl apply -f samples/sleep/sleep.yaml
    {{< /text >}}

    Otherwise, manually inject the sidecar before deploying the `sleep` application:

    {{< text bash >}}
    $ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
    {{< /text >}}

    Note that any pod that you can `exec` and `curl` from will do for the procedures below.
## Configuring Istio external services

Using Istio `ServiceEntry` configurations, you can access any publicly accessible service from within your Istio cluster. In this task you access httpbin.org and www.google.com as examples.
### Configuring the external services

1.  Create a `ServiceEntry` to allow access to an external HTTP service:

    {{< text bash >}}
    $ cat <<EOF | istioctl create -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-ext
    spec:
      hosts:
      - httpbin.org
      ports:
      - number: 80
        name: http
        protocol: HTTP
    EOF
    {{< /text >}}

1.  Create a `ServiceEntry` to allow access to an external HTTPS service:

    {{< text bash >}}
    $ cat <<EOF | istioctl create -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: google-ext
    spec:
      hosts:
      - www.google.com
      ports:
      - number: 443
        name: https
        protocol: HTTPS
    EOF
    {{< /text >}}
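Beyond hosts and ports, a `ServiceEntry` can also declare how the external hostname is resolved. A hedged sketch, assuming the v1alpha3 `resolution` field (verify the field name and allowed values against your Istio release):

```yaml
# Hypothetical variant of httpbin-ext: same hosts and ports, but with
# an explicit resolution mode so the sidecar resolves httpbin.org
# through DNS instead of relying on the default behavior.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
```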
### Make requests to the external services

1.  Exec into the pod being used as the test source. For example, if you are using the `sleep` service, run the following commands:

    {{< text bash >}}
    $ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SOURCE_POD -c sleep bash
    {{< /text >}}

1.  Make a request to the external HTTP service:

    {{< text bash >}}
    $ curl http://httpbin.org/headers
    {{< /text >}}

1.  Make a request to the external HTTPS service:

    {{< text bash >}}
    $ curl https://www.google.com
    {{< /text >}}
### Setting route rules on an external service

Similar to inter-cluster requests, Istio routing rules can also be set for external services that are accessed using `ServiceEntry` configurations. In this example, you use `istioctl` to set a timeout rule on calls to the httpbin.org service.
1.  From inside the pod being used as the test source, make a curl request to the `/delay` endpoint of the httpbin.org external service:

    {{< text bash >}}
    $ kubectl exec -it $SOURCE_POD -c sleep bash
    $ time curl -o /dev/null -s -w "%{http_code}\n" http://httpbin.org/delay/5
    200

    real    0m5.024s
    user    0m0.003s
    sys     0m0.003s
    {{< /text >}}

    The request should return 200 (OK) in approximately 5 seconds.
1.  Exit the source pod and use `istioctl` to set a 3s timeout on calls to the httpbin.org external service:

    {{< text bash >}}
    $ cat <<EOF | istioctl create -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin-ext
    spec:
      hosts:
      - httpbin.org
      http:
      - timeout: 3s
        route:
        - destination:
            host: httpbin.org
          weight: 100
    EOF
    {{< /text >}}
1.  Wait a few seconds, then make the curl request again:

    {{< text bash >}}
    $ kubectl exec -it $SOURCE_POD -c sleep bash
    $ time curl -o /dev/null -s -w "%{http_code}\n" http://httpbin.org/delay/5
    504

    real    0m3.149s
    user    0m0.004s
    sys     0m0.004s
    {{< /text >}}

    This time a 504 (Gateway Timeout) appears after 3 seconds. Although httpbin.org was waiting 5 seconds, Istio cut off the request at 3 seconds.
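The timeout semantics can be illustrated locally, without a cluster: bounding a 5-second operation at 3 seconds kills it partway through, much as the sidecar cut off httpbin.org's 5-second delay. A rough analogy using the coreutils `timeout` command (not an Istio feature):

```shell
# Run a 5-second operation under a 3-second bound; `timeout`
# terminates it early and reports exit status 124, loosely
# mirroring the 504 the sidecar returned above.
timeout 3 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```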
## Calling external services directly

If you want to completely bypass Istio for a specific IP range, you can configure the Envoy sidecars to prevent them from intercepting the external requests. This can be done by setting the `global.proxy.includeIPRanges` variable of Helm and updating the `istio-sidecar-injector` ConfigMap using `kubectl apply`. After `istio-sidecar-injector` is updated, the value of `global.proxy.includeIPRanges` affects all future deployments of the application pods.

The simplest way to use the `global.proxy.includeIPRanges` variable is to pass it the IP range(s) used for internal cluster services, thereby excluding external IPs from being redirected to the sidecar proxy. The values of the internal IP range(s), however, depend on where your cluster is running. For example, with Minikube the range is 10.0.0.1/24, so you would update your `istio-sidecar-injector` ConfigMap like this:

{{< text bash >}}
$ helm template install/kubernetes/helm/istio --set global.proxy.includeIPRanges="10.0.0.1/24" -x templates/sidecar-injector-configmap.yaml | kubectl apply -f -
{{< /text >}}

Note that you should use the same Helm command you used to install Istio, in particular, the same value of the `--namespace` flag. In addition to the flags you used to install Istio, add `--set global.proxy.includeIPRanges="10.0.0.1/24" -x templates/sidecar-injector-configmap.yaml`.

Redeploy the `sleep` application as described in the Before you begin section.
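Since only destinations inside `global.proxy.includeIPRanges` are redirected to the sidecar, it can help to check which side of that boundary a given address falls on. A hypothetical helper in plain shell (`ip_in_cidr` is not part of Istio; IPv4 only):

```shell
# ip_in_cidr IP CIDR -> exit 0 if IP is inside CIDR, 1 otherwise.
# Addresses inside global.proxy.includeIPRanges are redirected to
# the sidecar; everything else bypasses it.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip_in_cidr() {
  ip=$(ip_to_int "${1}")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

ip_in_cidr 10.0.0.42 10.0.0.1/24 && echo "intercepted by sidecar"
ip_in_cidr 93.184.216.34 10.0.0.1/24 || echo "bypasses sidecar"
```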
### Set the value of `global.proxy.includeIPRanges`

Set the value of `global.proxy.includeIPRanges` according to your cluster provider.
#### IBM Cloud Private

1.  Get your `service_cluster_ip_range` from the IBM Cloud Private configuration file under `cluster/config.yaml`:

    {{< text bash >}}
    $ cat cluster/config.yaml | grep service_cluster_ip_range
    {{< /text >}}

    The following is a sample output:

    {{< text plain >}}
    service_cluster_ip_range: 10.0.0.1/24
    {{< /text >}}

1.  Use `--set global.proxy.includeIPRanges="10.0.0.1/24"`
#### IBM Cloud Kubernetes Service

Use `--set global.proxy.includeIPRanges="172.30.0.0/16\,172.20.0.0/16\,10.10.10.0/24"`
#### Google Container Engine (GKE)

The ranges are not fixed, so you will need to run the `gcloud container clusters describe` command to determine the ranges to use. For example:

{{< text bash >}}
$ gcloud container clusters describe XXXXXXX --zone=XXXXXX | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
clusterIpv4Cidr: 10.4.0.0/14
servicesIpv4Cidr: 10.7.240.0/20
{{< /text >}}

Use `--set global.proxy.includeIPRanges="10.4.0.0/14\,10.7.240.0/20"`
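Assembling the flag value from the two CIDRs can be scripted; a hypothetical sketch (the `describe` output is mocked here, and commas must be escaped as `\,` so Helm's `--set` does not split the value):

```shell
# Mocked `gcloud container clusters describe ... | grep` output; in
# practice, capture the real command's output the same way.
describe_output='clusterIpv4Cidr: 10.4.0.0/14
servicesIpv4Cidr: 10.7.240.0/20'

# Collect the CIDR values and join them with Helm-escaped commas (\,).
ranges=$(printf '%s\n' "$describe_output" | awk '{print $2}' | paste -sd '@' - | sed 's/@/\\,/g')
printf -- '--set global.proxy.includeIPRanges="%s"\n' "$ranges"
```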
#### Azure Container Service (ACS)

Use `--set global.proxy.includeIPRanges="10.244.0.0/16\,10.240.0.0/16"`
#### Minikube

Use `--set global.proxy.includeIPRanges="10.0.0.1/24"`
### Access the external services

After updating the `istio-sidecar-injector` ConfigMap and redeploying the `sleep` application, the Istio sidecar will only intercept and manage internal requests within the cluster. Any external request bypasses the sidecar and goes straight to its intended destination. For example:

{{< text bash >}}
$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
$ kubectl exec -it $SOURCE_POD -c sleep curl http://httpbin.org/headers
{{< /text >}}
## Understanding what happened

In this task you looked at two ways to call external services from an Istio mesh:

1.  Using a `ServiceEntry` (recommended).

1.  Configuring the Istio sidecar to exclude external IPs from its remapped IP table.

The first approach, using `ServiceEntry`, lets you use all of the same Istio service mesh features for calls to services inside or outside of the cluster. You saw this by setting a timeout rule for calls to an external service.

The second approach bypasses the Istio sidecar proxy, giving your services direct access to any external URL. However, configuring the proxy this way requires cluster provider-specific knowledge and configuration.
## Cleanup

1.  Remove the rules:

    {{< text bash >}}
    $ istioctl delete serviceentry httpbin-ext google-ext
    $ istioctl delete virtualservice httpbin-ext
    {{< /text >}}

1.  Shutdown the [sleep]({{< github_tree >}}/samples/sleep) service:

    {{< text bash >}}
    $ kubectl delete -f samples/sleep/sleep.yaml
    {{< /text >}}

1.  Update the `istio-sidecar-injector` ConfigMap to redirect all outbound traffic to the sidecar proxies:

    {{< text bash >}}
    $ helm template install/kubernetes/helm/istio -x templates/sidecar-injector-configmap.yaml | kubectl apply -f -
    {{< /text >}}