Updates multicluster gateway doc to support node ports (#3063)

Daneyon Hansen 2019-01-16 14:51:49 -07:00 committed by istio-bot
parent 33194761d5
commit 0351954137
2 changed files with 53 additions and 3 deletions


@@ -99,7 +99,16 @@ running in a second cluster.
`httpbin.bar.global` on *any port* to be routed to the endpoint
`<IPofCluster2IngressGateway>:15443` over an mTLS connection.
> Do not create a `Gateway` configuration for port 15443.
If your cluster2 Kubernetes cluster runs in an environment that does not
support external load balancers, you must instead use the IP of a node running
the `istio-ingressgateway` service, together with the `nodePort` mapped to port 15443. Instructions
for obtaining the node IP can be found in the
[Control Ingress Traffic](/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports)
guide. The following command can be used to obtain the nodePort:
{{< text bash >}}
$ kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'
{{< /text >}}
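Putting the two values together gives the ingress gateway address to use in place of `<IPofCluster2IngressGateway>:15443`. A minimal sketch, using hypothetical values in place of the real output of the commands above:

```shell
# Hypothetical values for illustration only; in practice, obtain the node IP
# and nodePort with the commands referenced above.
NODE_IP=192.0.2.10      # IP of a node running the istio-ingressgateway pod
NODE_PORT=31390         # nodePort mapped to service port 15443
CLUSTER2_GW_ADDR="${NODE_IP}:${NODE_PORT}"
echo "${CLUSTER2_GW_ADDR}"
```

This composed address is what traffic to `httpbin.bar.global` is routed to over mTLS.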
The gateway for port 15443 is a special SNI-aware Envoy
preconfigured and installed as part of the multicluster Istio installation step
@@ -107,6 +116,8 @@ running in a second cluster.
load balanced among pods of the appropriate internal service of the target
cluster (in this case, `httpbin.bar` in `cluster2`).
> Do not create a `Gateway` configuration for port 15443.
1. Verify that `httpbin` is accessible from the `sleep` service.
{{< text bash >}}


@@ -58,6 +58,9 @@ on **each** Kubernetes cluster.
--from-file=@samples/certs/cert-chain.pem@
{{< /text >}}
1. Update Helm's dependencies by following step 2 in the
[Installation with Helm](/docs/setup/kubernetes/helm-install/#installation-steps) instructions.
1. Generate a multicluster-gateways Istio configuration file using `helm`:
{{< text bash >}}
@@ -90,8 +93,10 @@ services from remote clusters in the format
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
will provide DNS resolution for these services. In order to utilize this
DNS, Kubernetes' DNS needs to be configured to point to CoreDNS as the DNS
server for the `.global` DNS domain. Create one of the following ConfigMaps
or update an existing one:
For clusters that use kube-dns:
{{< text bash >}}
$ kubectl apply -f - <<EOF
@@ -106,6 +111,40 @@ data:
EOF
{{< /text >}}
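The `data` section of the kube-dns ConfigMap is cut off by the hunk boundary above. For reference only, the `stubDomains` form such a ConfigMap typically takes is sketched below; this is not the exact manifest from this commit, and the cluster IP shown is a hypothetical stand-in for the `istiocoredns` service address:

```yaml
# Sketch only: delegates the `.global` domain to the Istio CoreDNS server.
# The IP is hypothetical; obtain the real istiocoredns clusterIP from the
# istio-system namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["10.96.0.100"]}
```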
For clusters that use CoreDNS:
{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
global:53 {
errors
cache 30
proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
}
EOF
{{< /text >}}
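Note that the `$(kubectl get svc ...)` in the `global:53` block is expanded by your shell before `kubectl apply` ever reads the manifest, because the here-document delimiter `EOF` is unquoted. A cluster-free sketch of that behavior, using a hypothetical `CLUSTER_IP` value:

```shell
# With an unquoted delimiter, the shell performs variable and command
# substitution inside the here-document before the consumer sees it.
# (A quoted delimiter, <<'EOF', would pass the text through literally.)
CLUSTER_IP=10.96.0.100   # hypothetical istiocoredns clusterIP
cat <<EOF
proxy . ${CLUSTER_IP}
EOF
```

So the Corefile stored in the ConfigMap contains the resolved IP, not the `$(kubectl ...)` text.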
## Configure application services
Every service in a given cluster that needs to be accessed from a different remote