mirror of https://github.com/istio/istio.io.git
Refactor ztunnel docs (#15028)
* Big big refactor.
* layer on some numbers
* add changes from #15008
* Fix lint errors
* two steps forward, one step back with linting
* heading level
* Update content/en/docs/ambient/usage/add-workloads/index.md
* Apply some fixes from code review
* Some clarity fixes
* a space really messes up a lint
* Personally I think trailing spaces are just fine
* move waypoint table
* sigh

Co-authored-by: Ben Leggett <854255+bleggett@users.noreply.github.com>
This commit is contained in:
parent eece719cda
commit ee954f86b6
@@ -1,6 +1,6 @@
 ---
 title: Ambient Mode
-description: Information for setting up and operating Istio in ambient mode.
+description: Information for setting up and operating Istio with support for ambient mode.
 weight: 17
 aliases:
   - /docs/ops/ambient
@@ -18,18 +18,6 @@ To enforce L7 policies, add the `istio.io/use-waypoint` label to your resource t
 precedence over the namespace waypoint as long as the service waypoint can handle service or all traffic.
 Similarly, a label on a pod will take precedence over a namespace label
-
-### Labels {#ambient-labels}
-
-You can use the following labels to add your resource to the {{< gloss >}}ambient{{< /gloss >}} mesh and manage L4 traffic with the ambient {{< gloss >}}data plane{{< /gloss >}}, use a waypoint to enforce L7 policy for your resource, and control how traffic is sent to the waypoint.
-
-| Name | Feature Status | Resource | Description |
-| --- | --- | --- | --- |
-| `istio.io/dataplane-mode` | Beta | `Namespace` or `Pod` (latter has precedence) | Add your resource to an ambient mesh. <br><br> Valid values: `ambient` or `none`. |
-| `istio.io/use-waypoint` | Beta | `Namespace`, `Service` or `Pod` | Use a waypoint for traffic to the labeled resource for L7 policy enforcement. <br><br> Valid values: `{waypoint-name}`, `{namespace}/{waypoint-name}`, or `#none` (with hash). |
-| `istio.io/waypoint-for` | Alpha | `Gateway` | Specifies what types of endpoints the waypoint will process traffic for. <br><br> Valid values: `service`, `workload`, `none` or `all`. This label is optional and the default value is `service`. |
-
-In order for your `istio.io/use-waypoint` label value to be effective, you have to ensure the waypoint is configured for the endpoint which is using it. By default waypoints accept traffic for service endpoints. For example, when you label a pod to use a specific waypoint via the `istio.io/use-waypoint` label, the waypoint should be labeled `istio.io./waypoint-for` with the value `workload` or `all`.
 
 ### Layer 7 policy attachment to waypoints
 
 You can attach Layer 7 policies (such as `AuthorizationPolicy`, `RequestAuthentication`, `Telemetry`, `WasmPlugin`, etc) to your waypoint using `targetRefs`.
@@ -43,7 +43,7 @@ Traffic to and from pods in the mesh will be fully encrypted with mTLS by defaul
 
 Data will now enter and leave the pod network namespace encrypted. Every pod in the mesh has the ability to enforce mesh policy and securely encrypt traffic, even though the user application running in the pod has no awareness of either.
 
-Here’s a diagram to illustrate how encrypted traffic flows between pods in the ambient mesh in the new model:
+This diagram illustrates how encrypted traffic flows between pods in the ambient mesh in the new model:
 
 {{< image width="100%"
     link="./traffic-flows-between-pods-in-ambient.svg"
@@ -52,7 +52,7 @@ Here’s a diagram to illustrate how encrypted traffic flows between pods in the
 
 ## Observing and debugging traffic redirection in ambient mode
 
-If traffic redirection is not working correctly in ambient mode, some quick checks can be made to help narrow down the problem. To demonstrate traffic redirection in action, first follow the steps described in the [ztunnel L4 networking guide](/docs/ambient/usage/ztunnel), including deployment of Istio with ambient mode enabled in a Kubernetes cluster, and the deployment of `httpbin` and `sleep` in the namespace tagged for ambient mode. Once you have verified that the application is successfully running in the ambient mesh, you can use the following steps to observe the traffic redirection.
+If traffic redirection is not working correctly in ambient mode, some quick checks can be made to help narrow down the problem. To demonstrate traffic redirection in action, first follow the steps described in the [ztunnel debugging guide](/docs/ambient/usage/debugging).
 
 ### Check the ztunnel proxy logs
 
@@ -1,7 +1,7 @@
 ---
 title: Getting Started
 description: How to deploy and install Istio in ambient mode.
-weight: 1
+weight: 2
 aliases:
   - /docs/ops/ambient/getting-started
   - /latest/docs/ops/ambient/getting-started
@@ -1,5 +1,5 @@
 ---
-title: Installation Guide
+title: Install
 description: Installation guide for Istio ambient mode.
 weight: 5
 aliases:
@@ -0,0 +1,44 @@
+---
+title: Overview
+description: An overview of Istio's ambient data plane mode.
+weight: 1
+owner: istio/wg-docs-maintainers-english
+test: no
+---
+
+In **ambient mode**, Istio implements its [features](/docs/concepts) using a per-node Layer 4 (L4) proxy, and optionally a per-namespace Layer 7 (L7) proxy.
+
+This layered approach allows you to adopt Istio in a more incremental fashion, smoothly transitioning from no mesh, to a secure L4 overlay, to full L7 processing and policy — on a per-namespace basis, as needed. Furthermore, workloads running in different Istio {{< gloss >}}data plane{{< /gloss >}} modes interoperate seamlessly, allowing users to mix and match capabilities based on their particular needs as they change over time.
+
+Since workload pods no longer require proxies running in sidecars in order to participate in the mesh, ambient mode is often informally referred to as "sidecar-less mesh".
+
+## How it works
+
+Ambient mode splits Istio’s functionality into two distinct layers. At the base, the **ztunnel** secure overlay handles routing and zero trust security for traffic. Above that, when needed, users can enable L7 **waypoint proxies** to get access to the full range of Istio features. Waypoint proxies, while heavier than the ztunnel overlay alone, still run as an ambient component of the infrastructure, requiring no modifications to application pods.
+
+{{< tip >}}
+Pods and workloads using sidecar mode can co-exist within the same mesh as pods that use ambient mode. The term "ambient mesh" refers to an Istio mesh that was installed with support for ambient mode, and so can support mesh pods that use either type of data plane.
+{{< /tip >}}
+
+For details on the design of ambient mode, and how it interacts with the Istio {{< gloss >}}control plane{{< /gloss >}}, see the [data plane](/docs/ambient/architecture/data-plane) and [control plane](/docs/ambient/architecture/control-plane) architecture documentation.
+
+## ztunnel
+
+The ztunnel (Zero Trust tunnel) component is a purpose-built, per-node proxy that powers Istio's ambient data plane mode.
+
+Ztunnel is responsible for securely connecting and authenticating workloads within the mesh. The ztunnel proxy is written in Rust and is intentionally scoped to handle L3 and L4 functions such as mTLS, authentication, L4 authorization and telemetry. Ztunnel does not terminate workload HTTP traffic or parse workload HTTP headers. The ztunnel ensures L3 and L4 traffic is efficiently and securely transported to waypoint proxies, where the full suite of Istio’s L7 functionality, such as HTTP telemetry and load balancing, is implemented.
+
+The term "secure overlay" is used to collectively describe the set of L4 networking functions implemented in an ambient mesh via the ztunnel proxy. At the transport layer, this is implemented via an HTTP CONNECT-based traffic tunneling protocol called [HBONE](/docs/ambient/architecture/hbone).
+
+## Waypoint proxies
+
+The waypoint proxy is a deployment of the {{< gloss >}}Envoy{{< /gloss >}} proxy, the same engine that Istio uses for its sidecar data plane mode.
+
+Waypoint proxies run outside of application pods. They are installed, upgraded, and scaled independently from applications.
+
+Some use cases of Istio in ambient mode may be addressed solely via the L4 secure overlay features, and will not need L7 features, thereby not requiring deployment of a waypoint proxy. Use cases requiring advanced traffic management and L7 networking features will require deployment of a waypoint.
+
+| Application deployment use case | Ambient mode configuration |
+| ------------------------------- | -------------------------- |
+| Zero Trust networking via mutual-TLS, encrypted and tunneled data transport of client application traffic, L4 authorization, L4 telemetry | ztunnel only (default) |
+| As above, plus advanced Istio traffic management features (including L7 authorization, telemetry and VirtualService routing) | ztunnel and waypoint proxies |
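As a concrete sketch of the two rows in the table above, the labels documented elsewhere in the ambient docs can be applied to a namespace. This manifest is illustrative (the `demo` namespace and `waypoint` name are assumptions, not part of the commit):

```yaml
# Hypothetical namespace manifest; names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    # Row 1 of the table: ztunnel-only secure overlay (L4)
    istio.io/dataplane-mode: ambient
    # Row 2: additionally route traffic via a waypoint named "waypoint"
    # for L7 features (the waypoint must already be deployed):
    # istio.io/use-waypoint: waypoint
```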
@@ -1,5 +1,5 @@
 ---
-title: Upgrade Guide
+title: Upgrade
 description: Upgrade guide for Istio ambient mode.
 weight: 10
 aliases:
@@ -1,6 +1,6 @@
 ---
 title: User Guides
-description: How to configure a mesh in ambient mode.
+description: How to configure your mesh to take advantage of ambient mode.
 weight: 15
 aliases:
   - /docs/ops/ambient/usage
@@ -0,0 +1,68 @@
+---
+title: Add workloads to the mesh
+description: Understand how to add workloads to an ambient mesh.
+weight: 1
+owner: istio/wg-networking-maintainers
+test: no
+---
+
+In most cases, a cluster administrator will deploy the Istio mesh infrastructure. Once Istio is successfully deployed with support for the ambient {{< gloss >}}data plane{{< /gloss >}} mode, it will be transparently available to applications deployed by all users in namespaces that have been configured to use it.
+
+## Enabling ambient mode for an application in the mesh
+
+To add an application or namespace to the mesh in ambient mode, add the label `istio.io/dataplane-mode=ambient` to the corresponding resource. You can apply this label to a namespace or to an individual pod.
+
+Ambient mode can be enabled (or disabled) completely transparently as far as the application pods are concerned. Unlike the {{< gloss >}}sidecar{{< /gloss >}} data plane mode, there is no need to restart applications to add them to the mesh, and they will not show as having an extra container deployed in their pod.
+
+## Communicating between pods in different data plane modes
+
+There are multiple options for interoperability between application pods using the ambient data plane mode and non-ambient endpoints (including Kubernetes application pods, Istio gateways or Kubernetes Gateway API instances). This interoperability lets you seamlessly integrate ambient and non-ambient workloads within the same Istio mesh, and phase in ambient capability as best suits the needs of your mesh deployment and operation.
+
+### Pods outside the mesh
+
+You may have namespaces which are not part of the mesh at all, in either sidecar or ambient mode. In this case, the non-mesh pods initiate traffic directly to the destination pods without going through the source node's ztunnel, while the destination pod's ztunnel enforces any L4 policy to control whether traffic should be allowed or denied.
+
+For example, setting a `PeerAuthentication` policy with mTLS mode set to `STRICT`, in an ambient-enabled namespace, will cause traffic from outside the mesh to be denied.
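That `STRICT` policy can be expressed as follows. This is a minimal sketch; the `ambient-demo` namespace is illustrative:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: ambient-demo   # illustrative namespace
spec:
  mtls:
    mode: STRICT            # non-mTLS (out-of-mesh) traffic is denied
```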
+
+### Pods inside the mesh using sidecar mode
+
+Istio supports East-West interoperability between a pod with a sidecar and a pod using ambient mode, within the same mesh. The sidecar proxy knows to use the HBONE protocol since the destination has been discovered to be an HBONE destination.
+
+{{< tip >}}
+For sidecar proxies to use the HBONE/mTLS signaling option when communicating with ambient destinations, they need to be configured with `ISTIO_META_ENABLE_HBONE` set to `true` in the proxy metadata. This is the default in `MeshConfig` when using the `ambient` profile, so you do not have to do anything else when using this profile.
+{{< /tip >}}
+
+A `PeerAuthentication` policy with mTLS mode set to `STRICT` will allow traffic from a pod with an Istio sidecar proxy.
+
+### Ingress and egress gateways and ambient mode pods
+
+An ingress gateway may run in a non-ambient namespace, and expose services provided by ambient mode, sidecar mode or non-mesh pods. Interoperability is also supported between pods in ambient mode and Istio egress gateways.
+
+## Pod selection logic for ambient and sidecar modes
+
+Istio's two data plane modes, sidecar and ambient, can co-exist in the same cluster. It is important to ensure that the same pod or namespace does not get configured to use both modes at the same time. However, if this does occur, sidecar mode currently takes precedence for such a pod or namespace.
+
+Note that two pods within the same namespace could in theory be set to use different modes, by labeling individual pods separately from the namespace label; however, this is not recommended. For most common use cases, a single mode should be used for all pods within a single namespace.
+
+The exact logic to determine whether a pod is set up to use ambient mode is as follows:
+
+1. Namespaces in the `istio-cni` plugin's exclude list (configured in `cni.values.excludeNamespaces`) are skipped.
+1. `ambient` mode is used for a pod if:
+
+    * The namespace or pod has the label `istio.io/dataplane-mode=ambient`
+    * The pod does not have the opt-out label `istio.io/dataplane-mode=none`
+    * The annotation `sidecar.istio.io/status` is not present on the pod
+
+The simplest way to avoid a configuration conflict is to ensure that each namespace has either the label for sidecar injection (`istio-injection=enabled`) or the label for ambient mode (`istio.io/dataplane-mode=ambient`), but never both.
+
+## Label reference {#ambient-labels}
+
+The following labels control whether a resource is included in the mesh in ambient mode, whether a waypoint proxy is used to enforce L7 policy for your resource, and how traffic is sent to the waypoint.
+
+| Name | Feature Status | Resource | Description |
+| --- | --- | --- | --- |
+| `istio.io/dataplane-mode` | Beta | `Namespace` or `Pod` (latter has precedence) | Add your resource to an ambient mesh. <br><br> Valid values: `ambient` or `none`. |
+| `istio.io/use-waypoint` | Beta | `Namespace`, `Service` or `Pod` | Use a waypoint for traffic to the labeled resource for L7 policy enforcement. <br><br> Valid values: `{waypoint-name}`, `{namespace}/{waypoint-name}`, or `#none` (with hash). |
+| `istio.io/waypoint-for` | Alpha | `Gateway` | Specifies what types of endpoints the waypoint will process traffic for. <br><br> Valid values: `service`, `workload`, `none` or `all`. This label is optional and the default value is `service`. |
+
+In order for your `istio.io/use-waypoint` label value to be effective, you have to ensure the waypoint is configured for the endpoint which is using it. By default, waypoints accept traffic for service endpoints. For example, when you label a pod to use a specific waypoint via the `istio.io/use-waypoint` label, the waypoint should be labeled `istio.io/waypoint-for` with the value `workload` or `all`.
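As a sketch of that last point, a waypoint that should serve workload-addressed traffic carries the `istio.io/waypoint-for` label on its `Gateway` resource. The name and namespace below are illustrative, and the `istio-waypoint` gateway class and HBONE listener reflect what `istioctl` typically generates (an assumption, not part of this commit):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: demo
  labels:
    istio.io/waypoint-for: workload   # accept workload-addressed traffic
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008        # HBONE port
    protocol: HBONE
```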
@@ -0,0 +1,141 @@
+---
+title: Debug connectivity issues with ztunnel
+description: How to validate the node proxies have the correct configuration.
+weight: 50
+owner: istio/wg-networking-maintainers
+test: no
+---
+
+This section describes some options for monitoring the ztunnel proxy configuration and datapath. This information can also help with high-level troubleshooting and with identifying the information that would be useful to collect and provide in a bug report. Additional advanced monitoring of ztunnel internals and advanced troubleshooting are out of scope for this guide.
+
+## Viewing ztunnel proxy state
+
+The ztunnel proxy gets configuration and discovery information from the istiod {{< gloss >}}control plane{{< /gloss >}} via xDS APIs.
+
+The `istioctl x ztunnel-config` command allows you to view discovered workloads as seen by a ztunnel proxy.
+
+In the first example, you see all the workloads and control plane components that ztunnel is currently tracking, including information about the IP address and protocol to use when connecting to that component, and whether there is a waypoint proxy associated with that workload.
+
+{{< text bash >}}
+$ istioctl x ztunnel-config workloads
+NAMESPACE POD NAME IP NODE WAYPOINT PROTOCOL
+default bookinfo-gateway-istio-59dd7c96db-q9k6v 10.244.1.11 ambient-worker None TCP
+default details-v1-cf74bb974-5sqkp 10.244.1.5 ambient-worker None HBONE
+default notsleep-5c785bc478-zpg7j 10.244.2.7 ambient-worker2 None HBONE
+default productpage-v1-87d54dd59-fn6vw 10.244.1.10 ambient-worker None HBONE
+default ratings-v1-7c4bbf97db-zvkdw 10.244.1.6 ambient-worker None HBONE
+default reviews-v1-5fd6d4f8f8-knbht 10.244.1.16 ambient-worker None HBONE
+default reviews-v2-6f9b55c5db-c94m2 10.244.1.17 ambient-worker None HBONE
+default reviews-v3-7d99fd7978-7rgtd 10.244.1.18 ambient-worker None HBONE
+default sleep-7656cf8794-r7zb9 10.244.1.12 ambient-worker None HBONE
+istio-system istiod-7ff4959459-qcpvp 10.244.2.5 ambient-worker2 None TCP
+istio-system ztunnel-6hvcw 10.244.1.4 ambient-worker None TCP
+istio-system ztunnel-mf476 10.244.2.6 ambient-worker2 None TCP
+istio-system ztunnel-vqzf9 10.244.0.6 ambient-control-plane None TCP
+kube-system coredns-76f75df574-2sms2 10.244.0.3 ambient-control-plane None TCP
+kube-system coredns-76f75df574-5bf9c 10.244.0.2 ambient-control-plane None TCP
+local-path-storage local-path-provisioner-7577fdbbfb-pslg6 10.244.0.4 ambient-control-plane None TCP
+
+{{< /text >}}
+
+The `ztunnel-config` command can be used to view the secrets holding the TLS certificates that the ztunnel proxy has received from the istiod control plane to use for mTLS.
+
+{{< text bash >}}
+$ istioctl x ztunnel-config certificates "$ZTUNNEL".istio-system
+CERTIFICATE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
+spiffe://cluster.local/ns/default/sa/bookinfo-details Leaf Available true c198d859ee51556d0eae13b331b0c259 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
+spiffe://cluster.local/ns/default/sa/bookinfo-details Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+spiffe://cluster.local/ns/default/sa/bookinfo-productpage Leaf Available true 64c3828993c7df6f85a601a1615532cc 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
+spiffe://cluster.local/ns/default/sa/bookinfo-productpage Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+spiffe://cluster.local/ns/default/sa/bookinfo-ratings Leaf Available true 720479815bf6d81a05df8a64f384ebb0 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
+spiffe://cluster.local/ns/default/sa/bookinfo-ratings Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+spiffe://cluster.local/ns/default/sa/bookinfo-reviews Leaf Available true 285697fb2cf806852d3293298e300c86 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
+spiffe://cluster.local/ns/default/sa/bookinfo-reviews Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+spiffe://cluster.local/ns/default/sa/sleep Leaf Available true fa33bbb783553a1704866842586e4c0b 2024-05-05T09:25:49Z 2024-05-04T09:23:49Z
+spiffe://cluster.local/ns/default/sa/sleep Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+{{< /text >}}
+
+Using these commands, you can check that ztunnel proxies are configured with all the expected workloads and TLS certificates. If any of this information is missing, it is a useful starting point for troubleshooting networking errors.
+
+You may use the `all` option to view all parts of the ztunnel-config with a single CLI command:
+
+{{< text bash >}}
+$ istioctl x ztunnel-config all -o json
+{{< /text >}}
+
+You can also view the raw configuration dump of a ztunnel proxy via a `curl` to an endpoint inside its pod:
+
+{{< text bash >}}
+$ kubectl debug -it $ZTUNNEL -n istio-system --image=curlimages/curl -- curl localhost:15000/config_dump
+{{< /text >}}
+
+## Viewing Istiod state for ztunnel xDS resources
+
+Sometimes you may wish to view the state of ztunnel proxy configuration resources as maintained in the istiod control plane, in the format of the xDS API resources defined specially for ztunnel proxies. This can be done by exec-ing into the istiod pod and obtaining this information from port 15014 for a given ztunnel proxy, as shown in the example below. This output can then be saved and viewed with a JSON pretty-print formatter utility for easier browsing (not shown in the example).
+
+{{< text bash >}}
+$ export ISTIOD=$(kubectl get pods -n istio-system -l app=istiod -o=jsonpath='{.items[0].metadata.name}')
+$ kubectl debug -it $ISTIOD -n istio-system --image=curlimages/curl -- curl localhost:15014/debug/config_dump?proxyID="$ZTUNNEL".istio-system
+{{< /text >}}
+
+## Verifying ztunnel traffic through logs
+
+ztunnel's traffic logs can be queried using the standard Kubernetes log facilities.
+
+{{< text bash >}}
+$ kubectl -n default exec deploy/sleep -- sh -c 'for i in $(seq 1 10); do curl -s -I http://productpage:9080/; done'
+HTTP/1.1 200 OK
+Server: Werkzeug/3.0.1 Python/3.12.1
+--snip--
+{{< /text >}}
+
+The response displayed confirms the client pod receives responses from the service. You can now check the logs of the ztunnel pods to confirm the traffic was sent over the HBONE tunnel.
+
+{{< text bash >}}
+$ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
+2024-05-04T09:59:05.028709Z info access connection complete src.addr=10.244.1.12:60059 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.10:9080 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=175 bytes_recv=80 duration="1ms"
+2024-05-04T09:59:05.028771Z info access connection complete src.addr=10.244.1.12:58508 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.10:15008 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=80 bytes_recv=175 duration="1ms"
+--snip--
+{{< /text >}}
+
+These log messages confirm the traffic was sent via the ztunnel proxy. Additional fine-grained monitoring can be done by checking the logs on the specific ztunnel proxy instances that are on the same nodes as the source and destination pods of the traffic. If these logs are not seen, then [traffic redirection](/docs/ambient/architecture/traffic-redirection) may not be working correctly.
+
+{{< tip >}}
+Traffic always traverses the ztunnel pod, even when the source and destination of the traffic are on the same compute node.
+{{< /tip >}}
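When scanning these access logs, it can help to pull out individual `key="value"` fields programmatically. This is a plain-shell sketch over a captured log line (abridged from the sample output above); it is not an Istio tool, just standard `sed`:

```shell
# An abridged ztunnel access log line, taken from the sample output above
line='2024-05-04T09:59:05.028771Z info access connection complete src.workload="sleep-7656cf8794-r7zb9" dst.service="productpage.default.svc.cluster.local" direction="outbound"'

# Pull out the fields that identify the connection
src=$(printf '%s' "$line" | sed -n 's/.*src\.workload="\([^"]*\)".*/\1/p')
dst=$(printf '%s' "$line" | sed -n 's/.*dst\.service="\([^"]*\)".*/\1/p')
dir=$(printf '%s' "$line" | sed -n 's/.*direction="\([^"]*\)".*/\1/p')

echo "$src -> $dst ($dir)"
```

In a real cluster you would feed `kubectl -n istio-system logs -l app=ztunnel` into the same extraction.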
+
+### Verifying ztunnel load balancing
+
+The ztunnel proxy automatically performs client-side load balancing if the destination is a service with multiple endpoints. No additional configuration is needed. The load balancing algorithm is an internally fixed L4 round-robin algorithm that distributes traffic based on L4 connection state, and is not user-configurable.
+
+{{< tip >}}
+If the destination is a service with multiple instances or pods, and there is no waypoint associated with the destination service, then the source ztunnel performs L4 load balancing directly across these instances or service backends, and then sends traffic via the remote ztunnel proxies associated with those backends. If the destination service is configured to use one or more waypoint proxies, then the source ztunnel proxy performs load balancing by distributing traffic across these waypoint proxies, and sends traffic via the remote ztunnel proxies on the nodes hosting the waypoint proxy instances.
+{{< /tip >}}
+
+By calling a service with multiple backends, we can validate that client traffic is balanced across the service replicas.
+
+{{< text bash >}}
+$ kubectl -n default exec deploy/sleep -- sh -c 'for i in $(seq 1 10); do curl -s -I http://reviews:9080/; done'
+{{< /text >}}
+
+{{< text bash >}}
+$ kubectl -n istio-system logs -l app=ztunnel | grep -E "outbound"
+--snip--
+2024-05-04T10:11:04.964851Z info access connection complete src.addr=10.244.1.12:35520 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.969578Z info access connection complete src.addr=10.244.1.12:35526 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.974720Z info access connection complete src.addr=10.244.1.12:35536 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.7:15008 dst.hbone_addr="10.244.1.7:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-5fd6d4f8f8-26j92" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.979462Z info access connection complete src.addr=10.244.1.12:35552 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.8:15008 dst.hbone_addr="10.244.1.8:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-6f9b55c5db-c2dtw" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+{{< /text >}}
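To see the round-robin spread at a glance, you can tally the `dst.workload` field across the outbound log lines. This sketch runs on pasted sample fragments; against a real cluster you would pipe the `kubectl logs` output into the same `grep | sort | uniq -c` pipeline:

```shell
# Three sample outbound log fragments (abridged from the output above)
logs='dst.workload="reviews-v3-7d99fd7978-zznnq" direction="outbound"
dst.workload="reviews-v1-5fd6d4f8f8-26j92" direction="outbound"
dst.workload="reviews-v3-7d99fd7978-zznnq" direction="outbound"'

# Count connections per backend pod
printf '%s\n' "$logs" | grep -o 'dst\.workload="[^"]*"' | sort | uniq -c
```

An even count across backends over many requests is what round-robin distribution should produce.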
+
+This is a round-robin load balancing algorithm, and is separate from and independent of any load balancing algorithm that may be configured in a `DestinationRule`'s `trafficPolicy` field, since, as discussed previously, all aspects of the higher-level traffic management APIs are instantiated on the waypoint proxies and not the ztunnel proxies.
+
+### Observability of ambient mode traffic
+
+In addition to checking ztunnel logs and the other monitoring options noted above, you can also use the normal Istio monitoring and telemetry functions to monitor application traffic using the ambient data plane mode.
+
+* [Prometheus installation](/docs/ops/integrations/prometheus/#installation)
+* [Kiali installation](/docs/ops/integrations/kiali/#installation)
+* [Istio metrics](/docs/reference/config/metrics/)
+* [Querying Metrics from Prometheus](/docs/tasks/observability/metrics/querying-metrics/)
+
+If a service is only using the secure overlay provided by ztunnel, the Istio metrics reported will only be the L4 TCP metrics (namely `istio_tcp_sent_bytes_total`, `istio_tcp_received_bytes_total`, `istio_tcp_connections_opened_total`, `istio_tcp_connections_closed_total`). The full set of Istio and Envoy metrics will be reported if a waypoint proxy is used.
@ -0,0 +1,71 @@
|
|||
---
|
||||
title: Enable policy in ambient mode
|
||||
description: The two enforcement points for policy in an ambient mesh.
|
||||
weight: 20
|
||||
owner: istio/wg-networking-maintainers
|
||||
test: no
|
||||
---
|
||||
|
||||
The ztunnel proxy performs authorization policy enforcement when a workload is enrolled in secure overlay mode (i.e. with no waypoint proxy configured).
|
||||
The actual enforcement point is at the receiving (or server-side) ztunnel proxy in the path of a connection.
|
||||
|
||||
## Layer 4 authorization policies
|
||||
|
||||
A basic L4 authorization policy looks like this:
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: security.istio.io/v1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: allow-sleep-to-httpbin
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: httpbin
|
||||
action: ALLOW
|
||||
rules:
|
||||
- from:
|
||||
- source:
|
||||
principals:
|
||||
- cluster.local/ns/ambient-demo/sa/sleep
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
The behavior of the `AuthorizationPolicy` API has the same functional behavior in Istio ambient mode as in sidecar mode. When there is no `AuthorizationPolicy` provisioned, then the default action is `ALLOW`. Once a policy is provisioned, pods matching the selector in the policy only allow traffic which is explicitly allowed. In this example, pods with the label `app:httpbin` only allow traffic from sources with an identity principal of `cluster.local/ns/ambient-demo/sa/sleep`. Traffic from all other sources will be denied.

## Layer 7 authorization policies without waypoints installed

{{< warning >}}
If an `AuthorizationPolicy` that requires any traffic processing beyond L4 has been configured, and no waypoint proxies are configured for the destination of the traffic, the ztunnel proxy will drop all traffic as a defensive measure. Check to ensure that either all rules involve L4 processing only, or, if non-L4 rules are unavoidable, that waypoint proxies are configured.
{{< /warning >}}

This example adds a check for the HTTP GET method.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/ambient-demo/sa/sleep
    to:
    - operation:
        methods: ["GET"]
{{< /text >}}

Even though the identity of the pod is otherwise correct, the presence of an L7 rule causes the ztunnel to deny the connection:

{{< text plain >}}
command terminated with exit code 56
{{< /text >}}

You can also confirm, by viewing the logs of specific ztunnel proxy pods (not shown here), that it is always the ztunnel proxy on the node hosting the destination pod that enforces the policy.
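The defensive drop described above can be modeled in a few lines. This is a hedged sketch of the documented behavior, not ztunnel source code; the field names and helper are invented for illustration.

```python
# Attributes a ztunnel can evaluate by itself (L4 only).
L4_FIELDS = {"principals", "namespaces", "ip_blocks", "ports"}

def ztunnel_allows(rules, conn, waypoint_configured=False):
    """Model: a ztunnel denies everything if any rule needs L7 processing
    and no waypoint exists to enforce it."""
    if waypoint_configured:
        return True  # L7 rules would be enforced by the waypoint instead
    for rule in rules:
        if set(rule) - L4_FIELDS:
            return False  # rule needs L7 processing: drop defensively
    return any(conn.get("principal") in r.get("principals", []) for r in rules)

conn = {"principal": "cluster.local/ns/ambient-demo/sa/sleep"}
l4_rule = {"principals": [conn["principal"]]}
l7_rule = {"principals": [conn["principal"]], "methods": ["GET"]}

print(ztunnel_allows([l4_rule], conn))  # True: pure L4 rule, identity matches
print(ztunnel_allows([l7_rule], conn))  # False: L7 condition, no waypoint
```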

@@ -1,7 +1,7 @@
 ---
-title: Layer 7 Networking & Services with Waypoint Proxies
-description: Gain the full set of Istio feature with optional waypoint proxies.
-weight: 2
+title: Configure waypoint proxies
+description: Gain the full set of Istio features with optional Layer 7 proxies.
+weight: 10
 aliases:
 - /docs/ops/ambient/usage/waypoint
 - /latest/docs/ops/ambient/usage/waypoint
@@ -1,534 +0,0 @@
---
title: Layer 4 Networking & mTLS with Ztunnel
description: Understand and manage Istio's "zero-trust tunnel" proxy.
weight: 2
aliases:
- /docs/ops/ambient/usage/ztunnel
- /latest/docs/ops/ambient/usage/ztunnel
owner: istio/wg-networking-maintainers
test: no
---

## Introduction {#introsection}

This guide describes in depth the functionality and usage of the ztunnel proxy and Layer 4 networking functions in Istio's ambient mode. To simply try out ambient mode, follow the [ambient quickstart](/docs/ambient/getting-started/) instead. This guide follows a user journey, working through multiple examples to detail the design and architecture of Istio ambient. It is highly recommended to follow the topics linked below in sequence.

* [Introduction](#introsection)
* [Deploying an Application](#deployapplication)
* [Monitoring the ztunnel proxy & L4 networking](#monitoringzt)
* [L4 Authorization Policy](#l4auth)
* [Ambient Interoperability with non-Ambient endpoints](#interop)

The ztunnel (Zero Trust Tunnel) component is a purpose-built per-node proxy for the Istio ambient mesh. Since workload pods no longer require sidecar proxies in order to participate in the mesh, Istio in ambient mode is informally also referred to as a "sidecar-less" mesh.

{{< tip >}}
Pods and workloads using sidecar proxies can co-exist, and interoperate, within the same mesh as pods that operate in ambient mode. The term ambient mesh refers to an Istio mesh that has a superset of the capabilities, and hence can support mesh pods that use either type of proxying.
{{< /tip >}}

The ztunnel node proxy is responsible for securely connecting and authenticating workloads within the ambient mesh. The ztunnel proxy is written in Rust and is intentionally scoped to handle L3 and L4 functions in the ambient mesh, such as mTLS, authentication, L4 authorization and telemetry. Ztunnel does not terminate workload HTTP traffic or parse workload HTTP headers. The ztunnel ensures L3 and L4 traffic is efficiently and securely transported to **waypoint proxies**, where the full suite of Istio's L7 functionality, such as HTTP telemetry and load balancing, is implemented. The term "secure overlay networking" is used informally to collectively describe the set of L4 networking functions implemented in an ambient mesh via the ztunnel proxy. At the transport layer, this is implemented via an HTTP CONNECT-based traffic tunneling protocol called [HBONE](/docs/ambient/architecture/hbone).

Some use cases of Istio in ambient mode may be addressed solely via the L4 secure overlay networking features, and will not need L7 features, thereby not requiring deployment of a waypoint proxy. Other use cases, requiring advanced traffic management and L7 networking features, will require deployment of a waypoint proxy. This guide focuses on functionality related to the L4 secure overlay network using ztunnel proxies, and refers to L7 only when needed to describe some L4 ztunnel function. Other guides cover the advanced L7 networking functions and the use of waypoint proxies in detail.

| Application Deployment Use Case | Istio Ambient Mesh Configuration |
| ------------- | ------------- |
| Zero-trust networking via mutual TLS, encrypted and tunneled data transport of client application traffic, L4 authorization, L4 telemetry | Baseline ambient mesh with ztunnel proxy networking |
| Application requires L4 mutual TLS plus advanced Istio traffic management features (including `VirtualService`, L7 telemetry, L7 authorization) | Full ambient mesh configuration, with both ztunnel proxy and waypoint proxy based networking |

### Environment used for this guide

The examples in this guide use a deployment of Istio version `1.21.0` on a `kind` cluster of version `0.20.0` running Kubernetes version `1.27.3`.

The examples below require a cluster with more than one worker node, in order to explain how cross-node traffic operates. Refer to the [installation user guide](/docs/ambient/install/) or [getting started guide](/docs/ambient/getting-started/) for information on installing Istio in ambient mode on a Kubernetes cluster.

For details on the design of the ambient {{< gloss >}}data plane{{< /gloss >}}, and how it interacts with the Istio {{< gloss >}}control plane{{< /gloss >}}, see the [data plane](/docs/ambient/architecture/data-plane) and [control plane](/docs/ambient/architecture/control-plane) documentation.

## Deploying an Application {#deployapplication}

Normally, a user with Istio admin privileges will deploy the Istio mesh infrastructure. Once Istio is successfully deployed in ambient mode, it will be transparently available to applications deployed by all users in namespaces that have been labeled to use ambient mode, as illustrated in the examples below.

### Basic application deployment without ambient

First, deploy a simple HTTP client/server application without making it part of the ambient mesh. Execute the following examples from the top of a local Istio repository, or an Istio folder created by downloading the istioctl client as described in the Istio guides.

{{< text bash >}}
$ kubectl create ns ambient-demo
$ kubectl apply -f samples/httpbin/httpbin.yaml -n ambient-demo
$ kubectl apply -f samples/sleep/sleep.yaml -n ambient-demo
$ kubectl apply -f samples/sleep/notsleep.yaml -n ambient-demo
$ kubectl scale deployment sleep --replicas=2 -n ambient-demo
{{< /text >}}

These manifests deploy the `httpbin` service along with replicas of the `sleep` and `notsleep` pods, which will be used as clients for the `httpbin` service pod (for simplicity, command-line output has been omitted from the samples above).

{{< text bash >}}
$ kubectl wait -n ambient-demo --for=condition=ready pod --selector=app=httpbin --timeout=90s
pod/httpbin-648cd984f8-7vg8w condition met
{{< /text >}}

{{< text bash >}}
$ kubectl get pods -n ambient-demo
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-648cd984f8-7vg8w   1/1     Running   0          31m
notsleep-bb6696574-2tbzn   1/1     Running   0          31m
sleep-69cfb4968f-mhccl     1/1     Running   0          31m
sleep-69cfb4968f-rhhhp     1/1     Running   0          31m
{{< /text >}}

{{< text bash >}}
$ kubectl get svc httpbin -n ambient-demo
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
httpbin   ClusterIP   10.110.145.219   <none>        8000/TCP   28m
{{< /text >}}

Note that each application pod has just one container running in it (the "1/1" indicator) and that `httpbin` is an HTTP service listening on `ClusterIP` service port 8000. You should now be able to `curl` this service from either client pod and confirm it returns the `httpbin` web page, as shown below. At this point there is no TLS of any form being used.

{{< text bash >}}
$ kubectl exec deploy/sleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}

### Enabling ambient for an application

You can now enable ambient mode for the application deployed in the prior subsection by simply adding the label `istio.io/dataplane-mode=ambient` to the application's namespace, as shown below. Note that this example focuses on a fresh namespace with new, sidecar-less workloads captured via ambient mode only. Later sections describe how conflicts are resolved in hybrid scenarios that mix sidecar mode and ambient mode within the same mesh.

{{< text bash >}}
$ kubectl label namespace ambient-demo istio.io/dataplane-mode=ambient
$ kubectl get pods -n ambient-demo
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-648cd984f8-7vg8w   1/1     Running   0          78m
notsleep-bb6696574-2tbzn   1/1     Running   0          77m
sleep-69cfb4968f-mhccl     1/1     Running   0          78m
sleep-69cfb4968f-rhhhp     1/1     Running   0          78m
{{< /text >}}

Note that after ambient mode is enabled for the namespace, each application pod still has only one container, and the uptime of these pods indicates they were not restarted in order to enable ambient mode (unlike sidecar mode, which requires pods to be restarted when the sidecar proxies are injected). This results in a better user experience and operational performance, since ambient mode can be seamlessly enabled (or disabled) completely transparently as far as the application pods are concerned.

Initiate a `curl` request again from one of the client pods to the service to verify that traffic continues to flow while ambient mode is enabled.

{{< text bash >}}
$ kubectl exec deploy/sleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}

This indicates the traffic path is working. The next section looks at how to monitor the configuration and data plane of the ztunnel proxy to confirm that traffic is correctly using the ztunnel proxy.

## Monitoring the ztunnel proxy & L4 networking {#monitoringzt}

This section describes some options for monitoring the ztunnel proxy configuration and datapath. This information can also help with high-level troubleshooting, and with identifying information that would be useful to collect and provide in a bug report. Additional advanced monitoring of ztunnel internals, and advanced troubleshooting, are out of scope for this guide.

### Viewing ztunnel proxy state

As indicated previously, the ztunnel proxy on each node gets configuration and discovery information from the istiod component via xDS APIs. Use the `istioctl proxy-config` command shown below to view the workloads discovered by a ztunnel proxy, as well as the secrets holding the TLS certificates that the ztunnel proxy has received from the istiod control plane to use in mTLS signaling on behalf of the local workloads.

In the first example, you see all the workloads and control plane components that the specific ztunnel pod is currently tracking, including information about the IP address and protocol to use when connecting to that component, and whether there is a waypoint proxy associated with that workload. This example can be repeated with any of the other ztunnel pods in the system to display their current configuration.

{{< text bash >}}
$ export ZTUNNEL=$(kubectl get pods -n istio-system -o wide | grep ztunnel -m 1 | sed 's/ .*//')
$ echo "$ZTUNNEL"
{{< /text >}}

{{< text bash >}}
$ istioctl proxy-config workloads "$ZTUNNEL".istio-system
NAME                                     NAMESPACE            IP           NODE                 WAYPOINT   PROTOCOL
coredns-6d4b75cb6d-ptbhb                 kube-system          10.240.0.2   amb1-control-plane   None       TCP
coredns-6d4b75cb6d-tv5nz                 kube-system          10.240.0.3   amb1-control-plane   None       TCP
httpbin-648cd984f8-2q9bn                 ambient-demo         10.240.1.5   amb1-worker          None       HBONE
httpbin-648cd984f8-7dglb                 ambient-demo         10.240.2.3   amb1-worker2         None       HBONE
istiod-5c7f79574c-pqzgc                  istio-system         10.240.1.2   amb1-worker          None       TCP
local-path-provisioner-9cd9bd544-x7lq2   local-path-storage   10.240.0.4   amb1-control-plane   None       TCP
notsleep-bb6696574-r4xjl                 ambient-demo         10.240.2.5   amb1-worker2         None       HBONE
sleep-69cfb4968f-mwglt                   ambient-demo         10.240.1.4   amb1-worker          None       HBONE
sleep-69cfb4968f-qjmfs                   ambient-demo         10.240.2.4   amb1-worker2         None       HBONE
ztunnel-5jfj2                            istio-system         10.240.0.5   amb1-control-plane   None       TCP
ztunnel-gkldc                            istio-system         10.240.1.3   amb1-worker          None       TCP
ztunnel-xxbgj                            istio-system         10.240.2.2   amb1-worker2         None       TCP
{{< /text >}}

In the second example, you see the list of TLS certificates that this ztunnel proxy instance has received from istiod to use in TLS signaling.

{{< text bash >}}
$ istioctl proxy-config secrets "$ZTUNNEL".istio-system
NAME                                                USE          STATUS      VALID CERT   SERIAL NUMBER                      NOT AFTER              NOT BEFORE
spiffe://cluster.local/ns/ambient-demo/sa/httpbin   CA           Available   true         edf7f040f4b4d0b75a1c9a97a9b13545   2023-09-20T19:02:00Z   2023-09-19T19:00:00Z
spiffe://cluster.local/ns/ambient-demo/sa/httpbin   Cert Chain   Available   true         ec30e0e1b7105e3dce4425b5255287c6   2033-09-16T18:26:19Z   2023-09-19T18:26:19Z
spiffe://cluster.local/ns/ambient-demo/sa/sleep     CA           Available   true         3b9dbea3b0b63e56786a5ea170995f48   2023-09-20T19:00:44Z   2023-09-19T18:58:44Z
spiffe://cluster.local/ns/ambient-demo/sa/sleep     Cert Chain   Available   true         ec30e0e1b7105e3dce4425b5255287c6   2033-09-16T18:26:19Z   2023-09-19T18:26:19Z
spiffe://cluster.local/ns/istio-system/sa/istiod    CA           Available   true         885ee63c08ef9f1afd258973a45c8255   2023-09-20T18:26:34Z   2023-09-19T18:24:34Z
spiffe://cluster.local/ns/istio-system/sa/istiod    Cert Chain   Available   true         ec30e0e1b7105e3dce4425b5255287c6   2033-09-16T18:26:19Z   2023-09-19T18:26:19Z
spiffe://cluster.local/ns/istio-system/sa/ztunnel   CA           Available   true         221b4cdc4487b60d08e94dc30a0451c6   2023-09-20T18:26:35Z   2023-09-19T18:24:35Z
spiffe://cluster.local/ns/istio-system/sa/ztunnel   Cert Chain   Available   true         ec30e0e1b7105e3dce4425b5255287c6   2033-09-16T18:26:19Z   2023-09-19T18:26:19Z
{{< /text >}}

Using these CLI commands, you can check that ztunnel proxies are configured with all the expected workloads and TLS certificates; missing information can be used to troubleshoot any observed networking errors. You can also use the `all` option to view all parts of the proxy-config with a single CLI command, together with the JSON output formatter, as shown in the example below, to display the complete set of available state information.

{{< text bash >}}
$ istioctl proxy-config all "$ZTUNNEL".istio-system -o json | jq
{{< /text >}}

Note that when used with a ztunnel proxy instance, not all options of the `istioctl proxy-config` CLI are supported, since some apply only to sidecar proxies.

An advanced user may also view the raw configuration dump of a ztunnel proxy via a `curl` to an endpoint inside the ztunnel proxy pod, as shown in the following example.

{{< text bash >}}
$ kubectl exec ds/ztunnel -n istio-system -- curl http://localhost:15000/config_dump | jq .
{{< /text >}}

### Viewing Istiod state for ztunnel xDS resources

Sometimes an advanced user may want to view the state of ztunnel proxy config resources as maintained in the istiod control plane, in the format of the xDS API resources defined specially for ztunnel proxies. This can be done by exec-ing into the istiod pod and obtaining this information from port 15014 for a given ztunnel proxy, as shown in the example below. This output can then be saved and viewed with a JSON pretty-print formatter utility for easier browsing (not shown in the example).

{{< text bash >}}
$ kubectl exec -n istio-system deploy/istiod -- curl localhost:15014/debug/config_dump?proxyID="$ZTUNNEL".istio-system | jq
{{< /text >}}

### Verifying ztunnel traffic logs

Send some traffic from a client `sleep` pod to the `httpbin` service.

{{< text bash >}}
$ kubectl -n ambient-demo exec deploy/sleep -- sh -c "for i in $(seq 1 10); do curl -s -I http://httpbin:8000/; done"
HTTP/1.1 200 OK
Server: gunicorn/19.9.0
--snip--
{{< /text >}}

The response displayed confirms the client pod receives responses from the service. Now check the logs of the ztunnel pods to confirm the traffic was sent over the HBONE tunnel.

{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
2023-08-14T09:15:46.542651Z INFO outbound{id=7d344076d398339f1e51a74803d6c854}: ztunnel::proxy::outbound: proxying to 10.240.2.10:80 using node local fast path
2023-08-14T09:15:46.542882Z INFO outbound{id=7d344076d398339f1e51a74803d6c854}: ztunnel::proxy::outbound: complete dur=269.272µs
--snip--
{{< /text >}}

These log messages confirm the traffic used the ztunnel proxy in the datapath. Additional fine-grained monitoring can be done by checking logs on the specific ztunnel proxy instances that are on the same nodes as the source and destination pods of the traffic. If these logs are not seen, it is possible that traffic redirection is not working correctly. Detailed description of monitoring and troubleshooting of the traffic redirection logic is out of scope for this guide. Note that, as mentioned previously, in ambient mode traffic always traverses a ztunnel proxy, even when the source and destination of the traffic are on the same compute node.

### Monitoring and telemetry via Prometheus, Grafana, Kiali

In addition to checking ztunnel logs and the other monitoring options noted above, you can also use the normal Istio monitoring and telemetry functions to monitor application traffic within an ambient mesh. Since this functionality is largely unchanged in ambient mode from sidecar mode, those details are not repeated in this guide. Please refer to:

* [Prometheus installation](/docs/ops/integrations/prometheus/#installation)
* [Kiali installation](/docs/ops/integrations/kiali/#installation)
* [Istio metrics](/docs/reference/config/metrics/)
* [Querying Metrics from Prometheus](/docs/tasks/observability/metrics/querying-metrics/)

One point to note is that, for a service that is only using ztunnel and L4 networking, the Istio metrics reported will currently only be the L4 TCP metrics (namely `istio_tcp_sent_bytes_total`, `istio_tcp_received_bytes_total`, `istio_tcp_connections_opened_total`, `istio_tcp_connections_closed_total`). The full set of Istio and Envoy metrics will be reported when a waypoint proxy is involved.

### Verifying ztunnel load balancing

The ztunnel proxy automatically performs client-side load balancing if the destination is a service with multiple endpoints. No additional configuration is needed. The ztunnel load balancing algorithm is an internally fixed L4 round-robin algorithm that distributes traffic based on L4 connection state, and is not user configurable.

{{< tip >}}
If the destination is a service with multiple instances or pods, and there is no waypoint associated with the destination service, then the source ztunnel proxy performs L4 load balancing directly across these instances or service backends, and sends traffic via the remote ztunnel proxies associated with those backends. If the destination service does have a waypoint deployment (with one or more backend instances of the waypoint proxy) associated with it, then the source ztunnel proxy performs load balancing by distributing traffic across these waypoint proxies, and sends traffic via the remote ztunnel proxies associated with the waypoint proxy instances.
{{< /tip >}}
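The client-side round-robin selection described above can be sketched as follows. This is an assumed model of the documented behavior for illustration, not ztunnel source code; the endpoint addresses are examples.

```python
from itertools import cycle

def round_robin(endpoints):
    """Yield backend endpoints in a fixed rotation (round robin)."""
    return cycle(endpoints)

# Two backend pods for the same service, as in the scaled httpbin example.
backends = ["10.240.1.11:80", "10.240.2.10:80"]
picker = round_robin(backends)

# Four consecutive connections alternate between the two backends.
chosen = [next(picker) for _ in range(4)]
print(chosen)  # ['10.240.1.11:80', '10.240.2.10:80', '10.240.1.11:80', '10.240.2.10:80']
```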

Now repeat the previous example with multiple replicas of the service pod, and verify that client traffic is load balanced across the service replicas. Wait for all pods in the `ambient-demo` namespace to reach the `Running` state before continuing to the next step.

{{< text bash >}}
$ kubectl -n ambient-demo scale deployment httpbin --replicas=2 ; kubectl wait --for condition=available deployment/httpbin -n ambient-demo
deployment.apps/httpbin scaled
deployment.apps/httpbin condition met
{{< /text >}}

{{< text bash >}}
$ kubectl -n ambient-demo exec deploy/sleep -- sh -c "for i in $(seq 1 10); do curl -s -I http://httpbin:8000/; done"
{{< /text >}}

{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
--snip--
2023-08-14T09:33:24.969996Z INFO inbound{id=ec177a563e4899869359422b5cdd1df4 peer_ip=10.240.2.16 peer_id=spiffe://cluster.local/ns/ambient-demo/sa/sleep}: ztunnel::proxy::inbound: got CONNECT request to 10.240.1.11:80
2023-08-14T09:33:25.028601Z INFO inbound{id=1ebc3c7384ee68942bbb7c7ed866b3d9 peer_ip=10.240.2.16 peer_id=spiffe://cluster.local/ns/ambient-demo/sa/sleep}: ztunnel::proxy::inbound: got CONNECT request to 10.240.1.11:80
--snip--
2023-08-14T09:33:25.226403Z INFO outbound{id=9d99723a61c9496532d34acec5c77126}: ztunnel::proxy::outbound: proxy to 10.240.1.11:80 using HBONE via 10.240.1.11:15008 type Direct
2023-08-14T09:33:25.273268Z INFO outbound{id=9d99723a61c9496532d34acec5c77126}: ztunnel::proxy::outbound: complete dur=46.9099ms
2023-08-14T09:33:25.276519Z INFO outbound{id=cc87b4de5ec2ccced642e22422ca6207}: ztunnel::proxy::outbound: proxying to 10.240.2.10:80 using node local fast path
2023-08-14T09:33:25.276716Z INFO outbound{id=cc87b4de5ec2ccced642e22422ca6207}: ztunnel::proxy::outbound: complete dur=231.892µs
--snip--
{{< /text >}}

Here, note the logs from the ztunnel proxies first indicating the HTTP CONNECT request to the new destination pod (10.240.1.11), which indicates the setup of the HBONE tunnel to the ztunnel on the node hosting the additional destination service pod. This is followed by logs indicating the client traffic being sent to both 10.240.1.11 and 10.240.2.10, the two destination pods providing the service. Also note that the datapath is performing client-side load balancing in this case, and not depending on Kubernetes service load balancing. In your setup these addresses will be different, and will match the pod addresses of the `httpbin` pods in your cluster.

This is a round-robin load balancing algorithm, and is separate from, and independent of, any load balancing algorithm that may be configured within a `VirtualService`'s `TrafficPolicy` field, since, as discussed previously, all aspects of `VirtualService` API objects are instantiated on the waypoint proxies and not the ztunnel proxies.

### Pod selection logic for ambient and sidecar modes

Istio with sidecar proxies can co-exist with ambient node-level proxies within the same compute cluster. It is important to ensure that the same pod or namespace does not get configured to use both a sidecar proxy and an ambient node-level proxy. However, if this does occur, sidecar injection currently takes precedence for such a pod or namespace.

Note that two pods within the same namespace could, in theory, be set to use different modes by labeling individual pods separately from the namespace label; however, this is not recommended. For most common use cases, a single mode should be used for all pods within a single namespace.

The exact logic to determine whether a pod is set up to use ambient mode is as follows:

1. The `istio-cni` plugin configuration exclude list configured in `cni.values.excludeNamespaces` is used to skip namespaces in the exclude list.
1. Ambient mode is used for a pod if:

    * The namespace or pod has the label `istio.io/dataplane-mode=ambient`
    * The pod does not have the opt-out label `istio.io/dataplane-mode=none`
    * The annotation `sidecar.istio.io/status` is not present on the pod

The simplest option to avoid a configuration conflict is to ensure that each namespace has either the label for sidecar injection (`istio-injection=enabled`) or the label for ambient data plane mode (`istio.io/dataplane-mode=ambient`), but never both.
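The selection rules above can be condensed into a small decision function. This is a hypothetical helper written for illustration only; the parameter names are invented, and only the label and annotation keys come from the rules above.

```python
def uses_ambient(namespace, namespace_labels, pod_labels, pod_annotations,
                 excluded_namespaces=()):
    """Sketch of the ambient-mode pod selection rules (illustrative only)."""
    # 1. The istio-cni exclude list (cni.values.excludeNamespaces) is honored first.
    if namespace in excluded_namespaces:
        return False
    # An explicit pod-level opt-out label wins over a namespace label.
    if pod_labels.get("istio.io/dataplane-mode") == "none":
        return False
    # A pod that already has an injected sidecar is not captured.
    if "sidecar.istio.io/status" in pod_annotations:
        return False
    # Otherwise a pod label takes precedence over the namespace label.
    mode = pod_labels.get("istio.io/dataplane-mode",
                          namespace_labels.get("istio.io/dataplane-mode"))
    return mode == "ambient"

print(uses_ambient("ambient-demo",
                   {"istio.io/dataplane-mode": "ambient"}, {}, {}))  # True
print(uses_ambient("ambient-demo",
                   {"istio.io/dataplane-mode": "ambient"}, {},
                   {"sidecar.istio.io/status": "{}"}))               # False
```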

## L4 Authorization Policy {#l4auth}

As mentioned previously, the ztunnel proxy performs authorization policy enforcement when the policy requires only L4 traffic processing and there are no waypoints involved. The actual enforcement point is the receiving (or server-side) ztunnel proxy in the path of a connection.

Apply a basic L4 authorization policy for the already deployed `httpbin` application, as shown in the example below.

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/ambient-demo/sa/sleep
EOF
{{< /text >}}

The `AuthorizationPolicy` API behaves functionally the same in ambient mode as in sidecar mode. When no `AuthorizationPolicy` is provisioned, the default action is `ALLOW`. Once the policy above is provisioned, pods matching the selector in the policy (`app: httpbin`) only allow traffic that is explicitly allowed: in this case, sources with the principal (i.e. identity) `cluster.local/ns/ambient-demo/sa/sleep`. As shown below, a `curl` to the `httpbin` service from the `sleep` pods still works, but the same operation is blocked when initiated from the `notsleep` pods.

Note that this policy performs an explicit `ALLOW` action on traffic from sources with the principal `cluster.local/ns/ambient-demo/sa/sleep`, and hence traffic from all other sources will be denied.

{{< text bash >}}
$ kubectl exec deploy/sleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}

{{< text bash >}}
$ kubectl exec deploy/notsleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
command terminated with exit code 56
{{< /text >}}

Note that there are no waypoint proxies deployed, yet this `AuthorizationPolicy` is enforced. This is because the policy only requires L4 traffic processing, which can be performed by the ztunnel proxies. These policy actions can be further confirmed by checking ztunnel logs for entries that indicate RBAC actions, as shown in the following example.

{{< text bash >}}
$ kubectl logs ds/ztunnel -n istio-system | grep -E RBAC
-- snip --
2023-10-10T23:14:00.534962Z INFO inbound{id=cc493da5e89877489a786fd3886bd2cf peer_ip=10.240.2.2 peer_id=spiffe://cluster.local/ns/ambient-demo/sa/notsleep}: ztunnel::proxy::inbound: RBAC rejected conn=10.240.2.2(spiffe://cluster.local/ns/ambient-demo/sa/notsleep)->10.240.1.2:80
2023-10-10T23:15:33.339867Z INFO inbound{id=4c4de8de802befa5da58a165a25ff88a peer_ip=10.240.2.2 peer_id=spiffe://cluster.local/ns/ambient-demo/sa/notsleep}: ztunnel::proxy::inbound: RBAC rejected conn=10.240.2.2(spiffe://cluster.local/ns/ambient-demo/sa/notsleep)->10.240.1.2:80
{{< /text >}}

{{< warning >}}
If an `AuthorizationPolicy` that requires any traffic processing beyond L4 has been configured, and no waypoint proxies are configured for the destination of the traffic, the ztunnel proxy will drop all traffic as a defensive measure. Check to ensure that either all rules involve L4 processing only, or, if non-L4 rules are unavoidable, that waypoint proxies are also configured to handle policy enforcement.
{{< /warning >}}

As an example, modify the `AuthorizationPolicy` to include a check for the HTTP GET method, as shown below. Notice that both the `sleep` and `notsleep` pods are now blocked from sending traffic to the destination `httpbin` service.

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/ambient-demo/sa/sleep
    to:
    - operation:
        methods: ["GET"]
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl exec deploy/sleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
command terminated with exit code 56
{{< /text >}}

{{< text bash >}}
$ kubectl exec deploy/notsleep -n ambient-demo -- curl httpbin:8000 -s | grep title -m 1
command terminated with exit code 56
{{< /text >}}

You can also confirm, by viewing the logs of specific ztunnel proxy pods (not shown here), that it is always the ztunnel proxy on the node hosting the destination pod that enforces the policy.

Delete this `AuthorizationPolicy` before continuing with the rest of the examples in the guide.

{{< text bash >}}
$ kubectl delete AuthorizationPolicy allow-sleep-to-httpbin -n ambient-demo
{{< /text >}}
|
||||
|
||||
## Ambient Interoperability with non-ambient endpoints {#interop}
|
||||
|
||||
In the use cases so far, the traffic source and destination pods are both ambient pods. This section covers some mixed use cases where ambient endpoints need to communicate with non-ambient endpoints. As with prior examples in this guide, this section covers use cases that do not require waypoint proxies.
|
||||
|
||||
1. [East-West non-mesh pod to ambient mesh pod (and use of `PeerAuthentication` resource)](#ewnonmesh)
|
||||
1. [East-West Istio sidecar proxy pod to ambient mesh pod](#ewside2ambient)
|
||||
1. [North-South Ingress Gateway to ambient backend pods](#nsingress2ambient)
|
||||
|
||||
### East-West non-mesh pod to ambient mesh pod (and use of PeerAuthentication resource) {#ewnonmesh}
|
||||
|
||||
In the example below, the same `httpbin` service set up in the prior examples is accessed by client `sleep` pods running in a separate namespace that is not part of the mesh. This example shows that east-west traffic between ambient mesh pods and non-mesh pods is seamlessly supported. The non-mesh pods initiate traffic directly to the destination pods without going through a source ztunnel, while the destination ztunnel enforces any L4 policy to control whether the traffic should be allowed or denied.

{{< text bash >}}
$ kubectl create namespace client-a
$ kubectl apply -f samples/sleep/sleep.yaml -n client-a
$ kubectl wait --for condition=available deployment/sleep -n client-a
{{< /text >}}

Wait for the pods in the `client-a` namespace to reach the `Running` state before continuing.

{{< text bash >}}
$ kubectl exec deploy/sleep -n client-a -- curl httpbin.ambient-demo.svc.cluster.local:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}
Now add a `PeerAuthentication` resource with mTLS mode set to `STRICT` in the ambient namespace, as shown below, and confirm that the same client's traffic is now rejected. This happens because the client uses plain HTTP to connect to the server, rather than an HBONE tunnel with mTLS. This method can be used to prevent non-Istio sources from sending traffic to Istio ambient pods.

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: peerauth
spec:
  mtls:
    mode: STRICT
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl exec deploy/sleep -n client-a -- curl httpbin.ambient-demo.svc.cluster.local:8000 -s | grep title -m 1
command terminated with exit code 56
{{< /text >}}

Change the mTLS mode to `PERMISSIVE` and confirm that the ambient pods can once again accept non-mTLS connections, including, in this case, from non-mesh pods.

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: peerauth
spec:
  mtls:
    mode: PERMISSIVE
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl exec deploy/sleep -n client-a -- curl httpbin.ambient-demo.svc.cluster.local:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}
### East-West Istio sidecar proxy pod to ambient mesh pod {#ewside2ambient}

This use case demonstrates seamless east-west traffic interoperability between an Istio pod using a sidecar proxy and an ambient pod within the same mesh.

The same `httpbin` service from the previous example is used, but this time the client accessing the service runs in a namespace that is labeled for sidecar injection. This also works automatically and transparently, as shown in the example below: the sidecar proxy running with the client knows to use the HBONE protocol, because the destination has been discovered to be an HBONE destination. The user does not need any special configuration to enable this.

{{< tip >}}
For sidecar proxies to use the HBONE/mTLS signaling option when communicating with ambient destinations, they need to be configured with `ISTIO_META_ENABLE_HBONE` set to `true` in the proxy metadata. This is set by default in the `MeshConfig` when using the `ambient` profile, so you do not need to do anything additional when using this profile.
{{< /tip >}}
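If you install Istio with a different profile, this setting can be enabled globally through `meshConfig` instead. A minimal sketch, assuming installation via the `IstioOperator` API (adapt to your install method):

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Allow sidecar proxies to originate HBONE tunnels to ambient destinations
        ISTIO_META_ENABLE_HBONE: "true"
{{< /text >}}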
{{< text bash >}}
$ kubectl create ns client-b
$ kubectl label namespace client-b istio-injection=enabled
$ kubectl apply -f samples/sleep/sleep.yaml -n client-b
$ kubectl wait --for condition=available deployment/sleep -n client-b
{{< /text >}}

Wait for the pods in the `client-b` namespace to reach the `Running` state before continuing.

{{< text bash >}}
$ kubectl exec deploy/sleep -n client-b -- curl httpbin.ambient-demo.svc.cluster.local:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}

Again, you can verify from the logs of the ztunnel pod on the destination node (not shown here) that traffic does in fact use the HBONE and CONNECT based path from the sidecar-proxy-based source client pod to the ambient destination pod. Additionally, unlike in the previous subsection, even if you apply a `PeerAuthentication` resource with mTLS mode set to `STRICT` to the namespace labeled for ambient mode, communication between the client and service pods continues, since both use the HBONE control and data planes secured by mTLS.
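For instance, you can re-apply the `STRICT` `PeerAuthentication` resource from the earlier example and confirm that the request from `client-b` still succeeds, because this client connects over mTLS. Remember to set the mode back to `PERMISSIVE`, or delete the resource, before continuing:

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: peerauth
spec:
  mtls:
    mode: STRICT
EOF
$ kubectl exec deploy/sleep -n client-b -- curl httpbin.ambient-demo.svc.cluster.local:8000 -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}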
### North-South Ingress Gateway to ambient backend pods {#nsingress2ambient}

This section describes a use case for north-south traffic, with an Istio gateway exposing the `httpbin` service via the Kubernetes Gateway API. The gateway itself runs in a non-ambient namespace, and may be an existing gateway that also exposes services provided by non-ambient pods. This example therefore shows that ambient workloads can interoperate with Istio gateways that do not themselves run in namespaces labeled for ambient mode.

This example uses `metallb` to provide a load balancer service on an IP address that is reachable from outside the cluster; the same approach works with other forms of north-south load balancing. The example assumes that you have already installed `metallb` in this cluster, including a pool of IP addresses for `metallb` to use when exposing services externally. Refer to the [`metallb` guide for kind](https://kind.sigs.k8s.io/docs/user/loadbalancer/) for instructions on setting up `metallb` on kind clusters, or to the [`metallb` documentation](https://metallb.universe.tf/installation/) for instructions appropriate to your environment.
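As one illustration, on a kind cluster whose Docker network is `172.18.0.0/16`, a `metallb` address pool consistent with the external IP used later in this example might look like the following. The subnet and address range here are assumptions; adapt them to your environment:

{{< text yaml >}}
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
  # Assumed range; must lie within your cluster's reachable network
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool
{{< /text >}}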
This example uses the Kubernetes Gateway API to configure the north-south gateway. Since this API is not currently provided by default in Kubernetes and kind distributions, you must first install the API CRDs, as shown below.

An instance of `Gateway`, using the Kubernetes Gateway API CRDs, is then deployed to leverage this `metallb` load balancer service. The `Gateway` instance runs in the `istio-system` namespace in this example, representing an existing gateway running in a non-ambient namespace. Finally, an `HTTPRoute` is provisioned with a backend reference pointing to the existing `httpbin` service running on an ambient pod in the `ambient-demo` namespace.
{{< text bash >}}
$ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.6.1" | kubectl apply -f -; }
{{< /text >}}

{{< tip >}}
{{< boilerplate gateway-api-future >}}
{{< boilerplate gateway-api-choose >}}
{{< /tip >}}

{{< text bash >}}
$ kubectl apply -f - << EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: istio-system
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl apply -n ambient-demo -f - << EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
spec:
  parentRefs:
  - name: httpbin-gateway
    namespace: istio-system
  rules:
  - backendRefs:
    - name: httpbin
      port: 8000
EOF
{{< /text >}}

Next, find the external IP address on which the gateway is listening, and access the `httpbin` service on this IP address (`172.18.255.200` in the example below) from outside the cluster:

{{< text bash >}}
$ kubectl get service httpbin-gateway-istio -n istio-system
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                        AGE
httpbin-gateway-istio   LoadBalancer   10.110.30.25   172.18.255.200   15021:32272/TCP,80:30159/TCP   121m
{{< /text >}}

{{< text bash >}}
$ export INGRESS_HOST=$(kubectl -n istio-system get service httpbin-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "$INGRESS_HOST"
172.18.255.200
{{< /text >}}

{{< text bash >}}
$ curl "$INGRESS_HOST" -s | grep title -m 1
<title>httpbin.org</title>
{{< /text >}}
These examples illustrate multiple options for interoperability between ambient pods and non-ambient endpoints, which can be either Kubernetes application pods or Istio gateway pods, using both Istio native gateways and Kubernetes Gateway API instances. Interoperability is also supported between Istio ambient pods and Istio egress gateways, as well as scenarios where the ambient pods run the client side of an application while the service side runs outside the mesh, or on a mesh pod that uses the sidecar proxy mode. Users thus have multiple options for seamlessly integrating ambient and non-ambient workloads within the same Istio mesh, allowing for a phased introduction of ambient capability as best suits the needs of their mesh deployments and operations.