Sync replace "sleep" to "curl" part 1 into Chinese (#15792)

Wilson Wu 2024-10-12 12:56:47 +08:00 committed by GitHub
parent 7fd29679fa
commit 61789039f1
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
30 changed files with 227 additions and 230 deletions


@@ -9,18 +9,18 @@
则将启用 Egress Gateway 和访问日志。
{{< /tip >}}
-* 将 [sleep]({{< github_tree >}}/samples/sleep) 示例应用程序部署为发送请求的测试源。
+* 将 [curl]({{< github_tree >}}/samples/curl) 示例应用程序部署为发送请求的测试源。
如果您启用了[自动 Sidecar 注入](/zh/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection),
运行以下命令部署示例应用程序:
{{< text bash >}}
-$ kubectl apply -f @samples/sleep/sleep.yaml@
+$ kubectl apply -f @samples/curl/curl.yaml@
{{< /text >}}
-否则,在使用以下命令部署 `sleep` 应用程序之前,手动注入 Sidecar:
+否则,在使用以下命令部署 `curl` 应用程序之前,手动注入 Sidecar:
{{< text bash >}}
-$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@)
+$ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@)
{{< /text >}}
{{< tip >}}
@@ -30,5 +30,5 @@
* 为了发送请求,您需要创建 `SOURCE_POD` 环境变量来存储源 Pod 的名称:
{{< text bash >}}
-$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
+$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
{{< /text >}}
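As a side note (not part of the original diff), the `SOURCE_POD` extraction above silently produces an empty value when no pod matches the `app=curl` label. A minimal guard, using the pod name from this document's sample output in place of a live `kubectl` query:

```shell
# Guard against an empty SOURCE_POD before sending requests.
# The pod name below stands in for the real query:
#   $(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
SOURCE_POD="curl-7656cf8794-r7zb9"
if [ -n "$SOURCE_POD" ]; then
  echo "using source pod: $SOURCE_POD"
else
  echo "no pod matched app=curl" >&2
fi
```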


@@ -107,7 +107,7 @@ inpod_mark: 1337
按照以下步骤确认端口 15001、15006 和 15008 上的套接字已打开并处于侦听状态。
{{< text bash >}}
-$ kubectl debug $(kubectl get pod -l app=sleep -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it -n ambient-demo --image nicolaka/netshoot -- ss -ntlp
+$ kubectl debug $(kubectl get pod -l app=curl -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it -n ambient-demo --image nicolaka/netshoot -- ss -ntlp
Defaulting debug container name to debugger-nhd4d.
State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
LISTEN 0 128 127.0.0.1:15080 0.0.0.0:*
@@ -121,7 +121,7 @@ LISTEN 0 128 *:15008 *:*
要查看应用程序中一个 Pod 内的 iptables 规则设置,请执行以下命令:
{{< text bash >}}
-$ kubectl debug $(kubectl get pod -l app=sleep -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it --image gcr.io/istio-release/base --profile=netadmin -n ambient-demo -- iptables-save
+$ kubectl debug $(kubectl get pod -l app=curl -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it --image gcr.io/istio-release/base --profile=netadmin -n ambient-demo -- iptables-save
Defaulting debug container name to debugger-m44qc.
# 由 iptables-save 生成


@@ -37,12 +37,12 @@ $ kubectl delete namespace istio-system
## 删除示例应用程序 {#remove-the-sample-application}
-要删除 Bookinfo 示例应用程序和 `sleep` 部署,请运行以下命令:
+要删除 Bookinfo 示例应用程序和 `curl` 部署,请运行以下命令:
{{< text bash >}}
-$ kubectl delete -f {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml
-$ kubectl delete -f {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo-versions.yaml
-$ kubectl delete -f {{< github_file >}}/samples/sleep/sleep.yaml
+$ kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
+$ kubectl delete -f samples/bookinfo/platform/kube/bookinfo-versions.yaml
+$ kubectl delete -f samples/curl/curl.yaml
{{< /text >}}
## 删除 Kubernetes Gateway API CRD {#remove-the-kubernetes-gateway-api-crds}


@@ -40,16 +40,16 @@ EOF
如果您在浏览器中打开 Bookinfo 应用程序(`http://localhost:8080/productpage`),
如之前一样,您将看到产品页面。但是,如果您尝试从不同的服务帐户访问 `productpage` 服务,则会看到错误。
-让我们尝试从 `sleep` Pod 访问 Bookinfo 应用程序:
+让我们尝试从 `curl` Pod 访问 Bookinfo 应用程序:
-{{< text syntax=bash snip_id=deploy_sleep >}}
-$ kubectl apply -f {{< github_file >}}/samples/sleep/sleep.yaml
+{{< text syntax=bash snip_id=deploy_curl >}}
+$ kubectl apply -f samples/curl/curl.yaml
{{< /text >}}
-由于 `sleep` Pod 使用不同的服务账户,它无法访问 `productpage` 服务:
+由于 `curl` Pod 使用不同的服务账户,它无法访问 `productpage` 服务:
{{< text bash >}}
-$ kubectl exec deploy/sleep -- curl -s "http://productpage:9080/productpage"
+$ kubectl exec deploy/curl -- curl -s "http://productpage:9080/productpage"
command terminated with exit code 56
{{< /text >}}
@@ -72,7 +72,7 @@ NAME CLASS ADDRESS PROGRAMMED AGE
waypoint istio-waypoint 10.96.58.95 True 42s
{{< /text >}}
-添加 [L7 鉴权策略](/zh/docs/ambient/usage/l7-features/)将明确允许 `sleep` 服务向
+添加 [L7 鉴权策略](/zh/docs/ambient/usage/l7-features/)将明确允许 `curl` 服务向
`productpage` 服务发送 `GET` 请求,但不能执行其他操作:
{{< text syntax=bash snip_id=deploy_l7_policy >}}
@@ -92,7 +92,7 @@ spec:
  - from:
    - source:
        principals:
-        - cluster.local/ns/default/sa/sleep
+        - cluster.local/ns/default/sa/curl
    to:
    - operation:
        methods: ["GET"]
@@ -110,7 +110,7 @@ EOF
{{< text bash >}}
$ # This fails with an RBAC error because we're not using a GET operation
-$ kubectl exec deploy/sleep -- curl -s "http://productpage:9080/productpage" -X DELETE
+$ kubectl exec deploy/curl -- curl -s "http://productpage:9080/productpage" -X DELETE
RBAC: access denied
{{< /text >}}
@@ -121,8 +121,8 @@ RBAC: access denied
{{< /text >}}
{{< text bash >}}
-$ # This works as we're explicitly allowing GET requests from the sleep pod
-$ kubectl exec deploy/sleep -- curl -s http://productpage:9080/productpage | grep -o "<title>.*</title>"
+$ # This works as we're explicitly allowing GET requests from the curl pod
+$ kubectl exec deploy/curl -- curl -s http://productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
{{< /text >}}


@@ -41,7 +41,7 @@ EOF
为了确认 100 个请求的流量中大约 10% 流向 `reviews-v2`,您可以运行以下命令:
{{< text syntax=bash snip_id=test_traffic_split >}}
-$ kubectl exec deploy/sleep -- sh -c "for i in \$(seq 1 100); do curl -s http://productpage:9080/productpage | grep reviews-v.-; done"
+$ kubectl exec deploy/curl -- sh -c "for i in \$(seq 1 100); do curl -s http://productpage:9080/productpage | grep reviews-v.-; done"
{{< /text >}}
您会注意到大多数请求都发往 `reviews-v1`。如果您在浏览器中打开 Bookinfo 应用程序并多次刷新页面,


@@ -20,10 +20,10 @@ Wasm 可扩展性的一个主要优势是可以在运行时动态加载扩展插
1. 参照 [Ambient 模式入门指南](/zh/docs/ambient/getting-started)中的指示说明设置 Istio。
1. 部署 [Bookinfo 示例应用](/zh/docs/ambient/getting-started/deploy-sample-app)。
1. [将 default 命名空间添加到 Ambient 网格](/zh/docs/ambient/getting-started/secure-and-visualize)。
-1. 部署 [sleep]({{< github_tree >}}/samples/sleep) 样例应用,用作发送请求的测试源。
+1. 部署 [curl]({{< github_tree >}}/samples/curl) 样例应用,用作发送请求的测试源。
{{< text syntax=bash >}}
-$ kubectl apply -f @samples/sleep/sleep.yaml@
+$ kubectl apply -f @samples/curl/curl.yaml@
{{< /text >}}
## 在网关处 {#at-a-gateway}
@@ -83,14 +83,14 @@ Istio 代理将解释 WasmPlugin 配置,从 OCI 镜像仓库下载远程 Wasm
1. 在没有凭据的情况下测试 `/productpage`:
{{< text syntax=bash snip_id=test_gateway_productpage_without_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
401
{{< /text >}}
1. 使用 WasmPlugin 资源中配置的凭据来测试 `/productpage`
{{< text syntax=bash snip_id=test_gateway_productpage_with_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" -w "%{http_code}" "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
+$ kubectl exec deploy/curl -- curl -s -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" -w "%{http_code}" "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
200
{{< /text >}}
@@ -111,7 +111,7 @@ $ istioctl waypoint apply --enroll-namespace --wait
验证到达服务的流量:
{{< text syntax=bash snip_id=verify_traffic >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
200
{{< /text >}}
@@ -166,14 +166,14 @@ basic-auth-at-waypoint 14m
1. 在不含凭据的情况下测试内部 `/productpage`:
{{< text syntax=bash snip_id=test_waypoint_productpage_without_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
401
{{< /text >}}
1. 在有凭据的情况下测试内部 `/productpage`
{{< text syntax=bash snip_id=test_waypoint_productpage_with_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
200
{{< /text >}}
@@ -216,21 +216,21 @@ EOF
1. 使用通用 `waypoint` 代理处配置的凭据来测试内部 `/productpage`:
{{< text syntax=bash snip_id=test_waypoint_service_productpage_with_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
200
{{< /text >}}
1. 使用特定的 `reviews-svc-waypoint` 代理处配置的凭据来测试内部 `/reviews`
{{< text syntax=bash snip_id=test_waypoint_service_reviews_with_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic MXQtaW4zOmFkbWluMw==" http://reviews:9080/reviews/1
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic MXQtaW4zOmFkbWluMw==" http://reviews:9080/reviews/1
200
{{< /text >}}
1. 在没有凭据的情况下测试内部 `/reviews`
{{< text syntax=bash snip_id=test_waypoint_service_reviews_without_credentials >}}
-$ kubectl exec deploy/sleep -- curl -s -w "%{http_code}" -o /dev/null http://reviews:9080/reviews/1
+$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://reviews:9080/reviews/1
401
{{< /text >}}


@@ -27,7 +27,7 @@ ztunnel 代理可以强制执行鉴权策略。强制执行点是在连接路径
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
-  name: allow-sleep-to-httpbin
+  name: allow-curl-to-httpbin
spec:
  selector:
    matchLabels:
@@ -37,7 +37,7 @@ spec:
  - from:
    - source:
        principals:
-        - cluster.local/ns/ambient-demo/sa/sleep
+        - cluster.local/ns/ambient-demo/sa/curl
{{< /text >}}
此策略既可用于 {{< gloss "sidecar" >}}Sidecar 模式{{< /gloss >}},也能用于 Ambient 模式。
@@ -45,7 +45,7 @@ spec:
Istio `AuthorizationPolicy` API 的四层(TCP)特性在 Ambient 模式中的行为与在 Sidecar 模式中的行为相同。
当没有配置鉴权策略时,默认的操作是 `ALLOW`。一旦配置了某个策略,此策略指向的目标 Pod 只允许显式允许的流量。
在上述示例中,带有 `app: httpbin` 标签的 Pod 只允许源自身份主体为
-`cluster.local/ns/ambient-demo/sa/sleep` 的流量。来自所有其他源的流量都将被拒绝。
+`cluster.local/ns/ambient-demo/sa/curl` 的流量。来自所有其他源的流量都将被拒绝。
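The principal strings used in these policies follow a fixed layout derived from the workload's SPIFFE identity: `<trust-domain>/ns/<namespace>/sa/<service-account>`. A small shell sketch (not part of the original page) that splits the example principal into its parts:

```shell
# Split an Istio principal of the form <trust-domain>/ns/<ns>/sa/<sa>.
principal="cluster.local/ns/ambient-demo/sa/curl"
ns=$(echo "$principal" | cut -d/ -f3)   # 3rd '/'-separated field: namespace
sa=$(echo "$principal" | cut -d/ -f5)   # 5th field: service account
echo "namespace=$ns serviceaccount=$sa"
```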
## 目标指向策略 {#targeting-policies}
@@ -88,7 +88,7 @@ ztunnel 无法强制执行 L7 策略。如果一个策略中的规则与 L7 属
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
-  name: allow-sleep-to-httpbin
+  name: allow-curl-to-httpbin
spec:
  selector:
    matchLabels:
@@ -98,7 +98,7 @@ spec:
  - from:
    - source:
        principals:
-        - cluster.local/ns/ambient-demo/sa/sleep
+        - cluster.local/ns/ambient-demo/sa/curl
    to:
    - operation:
        methods: ["GET"]


@@ -10,17 +10,17 @@ test: no
## 流量路由或安全策略问题 {#problems-with-traffic-routing-or-security-policy}
-通过 `productpage` 服务将一些请求从 `sleep` Pod 发送到 `reviews` 服务:
+通过 `productpage` 服务将一些请求从 `curl` Pod 发送到 `reviews` 服务:
{{< text bash >}}
-$ kubectl exec deploy/sleep -- curl -s http://productpage:9080/productpage
+$ kubectl exec deploy/curl -- curl -s http://productpage:9080/productpage
{{< /text >}}
-将一些请求从 `sleep` Pod 发送到 `reviews` `v2` Pod:
+将一些请求从 `curl` Pod 发送到 `reviews` `v2` Pod:
{{< text bash >}}
$ export REVIEWS_V2_POD_IP=$(kubectl get pod -l version=v2,app=reviews -o jsonpath='{.items[0].status.podIP}')
-$ kubectl exec deploy/sleep -- curl -s http://$REVIEWS_V2_POD_IP:9080/reviews/1
+$ kubectl exec deploy/curl -- curl -s http://$REVIEWS_V2_POD_IP:9080/reviews/1
{{< /text >}}
`reviews` 服务的请求应由 `reviews-svc-waypoint` 强制执行所有 L7 策略。
@@ -49,7 +49,6 @@ $ kubectl exec deploy/sleep -- curl -s http://$REVIEWS_V2_POD_IP:9080/reviews/1
default bookinfo-gateway-istio 10.43.164.194 waypoint
default details 10.43.160.119 waypoint
default kubernetes 10.43.0.1 waypoint
-default notsleep 10.43.156.147 waypoint
default productpage 10.43.172.254 waypoint
default ratings 10.43.71.236 waypoint
default reviews 10.43.162.105 reviews-svc-waypoint
@@ -67,7 +66,6 @@ $ kubectl exec deploy/sleep -- curl -s http://$REVIEWS_V2_POD_IP:9080/reviews/1
NAMESPACE POD NAME IP NODE WAYPOINT PROTOCOL
default bookinfo-gateway-istio-7c57fc4647-wjqvm 10.42.2.8 k3d-k3s-default-server-0 None TCP
default details-v1-698d88b-wwsnv 10.42.2.4 k3d-k3s-default-server-0 None HBONE
-default notsleep-685df55c6c-nwhs6 10.42.0.9 k3d-k3s-default-agent-0 None HBONE
default productpage-v1-675fc69cf-fp65z 10.42.2.6 k3d-k3s-default-server-0 None HBONE
default ratings-v1-6484c4d9bb-crjtt 10.42.0.4 k3d-k3s-default-agent-0 None HBONE
default reviews-svc-waypoint-c49f9f569-b492t 10.42.2.10 k3d-k3s-default-server-0 None TCP


@@ -23,13 +23,12 @@ $ istioctl ztunnel-config workloads
NAMESPACE POD NAME IP NODE WAYPOINT PROTOCOL
default bookinfo-gateway-istio-59dd7c96db-q9k6v 10.244.1.11 ambient-worker None TCP
default details-v1-cf74bb974-5sqkp 10.244.1.5 ambient-worker None HBONE
-default notsleep-5c785bc478-zpg7j 10.244.2.7 ambient-worker2 None HBONE
default productpage-v1-87d54dd59-fn6vw 10.244.1.10 ambient-worker None HBONE
default ratings-v1-7c4bbf97db-zvkdw 10.244.1.6 ambient-worker None HBONE
default reviews-v1-5fd6d4f8f8-knbht 10.244.1.16 ambient-worker None HBONE
default reviews-v2-6f9b55c5db-c94m2 10.244.1.17 ambient-worker None HBONE
default reviews-v3-7d99fd7978-7rgtd 10.244.1.18 ambient-worker None HBONE
-default sleep-7656cf8794-r7zb9 10.244.1.12 ambient-worker None HBONE
+default curl-7656cf8794-r7zb9 10.244.1.12 ambient-worker None HBONE
istio-system istiod-7ff4959459-qcpvp 10.244.2.5 ambient-worker2 None TCP
istio-system ztunnel-6hvcw 10.244.1.4 ambient-worker None TCP
istio-system ztunnel-mf476 10.244.2.6 ambient-worker2 None TCP
@@ -54,8 +53,8 @@ spiffe://cluster.local/ns/default/sa/bookinfo-ratings Leaf Available
spiffe://cluster.local/ns/default/sa/bookinfo-ratings Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews Leaf Available true 285697fb2cf806852d3293298e300c86 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
-spiffe://cluster.local/ns/default/sa/sleep Leaf Available true fa33bbb783553a1704866842586e4c0b 2024-05-05T09:25:49Z 2024-05-04T09:23:49Z
-spiffe://cluster.local/ns/default/sa/sleep Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
+spiffe://cluster.local/ns/default/sa/curl Leaf Available true fa33bbb783553a1704866842586e4c0b 2024-05-05T09:25:49Z 2024-05-04T09:23:49Z
+spiffe://cluster.local/ns/default/sa/curl Root Available true bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
{{< /text >}}
使用这些命令,您可以检查 ztunnel 代理是否配置了所有预期的工作负载和 TLS 证书。
@@ -90,7 +89,7 @@ $ kubectl debug -it $ISTIOD -n istio-system --image=curlimages/curl -- curl loca
可以使用标准 Kubernetes 日志工具来查询 ztunnel 的流量日志。
{{< text bash >}}
-$ kubectl -n default exec deploy/sleep -- sh -c 'for i in $(seq 1 10); do curl -s -I http://productpage:9080/; done'
+$ kubectl -n default exec deploy/curl -- sh -c 'for i in $(seq 1 10); do curl -s -I http://productpage:9080/; done'
HTTP/1.1 200 OK
Server: Werkzeug/3.0.1 Python/3.12.1
--snip--
@@ -101,8 +100,8 @@ Server: Werkzeug/3.0.1 Python/3.12.1
{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
-2024-05-04T09:59:05.028709Z info access connection complete src.addr=10.244.1.12:60059 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.10:9080 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=175 bytes_recv=80 duration="1ms"
-2024-05-04T09:59:05.028771Z info access connection complete src.addr=10.244.1.12:58508 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.10:15008 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=80 bytes_recv=175 duration="1ms"
+2024-05-04T09:59:05.028709Z info access connection complete src.addr=10.244.1.12:60059 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.10:9080 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=175 bytes_recv=80 duration="1ms"
+2024-05-04T09:59:05.028771Z info access connection complete src.addr=10.244.1.12:58508 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.10:15008 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=80 bytes_recv=175 duration="1ms"
--snip--
{{< /text >}}
@@ -131,16 +130,16 @@ $ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
通过调用具有多个后端的服务,我们可以验证客户端流量在服务副本之间是否平衡。
{{< text bash >}}
-$ kubectl -n default exec deploy/sleep -- sh -c 'for i in $(seq 1 10); do curl -s -I http://reviews:9080/; done'
+$ kubectl -n default exec deploy/curl -- sh -c 'for i in $(seq 1 10); do curl -s -I http://reviews:9080/; done'
{{< /text >}}
{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "outbound"
--snip--
-2024-05-04T10:11:04.964851Z info access connection complete src.addr=10.244.1.12:35520 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
-2024-05-04T10:11:04.969578Z info access connection complete src.addr=10.244.1.12:35526 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
-2024-05-04T10:11:04.974720Z info access connection complete src.addr=10.244.1.12:35536 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.7:15008 dst.hbone_addr="10.244.1.7:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-5fd6d4f8f8-26j92" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
-2024-05-04T10:11:04.979462Z info access connection complete src.addr=10.244.1.12:35552 src.workload="sleep-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.1.8:15008 dst.hbone_addr="10.244.1.8:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-6f9b55c5db-c2dtw" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.964851Z info access connection complete src.addr=10.244.1.12:35520 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.969578Z info access connection complete src.addr=10.244.1.12:35526 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.974720Z info access connection complete src.addr=10.244.1.12:35536 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.7:15008 dst.hbone_addr="10.244.1.7:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-5fd6d4f8f8-26j92" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
+2024-05-04T10:11:04.979462Z info access connection complete src.addr=10.244.1.12:35552 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.8:15008 dst.hbone_addr="10.244.1.8:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-6f9b55c5db-c2dtw" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
{{< /text >}}
这是一种轮询负载均衡算法,并且独立于可以在 `VirtualService``TrafficPolicy`
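As a quick sanity check (not part of the original page), the destination workloads in log lines like those above can be tallied to see how requests spread across the `reviews` replicas; the sketch below reuses abridged `dst.workload` values from the sample output:

```shell
# Tally dst.workload values from abridged ztunnel access-log lines.
log='dst.workload="reviews-v3-7d99fd7978-zznnq"
dst.workload="reviews-v3-7d99fd7978-zznnq"
dst.workload="reviews-v1-5fd6d4f8f8-26j92"
dst.workload="reviews-v2-6f9b55c5db-c2dtw"'
# Extract the version prefix and count occurrences per reviews version.
echo "$log" | grep -o 'reviews-v[0-9]' | sort | uniq -c
```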


@@ -51,8 +51,8 @@ istio_tcp_connections_opened_total{
reporter="source",
request_protocol="tcp",
response_flags="-",
-source_app="sleep",
-source_principal="spiffe://cluster.local/ns/default/sa/sleep",source_workload_namespace="default",
+source_app="curl",
+source_principal="spiffe://cluster.local/ns/default/sa/curl",source_workload_namespace="default",
...}
{{< /text >}}
@@ -61,11 +61,11 @@ istio_tcp_connections_opened_total{
## 基于日志校验 mTLS {#validate-mtls-from-logs}
您还可以结合对等身份来查看源或目标 ztunnel 日志以确认 mTLS 是否已启用。
-以下是从 `sleep` 服务到 `details` 服务请求的源 ztunnel 的日志示例:
+以下是从 `curl` 服务到 `details` 服务请求的源 ztunnel 的日志示例:
{{< text syntax=plain >}}
-2024-08-21T15:32:05.754291Z info access connection complete src.addr=10.42.0.9:33772 src.workload="sleep-7656cf8794-6lsm4" src.namespace="default"
-src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.42.0.5:15008 dst.hbone_addr=10.42.0.5:9080 dst.service="details.default.svc.cluster.local"
+2024-08-21T15:32:05.754291Z info access connection complete src.addr=10.42.0.9:33772 src.workload="curl-7656cf8794-6lsm4" src.namespace="default"
+src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.42.0.5:15008 dst.hbone_addr=10.42.0.5:9080 dst.service="details.default.svc.cluster.local"
dst.workload="details-v1-857849f66-ft8wx" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details"
direction="outbound" bytes_sent=84 bytes_recv=358 duration="15ms"
{{< /text >}}
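To make that identity check mechanical (a sketch, not from the original doc), the SPIFFE IDs can be pulled out of such a log line with `grep`; both a `src.identity` and a `dst.identity` being present indicates the connection was mutually authenticated:

```shell
# Extract SPIFFE identities from an abridged ztunnel access-log line.
line='src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.42.0.5:15008 dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details"'
ids=$(echo "$line" | grep -o 'spiffe://[^"]*')
echo "$ids"
# Two identities on one connection record: both peers were authenticated.
[ "$(echo "$ids" | wc -l)" -eq 2 ] && echo "both peers presented an identity"
```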


@@ -477,7 +477,7 @@ Istio 按以下顺序检查层中的匹配策略:`CUSTOM`、`DENY`
- `rules` 下的 `to` 字段指定请求的操作
- `rules` 下的 `when` 字段指定应用规则所需的条件
-以下示例显示了一个授权策略,该策略允许两个源(服务帐户 `cluster.local/ns/default/sa/sleep`
+以下示例显示了一个授权策略,该策略允许两个源(服务帐户 `cluster.local/ns/default/sa/curl`
和命名空间 `dev`),在使用有效的 JWT 令牌发送请求时,可以访问命名空间 `foo`
中带有标签 `app: httpbin``version: v1` 的工作负载。
@@ -496,7 +496,7 @@ spec:
  rules:
  - from:
    - source:
-        principals: ["cluster.local/ns/default/sa/sleep"]
+        principals: ["cluster.local/ns/default/sa/curl"]
    - source:
        namespaces: ["dev"]
    to:
@@ -729,7 +729,7 @@ spec:
  rules:
  - from:
    - source:
-        principals: ["cluster.local/ns/default/sa/sleep"]
+        principals: ["cluster.local/ns/default/sa/curl"]
    to:
    - operation:
        methods: ["GET"]


@@ -50,7 +50,7 @@ test: no
reviews-v2-56f6855586-cnrjp 1/1 Running 0 7h
reviews-v2-56f6855586-lxc49 1/1 Running 0 7h
reviews-v2-56f6855586-qh84k 1/1 Running 0 7h
-sleep-88ddbcfdd-cc85s 1/1 Running 0 7h
+curl-88ddbcfdd-cc85s 1/1 Running 0 7h
{{< /text >}}
1. Kubernetes 采取无侵入的和逐步的[滚动更新](https://kubernetes.io/zh-cn/docs/tutorials/kubernetes-basics/update/update-intro/)


@@ -36,14 +36,14 @@ test: no
1. 向 Pod 发送请求并查看它是否返回正确结果:
{{< text bash >}}
-$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl -sS "$REVIEWS_V2_POD_IP:9080/reviews/7"
+$ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- curl -sS "$REVIEWS_V2_POD_IP:9080/reviews/7"
{"id": "7","reviews": [{ "reviewer": "Reviewer1", "text": "An extremely entertaining play by Shakespeare. The slapstick humour is refreshing!", "rating": {"stars": 5, "color": "black"}},{ "reviewer": "Reviewer2", "text": "Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare.", "rating": {"stars": 4, "color": "black"}}]}
{{< /text >}}
1. 连续发送 10 次请求来执行原始负载测试:
{{< text bash >}}
-$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- sh -c "for i in 1 2 3 4 5 6 7 8 9 10; do curl -o /dev/null -s -w '%{http_code}\n' $REVIEWS_V2_POD_IP:9080/reviews/7; done"
+$ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- sh -c "for i in 1 2 3 4 5 6 7 8 9 10; do curl -o /dev/null -s -w '%{http_code}\n' $REVIEWS_V2_POD_IP:9080/reviews/7; done"
200
200
...


@@ -88,17 +88,17 @@ test: no
reviews-v1-77c65dc5c6-r55tl 1/1 Running 0 49s
{{< /text >}}
-1. 在服务达到 `Running` 状态后,部署一个测试 Pod:[sleep]({{< github_tree >}}/samples/sleep)。
+1. 在服务达到 `Running` 状态后,部署一个测试 Pod:[curl]({{< github_tree >}}/samples/curl)。
此 Pod 用来向您的微服务发送请求:
{{< text bash >}}
-$ kubectl apply -f {{< github_file >}}/samples/sleep/sleep.yaml
+$ kubectl apply -f {{< github_file >}}/samples/curl/curl.yaml
{{< /text >}}
1. 从测试 Pod 中用 curl 命令发送请求给 Bookinfo 应用,以确认该应用运行正常:
{{< text bash >}}
-$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -c sleep -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
+$ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -c curl -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
{{< /text >}}


@@ -49,7 +49,7 @@ test: no
productpage-v1-59b4f9f8d5-d4prx 2/2 Running 0 2m
ratings-v1-b7b7fbbc9-sggxf 2/2 Running 0 2m
reviews-v2-dfbcf859c-27dvk 2/2 Running 0 2m
-sleep-88ddbcfdd-cc85s 1/1 Running 0 7h
+curl-88ddbcfdd-cc85s 1/1 Running 0 7h
{{< /text >}}
1. 通过您[之前](/zh/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file)在


@@ -15,7 +15,7 @@ test: no
1. 从测试 pod 中向服务之一发起 HTTP 请求:
{{< text bash >}}
-$ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl http://ratings:9080/ratings/7
+$ kubectl exec -it $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- curl http://ratings:9080/ratings/7
{{< /text >}}
## 混沌测试 {#chaos-testing}
@@ -47,7 +47,7 @@ test: no
reviews-v1-77c65dc5c6-5wt8g 1/1 Running 0 47m
reviews-v1-77c65dc5c6-kjvxs 1/1 Running 0 48m
reviews-v1-77c65dc5c6-r55tl 1/1 Running 0 47m
-sleep-88ddbcfdd-l9zq4 1/1 Running 0 47m
+curl-88ddbcfdd-l9zq4 1/1 Running 0 47m
{{< /text >}}
请注意第一个 Pod 重启了一次。
@@ -84,7 +84,7 @@ test: no
reviews-v1-77c65dc5c6-5wt8g 1/1 Running 0 48m
reviews-v1-77c65dc5c6-kjvxs 1/1 Running 0 49m
reviews-v1-77c65dc5c6-r55tl 1/1 Running 0 48m
-sleep-88ddbcfdd-l9zq4 1/1 Running 0 48m
+curl-88ddbcfdd-l9zq4 1/1 Running 0 48m
{{< /text >}}
第一个 Pod 重启了两次,其它两个 `details` Pod 重启了一次。


@@ -106,11 +106,11 @@ test: n/a
以下标签会覆盖默认策略并强制注入 Sidecar:
{{< text bash yaml >}}
-$ kubectl get deployment sleep -o yaml | grep "sidecar.istio.io/inject:" -B4
+$ kubectl get deployment curl -o yaml | grep "sidecar.istio.io/inject:" -B4
  template:
    metadata:
      labels:
-        app: sleep
+        app: curl
        sidecar.istio.io/inject: "true"
sidecar.istio.io/inject: "true"
{{< /text >}}
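A quick mechanical check for the override (a sketch, not from the original page; the manifest snippet below is hypothetical and mirrors the output above):

```shell
# Check a rendered Deployment template for the injection override label.
manifest='template:
  metadata:
    labels:
      app: curl
      sidecar.istio.io/inject: "true"'
if echo "$manifest" | grep -q 'sidecar.istio.io/inject: "true"'; then
  echo "sidecar injection forced on"
fi
```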
@@ -158,10 +158,10 @@ Pod 创建也会失败。在这种情况下,您可以检查 Pod 的部署状
例如,如果在您尝试部署 Pod 时 `istiod` 控制平面 Pod 没有运行,则事件将显示以下错误:
{{< text bash >}}
-$ kubectl get events -n sleep
+$ kubectl get events -n curl
...
-23m Normal SuccessfulCreate replicaset/sleep-9454cc476 Created pod: sleep-9454cc476-khp45
-22m Warning FailedCreate replicaset/sleep-9454cc476 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp 10.96.44.51:443: connect: connection refused
+23m Normal SuccessfulCreate replicaset/curl-9454cc476 Created pod: curl-9454cc476-khp45
+22m Warning FailedCreate replicaset/curl-9454cc476 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp 10.96.44.51:443: connect: connection refused
{{< /text >}}
{{< text bash >}}


@@ -271,14 +271,14 @@ spec:
Service 定义中的端口名称 `http-web` 为该端口显式指定 http 协议。
-假设在 default 命名空间中也有一个 [sleep]({{< github_tree >}}/samples/sleep)
+假设在 default 命名空间中也有一个 [curl]({{< github_tree >}}/samples/curl)
Pod 和 `Deployment`。当使用 Pod IP(这是访问 Headless Service 的一种常见方式)从这个
-`sleep` Pod 访问 `nginx` 时,请求经由 `PassthroughCluster` 到达服务器侧,
+`curl` Pod 访问 `nginx` 时,请求经由 `PassthroughCluster` 到达服务器侧,
但服务器侧的 Sidecar 代理找不到前往 `nginx` 的路由入口,且出现错误 `HTTP 503 UC`:
{{< text bash >}}
-$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items..metadata.name}')
-$ kubectl exec -it $SOURCE_POD -c sleep -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
+$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
+$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
503
{{< /text >}}
@@ -292,8 +292,8 @@ $ kubectl exec -it $SOURCE_POD -c sleep -- curl 10.1.1.171 -s -o /dev/null -w "%
在指向 `nginx` 的请求中将 Host 头指定为 `nginx.default`,成功返回 `HTTP 200 OK`:
{{< text bash >}}
-$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items..metadata.name}')
-$ kubectl exec -it $SOURCE_POD -c sleep -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
+$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
+$ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
{{< /text >}}
@@ -307,13 +307,13 @@ $ kubectl exec -it $SOURCE_POD -c sleep -- curl 10.1.1.171 -s -o /dev/null -w "%
这可用于客户端无法在请求中包含头信息的某些场景。
{{< text bash >}}
-$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items..metadata.name}')
-$ kubectl exec -it $SOURCE_POD -c sleep -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
+$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
+$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
{{< /text >}}
{{< text bash >}}
-$ kubectl exec -it $SOURCE_POD -c sleep -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
+$ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
200
{{< /text >}}
@ -322,8 +322,8 @@ $ kubectl exec -it $SOURCE_POD -c sleep -- curl 10.1.1.171 -s -o /dev/null -w "%
A specific instance of a headless service can also be accessed using just its domain name.
{{< text bash >}}
$ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c sleep -- curl web-0.nginx.default -s -o /dev/null -w "%{http_code}"
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl web-0.nginx.default -s -o /dev/null -w "%{http_code}"
200
{{< /text >}}
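For background (an illustrative sketch, not from the guide): pods of a StatefulSet behind a headless service get stable per-instance DNS names of the form `<pod>.<service>.<namespace>`, which is why `web-0.nginx.default` is addressable. Generating the names for three assumed replicas:

```shell
#!/bin/sh
# Print the stable DNS names for replicas web-0..web-2 of a
# StatefulSet behind the headless service "nginx" in "default".
service=nginx
namespace=default
for i in 0 1 2; do
  printf 'web-%s.%s.%s\n' "$i" "$service" "$namespace"
done
```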


@ -181,7 +181,7 @@ Istiod 是否按预期在工作:
2021-04-23T20:53:29.507641Z info ads XDS: Pushing:2021-04-23T20:53:29Z/23 Services:15 ConnectedEndpoints:2 Version:2021-04-23T20:53:29Z/23
2021-04-23T20:53:29.507911Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.508077Z debug authorization Processed authorization policy for sleep-557747455f-6dxbl.foo with details:
2021-04-23T20:53:29.508077Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.508128Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions
@ -189,11 +189,11 @@ Istiod 是否按预期在工作:
* built 1 HTTP filters for DENY action
* added 1 HTTP filters to filter chain 0
* added 1 HTTP filters to filter chain 1
2021-04-23T20:53:29.508158Z debug authorization Processed authorization policy for sleep-557747455f-6dxbl.foo with details:
2021-04-23T20:53:29.508158Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions
2021-04-23T20:53:29.509097Z debug authorization Processed authorization policy for sleep-557747455f-6dxbl.foo with details:
2021-04-23T20:53:29.509097Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 CUSTOM actions
2021-04-23T20:53:29.509167Z debug authorization Processed authorization policy for sleep-557747455f-6dxbl.foo with details:
2021-04-23T20:53:29.509167Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:
* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions
2021-04-23T20:53:29.509501Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:
* found 0 CUSTOM actions
@ -208,7 +208,7 @@ Istiod 是否按预期在工作:
* added 1 TCP filters to filter chain 2
* added 1 TCP filters to filter chain 3
* added 1 TCP filters to filter chain 4
2021-04-23T20:53:29.510903Z info ads LDS: PUSH for node:sleep-557747455f-6dxbl.foo resources:18 size:85.0kB
2021-04-23T20:53:29.510903Z info ads LDS: PUSH for node:curl-557747455f-6dxbl.foo resources:18 size:85.0kB
2021-04-23T20:53:29.511487Z info ads LDS: PUSH for node:httpbin-74fb669cc6-lpscm.foo resources:18 size:86.4kB
{{< /text >}}
@ -334,7 +334,7 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
{{< text plain >}}
...
2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/sleep, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
':path', '/headers'
':method', 'GET'
':scheme', 'http'
@ -346,14 +346,14 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
'x-b3-traceid', '8a124905edf4291a21df326729b264e9'
'x-b3-spanid', '21df326729b264e9'
'x-b3-sampled', '0'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/sleep'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl'
, dynamicMetadata: filter_metadata {
key: "istio_authn"
value {
fields {
key: "request.auth.principal"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
@ -365,13 +365,13 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
}
@ -388,7 +388,7 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
{{< text plain >}}
...
2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/sleep, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
':path', '/headers'
':method', 'GET'
':scheme', 'http'
@ -400,14 +400,14 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
'x-b3-traceid', '696607fc4382b50017c1f7017054c751'
'x-b3-spanid', '17c1f7017054c751'
'x-b3-sampled', '0'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/sleep'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl'
, dynamicMetadata: filter_metadata {
key: "istio_authn"
value {
fields {
key: "request.auth.principal"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
@ -419,13 +419,13 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/foo/sa/sleep"
string_value: "cluster.local/ns/foo/sa/curl"
}
}
}
@ -447,7 +447,7 @@ Pilot 负责向代理分发授权策略。下面的步骤用来确认 Pilot 按
If you suspect that some of the keys or certificates used by Istio are incorrect, you can inspect the contents from any Pod:
{{< text bash >}}
$ istioctl proxy-config secret sleep-8f795f47d-4s4t7
$ istioctl proxy-config secret curl-8f795f47d-4s4t7
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 138092480869518152837211547060273851586 2020-11-11T16:39:48Z 2020-11-10T16:39:48Z
ROOTCA CA ACTIVE true 288553090258624301170355571152070165215 2030-11-08T16:34:52Z 2020-11-10T16:34:52Z
@ -456,7 +456,7 @@ ROOTCA CA ACTIVE true 288553090258624301170
With the `-o json` flag, you can pass the full contents of the certificate to `openssl` to analyze it:
{{< text bash >}}
$ istioctl proxy-config secret sleep-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text
$ istioctl proxy-config secret curl-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text
Certificate:
Data:
Version: 3 (0x2)
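A note on that pipeline (our own aside): the `base64 -d` stage simply undoes the base64 encoding of the PEM bytes in `inlineBytes` before `openssl x509` parses them. A minimal local sanity check of the decode step, using a stand-in string instead of a real certificate chain:

```shell
#!/bin/sh
# Round-trip a stand-in PEM header through base64; this is the same
# decode step applied to .inlineBytes before piping into openssl.
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```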


@ -45,7 +45,7 @@ EOF
{{< text syntax=yaml snip_id=none >}}
kind: Deployment
metadata:
  name: sleep
  name: curl
spec:
...
  template:
@ -89,14 +89,14 @@ EOF
{{< text bash >}}
$ kubectl label namespace default istio-injection=enabled --overwrite
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl apply -f @samples/curl/curl.yaml@
{{< /text >}}
If DNS proxying is not enabled, a request to `address.internal` would likely fail to resolve.
Once it is enabled, you should instead get a response based on the configured `address`:
{{< text bash >}}
$ kubectl exec deploy/sleep -- curl -sS -v address.internal
$ kubectl exec deploy/curl -- curl -sS -v address.internal
* Trying 198.51.100.1:80...
{{< /text >}}
@ -147,7 +147,7 @@ EOF
Now, send a request:
{{< text bash >}}
$ kubectl exec deploy/sleep -- curl -sS -v auto.internal
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
* Trying 240.240.0.1:80...
{{< /text >}}
@ -243,7 +243,7 @@ $ kubectl exec deploy/sleep -- curl -sS -v auto.internal
1. Confirm that a separate listener is configured for each service on the client side:
{{< text bash >}}
$ istioctl pc listener deploy/sleep | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
$ istioctl pc listener deploy/curl | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local
ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local
{{< /text >}}
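The `awk` program in that command simply reprints columns 1, 4, and 5 of the `istioctl pc listener` table. Running it over a canned row (sample data, not live cluster output) shows the transformation:

```shell
#!/bin/sh
# One canned row of listener output: ADDRESS PORT MATCH DESTINATION...
row='240.240.69.138 9000 ALL Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local'
echo "$row" | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
```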
@ -253,7 +253,7 @@ $ kubectl exec deploy/sleep -- curl -sS -v auto.internal
{{< text bash >}}
$ kubectl -n external-1 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl -n external-2 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl delete -f @samples/sleep/sleep.yaml@
$ kubectl delete -f @samples/curl/curl.yaml@
$ istioctl uninstall --purge -y
$ kubectl delete ns istio-system external-1 external-2
$ kubectl label namespace default istio-injection-


@ -102,7 +102,7 @@ meshConfig:
apiVersion: apps/v1
kind: Deployment
metadata:
name: sleep
name: curl
spec:
...
template:


@ -17,7 +17,7 @@ test: no
This usually manifests as seeing responses only from the cluster-local instance of the service:
{{< text bash >}}
$ for i in $(seq 10); do kubectl --context=$CTX_CLUSTER1 -n sample exec sleep-dd98b5f48-djwdw -c sleep -- curl -s helloworld:5000/hello; done
$ for i in $(seq 10); do kubectl --context=$CTX_CLUSTER1 -n sample exec curl-dd98b5f48-djwdw -c curl -- curl -s helloworld:5000/hello; done
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
@ -67,9 +67,9 @@ $ kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l version=v2 -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/sleep/sleep.yaml -n uninjected-sample
-f samples/curl/curl.yaml -n uninjected-sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/sleep/sleep.yaml -n uninjected-sample
-f samples/curl/curl.yaml -n uninjected-sample
{{< /text >}}
Use the `-o wide` flag to verify that a helloworld Pod is running in the `cluster2` cluster:
@ -78,8 +78,8 @@ $ kubectl apply --context="${CTX_CLUSTER2}" \
{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" -n uninjected-sample get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-557747455f-jdsd8 1/1 Running 0 41s 10.100.0.2 node-2 <none> <none>
helloworld-v2-54df5f84b-z28p5 1/1 Running 0 43s 10.100.0.1 node-1 <none> <none>
sleep-557747455f-jdsd8 1/1 Running 0 41s 10.100.0.2 node-2 <none> <none>
{{< /text >}}
Note the `IP` address column for `helloworld`. In this case, it is `10.100.0.1`:
@ -88,12 +88,12 @@ sleep-557747455f-jdsd8 1/1 Running 0 41s 10.100.0.2
$ REMOTE_POD_IP=10.100.0.1
{{< /text >}}
Next, try sending traffic directly to this Pod IP from the `sleep` Pod in `cluster1`:
Next, try sending traffic directly to this Pod IP from the `curl` Pod in `cluster1`:
{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c sleep \
$ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n uninjected-sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS $REMOTE_POD_IP:5000/hello
Hello version: v2, instance: helloworld-v2-54df5f84b-z28p5
{{< /text >}}
@ -135,12 +135,12 @@ $ diff \
If you have read the sections above but the problem is still unresolved, you may need to dig deeper.
The following steps assume you have completed the [HelloWorld verification](/zh/docs/setup/install/multicluster/verify/) guide
and have made sure the `helloworld` and `sleep` services are correctly deployed in each cluster.
and have made sure the `helloworld` and `curl` services are correctly deployed in each cluster.
For each cluster, find the `endpoints` of `helloworld` as seen by the `sleep` service:
For each cluster, find the `endpoints` of `helloworld` as seen by the `curl` service:
{{< text bash >}}
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
{{< /text >}}
The troubleshooting information differs depending on which cluster the traffic originates from:
@ -150,7 +150,7 @@ $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.s
{{< tab name="Primary cluster" category-value="primary" >}}
{{< text bash >}}
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.0.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
{{< /text >}}
@ -172,7 +172,7 @@ $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiClu
{{< tab name="Remote cluster" category-value="remote" >}}
{{< text bash >}}
$ istioctl --context $CTX_CLUSTER2 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
$ istioctl --context $CTX_CLUSTER2 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.1.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
{{< /text >}}
@ -200,7 +200,7 @@ $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiClu
The primary and remote cluster steps still apply to multi-network deployments, although multi-network has some additional cases:
{{< text bash >}}
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld
10.0.5.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
10.0.6.13:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
{{< /text >}}
@ -232,7 +232,7 @@ istio-eastwestgateway LoadBalancer 10.8.17.119 <PENDING> 15021:317
Check the proxy metadata on the source Pod.
{{< text bash >}}
$ kubectl get pod $SLEEP_POD_NAME \
$ kubectl get pod $CURL_POD_NAME \
-o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}"
{{< /text >}}


@ -402,18 +402,18 @@ $ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48
Every proxy container in the service mesh should be able to communicate with Istiod.
This can be checked in a few simple steps:
1. Create a `sleep` Pod:
1. Create a `curl` Pod:
{{< text bash >}}
$ kubectl create namespace foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f samples/curl/curl.yaml) -n foo
{{< /text >}}
1. Test connectivity to Istiod using `curl`. The following example invokes the v1 registration API
using the default Istiod configuration parameters and mutual TLS enabled:
{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl -sS istiod.istio-system:15014/version
$ kubectl exec $(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name}) -c curl -n foo -- curl -sS istiod.istio-system:15014/version
{{< /text >}}
You should receive a response listing the version of Istiod.


@ -161,7 +161,7 @@ EOF
{{< text bash >}}
$ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
/opt/spire/bin/spire-server entry create \
-spiffeID spiffe://example.org/ns/default/sa/sleep \
-spiffeID spiffe://example.org/ns/default/sa/curl \
-parentID spiffe://example.org/ns/spire/sa/spire-agent \
-selector k8s:ns:default \
-selector k8s:pod-label:spiffe.io/spire-managed-identity:true \
@ -280,8 +280,8 @@ EOF
1. Deploy the example workload:
{{< text syntax=bash snip_id=apply_sleep >}}
$ istioctl kube-inject --filename @samples/security/spire/sleep-spire.yaml@ | kubectl apply -f -
{{< text syntax=bash snip_id=apply_curl >}}
$ istioctl kube-inject --filename @samples/security/spire/curl-spire.yaml@ | kubectl apply -f -
{{< /text >}}
In addition to requiring the `spiffe.io/spire-managed-identity` label, the workload also needs to use the SPIFFE CSI
@ -292,24 +292,24 @@ EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: sleep
name: curl
spec:
replicas: 1
selector:
matchLabels:
app: sleep
app: curl
template:
metadata:
labels:
app: sleep
app: curl
# Inject the custom sidecar template
annotations:
inject.istio.io/templates: "sidecar,spire"
spec:
terminationGracePeriodSeconds: 0
serviceAccountName: sleep
serviceAccountName: curl
containers:
- name: sleep
- name: curl
image: curlimages/curl
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
@ -350,7 +350,7 @@ JWT-SVID TTL : default
Selector : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d
Entry ID : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54
SPIFFE ID : spiffe://example.org/ns/default/sa/sleep
SPIFFE ID : spiffe://example.org/ns/default/sa/curl
Parent ID : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
Revision : 0
X509-SVID TTL : default
@ -373,14 +373,14 @@ istiod-989f54d9c-sg7sn 1/1 Running 0 45s
1. Get the Pod information:
{{< text syntax=bash snip_id=set_sleep_pod_var >}}
$ SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath="{.items[0].metadata.name}")
{{< text syntax=bash snip_id=set_curl_pod_var >}}
$ CURL_POD=$(kubectl get pod -l app=curl -o jsonpath="{.items[0].metadata.name}")
{{< /text >}}
1. Retrieve the SVID identity document for sleep using the `istioctl proxy-config secret` command:
1. Retrieve the SVID identity document for curl using the `istioctl proxy-config secret` command:
{{< text syntax=bash snip_id=get_sleep_svid >}}
$ istioctl proxy-config secret "$SLEEP_POD" -o json | jq -r \
{{< text syntax=bash snip_id=get_curl_svid >}}
$ istioctl proxy-config secret "$CURL_POD" -o json | jq -r \
'.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem
{{< /text >}}
@ -388,7 +388,7 @@ istiod-989f54d9c-sg7sn 1/1 Running 0 45s
{{< text syntax=bash snip_id=get_svid_subject >}}
$ openssl x509 -in chain.pem -text | grep SPIRE
Subject: C = US, O = SPIRE, CN = sleep-5f4d47c948-njvpk
Subject: C = US, O = SPIRE, CN = curl-5f4d47c948-njvpk
{{< /text >}}
## SPIFFE Federation {#spiffe-federation}


@ -25,7 +25,7 @@ spec:
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/sleep"]
principals: ["cluster.local/ns/default/sa/curl"]
- source:
namespaces: ["httpbin"]
to:


@ -109,30 +109,30 @@ values:
$ kubectl apply --namespace ipv6 -f @samples/tcp-echo/tcp-echo-ipv6.yaml@
{{< /text >}}
1. Deploy the [sleep]({{< github_tree >}}/samples/sleep) sample application to use as a test source for sending requests.
1. Deploy the [curl]({{< github_tree >}}/samples/curl) sample application to use as a test source for sending requests.
{{< text bash >}}
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl apply -f @samples/curl/curl.yaml@
{{< /text >}}
1. Verify traffic reaching the dual-stack Pod:
{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack | nc tcp-echo.dual-stack 9000"
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack | nc tcp-echo.dual-stack 9000"
hello dualstack
{{< /text >}}
1. Verify traffic reaching the IPv4 Pod:
{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv4 | nc tcp-echo.ipv4 9000"
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv4 | nc tcp-echo.ipv4 9000"
hello ipv4
{{< /text >}}
1. Verify traffic reaching the IPv6 Pod:
{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv6 | nc tcp-echo.ipv6 9000"
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv6 | nc tcp-echo.ipv6 9000"
hello ipv6
{{< /text >}}
@ -193,7 +193,7 @@ values:
1. Verify that the Envoy endpoints are configured to route to both IPv4 and IPv6:
{{< text syntax=bash snip_id=none >}}
$ istioctl proxy-config endpoints "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" --port 9000
$ istioctl proxy-config endpoints "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" --port 9000
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.244.0.19:9000 HEALTHY OK outbound|9000||tcp-echo.ipv4.svc.cluster.local
10.244.0.26:9000 HEALTHY OK outbound|9000||tcp-echo.dual-stack.svc.cluster.local
@ -208,6 +208,6 @@ values:
1. Clean up the application namespaces and deployments:
{{< text bash >}}
$ kubectl delete -f @samples/sleep/sleep.yaml@
$ kubectl delete -f @samples/curl/curl.yaml@
$ kubectl delete ns dual-stack ipv4 ipv6
{{< /text >}}


@ -43,19 +43,19 @@ Pod 所属命名空间的 Istio Sidecar 注入器自动注入。
#### Deploying an app {#deploying-an-app}
Deploy the sleep app and verify that the Deployment and Pod have only a single container.
Deploy the curl app and verify that the Deployment and Pod have only a single container.
{{< text bash >}}
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl apply -f @samples/curl/curl.yaml@
$ kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sleep 1/1 1 1 12s sleep curlimages/curl app=sleep
curl 1/1 1 1 12s curl curlimages/curl app=curl
{{< /text >}}
{{< text bash >}}
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
sleep-8f795f47d-hdcgs 1/1 Running 0 42s
curl-8f795f47d-hdcgs 1/1 Running 0 42s
{{< /text >}}
Label the `default` namespace with `istio-injection=enabled`:
@ -72,18 +72,18 @@ default Active 5m9s enabled
The original Pod has `1/1 READY` containers, and the Pod with the sidecar injected has `2/2 READY` containers.
{{< text bash >}}
$ kubectl delete pod -l app=sleep
$ kubectl get pod -l app=sleep
pod "sleep-776b7bcdcd-7hpnk" deleted
$ kubectl delete pod -l app=curl
$ kubectl get pod -l app=curl
pod "curl-776b7bcdcd-7hpnk" deleted
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-7hpnk 1/1 Terminating 0 1m
sleep-776b7bcdcd-bhn9m 2/2 Running 0 7s
curl-776b7bcdcd-7hpnk 1/1 Terminating 0 1m
curl-776b7bcdcd-bhn9m 2/2 Running 0 7s
{{< /text >}}
View the detailed state of the injected Pod. You should see the injected `istio-proxy` container and its corresponding volumes.
{{< text bash >}}
$ kubectl describe pod -l app=sleep
$ kubectl describe pod -l app=curl
...
Events:
Type Reason Age From Message
@ -92,8 +92,8 @@ Events:
Normal Created 11s kubelet Created container istio-init
Normal Started 11s kubelet Started container istio-init
...
Normal Created 10s kubelet Created container sleep
Normal Started 10s kubelet Started container sleep
Normal Created 10s kubelet Created container curl
Normal Started 10s kubelet Started container curl
...
Normal Created 9s kubelet Created container istio-proxy
Normal Started 8s kubelet Started container istio-proxy
@ -103,13 +103,13 @@ Events:
{{< text bash >}}
$ kubectl label namespace default istio-injection-
$ kubectl delete pod -l app=sleep
$ kubectl delete pod -l app=curl
$ kubectl get pod
namespace/default labeled
pod "sleep-776b7bcdcd-bhn9m" deleted
pod "curl-776b7bcdcd-bhn9m" deleted
NAME READY STATUS RESTARTS AGE
sleep-776b7bcdcd-bhn9m 2/2 Terminating 0 2m
sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s
curl-776b7bcdcd-bhn9m 2/2 Terminating 0 2m
curl-776b7bcdcd-gmvnr 1/1 Running 0 2s
{{< /text >}}
#### Controlling the injection policy {#controlling-the-injection-policy}
@ -146,10 +146,10 @@ sleep-776b7bcdcd-gmvnr 1/1 Running 0 2s
To manually inject a Deployment, use [`istioctl kube-inject`](/zh/docs/reference/commands/istioctl/#istioctl-kube-inject):
{{< text bash >}}
$ istioctl kube-inject -f @samples/sleep/sleep.yaml@ | kubectl apply -f -
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created
$ istioctl kube-inject -f @samples/curl/curl.yaml@ | kubectl apply -f -
serviceaccount/curl created
service/curl created
deployment.apps/curl created
{{< /text >}}
By default, injection is done using the in-cluster configuration, or using a local copy of that configuration.
@ -167,19 +167,19 @@ $ istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename @samples/sleep/sleep.yaml@ \
--filename @samples/curl/curl.yaml@ \
| kubectl apply -f -
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created
serviceaccount/curl created
service/curl created
deployment.apps/curl created
{{< /text >}}
Verify that the sidecar has been injected into the Sleep Pod, showing `2/2` under the READY column.
Verify that the sidecar has been injected into the curl Pod, showing `2/2` under the READY column.
{{< text bash >}}
$ kubectl get pod -l app=sleep
$ kubectl get pod -l app=curl
NAME READY STATUS RESTARTS AGE
sleep-64c6f57bc8-f5n4x 2/2 Running 0 24s
curl-64c6f57bc8-f5n4x 2/2 Running 0 24s
{{< /text >}}
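A quick mechanical check of the injection outcome is to parse the READY column; this sketch runs that test against a canned `kubectl get pod` row (the pod name is a sample, not live output):

```shell
#!/bin/sh
# Print pods whose READY column shows both app and sidecar containers.
printf '%s\n' 'curl-64c6f57bc8-f5n4x 2/2 Running 0 24s' |
  awk '$2 == "2/2" { print $1 " has its sidecar injected" }'
```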
## Customizing injection {#customizing-injection}
@ -212,7 +212,7 @@ spec:
lifecycle:
preStop:
exec:
command: ["sleep", "10"]
command: ["curl", "10"]
volumes:
- name: certs
secret:


@ -481,28 +481,28 @@ Webhook、ConfigMap 和 Secret以便使用外部控制平面。
$ kubectl label --context="${CTX_REMOTE_CLUSTER}" namespace sample istio-injection=enabled
{{< /text >}}
1. Deploy the `helloworld` (`v1`) and `sleep` samples:
1. Deploy the `helloworld` (`v1`) and `curl` samples:
{{< text bash >}}
$ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_REMOTE_CLUSTER}"
$ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample --context="${CTX_REMOTE_CLUSTER}"
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
$ kubectl apply -f @samples/curl/curl.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
{{< /text >}}
1. Wait a few seconds; the `helloworld` and `sleep` Pods will be running with sidecars injected:
1. Wait a few seconds; the `helloworld` and `curl` Pods will be running with sidecars injected:
{{< text bash >}}
$ kubectl get pod -n sample --context="${CTX_REMOTE_CLUSTER}"
NAME READY STATUS RESTARTS AGE
curl-64d7d56698-wqjnm 2/2 Running 0 9s
helloworld-v1-5b75657f75-ncpc5 2/2 Running 0 10s
sleep-64d7d56698-wqjnm 2/2 Running 0 9s
{{< /text >}}
1. Send a request from the `sleep` Pod to the `helloworld` service:
1. Send a request from the `curl` Pod to the `helloworld` service:
{{< text bash >}}
$ kubectl exec --context="${CTX_REMOTE_CLUSTER}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
$ kubectl exec --context="${CTX_REMOTE_CLUSTER}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
{{< /text >}}
@ -830,28 +830,28 @@ $ export SECOND_CLUSTER_NAME=<您的第二个从集群名称>
$ kubectl label --context="${CTX_SECOND_CLUSTER}" namespace sample istio-injection=enabled
{{< /text >}}
1. Deploy the `helloworld` (`v2`) and `sleep` samples:
1. Deploy the `helloworld` (`v2`) and `curl` samples:
{{< text bash >}}
$ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_SECOND_CLUSTER}"
$ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample --context="${CTX_SECOND_CLUSTER}"
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_SECOND_CLUSTER}"
$ kubectl apply -f @samples/curl/curl.yaml@ -n sample --context="${CTX_SECOND_CLUSTER}"
{{< /text >}}
1. Wait a few seconds for the `helloworld` and `sleep` Pods to run with sidecars injected:
1. Wait a few seconds for the `helloworld` and `curl` Pods to run with sidecars injected:
{{< text bash >}}
$ kubectl get pod -n sample --context="${CTX_SECOND_CLUSTER}"
NAME READY STATUS RESTARTS AGE
curl-557747455f-wtdbr 2/2 Running 0 9s
helloworld-v2-54df5f84b-9hxgw 2/2 Running 0 10s
sleep-557747455f-wtdbr 2/2 Running 0 9s
{{< /text >}}
1. Send a request from the `sleep` Pod to the `helloworld` service:
1. Send a request from the `curl` Pod to the `helloworld` service:
{{< text bash >}}
$ kubectl exec --context="${CTX_SECOND_CLUSTER}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_SECOND_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
$ kubectl exec --context="${CTX_SECOND_CLUSTER}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_SECOND_CLUSTER}" -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
{{< /text >}}


@ -15,7 +15,7 @@ owner: istio/wg-environments-maintainers
Install the `V2` version of the `HelloWorld` application in `cluster2`.
When handling a request, `HelloWorld` includes its own version number in the response message.
We will also deploy the `Sleep` container in both clusters.
We will also deploy the `curl` container in both clusters.
These Pods will be used as clients (sources) to send requests to the `HelloWorld` service.
Finally, by collecting this traffic data, we will be able to observe and identify which cluster handled the requests.
@ -92,48 +92,48 @@ helloworld-v2-758dd55874-6x4t8 2/2 Running 0 40s
Wait until the status of `helloworld-v2` eventually becomes `Running`:
## Deploy `Sleep` {#deploy-sleep}
## Deploy `curl` {#deploy-curl}
Deploy the `Sleep` application to each cluster:
Deploy the `curl` application to each cluster:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f @samples/sleep/sleep.yaml@ -n sample
-f @samples/curl/curl.yaml@ -n sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f @samples/sleep/sleep.yaml@ -n sample
-f @samples/curl/curl.yaml@ -n sample
{{< /text >}}
Confirm the status of `Sleep` in `cluster1`:
Confirm the status of `curl` in `cluster1`:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep
$ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=curl
NAME READY STATUS RESTARTS AGE
sleep-754684654f-n6bzf 2/2 Running 0 5s
curl-754684654f-n6bzf 2/2 Running 0 5s
{{< /text >}}
Wait until the status of `Sleep` eventually becomes `Running`:
Wait until the status of `curl` eventually becomes `Running`:
Confirm the status of `Sleep` in `cluster2`:
Confirm the status of `curl` in `cluster2`:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=sleep
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=curl
NAME READY STATUS RESTARTS AGE
sleep-754684654f-dzl9j 2/2 Running 0 5s
curl-754684654f-dzl9j 2/2 Running 0 5s
{{< /text >}}
Wait until the status of `Sleep` eventually becomes `Running`:
Wait until the status of `curl` eventually becomes `Running`:
## Verifying cross-cluster traffic {#verifying-cross-cluster-traffic}
To verify that cross-cluster load balancing works as expected, call the `HelloWorld` service repeatedly from the `Sleep` pod.
To verify that cross-cluster load balancing works as expected, call the `HelloWorld` service repeatedly from the `curl` pod.
To confirm that load balancing is working as expected, call the `HelloWorld` service from all clusters.
Send a request from the `Sleep` pod in `cluster1` to the `HelloWorld` service:
Send a request from the `curl` pod in `cluster1` to the `HelloWorld` service:
{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl helloworld.sample:5000/hello
{{< /text >}}
@ -145,12 +145,12 @@ Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}
Now, repeat this process using the `Sleep` pod in `cluster2`:
Now, repeat this process using the `curl` pod in `cluster2`:
{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
$ kubectl exec --context="${CTX_CLUSTER2}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl helloworld.sample:5000/hello
{{< /text >}}
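Repeating the calls above should yield a mix of `v1` and `v2` responses once cross-cluster load balancing is working; a tally can be produced with standard tools (shown here against canned responses, not live output):

```shell
#!/bin/sh
# Count HelloWorld responses per version from canned sample lines.
printf '%s\n' \
  'Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv' \
  'Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8' \
  'Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv' |
  sed 's/Hello version: \(v[0-9]*\),.*/\1/' | sort | uniq -c
```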


@ -181,38 +181,38 @@ Istio 修订和 `discoverySelectors` 然后用于确定每个控制面托管的
$ kubectl label ns app-ns-3 usergroup=usergroup-2 istio.io/rev=usergroup-2
{{< /text >}}
1. Deploy one `sleep` and one `httpbin` application in each namespace:
1. Deploy one `curl` and one `httpbin` application in each namespace:
{{< text bash >}}
$ kubectl -n app-ns-1 apply -f samples/sleep/sleep.yaml
$ kubectl -n app-ns-1 apply -f samples/curl/curl.yaml
$ kubectl -n app-ns-1 apply -f samples/httpbin/httpbin.yaml
$ kubectl -n app-ns-2 apply -f samples/sleep/sleep.yaml
$ kubectl -n app-ns-2 apply -f samples/curl/curl.yaml
$ kubectl -n app-ns-2 apply -f samples/httpbin/httpbin.yaml
$ kubectl -n app-ns-3 apply -f samples/sleep/sleep.yaml
$ kubectl -n app-ns-3 apply -f samples/curl/curl.yaml
$ kubectl -n app-ns-3 apply -f samples/httpbin/httpbin.yaml
{{< /text >}}
1. Wait a few seconds for the `httpbin` and `sleep` Pods to run with sidecars injected:
1. Wait a few seconds for the `httpbin` and `curl` Pods to run with sidecars injected:
{{< text bash >}}
$ kubectl get pods -n app-ns-1
NAME READY STATUS RESTARTS AGE
httpbin-9dbd644c7-zc2v4 2/2 Running 0 115m
sleep-78ff5975c6-fml7c 2/2 Running 0 115m
curl-78ff5975c6-fml7c 2/2 Running 0 115m
{{< /text >}}
{{< text bash >}}
$ kubectl get pods -n app-ns-2
NAME READY STATUS RESTARTS AGE
httpbin-9dbd644c7-sd9ln 2/2 Running 0 115m
sleep-78ff5975c6-sz728 2/2 Running 0 115m
curl-78ff5975c6-sz728 2/2 Running 0 115m
{{< /text >}}
{{< text bash >}}
$ kubectl get pods -n app-ns-3
NAME READY STATUS RESTARTS AGE
httpbin-9dbd644c7-8ll27 2/2 Running 0 115m
sleep-78ff5975c6-sg4tq 2/2 Running 0 115m
curl-78ff5975c6-sg4tq 2/2 Running 0 115m
{{< /text >}}
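Instead of sleeping for a fixed number of seconds, pod readiness can be checked explicitly with `kubectl wait`. A sketch; the `wait_ready` helper is illustrative, not part of the samples:

```shell
# Block until the httpbin and curl pods in each namespace report Ready.
wait_ready() {
  for ns in "$@"; do
    kubectl -n "$ns" wait --for=condition=Ready pod -l app=httpbin --timeout=120s
    kubectl -n "$ns" wait --for=condition=Ready pod -l app=curl --timeout=120s
  done
}

# wait_ready app-ns-1 app-ns-2 app-ns-3
```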
### Verify the application to control plane mapping{#verify-app-to-control-plane-mapping}
@@ -224,7 +224,7 @@ The Istio revision and `discoverySelectors` are then used to determine what is hosted by each control plane
$ istioctl ps -i usergroup-1
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
httpbin-9dbd644c7-hccpf.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-1-5ccc849b5f-wnqd6 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
sleep-78ff5975c6-9zb77.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-1-5ccc849b5f-wnqd6 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
curl-78ff5975c6-9zb77.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-1-5ccc849b5f-wnqd6 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
{{< /text >}}
{{< text bash >}}
@@ -232,16 +232,16 @@ $ istioctl ps -i usergroup-2
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
httpbin-9dbd644c7-vvcqj.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
httpbin-9dbd644c7-xzgfm.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
sleep-78ff5975c6-fthmt.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
sleep-78ff5975c6-nxtth.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
curl-78ff5975c6-fthmt.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
curl-78ff5975c6-nxtth.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117
{{< /text >}}
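The mapping can also be checked mechanically by asserting that every proxy listed for a revision is synced to an istiod of that same revision. A sketch; `check_revision` is a hypothetical helper that reads the `istioctl ps` table on stdin:

```shell
# Exit non-zero if any data row lacks an istiod of the expected revision.
check_revision() {
  rev="$1"
  awk -v rev="istiod-${rev}-" 'NR > 1 && NF && $0 !~ rev { bad = 1 } END { exit bad }'
}

# istioctl ps -i usergroup-1 | check_revision usergroup-1
```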
### Verify application connectivity is only within the respective usergroup{#verify-app-conn-is-only-within-respective-usergroup}
1. Send a request from the `sleep` pod in `app-ns-1` in `usergroup-1` to the `httpbin` service in `app-ns-2` in `usergroup-2`:
1. Send a request from the `curl` pod in `app-ns-1` in `usergroup-1` to the `httpbin` service in `app-ns-2` in `usergroup-2`:
{{< text bash >}}
$ kubectl -n app-ns-1 exec "$(kubectl -n app-ns-1 get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
$ kubectl -n app-ns-1 exec "$(kubectl -n app-ns-1 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
HTTP/1.1 503 Service Unavailable
content-length: 95
content-type: text/plain
@@ -249,10 +249,10 @@ sleep-78ff5975c6-nxtth.app-ns-3 Kubernetes SYNCED SYNCED SYNCED
server: envoy
{{< /text >}}
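To script this negative check rather than reading the headers by eye, only the status code needs to be extracted from the response. A sketch; `status_code` is an illustrative helper that parses `curl -sIL` header output on stdin:

```shell
# Print the status code of the last HTTP response line seen
# (curl -IL may print several status lines when following redirects).
status_code() {
  awk 'toupper($1) ~ /^HTTP\// { code = $2 } END { print code }'
}

# Usage against the mesh (commented out, since it needs a live cluster):
# kubectl -n app-ns-1 exec "$(kubectl -n app-ns-1 get pod -l app=curl \
#   -o jsonpath={.items..metadata.name})" -c curl -- \
#   curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000 | status_code
```

A `503` here confirms the cross-usergroup request is rejected.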
1. Send a request from the `sleep` pod in `app-ns-2` in `usergroup-2` to the `httpbin` service in `app-ns-3` in `usergroup-2`: the communication should work:
1. Send a request from the `curl` pod in `app-ns-2` in `usergroup-2` to the `httpbin` service in `app-ns-3` in `usergroup-2`: the communication should work:
{{< text bash >}}
$ kubectl -n app-ns-2 exec "$(kubectl -n app-ns-2 get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
$ kubectl -n app-ns-2 exec "$(kubectl -n app-ns-2 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
HTTP/1.1 200 OK
server: envoy
date: Thu, 22 Dec 2022 15:01:36 GMT