Fix broken links (#3636)

- We haven't been checking external links for months due to a script error
introduced when someone added an option that didn't work as expected. I'm fixing
a bunch of the resulting broken links. I can't turn on the link checker yet since
there are some bad links in reference docs which I have to address first.

- Add a bunch of links to YAML files in our code examples using the @@ syntax.
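For readers unfamiliar with that convention: wrapping a repo-relative path in `@...@` lets the site build expand it into a raw GitHub URL that the link checker can verify (the Hugo partial at the end of this diff does this with `findRE "@(.*?)@"`). A minimal sketch of the expansion — the branch name here is illustrative, since the real site reads it from `.Site.Data.args.source_branch_name`:

```python
import re

# Illustrative branch name; the site takes the real one from its config.
SOURCE_BRANCH = "release-1.1"

def embedded_links(text, branch=SOURCE_BRANCH):
    """Expand @path@ references into raw GitHub URLs, mirroring the Hugo
    partial that emits a hidden <a> for each one so the link checker can
    verify the referenced file actually exists."""
    targets = [t for t in re.findall(r"@(.*?)@", text) if t]
    return ["https://raw.githubusercontent.com/istio/istio/%s/%s" % (branch, t)
            for t in targets]

print(embedded_links("$ kubectl apply -f @samples/sleep/sleep.yaml@"))
```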
Martin Taillefer 2019-03-11 22:05:18 -07:00 committed by GitHub
parent 1f27ac771e
commit 1e075ef7cd
44 changed files with 63 additions and 71 deletions

@@ -33,7 +33,7 @@ Follow us on [Twitter](https://twitter.com/IstioMesh) to get the latest news.
{{< community_item logo="./slack.svg" alt="Slack" >}}
If you're contributing code to Istio (or thinking about contributing!), you can join us on
[Istio's Slack](https://slack.istio.com) channel.
[Istio's Slack](https://istio.slack.com) channel.
Fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfdsupDfOWBtNVvVvXED6ULxtR4UIsYGCH_cQcRr0VcG1ZqQQ/viewform)
to join.
{{< /community_item >}}

@@ -74,7 +74,7 @@ branches.
- The [Istio blog](/blog)
- The [Announcements](https://discuss.istio.io/c/announcements) category on discuss.istio.io
- The [Istio Twitter feed](https://twitter.cqom/IstioMesh)
- The [Istio Twitter feed](https://twitter.com/IstioMesh)
- The [#announcement channel on Slack](https://istio.slack.com/messages/CFXS256EQ/)
As much as possible this announcement should be actionable, and include any mitigating steps users can take prior to

@@ -19,7 +19,7 @@ Over the years, there has been much collective effort in implementing aggressive
## Don't optimize layers, remove them
In my view, optimizing something is a poor fallback to removing the need for it altogether. That was the goal of my [initial work](http://beyondcontainers.com/blog/a-brief-history-of-containers) on OS-level virtualization that led to Linux containers, which effectively [removed virtual machines](https://www.oreilly.com/ideas/the-unwelcome-guest-why-vms-arent-the-solution-for-next-gen-applications) by running applications directly on the host operating system without requiring an intermediate guest. For a long time the industry fought the wrong battle, distracted by optimizing VMs rather than removing the additional layer altogether.
In my view, optimizing something is a poor fallback to removing the need for it altogether. That was the goal of my [initial work](https://apporbit.com/a-brief-history-of-containers-from-reality-to-hype/) on OS-level virtualization that led to Linux containers, which effectively [removed virtual machines](https://www.oreilly.com/ideas/the-unwelcome-guest-why-vms-arent-the-solution-for-next-gen-applications) by running applications directly on the host operating system without requiring an intermediate guest. For a long time the industry fought the wrong battle, distracted by optimizing VMs rather than removing the additional layer altogether.
I see the same pattern repeating itself with the connectivity of microservices, and with networking in general. The network is going through the changes that physical servers went through a decade earlier. A new set of layers and constructs is being introduced, baked deep into the protocol stack and even into silicon, without adequate consideration of low-touch alternatives. Perhaps there is a way to remove those additional layers altogether.

@@ -80,7 +80,7 @@ $ kubectl apply -f istio-minimal.yaml
Next, deploy the Bookinfo sample without the Istio sidecar containers:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
{{< /text >}}
Now, configure a new Gateway that allows access to the reviews service from outside the Istio mesh, a new `VirtualService` that splits traffic evenly between v1 and v2 of the reviews service, and a set of new `DestinationRule` resources that match destination subsets to service versions:
@@ -147,7 +147,7 @@ EOF
Finally, deploy a pod that you can use for testing with `curl` (and without the Istio sidecar container):
{{< text bash >}}
$ kubectl apply -f samples/sleep/sleep.yaml
$ kubectl apply -f @samples/sleep/sleep.yaml@
{{< /text >}}
## Testing your deployment

@@ -68,7 +68,7 @@ The creation of custom ingress gateway could be used in order to have different
key: secret-access-key
{{< /text >}}
1. If you use the `route53` [provider](https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53), you must provide a secret to perform DNS ACME Validation. To create the secret, apply the following configuration file:
1. If you use the `route53` [provider](https://cert-manager.readthedocs.io/en/latest/tasks/acme/configuring-dns01/route53.html), you must provide a secret to perform DNS ACME Validation. To create the secret, apply the following configuration file:
{{< text yaml >}}
apiVersion: v1

@@ -259,7 +259,7 @@ Just like any application, we'll use an Istio gateway to access the `bookinfo` a
* Create the `bookinfo` gateway in `cluster1`:
{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl apply --context=$CTX_CLUSTER1 -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
{{< /text >}}
* Follow the [Bookinfo sample instructions](/docs/examples/bookinfo/#determining-the-ingress-ip-and-port)

@@ -106,7 +106,6 @@ The Acmeair benchmark application can be found here: [IBM's BluePerf](https://gi
Both the synthetic benchmarks (fortio based) and the realistic application (BluePerf)
are part of the nightly release pipeline and you can see the results on:
* [https://fortio-daily.istio.io/](https://fortio-daily.istio.io/)
* [https://ibmcloud-perf.istio.io/regpatrol/](https://ibmcloud-perf.istio.io/regpatrol/)
This enables us to catch regressions early and track improvements over time.
@@ -123,8 +122,6 @@ This enables us to catch regression early and track improvements over time.
* See also [Istio's Performance oriented FAQ](https://github.com/istio/istio/wiki/Performance-Oriented-Setup-FAQ)
* And the [Performance and Scalability Working Group](https://github.com/istio/community/blob/master/WORKING-GROUPS.md#performance-and-scalability) work.
Current recommendations (when using all Istio features):
* 1 vCPU per peak thousand requests per second for the sidecar(s) with access logging (which is on by default), and 0.5 without; `fluentd` on the node is a big contributor to that cost, as it captures and uploads logs.
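To make that sizing rule concrete, here is a small back-of-the-envelope helper — a sketch of the recommendation, not an official sizing tool; only the 1 vCPU and 0.5 vCPU figures come from the guidance above:

```python
def sidecar_vcpus(peak_rps, access_logging=True):
    """Rule-of-thumb sidecar CPU budget: 1 vCPU per 1000 peak requests
    per second with access logging on (the default), 0.5 vCPU with it off."""
    vcpu_per_1000_rps = 1.0 if access_logging else 0.5
    return peak_rps / 1000.0 * vcpu_per_1000_rps

print(sidecar_vcpus(2000))        # 2.0 vCPUs with access logging
print(sidecar_vcpus(2000, False)) # 1.0 vCPU without
```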

@@ -174,7 +174,6 @@ is used for this purpose.
{{< text bash >}}
$ docker-compose -f @samples/bookinfo/platform/consul/bookinfo.yaml@ up -d
$ docker-compose -f samples/bookinfo/platform/consul/bookinfo.sidecars.yaml up -d
{{< /text >}}
1. Confirm that all docker containers are running:

@@ -138,7 +138,7 @@ $ kubectl get pods -n istio-system
{{< text bash >}}
$ helm template install/kubernetes/helm/istio \
--namespace istio-system --name istio-remote \
--values install/kubernetes/helm/istio/values-istio-remote.yaml \
--values @install/kubernetes/helm/istio/values-istio-remote.yaml@ \
--set global.remotePilotAddress=${PILOT_POD_IP} \
--set global.remotePolicyAddress=${POLICY_POD_IP} \
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
@@ -224,8 +224,8 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}

@@ -147,7 +147,7 @@ This will be used to access pilot on `cluster1` securely using the ingress gatew
{{< text bash >}}
$ helm template --name istio-remote --namespace=istio-system \
--values install/kubernetes/helm/istio/values-istio-remote.yaml \
--values @install/kubernetes/helm/istio/values-istio-remote.yaml@ \
--set global.mtls.enabled=true \
--set gateways.enabled=true \
--set security.selfSigned=false \
@@ -298,8 +298,8 @@ The difference between the two instances is the version of their `helloworld` im
1. Deploy `helloworld v2`:
{{< text bash >}}
$ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l version=v2 -n sample
$ kubectl create --context=$CTX_CLUSTER2 -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER2 -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample
{{< /text >}}
1. Confirm `helloworld v2` is running:
@@ -322,8 +322,8 @@ The difference between the two instances is the version of their `helloworld` im
1. Deploy `helloworld v1`:
{{< text bash >}}
$ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l version=v1 -n sample
$ kubectl create --context=$CTX_CLUSTER1 -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
$ kubectl create --context=$CTX_CLUSTER1 -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample
{{< /text >}}
1. Confirm `helloworld v1` is running:
@@ -341,7 +341,7 @@ We will call the `helloworld.sample` service from another in-mesh `sleep` servic
1. Deploy the `sleep` service:
{{< text bash >}}
$ kubectl create --context=$CTX_CLUSTER1 -f samples/sleep/sleep.yaml -n sample
$ kubectl create --context=$CTX_CLUSTER1 -f @samples/sleep/sleep.yaml@ -n sample
{{< /text >}}
1. Call the `helloworld.sample` service several times:

@@ -5,7 +5,7 @@ weight: 50
---
This page presents details about the metrics that Istio collects when using its initial configuration. You can add and remove metrics by changing configuration at any time, but this
is the built-in set. They can be found [here]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml)
is the built-in set. They can be found [here]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml)
under the section with "kind: metric". It uses [metric
template](/docs/reference/config/policy-and-telemetry/templates/metric/) to define these metrics.

@@ -270,7 +270,7 @@ The `server: envoy` header indicates that the sidecar intercepted the traffic.
1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:
{{< text bash >}}
$ kubectl apply -f samples/sleep/sleep.yaml
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl get po
NAME READY STATUS RESTARTS AGE
productpage-v1-8fcdcb496-xgkwg 2/2 Running 0 1d

@@ -33,7 +33,7 @@ $ kubectl create namespace istio-system
run the following command:
{{< text bash >}}
$ kubectl create -f install/kubernetes/helm/helm-service-account.yaml
$ kubectl create -f @install/kubernetes/helm/helm-service-account.yaml@
{{< /text >}}
- You installed Tiller on your cluster. To install Tiller with the service

@@ -75,7 +75,7 @@ Make sure to use the `kubectl` CLI version that matches the Kubernetes version o
{{< /tip >}}
{{< text bash >}}
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values @install/kubernetes/helm/istio/values-istio-demo.yaml@
{{< /text >}}
1. Ensure the pods for the 9 Istio services and the pod for Prometheus are all fully deployed:

@@ -55,7 +55,7 @@ This approach has the following benefits:
$ cat install/kubernetes/namespace.yaml > istio-auth-sds.yaml
$ cat install/kubernetes/helm/istio-init/files/crd-* >> istio-auth-sds.yaml
$ helm dep update --skip-refresh install/kubernetes/helm/istio
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-sds-auth.yaml >> istio-auth-sds.yaml
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values @install/kubernetes/helm/istio/values-istio-sds-auth.yaml@ >> istio-auth-sds.yaml
$ kubectl create -f istio-auth-sds.yaml
{{< /text >}}
@@ -66,11 +66,11 @@ setup test services.
{{< text bash >}}
$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n bar
{{< /text >}}
Verify all mutual TLS requests succeed:

@@ -261,7 +261,7 @@ log entries for `v1` and none for `v2`:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
05:47:50.159513 IP sleep-7b9f8bfcd-2djx5.38836 > 10-233-75-11.httpbin.default.svc.cluster.local.80: Flags [P.], seq 4039989036:4039989832, ack 3139734980, win 254, options [nop,nop,TS val 77427918 ecr 76730809], length 796: HTTP: GET /headers HTTP/1.1
E..P2.@.@.X.
E..P2.X.X.X.
.K.
.K....P..W,.$.......+.....
..t.....GET /headers HTTP/1.1

@@ -372,7 +372,7 @@ only this time for host `bookinfo.com` instead of `httpbin.example.com`.
1. Deploy the [Bookinfo sample application](/docs/examples/bookinfo/), without a gateway:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
{{< /text >}}
1. Define a gateway for `bookinfo.com`:

@@ -94,7 +94,7 @@ need to create secrets for multiple hosts and update the gateway definitions.
1. Enable SDS at ingress gateway and deploy the ingress gateway agent.
Since this feature is disabled by default, you need to enable the
[`istio-ingressgateway.sds.enabled` flag]({{<github_blob>}}/install/kubernetes/helm/subcharts/gateways/values.yaml) in helm,
[`istio-ingressgateway.sds.enabled` flag]({{<github_blob>}}/install/kubernetes/helm/istio/charts/gateways/values.yaml) in helm,
and then generate the `istio-ingressgateway.yaml` file:
{{< text bash >}}

@@ -16,4 +16,4 @@ Mixer:
- generates trace spans for each request based on *operator-supplied* configuration
- sends the generated trace spans to the *operator-designated* tracing backends
The [Stackdriver tracing integration](https://cloud.google.com/istio/docs/istio-on-gke/installing#enabling_tracing) with Istio is one example of a tracing integration via Mixer.
The [Stackdriver tracing integration](https://cloud.google.com/istio/docs/istio-on-gke/installing#disabling_tracing_and_logging) with Istio is one example of a tracing integration via Mixer.

@@ -19,4 +19,4 @@ If you are using LightStep, you will also need to forward the following headers:
- `x-ot-span-context`
Header propagation may be accomplished through client libraries, such as [Zipkin](https://zipkin.io/pages/existing_instrumentations.html) or [Jaeger](https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core#b3-propagation). It may also be accomplished manually, as documented in the [Distributed Tracing Task](/docs/tasks/telemetry/distributed-tracing/overview#understanding-what-happened).
Header propagation may be accomplished through client libraries, such as [Zipkin](https://zipkin.io/pages/tracers_instrumentation.html) or [Jaeger](https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core#b3-propagation). It may also be accomplished manually, as documented in the [Distributed Tracing Task](/docs/tasks/telemetry/distributed-tracing/overview#understanding-what-happened).
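When propagating manually, the application copies the trace headers from each incoming request onto its outgoing requests. A minimal sketch — the header list is the standard set described in the distributed tracing task linked above (with `x-ot-span-context` only needed for LightStep); the helper function itself is hypothetical:

```python
# Headers that must be forwarded so downstream spans join the same trace.
TRACE_HEADERS = (
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",  # LightStep only
)

def forward_trace_headers(incoming):
    """Return the subset of incoming HTTP headers to copy onto an
    outbound request, matching header names case-insensitively."""
    lowered = {name.lower(): value for name, value in incoming.items()}
    return {name: lowered[name] for name in TRACE_HEADERS if name in lowered}

print(forward_trace_headers({"X-B3-TraceId": "463ac35c", "Accept": "*/*"}))
```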

@@ -8,4 +8,4 @@ Contributions are highly welcome. We look forward to community feedback, additio
The code repositories are hosted on [GitHub](https://github.com/istio). Please see our [Contribution Guidelines](https://github.com/istio/community/blob/master/CONTRIBUTING.md) to learn how to contribute.
In addition to the code, there are other ways to contribute to the Istio [community](/about/community/), including on our [discussion forum](https://discuss.istio.io),
[Slack](https://slack.istio.com), and [Stack Overflow](https://stackoverflow.com/questions/tagged/istio).
[Slack](https://istio.slack.com), and [Stack Overflow](https://stackoverflow.com/questions/tagged/istio).

@@ -152,7 +152,7 @@ Please be aware of the risk.
{{< text bash >}}
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set global.mtls.enabled=true --set sidecarInjectorWebhook.rewriteAppHTTPProbe=true \
-f install/kubernetes/helm/istio/values.yaml > $HOME/istio.yaml
-f @install/kubernetes/helm/istio/values.yaml@ > $HOME/istio.yaml
$ kubectl apply -f $HOME/istio.yaml
{{< /text >}}

@@ -14,7 +14,7 @@ Istio is an open source project with an active community that supports its use and continued development
{{< /community_item >}}
{{< community_item logo="/about/community/join/slack.svg" alt="Slack" >}}
Interact with the community in real time through [Istio Slack](https://slack.istio.com).
Interact with the community in real time through [Istio Slack](https://istio.slack.com).
{{< /community_item >}}
{{< community_item logo="/about/community/join/twitter.svg" alt="Twitter" >}}

@@ -59,7 +59,7 @@ The Istio security team acknowledges and analyzes each vulnerability report within three working days.
- The [Istio blog](/zh/blog)
- The [Announcements](https://discuss.istio.io/c/announcements) category on discuss.istio.io
- The [Istio Twitter feed](https://twitter.cqom/IstioMesh)
- The [Istio Twitter feed](https://twitter.com/IstioMesh)
- The [#announcement channel on Slack](https://istio.slack.com/messages/CFXS256EQ/)
As much as possible, the announcement should be actionable and include any interim workarounds users can apply before upgrading to a fixed version. The recommended time for these announcements is Monday through Thursday at 16:00 UTC, which means the announcement goes out in the morning for the Pacific, and in the evening for Europe and Asia.

@@ -19,7 +19,7 @@ The sidecar proxy pattern has worked wonders. Sitting in the data path of a microservice, the sidecar
## Don't optimize layers, remove them
In my view, before optimizing something you should first ask whether the need for it can be removed altogether. That was the goal of my initial OS-level virtualization [work](http://beyondcontainers.com/blog/a-brief-history-of-containers): to [remove virtual machines](http://beyondcontainers.com/blog/a-brief-history-of-containers) by running applications with Linux containers directly on the host operating system, freeing them from the intermediate layer. For a long time the industry labored at optimizing virtual machines; that was the wrong battle, and what really needed doing was removing the extra layer.
In my view, before optimizing something you should first ask whether the need for it can be removed altogether. That was the goal of my initial OS-level virtualization [work](https://apporbit.com/a-brief-history-of-containers-from-reality-to-hype/): to [remove virtual machines](https://apporbit.com/a-brief-history-of-containers-from-reality-to-hype/) by running applications with Linux containers directly on the host operating system, freeing them from the intermediate layer. For a long time the industry labored at optimizing virtual machines; that was the wrong battle, and what really needed doing was removing the extra layer.
With the connectivity of microservices, and with networking in general, history is repeating itself. The network has been going through the changes that physical servers went through a decade earlier. New layers and constructs are being introduced and baked deep into the protocol stack, and even into silicon, without serious consideration of the alternatives. Perhaps removing those extra layers altogether is the better approach.

@@ -80,7 +80,7 @@ $ kubectl apply -f istio-minimal.yaml
Next, **deploy the Bookinfo sample** without the Istio sidecar containers:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
{{< /text >}}
Now, **configure a new Gateway** that allows access to the reviews service from outside the Istio mesh, a new `VirtualService` that splits traffic evenly between v1 and v2 of the reviews service, and a set of new `DestinationRule` resources that match destination subsets to service versions:
@@ -147,7 +147,7 @@ EOF
Finally, **deploy a pod for testing** with `curl` (and without the Istio sidecar container):
{{< text bash >}}
$ kubectl apply -f samples/sleep/sleep.yaml
$ kubectl apply -f @samples/sleep/sleep.yaml@
{{< /text >}}
## Testing your deployment

@@ -97,7 +97,6 @@ The Acmeair benchmark application can be found here: [IBM's BluePerf](https:/
Both the synthetic benchmarks (fortio based) and the realistic application (BluePerf) are part of the nightly release pipeline, and you can see the results at:
* [https://fortio-daily.istio.io/](https://fortio-daily.istio.io/)
* [https://ibmcloud-perf.istio.io/regpatrol/](https://ibmcloud-perf.istio.io/regpatrol/)
This enables us to catch regressions early and track improvements over time.
@@ -114,8 +113,6 @@ The Acmeair benchmark application can be found here: [IBM's BluePerf](https:/
* See also the [Istio performance-oriented setup FAQ](https://github.com/istio/istio/wiki/Istio-Performance-oriented-setup-FAQ)
* And the work of the [Performance and Scalability Working Group](https://github.com/istio/community/blob/master/WORKING-GROUPS.md#performance-and-scalability).
Current recommendations (when using all Istio features):
* With access logging enabled (the default), allocate 1 vCPU to the sidecar per thousand peak requests per second, or 0.5 vCPU with logging disabled; `fluentd` on the node is a major contributor to that cost, since it captures and uploads logs.

@@ -154,7 +154,6 @@ Bookinfo is a heterogeneous application; several of its microservices are written in different languages
{{< text bash >}}
$ docker-compose -f @samples/bookinfo/platform/consul/bookinfo.yaml@ up -d
$ docker-compose -f samples/bookinfo/platform/consul/bookinfo.sidecars.yaml up -d
{{< /text >}}
1. Confirm that all containers are running:

@@ -196,7 +196,7 @@ EOF
* Clean up `cluster1`:
{{< text bash >}}
$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/httpbin/sleep.yaml@
$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/httpbin/httpbin.yaml@
$ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
$ kubectl delete --context=$CTX_CLUSTER1 ns foo
{{< /text >}}
@@ -205,5 +205,5 @@ EOF
{{< text bash >}}
$ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
$ kubectl delete --context=$CTX_CLUSTER1 ns bar
$ kubectl delete --context=$CTX_CLUSTER2 ns bar
{{< /text >}}

@@ -225,8 +225,8 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}

@@ -133,8 +133,8 @@ __Note__: the following examples enable [automatic sidecar injection](/zh/docs/setup/kuberne
1. Install `bookinfo` on the first cluster, `cluster-1`. Remove the `reviews-v3` deployment from that cluster so it can instead be deployed on `cluster-2`:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}

@@ -69,7 +69,7 @@ keywords: [kubernetes,multicluster]
{{< text bash >}}
$ helm template --namespace=istio-system \
--values install/kubernetes/helm/istio/values.yaml \
--values @install/kubernetes/helm/istio/values.yaml@ \
--set global.mtls.enabled=true \
--set global.enableTracing=false \
--set security.selfSigned=false \
@@ -351,7 +351,7 @@ $ kubectl label --context=$CTX_LOCAL secret iks istio/multiCluster=true -n istio
1. Deploy the `sleep` service:
{{< text bash >}}
$ kubectl create --context=$CTX_LOCAL -f samples/sleep/sleep.yaml -n sample
$ kubectl create --context=$CTX_LOCAL -f @samples/sleep/sleep.yaml@ -n sample
{{< /text >}}
1. Call the `helloworld.sample` service several times:
@@ -406,6 +406,6 @@ $ kubectl delete --context=$CTX_LOCAL -f istio-auth.yaml
$ kubectl delete --context=$CTX_LOCAL ns istio-system
$ helm delete --purge --kube-context=$CTX_LOCAL istio-init
$ kubectl delete --context=$CTX_LOCAL -f helloworld-v1.yaml -n sample
$ kubectl delete --context=$CTX_LOCAL -f samples/sleep/sleep.yaml -n sample
$ kubectl delete --context=$CTX_LOCAL -f @samples/sleep/sleep.yaml@ -n sample
$ kubectl delete --context=$CTX_LOCAL ns sample
{{< /text >}}

@@ -178,7 +178,7 @@ $ istioctl experimental convert-ingress [options]
Typical usage:
{{< text bash >}}
$ istioctl experimental convert-ingress -f samples/bookinfo/platform/kube/bookinfo-ingress.yaml
$ istioctl experimental convert-ingress -f @samples/bookinfo/platform/kube/bookinfo-ingress.yaml@
{{< /text >}}
## `istioctl experimental metrics`

@@ -4,7 +4,7 @@ description: The default monitoring metrics exported from Istio through Mixer.
weight: 50
---
This page presents details about the metrics that Istio collects when using its initial configuration. You can add and remove metrics at any time by changing the configuration. You can find them in [this file]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml) under the section with "kind: metric". It uses the [metric template](/zh/docs/reference/config/policy-and-telemetry/templates/metric/) to define these metrics.
This page presents details about the metrics that Istio collects when using its initial configuration. You can add and remove metrics at any time by changing the configuration. You can find them in [this file]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml) under the section with "kind: metric". It uses the [metric template](/zh/docs/reference/config/policy-and-telemetry/templates/metric/) to define these metrics.
We first describe the metrics, and then the labels that apply to each metric.

@@ -43,7 +43,7 @@ keywords: [kubernetes,multicluster,federation,gateway]
{{< text bash >}}
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
-f @install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml@ > $HOME/istio.yaml
-f @install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml@ > $HOME/istio.yaml
{{< /text >}}
For more details and ways to customize the parameters, see [Installing with Helm](/zh/docs/setup/kubernetes/install/helm).
@@ -61,10 +61,10 @@ keywords: [kubernetes,multicluster,federation,gateway]
--from-file=@samples/certs/cert-chain.pem@
{{< /text >}}
* Install Istio as described in the [Helm installation steps](/zh/docs/setup/kubernetes/install/helm/#安装步骤). You must use the `--values install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml` option to enable the correct multicluster settings. For example:
* Install Istio as described in the [Helm installation steps](/zh/docs/setup/kubernetes/install/helm/#安装步骤). You must use the `--values install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml` option to enable the correct multicluster settings. For example:
{{< text bash >}}
$ helm install istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml
$ helm install istio --name istio --namespace istio-system --values @install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml@
{{< /text >}}
## Configuring DNS

@@ -24,7 +24,7 @@ $ kubectl create namespace istio-system
- You installed a service account for Tiller. If you haven't, run the following command:
{{< text bash >}}
$ kubectl create -f install/kubernetes/helm/helm-service-account.yaml
$ kubectl create -f @install/kubernetes/helm/helm-service-account.yaml@
{{< /text >}}
- You installed Tiller using the service account. If you haven't, run the following command:

@@ -18,7 +18,7 @@ keywords: [kubernetes,upgrading]
Apply them with `kubectl apply`, and wait a few seconds for the CRDs to be committed to the `kube-apiserver`:
{{< text bash >}}
$ kubectl apply -f @install/kubernetes/helm/istio/templates/crds.yaml@
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
{{< /text >}}
### Control plane upgrade

@@ -249,7 +249,7 @@ keywords: [traffic-management,mirroring]
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
05:47:50.159513 IP sleep-7b9f8bfcd-2djx5.38836 > 10-233-75-11.httpbin.default.svc.cluster.local.80: Flags [P.], seq 4039989036:4039989832, ack 3139734980, win 254, options [nop,nop,TS val 77427918 ecr 76730809], length 796: HTTP: GET /headers HTTP/1.1
E..P2.@.@.X.
E..P2.X.X.X.
.K.
.K....P..W,.$.......+.....
..t.....GET /headers HTTP/1.1

@@ -330,7 +330,7 @@ keywords: [traffic-management,ingress]
1. Deploy the [Bookinfo sample application](/zh/docs/examples/bookinfo/) without a gateway:
{{< text bash >}}
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
{{< /text >}}
1. Redeploy the `Gateway` using the `bookinfo.com` host:

@@ -18,4 +18,4 @@ Mixer
- generates trace spans for each request based on *operator-supplied* configuration
- sends the generated trace spans to *operator-designated* tracing backends
The [Stackdriver tracing integration](https://cloud.google.com/istio/docs/istio-on-gke/installing#enabling_tracing) with Istio is one example of a tracing integration via Mixer.
The [Stackdriver tracing integration](https://cloud.google.com/istio/docs/istio-on-gke/installing#disabling_tracing_and_logging) with Istio is one example of a tracing integration via Mixer.

@@ -21,5 +21,5 @@ Istio enables reporting of trace spans for workload-to-workload communication within the service mesh
- `x-ot-span-context`
Header propagation may be accomplished through client libraries, such as [Zipkin](https://zipkin.io/pages/existing_instrumentations.html) or [Jaeger](https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core#b3-propagation).
Header propagation may be accomplished through client libraries, such as [Zipkin](https://zipkin.io/pages/tracers_instrumentation.html) or [Jaeger](https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core#b3-propagation).
It may also be accomplished manually, as described in the [Distributed Tracing Task](/zh/docs/tasks/telemetry/distributed-tracing/overview/#understanding-what-happened).

@@ -8,4 +8,4 @@ weight: 70
Istio's code repositories are hosted on [GitHub](https://github.com/istio). Please see our [Contribution Guidelines](https://github.com/istio/community/blob/master/CONTRIBUTING.md) to learn how to contribute to the project.
In addition to the code, there are other ways to contribute to the Istio [community](/zh/about/community/), including on our [forum](https://discuss.istio.io),
[Slack](https://slack.istio.com), and [Stack Overflow](https://stackoverflow.com/questions/tagged/istio).
[Slack](https://istio.slack.com), and [Stack Overflow](https://stackoverflow.com/questions/tagged/istio).

@@ -56,7 +56,7 @@ istio-system tcpkubeattrgenrulerule 13d
If the output shows no rules named `promhttp` or `promtcp`, the Mixer configuration that sends metric instances to the Prometheus adapter is missing. You must supply rule configuration that connects the Mixer metric instances to a Prometheus handler.
For reference, see the [default rules for Prometheus]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
For reference, see the [default rules for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
## Verify the Prometheus handler configuration exists
@@ -70,7 +70,7 @@ istio-system tcpkubeattrgenrulerule 13d
1. If the output does not show a configured Prometheus handler, you must reconfigure Mixer with an appropriate handler.
For reference, see the [default handler configuration for Prometheus]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
For reference, see the [default handler configuration for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
## Verify the Mixer metric instance configuration exists
@@ -89,7 +89,7 @@ istio-system tcpkubeattrgenrulerule 13d
1. If the output does not show configured Mixer metric instances, you must reconfigure Mixer with the appropriate instance configuration.
For reference, see the [default instance configuration for Mixer metrics]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
For reference, see the [default instance configuration for Mixer metrics]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
## Verify there are no configuration errors

@@ -52,14 +52,14 @@
{{- end -}}
{{- end -}}
{{- /* include a link to the special embedded @@ references so the links are statically checked as we build the site */ -}}
{{- /* include a dummy link to the special embedded @@ references so the links are statically checked as we build the site */ -}}
{{- $branch := .Site.Data.args.source_branch_name -}}
{{- $links := findRE "@(.*?)@" $text -}}
{{- range $link := $links -}}
{{- $target := trim $link "@" -}}
{{- if gt (len $target) 0 -}}
{{- $href := printf "https://raw.githubusercontent.com/istio/istio/%s/%s" $branch $target -}}
<a hidden data-skipendnotes="true" style="display:none" href="{{- $href -}}">Hello</a>
<a data-skipendnotes="true" style="display:none" href="{{- $href -}}">Hello</a>
{{- end -}}
{{- end -}}