mirror of https://github.com/istio/istio.io.git
Fix a bunch of capitalization and spacing errors. (#3108)
parent 0e66794cf7
commit 8a9d5cb92b
@@ -295,7 +295,6 @@ istio.io.
-jason
 json
 jwt.io
 k8s
 key.pem
 kube-api
 kube-apiserver
@@ -9,7 +9,7 @@ weight: 87

 This post shows how to direct Istio logs to [Stackdriver](https://cloud.google.com/stackdriver/)
 and export those logs to various configured sinks such as
-[BigQuery](https://cloud.google.com/bigquery/), [Google Cloud Storage(GCS)](https://cloud.google.com/storage/)
+[BigQuery](https://cloud.google.com/bigquery/), [Google Cloud Storage](https://cloud.google.com/storage/)
 or [Cloud Pub/Sub](https://cloud.google.com/pubsub/). At the end of this post you can perform
 analytics on Istio data from your favorite places such as BigQuery, GCS or Cloud Pub/Sub.
@@ -1,5 +1,5 @@
 ---
-title: Istio Soft Multi-tenancy Support
+title: Istio Soft Multi-Tenancy Support
 description: Using Kubernetes namespaces and RBAC to create an Istio soft multi-tenancy environment.
 publishdate: 2018-04-19
 subtitle: Using multiple Istio control planes and RBAC to create multi-tenancy
@@ -50,8 +50,8 @@ the Istio namespace of *istio-system1*.
 $ cat istio.yaml | sed s/istio-system/istio-system1/g > istio-system1.yaml
 {{< /text >}}

-The istio yaml file contains the details of the Istio control plane deployment, including the
-pods that make up the control plane (mixer, pilot, ingress, CA). Deploying the two Istio
+The `istio.yaml` file contains the details of the Istio control plane deployment, including the
+pods that make up the control plane (Mixer, Pilot, Ingress, Galley, CA). Deploying the two Istio
 control plane yaml files:

 {{< text bash >}}
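The per-tenant rename step in the hunk above can be sketched end to end as follows. This is a minimal, self-contained sketch: the one-line `istio.yaml` written here is a stand-in for the real control plane manifest, which would normally come from the Istio release.

```shell
# Create a stand-in for the real istio.yaml manifest (assumption: the real
# file contains literal occurrences of the istio-system namespace).
printf 'namespace: istio-system\n' > istio.yaml

# Derive a second, per-tenant control plane manifest by rewriting every
# occurrence of the istio-system namespace, as the doc describes.
sed 's/istio-system/istio-system1/g' istio.yaml > istio-system1.yaml

cat istio-system1.yaml   # → namespace: istio-system1
```

Each additional tenant repeats the same substitution with a new namespace suffix, so the control planes stay isolated by namespace.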
@@ -303,7 +303,7 @@ technology, ex. Kubernetes, rather than improvements in Istio capabilities.

 ## Issues

-* The CA (Certificate Authority) and mixer Istio pod logs from one tenant's Istio control
+* The CA (Certificate Authority) and Mixer pod logs from one tenant's Istio control
 plane (ex. *istio-system* `namespace`) contained 'info' messages from a second tenant's
 Istio control plane (ex. *istio-system1* `namespace`).
@@ -1,5 +1,5 @@
 ---
-title: Deploy a custom ingress gateway using cert-manager
+title: Deploy a Custom Ingress Gateway Using Cert-Manager
 description: Describes how to deploy a custom ingress gateway using cert-manager manually.
 subtitle: Custom ingress gateway
 publishdate: 2019-01-10
@@ -117,7 +117,7 @@ This enables us to catch regression early and track improvements over time.

 * Setup [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)

-* Split mixer check and report pods.
+* Split Mixer check and report pods.

 * High availability (HA).
@@ -129,7 +129,7 @@ Current recommendations (when using all Istio features):

 * 1 vCPU per peak thousand requests per second for the sidecar(s) with access logging (which is on by default) and 0.5 without; `fluentd` on the node is a big contributor to that cost as it captures and uploads logs.

-* Assuming typical cache hit ratio (>80%) for mixer checks: 0.5 vCPU per peak thousand requests per second for the mixer pods.
+* Assuming typical cache hit ratio (>80%) for Mixer checks: 0.5 vCPU per peak thousand requests per second for the Mixer pods.

 * Latency cost/overhead is approximately [10 milliseconds](https://fortio.istio.io/browse?url=qps_400-s1_to_s2-0.7.1-2018-04-05-22-06.json) for service-to-service (2 proxies involved, Mixer telemetry and checks) as of 0.7.1; we expect to bring this down to a low single-digit ms.
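The per-thousand-requests figures above lend themselves to a quick back-of-envelope sizing. The sketch below applies the 1 vCPU/krps sidecar figure (with access logging) and the 0.5 vCPU/krps Mixer figure to an assumed peak load of 3,000 requests per second; the peak value is illustrative, not from the source.

```shell
# Assumed peak load, in thousands of requests per second (illustrative).
peak_krps=3

# 1 vCPU per peak krps for sidecars with access logging enabled.
sidecar_vcpu=$((peak_krps * 1))

# 0.5 vCPU per peak krps for Mixer, assuming the >80% check cache hit ratio.
mixer_vcpu=$(awk "BEGIN {print $peak_krps * 0.5}")

echo "sidecars: ${sidecar_vcpu} vCPU, mixer: ${mixer_vcpu} vCPU"
# → sidecars: 3 vCPU, mixer: 1.5 vCPU
```

Turning access logging off would halve the sidecar term (0.5 vCPU/krps), per the first bullet.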
@@ -1,5 +1,5 @@
 ---
-title: Connect to an external HTTPS proxy
+title: Connect to an External HTTPS Proxy
 description: Describes how to configure Istio to let applications use an external HTTPS proxy.
 weight: 60
 keywords: [traffic-management,egress]
@@ -57,7 +57,7 @@ For example, run the following command on a macOS or Linux system:
 > {{< warning_icon >}} The Consul install only configures Istio Pilot. To use Istio Mixer (policy enforcement and telemetry reporting) or Istio Galley, further installation steps
 > will be necessary. Those steps are beyond the scope of this guide.

-1. Confirm that all docker containers are running:
+1. Confirm that all Docker containers are running:

 {{< text bash >}}
 $ docker ps -a
@@ -92,7 +92,7 @@ $ docker-compose -f <your-app-spec>.yaml up -d

 ## Uninstalling

-Uninstall Istio core components by removing the docker containers:
+Uninstall Istio core components by removing the Docker containers:

 {{< text bash >}}
 $ docker-compose -f install/consul/istio.yaml down
@@ -1,9 +1,9 @@
 ---
 title: Multicluster Installation
-description: Configure an Istio mesh across multiple kubernetes clusters.
+description: Configure an Istio mesh spanning multiple Kubernetes clusters.
 weight: 85
 type: section-index
-keywords: [kubernetes,multicluster,federation]
+keywords: [kubernetes,multicluster]
 ---
 Refer to the [multicluster service mesh](/docs/concepts/multicluster-deployments/) concept documentation
 for more information.
@@ -1,5 +1,5 @@
 ---
-title: Gateway connectivity
+title: Gateway Connectivity
 description: Install an Istio mesh across multiple Kubernetes clusters using Istio Gateway to reach remote pods.
 weight: 2
 keywords: [kubernetes,multicluster,federation,gateway]
@@ -1,5 +1,5 @@
 ---
-title: VPN connectivity
+title: VPN Connectivity
 description: Install an Istio mesh across multiple Kubernetes clusters with direct network access to remote pods.
 weight: 5
 keywords: [kubernetes,multicluster,federation,vpn]
@@ -6,8 +6,8 @@ skip_seealso: true
 keywords: [platform-setup,kubernetes,docker-for-desktop]
 ---

-If you want to run istio under docker for desktop's built-in Kubernetes, you may need to increase docker's memory limit
-under the *Advanced* pane of docker's preferences. Pilot by default requests `2048Mi` of memory, which is docker's
+If you want to run Istio under Docker for Desktop's built-in Kubernetes, you may need to increase Docker's memory limit
+under the *Advanced* pane of Docker's preferences. Pilot by default requests `2048Mi` of memory, which is Docker's
 default limit.

 {{< image width="60%" link="./dockerprefs.png" caption="Docker Preferences" >}}
@@ -1,5 +1,5 @@
 ---
-title: Installing the sidecar
+title: Installing the Sidecar
 description: Instructions for installing the Istio sidecar in application pods automatically using the sidecar injector webhook or manually using istioctl CLI.
 weight: 45
 keywords: [kubernetes,sidecar,sidecar-injection]
@@ -245,7 +245,7 @@ subnet.

 ## Cleanup

-* Remove the mixer configuration:
+* Remove the Mixer configuration:

 {{< text bash >}}
 $ kubectl delete -f checkip-rule.yaml
@@ -17,7 +17,7 @@ through secret-mount files, this approach has the following side effects:

 These issues are addressed in Istio 1.1 through the support of SDS to provision identities.

-Workload requests key/certificate from SDS server(node agent, which runs as per-node daemon set) using k8s service account JWT through
+The workload requests its key/certificate from the SDS server (node agent, which runs as a per-node daemon set) using a Kubernetes service account JWT through
 the SDS API, the node agent generates a private key and sends a CSR to Citadel, Citadel verifies the JWT and signs the certificate,
 and the key/certificate is eventually sent back to the workload sidecar through the SDS server (node agent).
 Since keys/certificates generated through SDS are stored in the workload sidecar's memory, this is more secure than file mount;
@@ -1,5 +1,5 @@
 ---
-title: Authorization for HTTP services
+title: Authorization for HTTP Services
 description: Shows how to set up role-based access control for HTTP services.
 weight: 40
 keywords: [security,access-control,rbac,authorization]
@@ -1,5 +1,5 @@
 ---
-title: Authorization for TCP services
+title: Authorization for TCP Services
 description: Shows how to set up role-based access control for TCP services.
 weight: 40
 keywords: [security,access-control,rbac,tcp,authorization]
@@ -1,5 +1,5 @@
 ---
-title: Citadel health checking
+title: Citadel Health Checking
 description: Shows how to enable Citadel health checking with Kubernetes.
 weight: 70
 keywords: [security,health-check]
@@ -16,7 +16,7 @@ Citadel contains a _prober client_ module that periodically checks Citadel's sta
 status of the gRPC server).
 If Citadel is healthy, the _prober client_ updates the _modification time_ of the _health status file_
 (the file is always empty). Otherwise, it does nothing. Citadel relies on a
-[K8s liveness and readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
+[Kubernetes liveness and readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
 with command line to check the _modification time_ of the _health status file_ on the pod.
 If the file is not updated for a period, the probe will be triggered and Kubelet will restart the Citadel container.
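The mtime-based freshness check described in the hunk above can be sketched in shell. The file path and the 5-second window below are illustrative assumptions, not Citadel's actual values; the real probe runs an equivalent command inside the pod.

```shell
# Stand-in for the prober client: touching the health status file
# refreshes its modification time (the file's contents stay empty).
touch /tmp/ca.liveness

# Stand-in for the liveness probe: compare the file's mtime to "now".
now=$(date +%s)
mtime=$(stat -c %Y /tmp/ca.liveness)   # GNU stat; BSD/macOS uses `stat -f %m`
age=$(( now - mtime ))

# If the prober stopped refreshing the file, age grows past the threshold
# and the probe fails, prompting Kubelet to restart the container.
[ "$age" -le 5 ] && echo healthy || echo unhealthy
```

Here the file was just touched, so the check prints `healthy`; in the real deployment the threshold must exceed the prober's refresh interval to avoid spurious restarts.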
@@ -1,5 +1,5 @@
 ---
-title: Plugging in external CA key and certificate
+title: Plugging in External CA Key and Certificate
 description: Shows how operators can configure Citadel with existing root certificate, signing certificate and key.
 weight: 60
 keywords: [security,certificates]
@@ -1,5 +1,5 @@
 ---
-title: Groups-based authorization and authorization for list-typed claims
+title: Groups-Based Authorization and Authorization for List-Typed Claims
 description: Tutorial on how to configure groups-based authorization and the authorization of list-typed claims in Istio.
 weight: 10
 keywords: [security,authorization]
@@ -8,6 +8,6 @@ Istio uses the service registry to generate [Envoy](#envoy) configuration.

 Istio does not provide [service discovery](https://en.wikipedia.org/wiki/Service_discovery),
 although most services are automatically added to the registry by Pilot
-adapters that reflect the discovered services of the underlying platform (k8s/consul/plain DNS).
+adapters that reflect the discovered services of the underlying platform (Kubernetes, Consul, plain DNS).
 Additional services can also be registered manually using a
 [`ServiceEntry`](/docs/concepts/traffic-management/#service-entries) configuration.
@@ -44,7 +44,7 @@ This may be caused by a known [Docker issue](https://github.com/docker/for-mac/i
 containers may skew significantly from the time on the host machine. If this is the case,
 when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early.

-You can also confirm this problem by comparing the date inside a docker container to outside:
+You can also confirm this problem by comparing the date inside a Docker container to outside:

 {{< text bash >}}
 $ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest
@@ -215,7 +215,7 @@ used the `istio-galley-configuration` `configmap` and root certificate
 mounted from `istio.istio-galley-service-account` secret in the
 `istio-system` namespace.

-1. Verify the `istio-galley` pods(s) are running:
+1. Verify the `istio-galley` pod(s) are running:

 {{< text bash >}}
 $ kubectl -n istio-system get pod -listio=galley
@@ -289,7 +289,7 @@ configuration cannot be created and updated. In such cases you’ll see
 an error about `no such host` (Kubernetes 1.9) or `no endpoints
 available` (>=1.10).

-Verify the `istio-galley` pods(s) are running and endpoints are ready.
+Verify the `istio-galley` pod(s) are running and endpoints are ready.

 {{< text bash >}}
 $ kubectl -n istio-system get pod -listio=galley
@@ -78,7 +78,7 @@ $ mixs server [flags]
 | `--caCertFile <string>` | | Location of the certificate file for the root certificate authority (default `/etc/istio/certs/root\-cert.pem`) |
 | `--certFile <string>` | | Location of the certificate file for mutual TLS (default `/etc/istio/certs/cert-chain.pem`) |
 | `--configDefaultNamespace <string>` | | Namespace used to store mesh-wide configuration (default `istio-system`) |
-| `--configStoreURL <string>` | | URL of the config store. Use k8s://path\_to\_kubeconfig or fs:// for the file system; for `MCP/Galley`, use `mcp://<address>`. If path\_to\_kubeconfig is empty, the in-cluster kubeconfig is used (default `''`) |
+| `--configStoreURL <string>` | | URL of the config store. Use `k8s://path\_to\_kubeconfig` or `fs://` for the file system; for `MCP/Galley`, use `mcp://<address>`. If path\_to\_kubeconfig is empty, the in-cluster kubeconfig is used (default `''`) |
 | `--ctrlz_address <string>` | | IP address to listen on for the ControlZ introspection facility. Use `\*` to mean all addresses (default `127.0.0.1`) |
 | `--ctrlz_port <uint16>` | | IP port to use for the ControlZ introspection facility (default `9876`) |
 | `--keyFile <string>` | | Location of the key file for mutual TLS (default `/etc/istio/certs/key.pem`) |
@@ -1,6 +1,6 @@
 ---
 title: Service Registry
 ---
-Istio maintains an internal service registry containing the set of [services](#%E6%9C%8D%E5%8A%A1) running in the service mesh and their corresponding [service endpoints](#%E6%9C%8D%E5%8A%A1-endpoint). Istio uses the service registry to generate [envoy](#envoy) configuration.
-Istio does not provide [service discovery](https://en.wikipedia.org/wiki/Service_discovery), although most services are automatically added to the registry by Pilot adapters that reflect the discovered services of the underlying platform (k8s/consul/plain DNS). Additional services can also be registered manually using a [`ServiceEntry`](/zh/docs/concepts/traffic-management/#service-entry) configuration.
+Istio maintains an internal service registry containing the set of [services](#%E6%9C%8D%E5%8A%A1) running in the service mesh and their corresponding [service endpoints](#%E6%9C%8D%E5%8A%A1-endpoint). Istio uses the service registry to generate [Envoy](#envoy) configuration.
+Istio does not provide [service discovery](https://en.wikipedia.org/wiki/Service_discovery), although most services are automatically added to the registry by Pilot adapters that reflect the discovered services of the underlying platform (Kubernetes, Consul, plain DNS). Additional services can also be registered manually using a [`ServiceEntry`](/zh/docs/concepts/traffic-management/#service-entry) configuration.