mirror of https://github.com/istio/istio.io.git
Spelling improvements (#2037)
Remove a bunch of entries that shouldn't have been in the spelling dictionary and correct content accordingly. I'm disabling the Chinese spell checking for now, since I'm not able to fix the spelling errors that emerged there. Once this PR is in, I'll file an issue to get those spelling errors addressed and checking re-enabled.
parent 719aaf17f4
commit 1c300c99bd
.spelling (36 changes)
@@ -51,8 +51,6 @@ Chrony
Circonus
CloudWatch
cnn.com
Cmd
Config
ConfigMap
ControlZ
CRD
@@ -98,19 +96,14 @@ Kops
Kuat
Kube
Kubecon
kubectl
Kubelet
Kubernetes
L3-4
L4-L6
LabelDescription
LibreSSL
LoadBalancer
LoadBalancers
Lyft
macOS
Manolache
Memquota
MeshPolicy
Mesos
Minikube
@@ -137,7 +130,6 @@ Rajagopalan
RawVM
Redis
Redis-based
Redisquota
Registrator
Reviewer1
Reviewer2
@@ -202,17 +194,13 @@ backend
backends
base64
bind-productpager-viewer
bookinfo
booksale
bookstore.default.svc.cluster.local
boolean
bt
camelCase
canaried
canarying
cluster.local
colocated
config
configmap
configmaps
containerID
@@ -220,13 +208,10 @@ coreos
dataset
datastore
debian
default.svc.cluster.local
details.default.svc.cluster.local
dev
docker.io
e.g.
eBPF
egressgateway
enablement
endUser-to-Service
env
@@ -265,12 +250,9 @@ http2
httpReqTimeout
httpbin
httpbin.org
httpbin.yaml
https
hyperkube
i.e.
image.html
ingressgateway
initializer
initializers
int64
@@ -298,27 +280,22 @@ kube-public
kube-system
kubeconfig
kubelet
kubernetes
kubernetes.default
learnings
lifecycle
liveness
logInfo
mTLS
machineSetup
memcached
memquota
mesos-dns
metadata
metadata.initializers.pending
methodName
microservice
microservices
middleboxes
minikube
misconfigured
mongodb
mtls_excluded_services
multicloud
multicluster
mutatingwebhookconfiguration
@@ -356,10 +333,6 @@ preliminary.istio.io
preliminary.istio.io.
prepends
prober
productpage
productpage.ns.svc.cluster.local
products.default.svc.cluster.local
prometheus
proto
protobuf
protos
@@ -375,13 +348,11 @@ reachability
rearchitect
readinessProbe
redis
redis-master-2353460263-1ecey
referer
registrator
reimplemented
reinject
repo
requestcontext
roadmap
roleRef
rollout
@@ -402,17 +373,13 @@ sharded
sharding
sidecar.env
sinkInfo
sleep.legacy
spiffe
stackdriver
statsd
stdout
struct
subdomain
subdomains
substring
svc
svc.cluster.local
svc.com
svg
tcp
@@ -466,8 +433,9 @@ embeddable
p99
vCPU
AES-NI
Stackdriver
Statsd

qcc
- search.md
searchresults
gcse
@@ -58,8 +58,8 @@ represents.

 |Do | Don't
 |----------------------------|------
-|The `kubectl run` command creates a `Deployment`.|The "kubectl run" command creates a `Deployment`.
-|For declarative management, use `kubectl apply`.|For declarative management, use "kubectl apply".
+|The `foo run` command creates a `Deployment`.|The "foo run" command creates a `Deployment`.
+|For declarative management, use `foo apply`.|For declarative management, use "foo apply".

 ### Use `code` style for object field names
@@ -105,14 +105,14 @@ It is not a proper noun.

 |Do | Don't
 |----------------|------
-| load balancing | load-balancing
-| multicluster | multi-cluster
-| add-on | add-on
-| service mesh | Service Mesh
-| sidecar | side-car, Sidecar
-| Kubernetes | kubernetes, k8s
-| Bookinfo | BookInfo, bookinfo
-| Mixer | mixer
+| load balancing | `load-balancing`
+| multicluster | `multi-cluster`
+| add-on | `add-on`
+| service mesh | `Service Mesh`
+| sidecar | `side-car`, `Sidecar`
+| Kubernetes | `kubernetes`, `k8s`
+| Bookinfo | `BookInfo`, `bookinfo`
+| Mixer | `mixer`

 ## Best practices
@@ -394,7 +394,7 @@ script `scripts/grab_reference_docs.sh` in the documentation repo.
 ### Dynamic content

 You can dynamically pull in an external file and display its content as a preformatted block. This is handy to display a
-config file or a test file. To do so, you use a statement such as:
+configuration file or a test file. To do so, you use a statement such as:

 {{< text markdown >}}
 {{</* text_dynamic url="https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/policy/mixer-rule-ratings-ratelimit.yaml" syntax="yaml" */>}}
@@ -78,7 +78,7 @@ Below is our list of existing features and their current phases. This informatio
 | [Authentication policy](/docs/concepts/security/#authentication-policies) | Alpha
 | [End User (JWT) Authentication](/docs/concepts/security/#authentication) | Alpha
 | [VM: Service Credential Distribution](/docs/concepts/security/#key-management) | Beta
-| [Incremental mTLS](/docs/tasks/security/mtls-migration) | Beta
+| [Mutual TLS Migration](/docs/tasks/security/mtls-migration) | Beta
 | [OPA Checker](/docs/reference/config/policy-and-telemetry/adapters/opa/) | Alpha
 | [Authorization (RBAC)](/docs/concepts/security/#authorization) | Alpha
@@ -97,7 +97,7 @@ Below is our list of existing features and their current phases. This informatio
 | VM: Ansible Envoy Installation, Interception and Registration | Alpha
 | [Pilot Integration into Consul](/docs/setup/consul/quick-start/) | Alpha
 | [Pilot Integration into Cloud Foundry Service Discovery](/docs/setup/consul/quick-start/) | Alpha
-| [Basic Config Resource Validation](https://github.com/istio/istio/issues/1894) | Alpha
+| [Basic Configuration Resource Validation](https://github.com/istio/istio/issues/1894) | Alpha
 | [Mixer Telemetry Collection (Tracing, Logging, Monitoring)](/help/faq/mixer/#mixer-self-monitoring) | Alpha
 | [Custom Mixer Build Model](https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide) | Alpha
 | [Out of Process Mixer Adapters](https://github.com/istio/istio/wiki/Out-Of-Process-gRPC-Adapter-Dev-Guide) | Alpha
@@ -8,7 +8,7 @@ page_icon: /img/notes.svg

 ## General

-- **Updated Config Model**. Istio now uses the Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+- **Updated Configuration Model**. Istio now uses the Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 model to describe and store its configuration. When running in Kubernetes, configuration can now be optionally managed using the `kubectl`
 command.
@@ -68,8 +68,8 @@ to write adapters.

 - **Improved Mixer Build Model**. It’s now easier to build a Mixer binary that includes custom adapters.

-- **Mixer Adapter Updates**. The built-in adapters have all been rewritten to fit into the new adapter model. The stackdriver adapter has been added for this
-release. The experimental redis quota adapter has been removed in the 0.2 release, but is expected to come back in production quality for the 0.3 release.
+- **Mixer Adapter Updates**. The built-in adapters have all been rewritten to fit into the new adapter model. The `stackdriver` adapter has been added for this
+release. The experimental `redisquota` adapter has been removed in the 0.2 release, but is expected to come back in production quality for the 0.3 release.

 - **Mixer Call Tracing**. Calls between Envoy and Mixer can now be traced and analyzed in the Zipkin dashboard.
@@ -35,7 +35,7 @@ significant drop in average latency for authorization checks.
 - **Improved list Adapter**. The Mixer 'list' adapter now supports regular expression matching. See the adapter's
 [configuration options](/docs/reference/config/policy-and-telemetry/adapters/list/) for details.

-- **Config Validation**. Mixer does more extensive validation of configuration state in order to catch problems earlier.
+- **Configuration Validation**. Mixer does more extensive validation of configuration state in order to catch problems earlier.
 We expect to invest more in this area in coming releases.

 If you're into the nitty-gritty details, you can see our more detailed low-level
@@ -16,7 +16,7 @@ possible for Pilot to discover CF services and service instances.

 - **Pilot Metrics**. Pilot now collects metrics for diagnostics.

-- **Helm Charts**. We now provide helm charts to install Istio.
+- **Helm Charts**. We now provide Helm charts to install Istio.

 - **Enhanced Attribute Expressions**. Mixer's expression language gained a few new functions
 to make it easier to write policy rules. [Learn more](/docs/reference/config/policy-and-telemetry/expression-language/)
@@ -9,7 +9,7 @@ updated features detailed below.

 ## Networking

-- **Custom Envoy Config**. Pilot now supports ferrying custom Envoy config to the
+- **Custom Envoy Configuration**. Pilot now supports ferrying custom Envoy configuration to the
 proxy. [Learn more](https://github.com/mandarjog/istioluawebhook)

 ## Mixer adapters
@@ -17,7 +17,7 @@ proxy. [Learn more](https://github.com/mandarjog/istioluawebhook)
 - **SolarWinds**. Mixer can now interface to AppOptics and Papertrail.
 [Learn more](/docs/reference/config/policy-and-telemetry/adapters/solarwinds/)

-- **Redisquota**. Mixer now supports a Redis-based adapter for rate limit tracking.
+- **Redis Quota**. Mixer now supports a Redis-based adapter for rate limit tracking.
 [Learn more](/docs/reference/config/policy-and-telemetry/adapters/redisquota/)

 - **Datadog**. Mixer now provides an adapter to deliver metric data to a Datadog agent.
@@ -16,5 +16,5 @@ change in 0.8 and beyond.

 Known Issues:

-Our [helm chart](/docs/setup/kubernetes/helm-install/)
+Our [Helm chart](/docs/setup/kubernetes/helm-install/)
 currently requires some workaround to apply the chart correctly, see [4701](https://github.com/istio/istio/issues/4701) for details.
@@ -8,7 +8,10 @@ This is a major release for Istio on the road to 1.0. There are a great many new

 ## Networking

-- **Revamped Traffic Management Model**. We're finally ready to take the wraps off our [new traffic management APIs](/blog/2018/v1alpha3-routing/). We believe this new model is easier to understand while covering more real world deployment [use-cases](/docs/tasks/traffic-management/). For folks upgrading from earlier releases there is a [migration guide](/docs/setup/kubernetes/upgrading-istio/) and a conversion tool built into `istioctl` to help convert your config from the old model.
+- **Revamped Traffic Management Model**. We're finally ready to take the wraps off our
+[new traffic management APIs](/blog/2018/v1alpha3-routing/). We believe this new model is easier to understand while covering more real world
+deployment [use-cases](/docs/tasks/traffic-management/). For folks upgrading from earlier releases there is a
+[migration guide](/docs/setup/kubernetes/upgrading-istio/) and a conversion tool built into `istioctl` to help convert your configuration from the old model.

 - **Streaming Envoy configuration**. By default Pilot now streams configuration to Envoy using its [ADS API](https://github.com/envoyproxy/data-plane-api/blob/master/XDS_PROTOCOL.md). This new approach increases effective scalability, reduces rollout delay and should eliminate spurious 404 errors.
@@ -88,11 +88,15 @@ While it’s busy cutting down latency, Mixer is also inherently cutting down th

 We have opportunities ahead to continue improving the system in many ways.

-### Config canaries
+### Configuration canaries

-Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time (yeah, that would be a bad day). To prevent this from happening, config changes can be canaried to a small set of Mixer instances, and then more broadly rolled out.
+Mixer is highly scaled so it is generally resistant to individual instance failures. However, Mixer is still susceptible to cascading
+failures in the case when a poison configuration is deployed which causes all Mixer instances to crash basically at the same time
+(yeah, that would be a bad day). To prevent this from happening, configuration changes can be canaried to a small set of Mixer instances,
+and then more broadly rolled out.

-Mixer doesn’t yet do canarying of config changes, but we expect this to come online as part of Istio’s ongoing work on reliable config distribution.
+Mixer doesn’t yet do canarying of configuration changes, but we expect this to come online as part of Istio’s ongoing work on reliable
+configuration distribution.

 ### Cache tuning
@@ -74,7 +74,7 @@ Oops... Instead of the book details we have the _Error fetching product details_
 caption="The Error Fetching Product Details Message"
 >}}

-The good news is that our application did not crash. With a good microservice design, we do not have **failure propagation**. In our case, the failing _details_ microservice does not cause the _productpage_ microservice to fail. Most of the functionality of the application is still provided, despite the failure in the _details_ microservice. We have **graceful service degradation**: as you can see, the reviews and the ratings are displayed correctly, and the application is still useful.
+The good news is that our application did not crash. With a good microservice design, we do not have **failure propagation**. In our case, the failing _details_ microservice does not cause the `productpage` microservice to fail. Most of the functionality of the application is still provided, despite the failure in the _details_ microservice. We have **graceful service degradation**: as you can see, the reviews and the ratings are displayed correctly, and the application is still useful.

 So what might have gone wrong? Ah... The answer is that I forgot to enable traffic from inside the mesh to an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies ([Envoy proxies](https://www.envoyproxy.io)) **block all the traffic to destinations outside the cluster**. To enable such traffic, we must define an [egress rule](https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#EgressRule).
@@ -58,7 +58,7 @@ performed with the credentials of the `admin` user, created by default by
 $ curl -s {{< github_file >}}/samples/bookinfo/src/mysql/mysqldb-init.sql | mysql -u root -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT
 {{< /text >}}

-1. Create a user with the name _bookinfo_ and grant it _SELECT_ privilege on the `test.ratings` table:
+1. Create a user with the name `bookinfo` and grant it _SELECT_ privilege on the `test.ratings` table:

 {{< text bash >}}
 $ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e "CREATE USER 'bookinfo' IDENTIFIED BY '<password you choose>'; GRANT SELECT ON test.ratings to 'bookinfo';"
@@ -73,8 +73,8 @@ performed with the credentials of the `admin` user, created by default by
 {{< /text >}}

 Here you apply the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege). This
-means that you do not use your _admin_ user in the Bookinfo application. Instead, you create a special user for the
-Bookinfo application , _bookinfo_, with minimal privileges. In this case, the _bookinfo_ user only has the `SELECT`
+means that you do not use your `admin` user in the Bookinfo application. Instead, you create a special user for the
+Bookinfo application , `bookinfo`, with minimal privileges. In this case, the _bookinfo_ user only has the `SELECT`
 privilege on a single table.

 After running the command to create the user, you may want to clean your bash history by checking the number of the last
@@ -141,8 +141,8 @@ service:
 +----------+--------+
 {{< /text >}}

-You used the _admin_ user (and _root_ for the local database) in the last command since the _bookinfo_ user does not
-have the _UPDATE_ privilege on the `test.ratings` table.
+You used the `admin` user (and `root` for the local database) in the last command since the `bookinfo` user does not
+have the `UPDATE` privilege on the `test.ratings` table.

 Now you are ready to deploy a version of the Bookinfo application that will use your database.
@@ -151,9 +151,9 @@ Now you are ready to deploy a version of the Bookinfo application that will use

 To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with [Istio installed](/docs/setup/kubernetes/quick-start/#installation-steps). Then you deploy the
 [Istio Bookinfo sample application](/docs/examples/bookinfo/) and [apply the default destination rules](/docs/examples/bookinfo/#apply-default-destination-rules).

-This application uses the _ratings_ microservice to fetch
+This application uses the `ratings` microservice to fetch
 book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions
-of the _ratings_ microservice. Some use [MongoDB](https://www.mongodb.com), others use [MySQL](https://www.mysql.com)
+of the `ratings` microservice. Some use [MongoDB](https://www.mongodb.com), others use [MySQL](https://www.mysql.com)
 as their database.

 The example commands in this blog post work with Istio 0.8+, with or without
@@ -364,7 +364,7 @@ which could be beneficial if the consuming applications expect to use that domai

 ## Cleanup

-1. Drop the _test_ database and the _bookinfo_ user:
+1. Drop the `test` database and the `bookinfo` user:

 {{< text bash >}}
 $ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host $MYSQL_DB_HOST --port $MYSQL_DB_PORT -e "drop database test; drop user bookinfo;"
@@ -29,7 +29,7 @@ metrics (coming..) among others. Following is a diagram of the pipeline:

 {{< image width="75%" ratio="75%"
 link="./istio-analytics-using-stackdriver.png"
-caption="Diagram of exporting logs from Istio to StackDriver for analysis" >}}
+caption="Diagram of exporting logs from Istio to Stackdriver for analysis" >}}

 Istio supports exporting logs to Stackdriver which can in turn be configured to export
 logs to your favorite sink like BigQuery, Pub/Sub or GCS. Please follow the steps
@@ -40,7 +40,7 @@ in Istio.

 Common setup for all sinks:

-1. Enable [StackDriver Monitoring API](https://cloud.google.com/monitoring/api/enable-api) for the project.
+1. Enable [Stackdriver Monitoring API](https://cloud.google.com/monitoring/api/enable-api) for the project.
 1. Make sure `principalEmail` that would be setting up the sink has write access to the project and Logging Admin role permissions.
 1. Make sure the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set. Please follow instructions [here](https://cloud.google.com/docs/authentication/getting-started) to set it up.
@@ -194,10 +194,10 @@ a Stackdriver handler is described [here](/docs/reference/config/policy-and-tele
 ## Understanding what happened

 `Stackdriver.yaml` file above configured Istio to send accesslogs to
-StackDriver and then added a sink configuration where these logs could be
+Stackdriver and then added a sink configuration where these logs could be
 exported. In detail as follows:

-1. Added a handler of kind stackdriver
+1. Added a handler of kind `stackdriver`

 {{< text yaml >}}
 apiVersion: "config.istio.io/v1alpha2"
@@ -79,7 +79,7 @@ through IP. You can still use Istio authorization to control which IP addresses

 The [authorization task](/docs/tasks/security/role-based-access-control/) shows you how to
 use Istio's authorization feature to control namespace level and service level access using the
-[BookInfo application](/docs/examples/bookinfo/). In this section, you'll see more examples on how to achieve
+[Bookinfo application](/docs/examples/bookinfo/). In this section, you'll see more examples on how to achieve
 micro-segmentation with Istio authorization.

 ### Namespace level segmentation via RBAC + conditions
@@ -289,7 +289,7 @@ Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list p
 {{< /text >}}

 If the [add-on tools](/docs/tasks/telemetry/), example
-[prometheus](/docs/tasks/telemetry//querying-metrics/), are deployed
+[Prometheus](/docs/tasks/telemetry/querying-metrics/), are deployed
 (also limited by an Istio `namespace`) the statistical results returned would represent only
 that traffic seen from that tenant's application namespace.
@@ -349,12 +349,12 @@ Mesh-internal entries are like all other internal services but are used to expli
 to the mesh. They can be used to add services as part of expanding the service mesh to include unmanaged infrastructure
 (e.g., VMs added to a Kubernetes-based service mesh).
 Mesh-external entries represent services external to the mesh.
-For them, mTLS authentication is disabled and policy enforcement is performed on the client-side,
+For them, mutual TLS authentication is disabled and policy enforcement is performed on the client-side,
 instead of on the usual server-side for internal service requests.

 Because a `ServiceEntry` configuration simply adds a destination to the internal service registry, it can be
 used in conjunction with a `VirtualService` and/or `DestinationRule`, just like any other service in the registry.
-The following `DestinationRule`, for example, can be used to initiate mTLS connections for an external service:
+The following `DestinationRule`, for example, can be used to initiate mutual TLS connections for an external service:

 {{< text yaml >}}
 apiVersion: networking.istio.io/v1alpha3
@@ -56,7 +56,7 @@ Fortio is a fast, small, reusable, embeddable go library as well as a command li

 Fortio is also 100% open-source and with no external dependencies beside go and gRPC so you can reproduce all our results easily and add your own variants or scenarios you are interested in exploring.

-Here is an example of scenario (one out of the 8 scenarios we run for every build) result graphing the latency distribution for istio-0.7.1 at 400 Query-Per-Second (qps) between 2 services inside the mesh (with mTLS, Mixer Checks and Telemetry):
+Here is an example of scenario (one out of the 8 scenarios we run for every build) result graphing the latency distribution for istio-0.7.1 at 400 Query-Per-Second (qps) between 2 services inside the mesh (with mutual TLS, Mixer policy checks and telemetry collection):

 <iframe src="https://fortio.istio.io/browse?url=qps_400-s1_to_s2-0.7.1-2018-04-05-22-06.json&xMax=105&yLog=true" width="100%" height="1024" scrolling="no" frameborder="0"></iframe>
@@ -132,7 +132,7 @@ Current recommendations (when using all Istio features):

 * Latency cost/overhead is approximately [10 millisecond](https://fortio.istio.io/browse?url=qps_400-s1_to_s2-0.7.1-2018-04-05-22-06.json) for service-to-service (2 proxies involved, mixer telemetry and checks) as of 0.7.1, we expect to bring this down to a low single digit ms.

-* mTLS costs are negligible on AES-NI capable hardware in terms of both CPU and latency.
+* Mutual TLS costs are negligible on AES-NI capable hardware in terms of both CPU and latency.

 We plan on providing more granular guidance for customers adopting Istio "A la carte".
@@ -174,7 +174,7 @@ Controlling the policy and telemetry features involves configuring three types o

 * Configuring a set of *handlers*, which determine the set of adapters that
 are being used and how they operate. Providing a `statsd` adapter with the IP
-address for a statsd backend is an example of handler configuration.
+address for a Statsd backend is an example of handler configuration.

 * Configuring a set of *instances*, which describe how to map request attributes into adapter inputs.
 Instances represent a chunk of data that one or more adapters will operate
@@ -277,7 +277,7 @@ templates and their specific configuration formats](/docs/reference/config/polic
 ### Rules

 Rules specify when a particular handler is invoked with a specific instance.
-Consider an example where you want to deliver the `requestduration` metric to the prometheus handler if
+Consider an example where you want to deliver the `requestduration` metric to the `prometheus` handler if
 the destination service is `service1` and the `x-user` request header has a specific value.

 {{< text yaml >}}
@@ -224,7 +224,7 @@ Istio provides two types of authentication:

 - Transport authentication, also known as service-to-service authentication:
 verifies the direct client making the connection. Istio offers mutual TLS
-(mTLS) as a full stack solution for transport authentication. You can
+as a full stack solution for transport authentication. You can
 easily turn on this feature without requiring service code changes. This
 solution:
@@ -459,7 +459,7 @@ recommendations to avoid disruption when updating your authentication policies:
 - To enable or disable mutual TLS: Use a temporary policy with a `mode:` key
 and a `PERMISSIVE` value. This configures receiving services to accept both
 types of traffic: plain text and TLS. Thus, no request is dropped. Once all
-clients switch to the expected protocol, with or without mTLS, you can
+clients switch to the expected protocol, with or without mutual TLS, you can
 replace the `PERMISSIVE` policy with the final policy. For more information,
 visit the [Mutual TLS Migration tutorial](/docs/tasks/security/mtls-migration).
@@ -15,16 +15,16 @@ pages, and so on), and a few book reviews.

 The Bookinfo application is broken into four separate microservices:

-* *productpage*. The productpage microservice calls the *details* and *reviews* microservices to populate the page.
-* *details*. The details microservice contains book information.
-* *reviews*. The reviews microservice contains book reviews. It also calls the *ratings* microservice.
-* *ratings*. The ratings microservice contains book ranking information that accompanies a book review.
+* `productpage`. The `productpage` microservice calls the `details` and `reviews` microservices to populate the page.
+* `details`. The `details` microservice contains book information.
+* `reviews`. The `reviews` microservice contains book reviews. It also calls the `ratings` microservice.
+* `ratings`. The `ratings` microservice contains book ranking information that accompanies a book review.

-There are 3 versions of the reviews microservice:
+There are 3 versions of the `reviews` microservice:

-* Version v1 doesn't call the ratings service.
-* Version v2 calls the ratings service, and displays each rating as 1 to 5 black stars.
-* Version v3 calls the ratings service, and displays each rating as 1 to 5 red stars.
+* Version v1 doesn't call the `ratings` service.
+* Version v2 calls the `ratings` service, and displays each rating as 1 to 5 black stars.
+* Version v3 calls the `ratings` service, and displays each rating as 1 to 5 red stars.

 The end-to-end architecture of the application is shown below.
@@ -36,7 +36,7 @@ The end-to-end architecture of the application is shown below.

 This application is polyglot, i.e., the microservices are written in different languages.
 It’s worth noting that these services have no dependencies on Istio, but make an interesting
 service mesh example, particularly because of the multitude of services, languages and versions
-for the reviews service.
+for the `reviews` service.

 ## Before you begin
@@ -203,16 +203,16 @@ $ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

 You can also point your browser to `http://$GATEWAY_URL/productpage`
 to view the Bookinfo web page. If you refresh the page several times, you should
-see different versions of reviews shown in productpage, presented in a round robin style (red
+see different versions of reviews shown in `productpage`, presented in a round robin style (red
 stars, black stars, no stars), since we haven't yet used Istio to control the
 version routing.

 ## Apply default destination rules

-Before you can use Istio to control the bookinfo version routing, you need to define the available
+Before you can use Istio to control the Bookinfo version routing, you need to define the available
 versions, called *subsets*, in destination rules.

-Run the following command to create default destination rules for the bookinfo services:
+Run the following command to create default destination rules for the Bookinfo services:

 * If you did **not** enable mutual TLS, execute this command:
@@ -74,7 +74,7 @@ Adding `"--http_port=8081"` in the ESP deployment arguments and expose the HTTP
 name: http
 {{< /text >}}

-1. Turn on mTLS in Istio by using the following command:
+1. Turn on mutual TLS in Istio by using the following command:

 {{< text bash >}}
 $ kubectl edit cm istio -n istio-system
@@ -112,13 +112,13 @@ Note that the 'mysqldb' virtual machine does not need and should not have specia
 ## Using the mysql service

-The ratings service in bookinfo will use the DB on the machine. To verify that it works, create version 2 of the ratings service that uses the mysql db on the VM. Then specify route rules that force the review service to use the ratings version 2.
+The ratings service in Bookinfo will use the DB on the machine. To verify that it works, create version 2 of the ratings service that uses the mysql db on the VM. Then specify route rules that force the review service to use the ratings version 2.

 {{< text bash >}}
 $ istioctl kube-inject -n bookinfo -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml@ | kubectl apply -n bookinfo -f -
 {{< /text >}}

-Create route rules that will force bookinfo to use the ratings back end:
+Create route rules that will force Bookinfo to use the ratings back end:

 {{< text bash >}}
 $ kubectl apply -n bookinfo -f @samples/bookinfo/networking/virtual-service-ratings-mysql-vm.yaml@

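The route-rule file applied above is not shown in this diff; as a rough sketch, the kind of rule that pins the ratings traffic to the VM-backed version looks like the following. The subset name is an assumption for illustration only:

```yaml
# Illustrative only: route all ratings traffic to a VM-backed v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v2-mysql-vm
```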
@@ -7,7 +7,7 @@ keywords: [kubernetes,multicluster]
 This example demonstrates how to use Istio's multicluster feature to join 2
 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters together,
-using the [kubernetes multicluster installation instructions](/docs/setup/kubernetes/multicluster-install/).
+using the [Kubernetes multicluster installation instructions](/docs/setup/kubernetes/multicluster-install/).

 ## Before you begin

@@ -167,7 +167,7 @@ $ kubectl label namespace default istio-injection=enabled
 ## Create remote cluster's kubeconfig for Istio Pilot

-The `istio-remote` helm chart creates a service account with minimal access for use by Istio Pilot
+The `istio-remote` Helm chart creates a service account with minimal access for use by Istio Pilot
 discovery.

 1. Prepare environment variables for building the `kubeconfig` file for the service account `istio-multi`:

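The file this procedure builds is an ordinary kubeconfig whose user entry carries the `istio-multi` service account token. Schematically, with every value a placeholder:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: <cluster-name>
  cluster:
    certificate-authority-data: <base64-encoded CA cert>
    server: https://<remote-cluster-api-endpoint>
users:
- name: <cluster-name>
  user:
    token: <istio-multi service-account token>
contexts:
- name: <cluster-name>
  context:
    cluster: <cluster-name>
    user: <cluster-name>
current-context: <cluster-name>
```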
@@ -226,7 +226,7 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
 ## Deploy Bookinfo Example Across Clusters

-1. Install bookinfo on the first cluster. Remove the `reviews-v3` deployment to deploy on remote:
+1. Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment to deploy on remote:

 {{< text bash >}}
 $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"

@@ -18,37 +18,37 @@ deployments will have agents (Envoy or Mixer adapters) that produce these attrib
 | Name | Type | Description | Kubernetes Example |
 |------|------|-------------|--------------------|
-| `source.uid` | string | Platform-specific unique identifier for the source workload instance. | kubernetes://redis-master-2353460263-1ecey.my-namespace |
-| `source.ip` | ip_address | Source workload instance IP address. | 10.0.0.117 |
+| `source.uid` | string | Platform-specific unique identifier for the source workload instance. | `kubernetes://redis-master-2353460263-1ecey.my-namespace` |
+| `source.ip` | ip_address | Source workload instance IP address. | `10.0.0.117` |
 | `source.labels` | map[string, string] | A map of key-value pairs attached to the source instance. | version => v1 |
-| `source.name` | string | Source workload instance name. | redis-master-2353460263-1ecey |
-| `source.namespace` | string | Source workload instance namespace. | my-namespace |
-| `source.principal` | string | The identity of the source workload. | service-account-foo |
+| `source.name` | string | Source workload instance name. | `redis-master-2353460263-1ecey` |
+| `source.namespace` | string | Source workload instance namespace. | `my-namespace` |
+| `source.principal` | string | The identity of the source workload. | `service-account-foo` |
 | `source.owner` | string | Reference to the workload controlling the source workload instance. | `kubernetes://apis/extensions/v1beta1/namespaces/istio-system/deployments/istio-policy` |
-| `source.workload.uid` | string | Unique identifier of the source workload. | istio://istio-system/workloads/istio-policy |
-| `source.workload.name` | string | Source workload name. | istio-policy |
-| `source.workload.namespace` | string | Source workload namespace. | istio-system |
-| `destination.uid` | string | Platform-specific unique identifier for the server instance. | kubernetes://my-svc-234443-5sffe.my-namespace |
-| `destination.ip` | ip_address | Server IP address. | 10.0.0.104 |
-| `destination.port` | int64 | The recipient port on the server IP address. | 8080 |
+| `source.workload.uid` | string | Unique identifier of the source workload. | `istio://istio-system/workloads/istio-policy` |
+| `source.workload.name` | string | Source workload name. | `istio-policy` |
+| `source.workload.namespace` | string | Source workload namespace. | `istio-system` |
+| `destination.uid` | string | Platform-specific unique identifier for the server instance. | `kubernetes://my-svc-234443-5sffe.my-namespace` |
+| `destination.ip` | ip_address | Server IP address. | `10.0.0.104` |
+| `destination.port` | int64 | The recipient port on the server IP address. | `8080` |
 | `destination.labels` | map[string, string] | A map of key-value pairs attached to the server instance. | version => v2 |
 | `destination.name` | string | Destination workload instance name. | `istio-telemetry-2359333` |
-| `destination.namespace` | string | Destination workload instance namespace. | istio-system |
-| `destination.principal` | string | The identity of the destination workload. | service-account |
+| `destination.namespace` | string | Destination workload instance namespace. | `istio-system` |
+| `destination.principal` | string | The identity of the destination workload. | `service-account` |
 | `destination.owner` | string | Reference to the workload controlling the destination workload instance.| `kubernetes://apis/extensions/v1beta1/namespaces/istio-system/deployments/istio-telemetry` |
-| `destination.workload.uid` | string | Unique identifier of the destination workload. | istio://istio-system/workloads/istio-telemetry |
-| `destination.workload.name` | string | Destination workload name. | istio-telemetry |
-| `destination.workload.namespace`| string | Destination workload namespace. | istio-system |
-| `destination.container.name` | string | Container name of the server workload instance. | mixer |
+| `destination.workload.uid` | string | Unique identifier of the destination workload. | `istio://istio-system/workloads/istio-telemetry` |
+| `destination.workload.name` | string | Destination workload name. | `istio-telemetry` |
+| `destination.workload.namespace`| string | Destination workload namespace. | `istio-system` |
+| `destination.container.name` | string | Container name of the server workload instance. | `mixer` |
 | `destination.container.image` | string | Image source for the destination container. | `gcr.io/istio-testing/mixer:0.8.0` |
-| `destination.service.host` | string | Destination host address. | istio-telemetry.istio-system.svc.cluster.local |
-| `destination.service.uid` | string | Unique identifier of the destination service. | istio://istio-system/services/istio-telemetry |
-| `destination.service.name` | string | Destination service name. | istio-telemetry |
-| `destination.service.namespace` | string | Destination service namespace. | istio-system |
+| `destination.service.host` | string | Destination host address. | `istio-telemetry.istio-system.svc.cluster.local` |
+| `destination.service.uid` | string | Unique identifier of the destination service. | `istio://istio-system/services/istio-telemetry` |
+| `destination.service.name` | string | Destination service name. | `istio-telemetry` |
+| `destination.service.namespace` | string | Destination service namespace. | `istio-system` |
 | `request.headers` | map[string, string] | HTTP request headers. For gRPC, its metadata will be here. | |
 | `request.id` | string | An ID for the request with statistically low probability of collision. | |
 | `request.path` | string | The HTTP URL path including query string | |
-| `request.host` | string | HTTP/1.x host header or HTTP/2 authority header. | redis-master:3337 |
+| `request.host` | string | HTTP/1.x host header or HTTP/2 authority header. | `redis-master:3337` |
 | `request.method` | string | The HTTP method. | |
 | `request.reason` | string | The request reason used by auditing systems. | |
 | `request.referer` | string | The HTTP referer header. | |

@@ -72,17 +72,17 @@ deployments will have agents (Envoy or Mixer adapters) that produce these attrib
 | `connection.sent.bytes` | int64 | Number of bytes sent by a destination service on a connection since the last Report() for a connection. | |
 | `connection.sent.bytes_total` | int64 | Total number of bytes sent by a destination service during the lifetime of a connection. | |
 | `connection.duration` | duration | The total amount of time a connection has been open. | |
-| `connection.mtls` | boolean | Indicates whether a request is received over a mTLS enabled downstream connection. | |
+| `connection.mtls` | boolean | Indicates whether a request is received over a mutual TLS enabled downstream connection. | |
 | `connection.requested_server_name` | string | The requested server name (SNI) of the connection | |
-| `context.protocol` | string | Protocol of the request or connection being proxied. | tcp |
+| `context.protocol` | string | Protocol of the request or connection being proxied. | `tcp` |
 | `context.time` | timestamp | The timestamp of Mixer operation. | |
 | `context.reporter.kind` | string | Contextualizes the reported attribute set. Set to `inbound` for the server-side calls from sidecars and `outbound` for the client-side calls from sidecars and gateways | `inbound` |
-| `context.reporter.uid` | string | Platform-specific identifier of the attribute reporter. | kubernetes://my-svc-234443-5sffe.my-namespace |
-| `api.service` | string | The public service name. This is different than the in-mesh service identity and reflects the name of the service exposed to the client. | my-svc.com |
-| `api.version` | string | The API version. | v1alpha1 |
-| `api.operation` | string | Unique string used to identify the operation. The id is unique among all operations described in a specific <service, version>. | getPetsById |
-| `api.protocol` | string | The protocol type of the API call. Mainly for monitoring/analytics. Note that this is the frontend protocol exposed to the client, not the protocol implemented by the backend service. | "http", "https”, or "grpc" |
-| `request.auth.principal` | string | The authenticated principal of the request. This is a string of the issuer (`iss`) and subject (`sub`) claims within a JWT concatenated with "/” with a percent-encoded subject value. This attribute may come from the peer or the origin in the Istio authentication policy, depending on the binding rule defined in the Istio authentication policy. | accounts.my-svc.com/104958560606 |
+| `context.reporter.uid` | string | Platform-specific identifier of the attribute reporter. | `kubernetes://my-svc-234443-5sffe.my-namespace` |
+| `api.service` | string | The public service name. This is different than the in-mesh service identity and reflects the name of the service exposed to the client. | `my-svc.com` |
+| `api.version` | string | The API version. | `v1alpha1` |
+| `api.operation` | string | Unique string used to identify the operation. The id is unique among all operations described in a specific <service, version>. | `getPetsById` |
+| `api.protocol` | string | The protocol type of the API call. Mainly for monitoring/analytics. Note that this is the frontend protocol exposed to the client, not the protocol implemented by the backend service. | `http`, `https`, or `grpc` |
+| `request.auth.principal` | string | The authenticated principal of the request. This is a string of the issuer (`iss`) and subject (`sub`) claims within a JWT concatenated with "/" with a percent-encoded subject value. This attribute may come from the peer or the origin in the Istio authentication policy, depending on the binding rule defined in the Istio authentication policy. | `accounts.my-svc.com/104958560606` |
 | `request.auth.audiences` | string | The intended audience(s) for this authentication information. This should reflect the audience (`aud`) claim within a JWT. | ['my-svc.com', 'scopes/read'] |
 | `request.auth.presenter` | string | The authorized presenter of the credential. This value should reflect the optional Authorized Presenter (`azp`) claim within a JWT or the OAuth2 client id. | 123456789012.my-svc.com |
 | `request.auth.claims` | map[string, string] | all raw string claims from the `origin` JWT | `iss`: `issuer@foo.com`, `sub`: `sub@foo.com`, `aud`: `aud1` |

@@ -110,6 +110,6 @@ The following attributes have been deprecated and will be removed in subsequent
 | Name | Type | Description | Kubernetes Example |
 |------|------|-------------|--------------------|
-| `source.service` | string | The fully qualified name of the service that the client belongs to. | redis-master.my-namespace.svc.cluster.local |
-| `source.domain` | string | The domain suffix part of the source service, excluding the name and the namespace. | svc.cluster.local |
-| `destination.domain` | string | The domain suffix part of the destination service, excluding the name and the namespace. | svc.cluster.local |
+| `source.service` | string | The fully qualified name of the service that the client belongs to. | `redis-master.my-namespace.svc.cluster.local` |
+| `source.domain` | string | The domain suffix part of the source service, excluding the name and the namespace. | `svc.cluster.local` |
+| `destination.domain` | string | The domain suffix part of the destination service, excluding the name and the namespace. | `svc.cluster.local` |

@@ -1,12 +1,12 @@
 ---
 title: Expression Language
-description: Mixer config expression language reference.
+description: Mixer configuration expression language reference.
 weight: 20
 aliases:
 - /docs/reference/config/mixer/expression-language.html
 ---

-This page describes how to use the Mixer config expression language (CEXL).
+This page describes how to use the Mixer configuration expression language (CEXL).

 ## Background

@@ -46,8 +46,8 @@ CEXL supports the following functions.
 CEXL variables are attributes from the typed [attribute vocabulary](/docs/reference/config/policy-and-telemetry/attribute-vocabulary/), constants are implicitly typed and, functions are explicitly typed.

-Mixer validates a CEXL expression and resolves it to a type during config validation.
-Selectors must resolve to a boolean value and mapping expressions must resolve to the type they are mapping into. Config validation fails if a selector fails to resolve to a boolean or if a mapping expression resolves to an incorrect type.
+Mixer validates a CEXL expression and resolves it to a type during configuration validation.
+Selectors must resolve to a boolean value and mapping expressions must resolve to the type they are mapping into. Configuration validation fails if a selector fails to resolve to a boolean or if a mapping expression resolves to an incorrect type.

 For example, if an operator specifies a *string* label as `request.size | 200`, validation fails because the expression resolves to an integer.

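To make the selector/mapping distinction concrete, here is a small sketch in the style of Mixer configuration. The attribute names come from the attribute vocabulary; the specific service and header values are illustrative:

```yaml
# A selector: must resolve to a boolean.
match: destination.service.namespace == "default" && request.headers["x-user"] != ""

# A mapping expression with a default: resolves to a string,
# falling back to "unknown" when the attribute is absent.
destination_version: destination.labels["version"] | "unknown"
```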
@@ -109,7 +109,7 @@ We will describe metrics first and then the labels for each metric.
 {{< /text >}}

 * **Destination Service**: This identifies destination service host responsible
-for an incoming request. Ex: "details.default.svc.cluster.local".
+for an incoming request. Ex: `details.default.svc.cluster.local`.

 {{< text yaml >}}
 destination_service: destination.service.host | "unknown"

@@ -30,10 +30,10 @@ plane and the sidecars for the Istio data plane.
 1. [Install the Helm client](https://docs.helm.sh/using_helm/#installing-helm).

-1. Istio by default uses LoadBalancer service object types. Some platforms do not support LoadBalancer
-service objects. For platforms lacking LoadBalancer support, install Istio with NodePort support
+1. Istio by default uses `LoadBalancer` service object types. Some platforms do not support `LoadBalancer`
+service objects. For platforms lacking `LoadBalancer` support, install Istio with `NodePort` support
 instead with the flags `--set gateways.istio-ingressgateway.type=NodePort --set gateways.istio-egressgateway.type=NodePort`
-appended to the end of the helm operation.
+appended to the end of the Helm operation.

 ## Installation steps

@@ -133,13 +133,13 @@ With this minimal set you can install your own application and [configure reques
 $ helm delete --purge istio
 {{< /text >}}

-If your helm version is less than 2.9.0, then you need to manually cleanup extra job resource before redeploy new version of Istio chart:
+If your Helm version is less than 2.9.0, then you need to manually clean up the extra job resource before redeploying a new version of the Istio chart:

 {{< text bash >}}
 $ kubectl -n istio-system delete job --all
 {{< /text >}}

-* If desired, delete the CRDs using kubectl:
+* If desired, delete the CRDs:

 {{< text bash >}}
 $ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

@@ -177,11 +177,11 @@ names and connect to pilot, for example:
 * Extract the initial Istio authentication secrets and copy them to the machine. The default
 installation of Istio includes Citadel and will generate Istio secrets even if
-the automatic 'mTLS'
-setting is disabled (it creates secret for each service account, and the secret
+the automatic `mTLS`
+setting is disabled (it creates a secret for each service account, and the secret
 is named as `istio.<serviceaccount>`). It is recommended that you perform this
-step to make it easy to enable mTLS in the future and to upgrade to a future version
-that will have mTLS enabled by default.
+step to make it easy to enable mutual TLS in the future and to upgrade to a future version
+that will have mutual TLS enabled by default.

 `ACCOUNT` defaults to 'default', or `SERVICE_ACCOUNT` environment variable
 `NAMESPACE` defaults to current namespace, or `SERVICE_NAMESPACE` environment variable

@@ -73,15 +73,15 @@ $ export ZIPKIN_POD_IP=$(kubectl -n istio-system get pod -l app=jaeger -o jsonpa
 Proceed to one of the options for connecting the remote cluster to the local cluster:

-* [via kubectl with helm](#use-kubectl-with-helm-to-connect-the-remote-cluster-to-the-local)
+* Via [`kubectl` with Helm](#use-kubectl-with-helm-to-connect-the-remote-cluster-to-the-local)

-* [via helm plus tiller](#alternatively-use-helm-and-tiller-to-connect-the-remote-cluster-to-the-local)
+* Via [Helm plus Tiller](#alternatively-use-helm-and-tiller-to-connect-the-remote-cluster-to-the-local)

-**Sidecar Injection.** The default behavior is to enable automatic sidecar injection on the remote clusters. For manual sidecar injection refer to the [manual sidecar example](#remote-cluster-manual-sidecar-injection-example)
+* Using *sidecar injection.* The default behavior is to enable automatic sidecar injection on the remote clusters. For manual sidecar injection refer to the [manual sidecar example](#remote-cluster-manual-sidecar-injection-example)

 ### Use `kubectl` with Helm to connect the remote cluster to the local

-1. Use the helm template command on a remote to specify the Istio control plane service endpoints:
+1. Use the `helm template` command on a remote to specify the Istio control plane service endpoints:

 {{< text bash >}}
 $ helm template install/kubernetes/helm/istio-remote --namespace istio-system \

@@ -139,17 +139,17 @@ install one:
 In order for the remote cluster's sidecars interaction with the Istio control plane, the `pilot`,
 `policy`, `telemetry`, `statsd`, and tracing service endpoints need to be configured in
-the `istio-remote` helm chart. The chart enables automatic sidecar injection in the remote
+the `istio-remote` Helm chart. The chart enables automatic sidecar injection in the remote
 cluster by default but it can be disabled via a chart variable. The following table describes
-the `istio-remote` helm chart's configuration values.
+the `istio-remote` Helm chart's configuration values.

 | Helm Variable | Accepted Values | Default | Purpose of Value |
 | --- | --- | --- | --- |
 | `global.remotePilotAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's pilot Pod IP address or remote cluster DNS resolvable hostname |
 | `global.remotePolicyAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's policy Pod IP address or remote cluster DNS resolvable hostname |
 | `global.remoteTelemetryAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's telemetry Pod IP address or remote cluster DNS resolvable hostname |
-| `global.proxy.envoyStatsd.enabled` | true, false | false | Specifies whether the Istio control plane has statsd enabled |
-| `global.proxy.envoyStatsd.host` | A valid IP address or hostname | None | Specifies the Istio control plane's statsd-prom-bridge Pod IP address or remote cluster DNS resolvable hostname. Ignored if `global.proxy.envoyStatsd.enabled=false`. |
+| `global.proxy.envoyStatsd.enabled` | true, false | false | Specifies whether the Istio control plane has Statsd enabled |
+| `global.proxy.envoyStatsd.host` | A valid IP address or hostname | None | Specifies the Istio control plane's `statsd-prom-bridge` Pod IP address or remote cluster DNS resolvable hostname. Ignored if `global.proxy.envoyStatsd.enabled=false`. |
 | `global.remoteZipkinAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's tracing application Pod IP address or remote cluster DNS resolvable hostname--e.g. `zipkin` or `jaeger`. |
 | `sidecarInjectorWebhook.enabled` | true, false | true | Specifies whether to enable automatic sidecar injection on the remote cluster |
 | `global.remotePilotCreateSvcEndpoint` | true, false | false | If set, a selector-less service and endpoint for `istio-pilot` are created with the `remotePilotAddress` IP, which ensures the `istio-pilot.<namespace>` is DNS resolvable in the remote cluster. |

@@ -161,10 +161,10 @@ discover services, endpoints, and pod attributes. The following
 describes how to generate a `kubeconfig` file for a remote cluster to be used by
 the Istio control plane.

-The `istio-remote` helm chart creates a Kubernetes service account named `istio-multi`
+The `istio-remote` Helm chart creates a Kubernetes service account named `istio-multi`
 in the remote cluster with the minimal RBAC access required. The following procedure
 generates a `kubeconfig` file for the remote cluster using the credentials of the
-`istio-multi` service account created by the `istio-remote` helm chart.
+`istio-multi` service account created by the `istio-remote` Helm chart.

 The following procedure should be performed on each remote cluster to be
 added to the service mesh. The procedure requires cluster-admin user access

@@ -268,7 +268,7 @@ The following procedure is to be performed against the remote cluster.
 > The endpoint IP environment variables need to be set as in the [above section](#set-environment-variables-for-pod-ips-from-istio-control-plane-needed-by-remote)

-1. Use the helm template command on a remote to specify the Istio control plane service endpoints:
+1. Use the `helm template` command on a remote to specify the Istio control plane service endpoints:

 {{< text bash >}}
 $ helm template install/kubernetes/helm/istio-remote --namespace istio-system --name istio-remote --set global.remotePilotAddress=${PILOT_POD_IP} --set global.remotePolicyAddress=${POLICY_POD_IP} --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} --set global.proxy.envoyStatsd.enabled=true --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} --set global.remoteZipkinAddress=${ZIPKIN_POD_IP} --set sidecarInjectorWebhook.enabled=false > $HOME/istio-remote_noautoinj.yaml

@@ -331,7 +331,7 @@ cluster have restarted.
 ### Use load balance service type

 In Kubernetes, you can declare a service with a service type to be
-[LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).
+[`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).
 A simple solution to the pod restart issue is to use load balancers for the
 Istio services. You can then use the load balancer IPs as the Istio services's
 endpoint IPs to configure the remote clusters. You may need balancer IPs for

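As a sketch of that approach, exposing Pilot through a `LoadBalancer` service could look like the following. The service name and port here are assumptions for illustration, not taken from the Istio charts:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-pilot-external   # hypothetical name
  namespace: istio-system
spec:
  type: LoadBalancer           # the cloud provider allocates a stable external IP
  selector:
    istio: pilot
  ports:
  - name: grpc-xds
    port: 15010
```

The allocated external IP would then be used as the `global.remotePilotAddress` value when configuring a remote cluster.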
@@ -351,12 +351,12 @@ can point to the same IP. The ingress gateway is then provided with destination
 rules to reach the proper Istio service in the main cluster.

 Within this option there are 2 sub-options. One is to re-use the default Istio ingress gateway
-installed with the provided manifests or helm charts. The other option is to create another
+installed with the provided manifests or Helm charts. The other option is to create another
 Istio ingress gateway specifically for multicluster.

 ## Security

-Istio supports deployment of mTLS between the control plane components as well as between
+Istio supports deployment of mutual TLS between the control plane components as well as between
 sidecar injected application pods.

 ### Control plane security

@@ -377,17 +377,17 @@ The steps to enable control plane security are as follows:
 1. Required because Istio configures the sidecar to verify the certificate subject names using the `istio-pilot.<namespace>` subject name format.
 1. Control plane IPs or resolvable host names set

-### mTLS between application pods
+### Mutual TLS between application pods

-The steps to enable mTLS for all application pods are as follows:
+The steps to enable mutual TLS for all application pods are as follows:

 1. Istio control plane cluster deployed with
-    1. Global mTLS enabled
+    1. Global mutual TLS enabled
     1. `citadel` certificate self signing disabled
     1. a secret named `cacerts` in the Istio control plane namespace with the [CA certificates](/docs/tasks/security/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key)

 1. Istio remote clusters deployed with
-    1. Global mTLS enabled
+    1. Global mutual TLS enabled
     1. `citadel` certificate self signing disabled
     1. a secret named `cacerts` in the Istio control plane namespace with the [CA certificates](/docs/tasks/security/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key)
     1. The CA certificate for the remote clusters needs to be signed by the same CA or root CA as the main cluster.

@@ -396,9 +396,9 @@ The steps to enable mTLS for all application pods are as follows:
 ### Example deployment

-The following is an example procedure to install Istio with both control plane mTLS and application pod
-mTLS enabled. The example sets up a remote cluster with a selector-less service and endpoint for `istio-pilot` to
-allow the remote sidecars to resolve `istio-pilot.istio-system` hostname via its local kubernetes DNS.
+The following is an example procedure to install Istio with both control plane mutual TLS and application pod
+mutual TLS enabled. The example sets up a remote cluster with a selector-less service and endpoint for `istio-pilot` to
+allow the remote sidecars to resolve the `istio-pilot.istio-system` hostname via its local Kubernetes DNS.

 1. *Primary Cluster.* Deployment of the Istio control plane cluster

|
|||
|
||||
1. *Primary Cluster.* [Instantiate the credentials for each remote cluster](#instantiate-the-credentials-for-each-remote-cluster)
|
||||
|
||||
At this point all of the Istio components in both clusters are configured for mTLS between application
|
||||
At this point all of the Istio components in both clusters are configured for mutual TLS between application
|
||||
sidecars and the control plane components as well as between the other application sidecars.
|
||||
|
|
|
@@ -252,5 +252,5 @@ For more details on tracing see [Understanding what happened](/docs/tasks/teleme
 1. Select the deployment and click **Delete**.

-1. Deployment Manager will remove all the deployed GKE artifacts - however, items such as Ingress and LoadBalancers will remain. You can delete those artifacts
-by again going to the cloud console under [**Network Services** -> **LoadBalancers**](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list)
+1. Deployment Manager will remove all the deployed GKE artifacts - however, items such as `Ingress` and `LoadBalancers` will remain. You can delete those artifacts
+by again going to the cloud console under [**Network Services** -> **Load balancing**](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list)

@@ -60,7 +60,7 @@ $ kubectl apply -f install/kubernetes/istio-demo.yaml
 ### Option 2: Install Istio with default mutual TLS authentication

-Use this option only on a fresh kubernetes cluster where newly deployed
+Use this option only on a fresh Kubernetes cluster where newly deployed
 workloads are guaranteed to have Istio sidecars installed.

 To install Istio and enforce mutual TLS authentication between sidecars by

@@ -177,10 +177,9 @@ non-existent resources because they may have been deleted hierarchically.
 $ kubectl delete -f install/kubernetes/istio-demo-auth.yaml
 {{< /text >}}

-* If you installed Istio with Helm, follow the [uninstall Istio with
-Helm](/docs/setup/kubernetes/helm-install/#uninstall) steps.
+* If you installed Istio with Helm, follow the [uninstall Istio with Helm](/docs/setup/kubernetes/helm-install/#uninstall) steps.

-* If desired, delete the CRDs using kubectl:
+* If desired, delete the CRDs:

 {{< text bash >}}
 $ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

@@ -92,8 +92,8 @@ sidecar by executing:
 $ kubectl replace -f <(istioctl kube-inject -f $ORIGINAL_DEPLOYMENT_YAML)
 {{< /text >}}

-If the sidecar was previously injected with some customized inject config
-files, you will need to change the version tag in the config files to the new
+If the sidecar was previously injected with some customized inject configuration
+files, you will need to change the version tag in the configuration files to the new
 version and re-inject the sidecar as follows:

 {{< text bash >}}
@@ -140,18 +140,18 @@ Next, use `istioctl experimental convert-networking-config` to convert your exis
 1. If your yaml file contains more than the ingress definition, such as deployment or service definitions, move the ingress definition out to a separate yaml file for the
    `istioctl experimental convert-networking-config` tool to process.

-1. Execute the following to generate the new network config file, where replacing FILE*.yaml with your ingress file or deprecated route rule files.
+1. Execute the following to generate the new network configuration file, replacing FILE*.yaml with your ingress file or deprecated route rule files.
    *Tip: Make sure to feed all the files using `-f` for one or more deployments.*

    {{< text bash >}}
    $ istioctl experimental convert-networking-config -f FILE1.yaml -f FILE2.yaml -f FILE3.yaml > UPDATED_NETWORK_CONFIG.yaml
    {{< /text >}}

 1. Edit `UPDATED_NETWORK_CONFIG.yaml` to update all namespace references to your desired namespace.
    There is a known issue with the `convert-networking-config` tool where the `istio-system` namespace
    is used incorrectly. Further, ensure the `hosts` value is correct.

-1. Deploy the updated network config file.
+1. Deploy the updated network configuration file:

    {{< text bash >}}
    $ kubectl replace -f UPDATED_NETWORK_CONFIG.yaml
@@ -238,9 +238,10 @@ trafficPolicy:
 mode: DISABLE
 {{< /text >}}

-## Migrating `mtls_excluded_services` config to destination rules
+## Migrating the `mtls_excluded_services` configuration to destination rules

-If you installed Istio with mutual TLS enabled, and used mesh config `mtls_excluded_services` to disable mutual TLS when connecting to these services (e.g kubernetes API server), you need to replace this by adding a destination rule. For example:
+If you installed Istio with mutual TLS enabled, and used the mesh configuration option `mtls_excluded_services` to
+disable mutual TLS when connecting to these services (e.g., the Kubernetes API server), you need to replace this by adding a destination rule. For example:

 {{< text yaml >}}
 apiVersion: networking.istio.io/v1alpha3
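The yaml block above is truncated by the diff window. For orientation, a complete destination rule of the shape being described might look like the following sketch (the resource name, namespace, and host are assumptions for illustration, not copied from the sample):

```yaml
# Illustrative sketch: disables mutual TLS for traffic to the Kubernetes API server.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-server          # assumed name
  namespace: istio-system   # assumed namespace
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
```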
@@ -202,7 +202,7 @@ so the configuration to enable rate limiting on both adapters is the same.
 each service since it is not in the same namespace this `QuotaSpecBinding`
 resource was deployed into.

-1. Refresh the `productpage` in your browser.
+1. Refresh the product page in your browser.

 * If you are logged out, `reviews-v3` service is rate limited to 1 request
   every 5 seconds. If you keep refreshing the page, the stars should only
@@ -461,7 +461,7 @@ To experiment with this feature, you need a valid JWT. The JWT must correspond t
 this tutorial, we use this [JWT test]({{< github_file >}}/security/tools/jwt/samples/demo.jwt) and this
 [JWKS endpoint]({{< github_file >}}/security/tools/jwt/samples/jwks.json) from the Istio code base.

-Also, for convenience, expose `httpbin.foo` via ingressgateway (for more details, see the [ingress task](/docs/tasks/traffic-management/ingress/)).
+Also, for convenience, expose `httpbin.foo` via `ingressgateway` (for more details, see the [ingress task](/docs/tasks/traffic-management/ingress/)).

 {{< text bash >}}
 $ cat <<EOF | kubectl apply -f -
@@ -140,7 +140,7 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name
 ...
 {{< /text >}}

-> This example is borrowed from [kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).
+> This example is borrowed from [Kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).

 ### Create an HTTPS service with Istio sidecar with mutual TLS enabled
@@ -61,7 +61,7 @@ down once the migration is done.
 No resources found.
 {{< /text >}}

-## Configure the server to accept both mTLS and plain text traffic
+## Configure the server to accept both mutual TLS and plain text traffic

 In authentication policy, we have a `PERMISSIVE` mode which makes the server accept both mutual TLS and plain text traffic.
 We need to configure the server to this mode.
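The `PERMISSIVE` mode mentioned above is set through an authentication policy. As a hedged sketch (the policy name and target service below are assumptions, not taken from this page), such a policy might look like:

```yaml
# Illustrative only: "example-permissive" and "myservice" are assumed names.
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "example-permissive"
spec:
  targets:
  - name: myservice   # hypothetical target service
  peers:
  - mtls:
      mode: PERMISSIVE
```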
@@ -61,7 +61,7 @@ The following steps enable plugging in the certificates and key into Citadel:
 In this section, we verify that the new workload certificates and root certificates are propagated.
 This requires you have `openssl` installed on your machine.

-1. Deploy the bookinfo application following the [instructions](/docs/examples/bookinfo/).
+1. Deploy the Bookinfo application following the [instructions](/docs/examples/bookinfo/).

 1. Retrieve the mounted certificates.
    In the following, we take the ratings pod as an example, and verify the certificates mounted on the pod.
@@ -37,7 +37,7 @@ microservices running under them.
 > If you are using a namespace other than `default`, use `kubectl -n namespace ...` to specify the namespace.

-* There is a major update to RBAC in Istio 1.0. Please make sure to remove any existing RBAC config before continuing.
+* There is a major update to RBAC in Istio 1.0. Please make sure to remove any existing RBAC configuration before continuing.

 * Run the following commands to disable the old RBAC functionality; these are no longer needed in Istio 1.0:
@@ -60,15 +60,15 @@ for the list of supported keys in `constraints` and `properties`.
 * Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). You should see:

-    * "Book Details" section in the lower left part of the page, including type, pages, publisher, etc.
-    * "Book Reviews" section in the lower right part of the page.
+    * The "Book Details" section in the lower left part of the page, including type, pages, publisher, etc.
+    * The "Book Reviews" section in the lower right part of the page.

-If you refresh the page several times, you should see different versions of reviews shown in productpage,
+If you refresh the page several times, you should see different versions of reviews shown in the product page,
 presented in a round robin style (red stars, black stars, no stars).

 ## Enabling Istio authorization

-Run the following command to enable Istio authorization for "default" namespace:
+Run the following command to enable Istio authorization for the `default` namespace:

 {{< text bash >}}
 $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/rbac-config-ON.yaml@
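The `rbac-config-ON.yaml` file applied above enables authorization for the `default` namespace. As a sketch of what such a resource looks like in the Istio 1.0 RBAC API (the field values here are assumptions, not copied from the sample file):

```yaml
# Illustrative sketch; the mode and inclusion list are assumptions.
apiVersion: "rbac.istio.io/v1alpha1"
kind: RbacConfig
metadata:
  name: default
  namespace: istio-system
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]
```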
@@ -85,10 +85,11 @@ explicitly define access control policy to grant access to any service.
 Using Istio authorization, you can easily set up namespace-level access control by specifying that all (or a collection of) services
 in a namespace are accessible by services from another namespace.

-In our Bookinfo sample, the "productpage", "reviews", "details", "ratings" services are deployed in "default" namespace.
-The Istio components like "istio-ingressgateway" service are deployed in "istio-system" namespace. We can define a policy that
-any service in "default" namespace that has "app" label set to one of the values in ["productpage", "details", "reviews", "ratings"]
-is accessible by services in the same namespace (i.e., "default" namespace) and services in "istio-system" namespace.
+In our Bookinfo sample, the `productpage`, `reviews`, `details`, and `ratings` services are deployed in the `default` namespace.
+The Istio components like the `istio-ingressgateway` service are deployed in the `istio-system` namespace. We can define a policy that
+any service in the `default` namespace that has the `app` label set to one of the values of
+`productpage`, `details`, `reviews`, or `ratings`
+is accessible by services in the same namespace (i.e., `default`) and services in the `istio-system` namespace.

 Run the following command to create a namespace-level access control policy:
@@ -98,9 +99,11 @@ $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/namespace-policy.yaml@
 The policy does the following:

-* Creates a `ServiceRole` "service-viewer" which allows read access to any service in "default" namespace that has "app" label
-  set to one of the values in ["productpage", "details", "reviews", "ratings"]. Note that there is a "constraint" specifying that
-  the services must have one of the listed "app" labels.
+* Creates a `ServiceRole` `service-viewer` which allows read access to any service in the `default` namespace that has
+  the `app` label set to one of the values `productpage`, `details`, `reviews`, or `ratings`. Note that there is a
+  constraint specifying that the services must have one of the listed `app` labels.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -117,7 +120,7 @@ the services must have one of the listed "app" labels.
     values: ["productpage", "details", "reviews", "ratings"]
 {{< /text >}}

-* Creates a `ServiceRoleBinding` that assigns the "service-viewer" role to all services in "istio-system" and "default" namespaces.
+* Creates a `ServiceRoleBinding` that assigns the `service-viewer` role to all services in the `istio-system` and `default` namespaces.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
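The yaml block above is cut off by the diff window. For orientation, a complete sketch of such a `ServiceRoleBinding` (the subject properties below are assumptions derived from the prose, not the sample file) could be:

```yaml
# Illustrative sketch; subject properties are assumed from the description above.
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-service-viewer
  namespace: default
spec:
  subjects:
  - properties:
      source.namespace: "default"
  - properties:
      source.namespace: "istio-system"
  roleRef:
    kind: ServiceRole
    name: "service-viewer"
```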
@@ -143,8 +146,8 @@ servicerole "service-viewer" created
 servicerolebinding "bind-service-viewer" created
 {{< /text >}}

-Now if you point your browser at Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). You should see "Bookinfo Sample" page,
-with "Book Details" section in the lower left part and "Book Reviews" section in the lower right part.
+Now if you point your browser at Bookinfo's `productpage` (`http://$GATEWAY_URL/productpage`), you should see the "Bookinfo Sample" page,
+with the "Book Details" section in the lower left part and the "Book Reviews" section in the lower right part.

 > There may be some delays due to caching and other propagation overhead.
@@ -164,11 +167,11 @@ This task shows you how to set up service-level access control using Istio autho
 * You have [removed namespace-level authorization policy](#cleanup-namespace-level-access-control).

 Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). You should see `"RBAC: access denied"`.
-We will incrementally add access permission to the services in Bookinfo sample.
+We will incrementally add access permission to the services in the Bookinfo sample.

-### Step 1. allowing access to "productpage" service
+### Step 1. allowing access to the `productpage` service

-In this step, we will create a policy that allows external requests to view `productpage` service via Ingress.
+In this step, we will create a policy that allows external requests to access the `productpage` service via Ingress.

 Run the following command:
@@ -178,7 +181,7 @@ $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/productpage-policy.yaml@
 The policy does the following:

-* Creates a `ServiceRole` "productpage-viewer" which allows read access to "productpage" service.
+* Creates a `ServiceRole` `productpage-viewer` which allows read access to the `productpage` service.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
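The yaml block above is truncated by the diff window. A complete sketch of a `ServiceRole` like the one described (the rule fields below are assumptions consistent with the prose, not the sample file) might be:

```yaml
# Illustrative sketch; services/methods are assumed from the description above.
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: productpage-viewer
  namespace: default
spec:
  rules:
  - services: ["productpage.default.svc.cluster.local"]
    methods: ["GET"]
```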
@@ -192,7 +195,8 @@ The policy does the following:
     methods: ["GET"]
 {{< /text >}}

-* Creates a `ServiceRoleBinding` "bind-productpager-viewer" which assigns "productpage-viewer" role to all users/services.
+* Creates a `ServiceRoleBinding` `bind-productpager-viewer` which assigns the `productpage-viewer` role to all
+  users and services.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -208,18 +212,18 @@ The policy does the following:
     name: "productpage-viewer"
 {{< /text >}}

-Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see "Bookinfo Sample"
-page. But there are errors `"Error fetching product details"` and `"Error fetching product reviews"` on the page. These errors
-are expected because we have not granted "productpage" service to access "details" and "reviews" services. We will fix the errors
+Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see the "Bookinfo Sample"
+page. But there are errors `Error fetching product details` and `Error fetching product reviews` on the page. These errors
+are expected because we have not granted the `productpage` service access to the `details` and `reviews` services. We will fix the errors
 in the following steps.

 > There may be some delays due to caching and other propagation overhead.

-### Step 2. allowing access to "details" and "reviews" services
+### Step 2. allowing access to the `details` and `reviews` services

-We will create a policy to allow "productpage" service to read "details" and "reviews" services. Note that in the
-[setup step](#before-you-begin), we created a service account "bookinfo-productpage" for "productpage" service. This
-"bookinfo-productpage" service account is the authenticated identify for "productpage" service.
+We will create a policy to allow the `productpage` service to access the `details` and `reviews` services. Note that in the
+[setup step](#before-you-begin), we created the `bookinfo-productpage` service account for the `productpage` service. This
+`bookinfo-productpage` service account is the authenticated identity for the `productpage` service.

 Run the following command:
@@ -229,7 +233,7 @@ $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/details-reviews-policy.y
 The policy does the following:

-* Creates a `ServiceRole` "details-reviews-viewer" which allows read access to "details" and "reviews" services.
+* Creates a `ServiceRole` `details-reviews-viewer` which allows read access to the `details` and `reviews` services.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -243,8 +247,8 @@ The policy does the following:
     methods: ["GET"]
 {{< /text >}}

-* Creates a `ServiceRoleBinding` "bind-details-reviews" which assigns "details-reviews-viewer" role to service
-  account "cluster.local/ns/default/sa/bookinfo-productpage" (representing the "productpage" service).
+* Creates a `ServiceRoleBinding` `bind-details-reviews` which assigns the `details-reviews-viewer` role to the
+  `cluster.local/ns/default/sa/bookinfo-productpage` service account (representing the `productpage` service).

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -260,21 +264,21 @@ account "cluster.local/ns/default/sa/bookinfo-productpage" (representing the "pr
     name: "details-reviews-viewer"
 {{< /text >}}

-Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see "Bookinfo Sample"
-page with "Book Details" on the lower left part, and "Book Reviews" on the lower right part. However, in "Book Reviews" section,
-there is an error `"Ratings service currently unavailable"`. This is because "reviews" service does not have permission to access
-"ratings" service. To fix this issue, you need to grant "reviews" service read access to "ratings" service.
+Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see the "Bookinfo Sample"
+page with "Book Details" on the lower left part, and "Book Reviews" on the lower right part. However, in the "Book Reviews" section,
+there is an error `Ratings service currently unavailable`. This is because the `reviews` service does not have permission to access
+the `ratings` service. To fix this issue, you need to grant the `reviews` service access to the `ratings` service.
 We will show how to do that in the next step.

 > There may be some delays due to caching and other propagation overhead.

-### Step 3. allowing access to "ratings" service
+### Step 3. allowing access to the `ratings` service

-We will create a policy to allow "reviews" service to read "ratings" service. Note that in the
-[setup step](#before-you-begin), we created a service account "bookinfo-reviews" for "reviews" service. This
-"bookinfo-reviews" service account is the authenticated identify for "reviews" service.
+We will create a policy to allow the `reviews` service to access the `ratings` service. Note that in the
+[setup step](#before-you-begin), we created a `bookinfo-reviews` service account for the `reviews` service. This
+service account is the authenticated identity for the `reviews` service.

-Run the following command to create a policy that allows "reviews" service to read "ratings" service.
+Run the following command to create a policy that allows the `reviews` service to access the `ratings` service.

 {{< text bash >}}
 $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/ratings-policy.yaml@
@@ -282,7 +286,7 @@ $ kubectl apply -f @samples/bookinfo/platform/kube/rbac/ratings-policy.yaml@
 The policy does the following:

-* Creates a `ServiceRole` "ratings-viewer" which allows read access to "ratings" service.
+* Creates a `ServiceRole` `ratings-viewer` which allows read access to the `ratings` service.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -296,8 +300,8 @@ The policy does the following:
     methods: ["GET"]
 {{< /text >}}

-* Creates a `ServiceRoleBinding` "bind-ratings" which assigns "ratings-viewer" role to service
-  account "cluster.local/ns/default/sa/bookinfo-reviews", which represents the "reviews" services.
+* Creates a `ServiceRoleBinding` `bind-ratings` which assigns the `ratings-viewer` role to the
+  `cluster.local/ns/default/sa/bookinfo-reviews` service account, which represents the `reviews` service.

 {{< text yaml >}}
 apiVersion: "rbac.istio.io/v1alpha1"
@@ -314,7 +318,7 @@ account "cluster.local/ns/default/sa/bookinfo-reviews", which represents the "re
 {{< /text >}}

 Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see
-the "black" and "red" ratings in "Book Reviews" section.
+the "black" and "red" ratings in the "Book Reviews" section.

 > There may be some delays due to caching and other propagation overhead.
@@ -20,7 +20,7 @@ example application for this task.
 * Set up Istio by following the instructions in the [Installation guide](/docs/setup/).

   Either use the `istio-demo.yaml` or `istio-demo-auth.yaml` template, which includes tracing support, or
-  use the helm chart with tracing enabled by setting the `--set tracing.enabled=true` option.
+  use the Helm chart with tracing enabled by setting the `--set tracing.enabled=true` option.

 * Deploy the [Bookinfo](/docs/examples/bookinfo/) sample application.
@@ -39,7 +39,7 @@ Access the Jaeger dashboard by opening your browser to [http://localhost:16686](
 With the Bookinfo application up and running, generate trace information by accessing
 `http://$GATEWAY_URL/productpage` one or more times.

-From the left-hand pane of the Jaeger dashboard, select productpage from the Service drop-down list and click
+From the left-hand pane of the Jaeger dashboard, select `productpage` from the Service drop-down list and click
 Find Traces. You should see something similar to the following:

 {{< image width="100%" ratio="52.68%"
@@ -62,14 +62,14 @@ Although every service has the same label, `istio-proxy`, because the tracing is
 the Istio sidecar (Envoy proxy) which wraps the call to the actual service,
 the label of the destination (to the right) identifies the service for which the time is represented by each line.

-The productpage to reviews call is represented by two spans in the trace. The first of the two spans (labeled `productpage
+The call from `productpage` to `reviews` is represented by two spans in the trace. The first of the two spans (labeled `productpage
 reviews.default.svc.cluster.local:9080/`) represents the client-side span for the call. It took 24.13ms. The second span
 (labeled `reviews reviews.default.svc.cluster.local:9080/`) is a child of the first span and represents the server-side
 span for the call. It took 22.99ms.

-The trace for the call to the reviews services reveals two subsequent RPC's in the trace. The first is to the istio-policy
+The trace for the call to the `reviews` service reveals two subsequent RPCs in the trace. The first is to the `istio-policy`
 service, reflecting the server-side Check call made for the service to authorize access. The second is the call out to
-the ratings service.
+the `ratings` service.

 ## Understanding what happened
@@ -87,7 +87,7 @@ To do this, an application needs to collect and propagate the following headers
 * `x-b3-flags`
 * `x-ot-span-context`

-If you look in the sample services, you can see that the productpage application (Python) extracts the required headers from an HTTP request:
+If you look in the sample services, you can see that the `productpage` service (Python) extracts the required headers from an HTTP request:

 {{< text python >}}
 def getForwardHeaders(request):
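The header-propagation logic this hunk describes can be sketched in plain Python. This is a hedged approximation: the helper name and the dict-shaped request headers mirror, but are not copied from, the Bookinfo `productpage` sample.

```python
# The seven tracing headers listed in the task above.
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

def get_forward_headers(incoming_headers):
    """Copy whichever tracing headers are present on the incoming request,
    so outbound calls are stitched into the same trace by Envoy."""
    return {h: incoming_headers[h] for h in TRACE_HEADERS if h in incoming_headers}

if __name__ == "__main__":
    incoming = {"x-request-id": "abc123", "x-b3-traceid": "463ac35c", "accept": "text/html"}
    print(get_forward_headers(incoming))
```

Non-tracing headers (like `accept` above) are deliberately dropped; only the B3/OpenTracing headers are forwarded.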
@@ -349,7 +349,7 @@ Created config rule/istio-system/newlogtofluentd at revision 22376
 {{< /text >}}

 Notice that the `address: "fluentd-es.logging:24224"` line in the
-handler config is pointing to the Fluentd daemon we setup in the
+handler configuration is pointing to the Fluentd daemon we set up in the
 example stack.

 ## View the new logs
@@ -121,12 +121,12 @@ as the example application throughout this task.
 {{< text bash >}}
 $ kubectl apply -f new_telemetry.yaml
-Created config metric/istio-system/doublerequestcount at revision 1973035
-Created config prometheus/istio-system/doublehandler at revision 1973036
-Created config rule/istio-system/doubleprom at revision 1973037
-Created config logentry/istio-system/newlog at revision 1973038
-Created config stdio/istio-system/newhandler at revision 1973039
-Created config rule/istio-system/newlogstdio at revision 1973041
+Created configuration metric/istio-system/doublerequestcount at revision 1973035
+Created configuration prometheus/istio-system/doublehandler at revision 1973036
+Created configuration rule/istio-system/doubleprom at revision 1973037
+Created configuration logentry/istio-system/newlog at revision 1973038
+Created configuration stdio/istio-system/newhandler at revision 1973039
+Created configuration rule/istio-system/newlogstdio at revision 1973041
 {{< /text >}}

 1. Send traffic to the sample application.
@@ -199,12 +199,12 @@ The metrics configuration directs Mixer to send metric values to Prometheus. It
 uses three stanzas (or blocks) of configuration: *instance* configuration,
 *handler* configuration, and *rule* configuration.

-The `kind: metric` stanza of config defines a schema for generated metric values
+The `kind: metric` stanza of configuration defines a schema for generated metric values
 (or *instances*) for a new metric named `doublerequestcount`. This instance
 configuration tells Mixer _how_ to generate metric values for any given request,
 based on the attributes reported by Envoy (and generated by Mixer itself).

-For each instance of `doublerequestcount.metric`, the config directs Mixer to
+For each instance of `doublerequestcount.metric`, the configuration directs Mixer to
 supply a value of `2` for the instance. Because Istio generates an instance for
 each request, this means that this metric records a value equal to twice the
 total number of requests received.
@@ -217,14 +217,14 @@ troubleshooting application behavior.
 The configuration instructs Mixer to populate values for these dimensions based
 on attribute values and literal values. For instance, for the `source`
-dimension, the new config requests that the value be taken from the
+dimension, the new configuration requests that the value be taken from the
 `source.workload.name` attribute. If that attribute value is not populated, the rule
 instructs Mixer to use a default value of `"unknown"`. For the `message`
 dimension, a literal value of `"twice the fun!"` will be used for all instances.

-The `kind: prometheus` stanza of config defines a *handler* named
+The `kind: prometheus` stanza of configuration defines a *handler* named
 `doublehandler`. The handler `spec` configures how the Prometheus adapter code
-translates received metric instances into prometheus-formatted values that can
+translates received metric instances into Prometheus-formatted values that can
 be processed by a Prometheus backend. This configuration specified a new
 Prometheus metric named `double_request_count`. The Prometheus adapter prepends
 the `istio_` namespace to all metric names, therefore this metric will show up
@@ -236,7 +236,7 @@ metrics via the `instance_name` parameter. The `instance_name` values must be
 the fully-qualified name for Mixer instances (example:
 `doublerequestcount.metric.istio-system`).

-The `kind: rule` stanza of config defines a new *rule* named `doubleprom`. The
+The `kind: rule` stanza of configuration defines a new *rule* named `doubleprom`. The
 rule directs Mixer to send all `doublerequestcount.metric` instances to the
 `doublehandler.prometheus` handler. Because there is no `match` clause in the
 rule, and because the rule is in the configured default configuration namespace
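The `doubleprom` rule stanza being described can be sketched as follows. This is a hedged reconstruction from the prose above, not the sample file itself:

```yaml
# Illustrative reconstruction of the rule described above.
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler.prometheus
    instances:
    - doublerequestcount.metric
```

With no `match` field, the rule applies to every request, which is why the metric counts twice the total request volume.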
@@ -248,7 +248,7 @@ The logs configuration directs Mixer to send log entries to stdout. It uses
 three stanzas (or blocks) of configuration: *instance* configuration, *handler*
 configuration, and *rule* configuration.

-The `kind: logentry` stanza of config defines a schema for generated log entries
+The `kind: logentry` stanza of configuration defines a schema for generated log entries
 (or *instances*) named `newlog`. This instance configuration tells Mixer _how_
 to generate log entries for requests based on the attributes reported by Envoy.
@@ -268,7 +268,7 @@ with the value from the attribute `response.duration`. If there is no known
 value for `response.duration`, the `latency` field will be set to a duration of
 `0ms`.

-The `kind: stdio` stanza of config defines a *handler* named `newhandler`. The
+The `kind: stdio` stanza of configuration defines a *handler* named `newhandler`. The
 handler `spec` configures how the `stdio` adapter code processes received
 `logentry` instances. The `severity_levels` parameter controls how `logentry`
 values for the `severity` field are mapped to supported logging levels. Here,
@@ -276,7 +276,7 @@ the value of `"warning"` is mapped to the `WARNING` log level. The
 `outputAsJson` parameter directs the adapter to generate JSON-formatted log
 lines.

-The `kind: rule` stanza of config defines a new *rule* named `newlogstdio`. The
+The `kind: rule` stanza of configuration defines a new *rule* named `newlogstdio`. The
 rule directs Mixer to send all `newlog.logentry` instances to the
 `newhandler.stdio` handler. Because the `match` parameter is set to `true`, the
 rule is executed for all requests in the mesh.
@@ -18,7 +18,7 @@ application.
 ## Querying Istio Metrics

-1. Verify that the prometheus service is running in your cluster.
+1. Verify that the `prometheus` service is running in your cluster.

    In Kubernetes environments, execute the following command:
@@ -63,7 +63,7 @@ The results will be similar to:
 Other queries to try:

-- Total count of all requests to `productpage` service:
+- Total count of all requests to the `productpage` service:

 {{< text plain >}}
 istio_requests_total{destination_service="productpage.default.svc.cluster.local"}
@@ -75,9 +75,9 @@ Other queries to try:
 istio_requests_total{destination_service="reviews.default.svc.cluster.local", destination_version="v3"}
 {{< /text >}}

-This query returns the current total count of all requests to the v3 of the reviews service.
+This query returns the current total count of all requests to v3 of the `reviews` service.

-- Rate of requests over the past 5 minutes to all `productpage` services:
+- Rate of requests over the past 5 minutes to all instances of the `productpage` service:

 {{< text plain >}}
 rate(istio_requests_total{destination_service=~"productpage.*", response_code="200"}[5m])
@ -98,7 +98,7 @@ The configured Prometheus add-on scrapes three endpoints:
|
|||
1. *mixer* (`istio-mixer.istio-system:9093`): all Mixer-specific metrics. Used
|
||||
to monitor Mixer itself.
|
||||
1. *envoy* (`istio-mixer.istio-system:9102`): raw stats generated by Envoy (and
|
||||
translated from statsd to prometheus).
|
||||
translated from Statsd to Prometheus).
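Each of these endpoints serves metrics in the Prometheus text exposition format, one `name{labels} value` sample per line. A minimal parser sketch (editorial illustration; it ignores escaping, timestamps, and `# HELP`/`# TYPE` comment lines that a real parser must handle):

```python
import re

def parse_sample(line):
    """Parse one exposition-format line into (name, labels, value)."""
    m = re.match(r'(\w+)(?:\{(.*)\})?\s+(\S+)$', line)
    name, raw_labels, value = m.group(1), m.group(2) or "", m.group(3)
    # Labels look like: key="value",key2="value2"
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
    return name, labels, float(value)

line = ('istio_requests_total{destination_service='
        '"productpage.default.svc.cluster.local",response_code="200"} 42')
name, labels, value = parse_sample(line)
print(name, labels["response_code"], value)  # istio_requests_total 200 42.0
```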
For more on querying Prometheus, please read their [querying
docs](https://prometheus.io/docs/querying/basics/).
@ -19,7 +19,7 @@ the example application throughout this task.

## Viewing the Istio Dashboard

-1. Verify that the prometheus service is running in your cluster.
+1. Verify that the `prometheus` service is running in your cluster.

    In Kubernetes environments, execute the following command:
@ -99,7 +99,7 @@ First direct HTTP traffic without TLS origination

1. Create an egress `Gateway` for _edition.cnn.com_, port 80.

    If you have [mutual TLS Authentication](/docs/tasks/security/mutual-tls/) enabled in Istio, use the following
-   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mTLS to the egress
+   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mutual TLS to the egress
    gateway, setting SNI to `edition.cnn.com`.

    {{< text bash >}}
@ -222,7 +222,7 @@ First direct HTTP traffic without TLS origination

    The output should be the same as in step 2.

-1. Check the log of the _istio-egressgateway_ pod and see a line corresponding to our request. If Istio is deployed in the `istio-system` namespace, the command to print the log is:
+1. Check the log of the `istio-egressgateway` pod and see a line corresponding to our request. If Istio is deployed in the `istio-system` namespace, the command to print the log is:

    {{< text bash >}}
    $ kubectl logs $(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') egressgateway -n istio-system | tail
@ -292,7 +292,7 @@ Let's perform TLS origination with the egress `Gateway`, similar to the [TLS Ori

1. Create an egress `Gateway` for _edition.cnn.com_, port 443.

    If you have [mutual TLS Authentication](/docs/tasks/security/mutual-tls/) enabled in Istio, use the following
-   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mTLS to the egress
+   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mutual TLS to the egress
    gateway, setting SNI to `edition.cnn.com`.

    {{< text bash >}}
@ -425,7 +425,7 @@ Let's perform TLS origination with the egress `Gateway`, similar to the [TLS Ori

    The output should be the same as in the [TLS Origination for Egress Traffic](/docs/tasks/traffic-management/egress-tls-origination/) task, with TLS origination: without the _301 Moved Permanently_ message.

-1. Check the log of _istio-egressgateway_ pod and see a line corresponding to our request. If Istio is deployed in the `istio-system` namespace, the command to print the log is:
+1. Check the log of the `istio-egressgateway` pod and see a line corresponding to our request. If Istio is deployed in the `istio-system` namespace, the command to print the log is:

    {{< text bash >}}
    $ kubectl logs $(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') egressgateway -n istio-system | tail
@ -488,7 +488,7 @@ The output should be the same as in the previous section.

1. Create an egress `Gateway` for _edition.cnn.com_, port 443, protocol TLS.

    If you have [mutual TLS Authentication](/docs/tasks/security/mutual-tls/) enabled in Istio, use the following
-   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mTLS to the egress
+   command. Note that in addition to creating a `Gateway`, it creates a `DestinationRule` to specify mutual TLS to the egress
    gateway, setting SNI to `edition.cnn.com`.

    {{< text bash >}}
@ -94,7 +94,7 @@ still expect the end-to-end flow to continue without any errors.

    {{< /text >}}

1. View the web page response times:
-   1. Open the *Developer Tools* menu in IE, Chrome or Firefox (typically, key combination _Ctrl+Shift+I_ or _Alt+Cmd+I_).
+   1. Open the *Developer Tools* menu in your web browser.
    1. Open the Network tab.
    1. Reload the `productpage` web page. You will see that the webpage actually
       loads in about 6 seconds.
@ -134,18 +134,18 @@ version of a service.

## Route based on user identity

-Next, you will change the route config so that all traffic from a specific user
+Next, you will change the route configuration so that all traffic from a specific user
is routed to a specific service version. In this case, all traffic from a user
named Jason will be routed to the service `reviews:v2`.

Note that Istio doesn't have any special, built-in understanding of user
-identity. This example is enabled by the fact that the productpage service
-adds a custom "end-user" header to all outbound HTTP requests to the reviews
+identity. This example is enabled by the fact that the `productpage` service
+adds a custom `end-user` header to all outbound HTTP requests to the reviews
service.
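The effect of this kind of rule can be pictured as a simple predicate over request headers. A hypothetical sketch follows (the actual matching is performed by Envoy from the VirtualService configuration, not by application code, and the header value `jason` is an assumption based on the sample):

```python
def choose_reviews_version(headers):
    """Mimic a user-identity route rule: requests carrying the
    end-user header value "jason" (added by productpage) go to
    reviews:v2; everything else goes to reviews:v1."""
    if headers.get("end-user") == "jason":
        return "reviews:v2"
    return "reviews:v1"

print(choose_reviews_version({"end-user": "jason"}))  # reviews:v2
print(choose_reviews_version({}))                     # reviews:v1
```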
Remember, `reviews:v2` is the version that includes the star ratings feature.

-1. Run the following command to enable the user-based routing:
+1. Run the following command to enable user-based routing:

    {{< text bash >}}
    $ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml@
@ -194,8 +194,8 @@ You have successfully configured Istio to route traffic based on user identity.

In this task, you used Istio to send 100% of the traffic to the `v1` version
of each of the Bookinfo services. You then set a rule to selectively send traffic
-to version `v2` of the reviews service based on a custom "end-user" header added
-to the request by the productpage service.
+to version `v2` of the `reviews` service based on a custom `end-user` header added
+to the request by the `productpage` service.

Note that Kubernetes services, like the Bookinfo ones used in this task, must
adhere to certain restrictions to take advantage of Istio's L7 routing features.

@ -207,7 +207,7 @@ gradually send traffic from one version of a service to another.

## Cleanup

-1. Remove the application virtual services.
+1. Remove the application virtual services:

    {{< text bash >}}
    $ kubectl delete -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@
@ -113,7 +113,7 @@ Since the `reviews` service subsequently calls the `ratings` service when handli

you used Istio to inject a 2 second delay in calls to `ratings` to cause the
`reviews` service to take longer than half a second to complete and consequently you could see the timeout in action.

-You observed that instead of displaying reviews, the Bookinfo productpage (which calls the `reviews` service to populate the page) displayed
+You observed that instead of displaying reviews, the Bookinfo product page (which calls the `reviews` service to populate the page) displayed
the message: Sorry, product reviews are currently unavailable for this book.
This was the result of it receiving the timeout error from the `reviews` service.
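The interaction between the half-second timeout on calls to `reviews` and the injected 2-second delay in `ratings` can be sketched as follows (hypothetical helper names; in reality both the delay and the timeout are enforced by the Envoy sidecars, not by application code):

```python
import asyncio

async def ratings():
    await asyncio.sleep(2)   # Istio's injected 2-second fault delay
    return "5 stars"

async def reviews():
    # reviews waits on ratings, so it now takes over 2 seconds to answer.
    return await ratings()

async def productpage():
    # The route rule gives calls to reviews a half-second timeout,
    # so the injected delay surfaces as a timeout error on the page.
    try:
        return await asyncio.wait_for(reviews(), timeout=0.5)
    except asyncio.TimeoutError:
        return "Sorry, product reviews are currently unavailable for this book."

print(asyncio.run(productpage()))
```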
@ -53,13 +53,13 @@ containing:

    $ kubectl --namespace istio-system get secrets
    {{< /text >}}

-* Config maps in `istio-system`:
+* configmaps in the `istio-system` namespace:

    {{< text bash >}}
    $ kubectl --namespace istio-system get cm -o yaml
    {{< /text >}}

-* Current and previous logs from all istio components and sidecar
+* Current and previous logs from all Istio components and sidecar

* Mixer logs:
@ -169,7 +169,7 @@ Verifying connectivity to Pilot is a useful troubleshooting step. Every proxy co

    $ kubectl exec -it $INGRESS_POD_NAME -n istio-system /bin/bash
    {{< /text >}}

-1. Test connectivity to Pilot using cURL. The following example cURL's the v1 registration API using default Pilot configuration parameters and mTLS enabled:
+1. Test connectivity to Pilot using cURL. The following example calls the v1 registration API using default Pilot configuration parameters and mutual TLS enabled:

    {{< text bash >}}
    $ curl -k --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --key /etc/certs/key.pem https://istio-pilot:15003/v1/registration
@ -209,9 +209,9 @@ To fix the problem, you'll need to shutdown and then restart Docker before reins

## Envoy won't connect to my HTTP/1.0 service

Envoy requires HTTP/1.1 or HTTP/2 traffic for upstream services. For example, when using [NGINX](https://www.nginx.com/) for serving traffic behind Envoy, you
-will need to set the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX config to be "1.1", since the NGINX default is 1.0
+will need to set the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX configuration to be "1.1", since the NGINX default is 1.0.

-Example config:
+Example configuration:

    {{< text plain >}}
    upstream http_backend {
@ -316,7 +316,7 @@ or [manual](/docs/setup/kubernetes/sidecar-injection/#manual-sidecar-injection)

    instances to a Prometheus handler.
    <!-- todo replace ([example](https://github.com/istio/istio/blob/master/install/kubernetes/istio.yaml#L892)). -->

-1. Verify Prometheus handler config exists.
+1. Verify the Prometheus handler configuration exists.

    In Kubernetes environments, issue the following command:
@ -330,7 +330,7 @@ or [manual](/docs/setup/kubernetes/sidecar-injection/#manual-sidecar-injection)

    Mixer with the appropriate handler configuration.
    <!-- todo replace ([example](https://github.com/istio/istio/blob/master/install/kubernetes/istio.yaml#L819)) -->

-1. Verify Mixer metric instances config exists.
+1. Verify the Mixer metric instance configuration exists.

    In Kubernetes environments, issue the following command:
@ -503,7 +503,7 @@ To debug Istio with `gdb`, you will need to run the debug images of Envoy / Mixe

### With Tcpdump

-Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However any other container in the same pod will see all the packets, since the network namespace is shared. `iptables` will also see the pod-wide config.
+Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However, any other container in the same pod will see all the packets, since the network namespace is shared. `iptables` will also see the pod-wide configuration.

Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted.
@ -526,14 +526,14 @@ You should build resilience into your application for this type of

disconnect, but if you still want to prevent the disconnects from
happening, you will need to disable mutual TLS and the `istio-citadel` deployment.

-First, edit your `istio` config to disable mutual TLS
+First, edit your `istio` configuration to disable mutual TLS:

{{< text bash >}}
$ kubectl edit configmap -n istio-system istio
$ kubectl delete pods -n istio-system -l istio=pilot
{{< /text >}}

-Next, scale down the `istio-citadel` deployment to disable Envoy restarts.
+Next, scale down the `istio-citadel` deployment to disable Envoy restarts:

{{< text bash >}}
$ kubectl scale --replicas=0 deploy/istio-citadel -n istio-system
@ -541,7 +541,7 @@ $ kubectl scale --replicas=0 deploy/istio-citadel -n istio-system

This should stop Istio from restarting Envoy and disconnecting TCP connections.

-## Envoy Process High CPU Usage
+## Envoy process has high CPU usage

For larger clusters, the default configuration that comes with Istio
refreshes the Envoy configuration every 1 second. This can cause high
@ -4,5 +4,5 @@ description: How to get health checks working when mutual TLS is enabled.

weight: 40
---
You can enable a PERMISSIVE mode for your service to take both mutual TLS and plain-text traffic.
-To configure your service to accept both mTLS and plain-text traffic for health checking, please refer to the
-[PERMISSIVE mode configuration documentation](/docs/tasks/security/mtls-migration/#configure-the-server-to-accept-both-mtls-and-plain-text-traffic).
+To configure your service to accept both mutual TLS and plain-text traffic for health checking, please refer to the
+[PERMISSIVE mode configuration documentation](/docs/tasks/security/mtls-migration/#configure-the-server-to-accept-both-mutual-tls-and-plain-text-traffic).
@ -5,8 +5,8 @@ weight: 5

keywords: [debug,proxy,status,config,pilot,envoy]
---

-This task demonstrates how to use the [proxy-status](/docs/reference/commands/istioctl/#istioctl-proxy-status)
-and [proxy-config](/docs/reference/commands/istioctl/#istioctl-proxy-config) commands. The `proxy-status` command
+This task demonstrates how to use the [`proxy-status`](/docs/reference/commands/istioctl/#istioctl-proxy-status)
+and [`proxy-config`](/docs/reference/commands/istioctl/#istioctl-proxy-config) commands. The `proxy-status` command
allows you to get an overview of your mesh and identify the proxy causing the problem. Then `proxy-config` can be used
to inspect Envoy configuration and diagnose the issue.
@ -14,7 +14,7 @@ to inspect Envoy configuration and diagnose the issue.

* Have a Kubernetes cluster with Istio and Bookinfo installed (e.g. use `istio.yaml` as described in
    [installation steps](/docs/setup/kubernetes/quick-start/#installation-steps) and
-   [bookinfo installation steps](/docs/examples/bookinfo/#if-you-are-running-on-kubernetes)).
+   [Bookinfo installation steps](/docs/examples/bookinfo/#if-you-are-running-on-kubernetes)).

OR
@ -23,7 +23,7 @@ OR

## Get an overview of your mesh

The `proxy-status` command allows you to get an overview of your mesh. If you suspect one of your sidecars isn't
-receiving config or is out of sync then `proxy-status` will tell you this.
+receiving configuration or is out of sync then `proxy-status` will tell you this.
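Conceptually, `proxy-status` compares the configuration version Pilot last pushed with the version each Envoy last acknowledged. A toy sketch of that comparison (invented data shapes, purely illustrative of the idea):

```python
def sync_status(pilot_version, proxies):
    """Flag proxies whose last-acknowledged config version lags Pilot's."""
    report = {}
    for name, acked_version in proxies.items():
        report[name] = "SYNCED" if acked_version == pilot_version else "STALE"
    return report

proxies = {
    "productpage-v1-abc.default": 42,
    "reviews-v2-def.default": 41,  # missed the latest push
}
print(sync_status(42, proxies))
```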
{{< text bash >}}
$ istioctl proxy-status
@ -123,7 +123,7 @@ istio-egressgateway.istio-system.svc.cluster.local

In order to debug Envoy you need to understand Envoy clusters/listeners/routes/endpoints and how they all interact.
We will use the `proxy-config` command with the `-o json` and filtering flags to follow Envoy as it determines where
-to send an request from the productpage pod to the reviews pod at `reviews:9080`.
+to send a request from the `productpage` pod to the `reviews` pod at `reviews:9080`.

1. If you query the listener summary on a pod you will notice Istio generates the following listeners:
    * A listener on `0.0.0.0:15001` that receives all traffic into and out of the pod, then hands the request over to
@ -279,9 +279,9 @@ one route that matches on everything. This route tells Envoy to send the request

    ]
    {{< /text >}}

-## Inspecting Bootstrap config
+## Inspecting Bootstrap configuration

-So far we have looked at config retrieved (mostly) from Pilot, however Envoy requires some bootstrap config that
+So far we have looked at configuration retrieved (mostly) from Pilot, however Envoy requires some bootstrap configuration that
includes information like where Pilot can be found. To view this use the following command:

{{< text bash json >}}
@ -12,6 +12,6 @@ weight: 94

Known issues:

-Our [helm chart](/docs/setup/kubernetes/helm-install/)
+Our [Helm chart](/docs/setup/kubernetes/helm-install/)
currently requires some workarounds to work correctly; see [Issue 4701](https://github.com/istio/istio/issues/4701) for details.
@ -26,7 +26,7 @@ weight: 92

- **SignalFX**. This is the new [`signalfx`](/docs/reference/config/policy-and-telemetry/adapters/signalfx/) adapter.

-- **Stackdriver**. The [stackdriver](/docs/reference/config/policy-and-telemetry/adapters/stackdriver/) adapter has been substantially enhanced in this release to add new features and improve performance.
+- **Stackdriver**. The [`stackdriver`](/docs/reference/config/policy-and-telemetry/adapters/stackdriver/) adapter has been substantially enhanced in this release to add new features and improve performance.

## Security
@ -75,7 +75,7 @@ EOF

caption="The Error Fetching Product Details Message"
>}}

-The good news is that our application did not crash. With good microservice design, we avoided **failure propagation**: in our case, the failed _details_ microservice did not cause the _productpage_ microservice to fail. Most of the application's functionality remained available despite the failure of the _details_ microservice. We achieved **graceful service degradation**: as you can see, the reviews and ratings were displayed correctly and the application remained useful.
+The good news is that our application did not crash. With good microservice design, we avoided **failure propagation**: in our case, the failed _details_ microservice did not cause the `productpage` microservice to fail. Most of the application's functionality remained available despite the failure of the _details_ microservice. We achieved **graceful service degradation**: as you can see, the reviews and ratings were displayed correctly and the application remained useful.

So what could have gone wrong? Ah... the answer is that I forgot to enable traffic from inside the mesh to external services, in this case the Google Books web service. By default, Istio sidecar proxies ([Envoy proxies](https://www.envoyproxy.io)) **block all traffic to destinations outside the cluster**. To enable such traffic, we must define an [egress rule](https://archive.istio.io/v0.7/docs/reference/config/istio.routing.v1alpha1/#EgressRule).
@ -14,7 +14,7 @@ keywords: [traffic-management,egress,tcp]

## The Bookinfo sample application with an external ratings database

-First, I set up a MySQL database instance outside my Kubernetes cluster to hold the bookinfo ratings data, and then modified the [Bookinfo sample application](/docs/examples/bookinfo/) to use my database.
+First, I set up a MySQL database instance outside my Kubernetes cluster to hold the Bookinfo ratings data, and then modified the [Bookinfo sample application](/docs/examples/bookinfo/) to use my database.

### Setting up the database for ratings data
@ -36,7 +36,7 @@ keywords: [traffic-management,egress,tcp]

    mysql -u root -p
    {{< /text >}}

-1. Then I create a user named _bookinfo_ and grant it the _SELECT_ privilege on the `test.ratings` table:
+1. Then I create a user named `bookinfo` and grant it the _SELECT_ privilege on the `test.ratings` table:

    {{< text bash >}}
    $ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
@ -52,7 +52,7 @@ keywords: [traffic-management,egress,tcp]

    "CREATE USER 'bookinfo' IDENTIFIED BY '<password you choose>'; GRANT SELECT ON test.ratings to 'bookinfo';"
    {{< /text >}}

-Here I apply the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege): I do not use my _admin_ user in the Bookinfo application. Instead, I create a special user _bookinfo_ with minimal privileges for the Bookinfo application. In this case, the _bookinfo_ user only has the _SELECT_ privilege on a single table.
+Here I apply the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege): I do not use my _admin_ user in the Bookinfo application. Instead, I create a special user `bookinfo` with minimal privileges for the Bookinfo application. In this case, the _bookinfo_ user only has the _SELECT_ privilege on a single table.

After running the command to create the user, I clean up my bash history by checking the number of the last command and running `history -d <the number of the command that created the user>`. I don't want the new user's password stored in the bash history. If I used `mysql`, I would also remove the last command from the `~/.mysql_history` file. Read more about password protection of newly created users in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.5/en/create-user.html).
@ -114,7 +114,7 @@ keywords: [traffic-management,egress,tcp]

    +----------+--------+
    {{< /text >}}

-I used the _admin_ user (and _root_ for the local database) in the last command because the _bookinfo_ user does not have the _UPDATE_ privilege on the `test.ratings` table.
+I used the `admin` user (and `root` for the local database) in the last command because the `bookinfo` user does not have the `UPDATE` privilege on the `test.ratings` table.

Now I am ready to deploy a version of the Bookinfo application that uses the external database.
@ -265,7 +265,7 @@ Created config egress-rule/default/mysql at revision 1954425

## Cleanup

-1. Delete the _test_ database and the _bookinfo_ user:
+1. Delete the `test` database and the `bookinfo` user:

    {{< text bash >}}
    $ mysqlsh --sql --ssl-mode=REQUIRED -u admin -p --host <the database host> --port <the database port> \
@ -222,7 +222,7 @@ Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list p

    {{< /text >}}

If the [telemetry components](/docs/tasks/telemetry/) are deployed, for example
-[prometheus](/docs/tasks/telemetry/querying-metrics/) (restricted to Istio's `namespace`), the statistics obtained will likewise only show the private data of the tenant application's namespace.
+[Prometheus](/docs/tasks/telemetry/querying-metrics/) (restricted to Istio's `namespace`), the statistics obtained will likewise only show the private data of the tenant application's namespace.

## Conclusion
@ -121,7 +121,7 @@ destination_version: destination.labels["version"] | "unknown"

Controlling the policy and telemetry features involves configuring three types of resources:

-- Configure a set of handlers, which determine the set of adapters being used and how they operate. An example of handler configuration: providing a `statsd` adapter with the IP address of the statsd backend.
+- Configure a set of handlers, which determine the set of adapters being used and how they operate. An example of handler configuration: providing a `statsd` adapter with the IP address of the Statsd backend.
- Configure a set of *instances*, which describe how to map request attributes to adapter inputs. Instances represent a chunk of data that one or more adapters will operate on. For example, an operator may decide to generate `requestcount` metric instances from attributes such as `destination.service` and `response.code`.
- Configure a set of rules, which describe when a particular adapter is called and which instances it is given. A rule consists of a *match* expression and *actions*. The match expression controls when to invoke the adapter, while the actions determine the set of instances to give the adapter. For example, a rule might send the generated `requestcount` metric instances to the `statsd` adapter.
@ -29,7 +29,7 @@ Istio uses [Kubernetes service account](https://kubernetes.io/docs/tasks/confi

* Service accounts in Istio have the format `spiffe://<domain>/ns/<namespace>/sa/<serviceaccount>`

-* _domain_ is currently _cluster.local_. We will support customization of the domain soon.
+* _domain_ is currently `cluster.local`. We will support customization of the domain soon.
* _namespace_ is the namespace of the Kubernetes service account.
* _serviceaccount_ is the name of the Kubernetes service account.
@ -102,7 +102,7 @@ weight: 50

    destination_version: destination.labels["version"] | "unknown"
    {{< /text >}}

-* **Destination Service**: identifies the destination service responsible for handling the incoming request. For example: "details.default.svc.cluster.local".
+* **Destination Service**: identifies the destination service responsible for handling the incoming request. For example: `details.default.svc.cluster.local`.

    {{< text yaml >}}
    destination_service: destination.service.host | "unknown"
@ -109,7 +109,7 @@ istio-pilot-58c65f74bc-2f5xn 2/2 Running 0 1m

    $ helm delete --purge istio
    {{< /text >}}

-If your helm version is lower than 2.9.0, you need to manually clean up the extra job resources before redeploying the new version of the Istio chart:
+If your Helm version is lower than 2.9.0, you need to manually clean up the extra job resources before redeploying the new version of the Istio chart:

    {{< text bash >}}
    $ kubectl -n istio-system delete job --all
@ -28,4 +28,4 @@ keywords: [platform-setup,ibm,iks]

## IBM Cloud Private

-[Set up the kubectl client](https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/cfc_cli.html) to access IBM Cloud Private.
+[Set up the `kubectl` client](https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/manage_cluster/cfc_cli.html) to access IBM Cloud Private.
@ -45,7 +45,7 @@ caption="GKE-IAM Role"

[Istio GKE Deployment Manager](https://accounts.google.com/signin/v2/identifier?service=cloudconsole&continue=https://console.cloud.google.com/launcher/config?templateurl={{< github_file >}}/install/gcp/deployment_manager/istio-cluster.jinja&followup=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&flowName=GlifWebSignIn&flowEntry=ServiceLogin)

As in the other "how to access the installed features" tutorials, we recommend keeping the default settings. By default the tool creates a specially configured GKE alpha cluster and then installs the Istio [control plane](/docs/concepts/what-is-istio/#architecture),
-the [BookInfo](/docs/examples/bookinfo/) sample application,
+the [Bookinfo](/docs/examples/bookinfo/) sample application,
[Grafana](/docs/tasks/telemetry/using-istio-dashboard/),
[Prometheus](/docs/tasks/telemetry/querying-metrics/),
[ServiceGraph](/docs/tasks/telemetry/servicegraph/) and
@ -102,7 +102,7 @@ deploy/prometheus 1 1 1 1 4

    deploy/servicegraph 1 1 1 1 4m
    {{< /text >}}

-Now confirm that the BookInfo sample application is also installed:
+Now confirm that the Bookinfo sample application is also installed:

    {{< text bash >}}
    $ kubectl get deployments,ing
@ -123,7 +123,7 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

    istio-ingressgateway LoadBalancer 10.59.251.109 35.194.26.85 80:31380/TCP,443:31390/TCP,31400:31400/TCP 6m
    {{< /text >}}

-Note the IP and port assigned to the BookInfo product page (in this example, `35.194.26.85:80`).
+Note the IP and port assigned to the Bookinfo product page (in this example, `35.194.26.85:80`).

You can also find them in the **Kubernetes Engine -> Workloads** section of the [Cloud Console](https://console.cloud.google.com/kubernetes/workload):
@ -132,9 +132,9 @@ istio-ingressgateway LoadBalancer 10.59.251.109 35.194.26.85 80:31380/TC

caption="GKE-Workloads"
>}}

-### Accessing the BookInfo sample
+### Accessing the Bookinfo sample

-1. Create an environment variable for the external IP of BookInfo:
+1. Create an environment variable for the external IP of Bookinfo:

    {{< text bash >}}
    $ export GATEWAY_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
@ -248,4 +248,4 @@ $ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=

1. Select the deployment and click **Delete**.

-1. Deployment Manager will delete all the deployed GKE components. However, some elements are retained, such as Ingress and LoadBalancers. You can delete them by returning to [**Network Services** -> **LoadBalancers**](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list) in the cloud console.
+1. Deployment Manager will delete all the deployed GKE components. However, some elements are retained, such as `Ingress` and `LoadBalancers`. You can delete them by returning to [**Network Services** -> **LoadBalancers**](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list) in the cloud console.
@ -310,7 +310,7 @@ $ kubectl apply -f install/kubernetes/istio-demo-auth.yaml

## Deploying an application

-You can deploy your own application, or a sample application such as [BookInfo](/docs/examples/bookinfo/).
+You can deploy your own application, or a sample application such as [Bookinfo](/docs/examples/bookinfo/).
Note: the application must use the HTTP/1.1 or HTTP/2.0 protocol for HTTP traffic, because HTTP/1.0 is no longer supported.

If you enabled the [Istio-Initializer](/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection) as shown above, you can deploy the application directly with `kubectl create`. The Istio-Initializer automatically injects the Envoy container into your application's pods if the namespace running the pods is labeled `istio-injection=enabled`:
@ -126,7 +126,7 @@ $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name

    ...
    {{< /text >}}

-> This example is adapted from the [kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).
+> This example is adapted from the [Kubernetes examples](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md).

### Creating an HTTPS service with an Istio sidecar and mutual TLS
@ -59,7 +59,7 @@ keywords: [security,certificates]

In this section, we verify that the new workload certificates and the root certificate are propagated correctly. You need `openssl` installed on your machine.

-1. Deploy the bookinfo application following the [deployment guide](/docs/examples/bookinfo/).
+1. Deploy the Bookinfo application following the [deployment guide](/docs/examples/bookinfo/).

1. Retrieve the loaded certificates.
@ -10,7 +10,7 @@ Fluentd is an open source log collector that supports many [data outputs](https://w

[Elasticsearch](https://www.elastic.co/products/elasticsearch) is a popular backend for logging, and
[Kibana](https://www.elastic.co/products/kibana) is used for viewing. At the end of this task, a new log stream will be enabled that sends logs to the example Fluentd / Elasticsearch / Kibana stack.

-The [BookInfo](/docs/examples/bookinfo/) sample application is used as the example application throughout this task.
+The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example application throughout this task.

## Before you begin
@ -328,7 +328,7 @@ Created config rule/istio-system/newlogtofluentd at revision 22376

1. Send traffic to the sample application.

-    For the [BookInfo](/docs/examples/bookinfo/#determining-the-ingress-ip-and-port)
+    For the [Bookinfo](/docs/examples/bookinfo/#determining-the-ingress-ip-and-port)
    sample, visit `http://$GATEWAY_URL/productpage` in your browser or issue the following command:

    {{< text bash >}}
@ -18,7 +18,7 @@ keywords: [traffic-management,routing]

## Content-based routing

-Because the BookInfo sample deploys three versions of the reviews microservice, we need to set a default route. Otherwise, if you access the application several times, you will notice that the output sometimes contains star ratings and sometimes does not.
+Because the Bookinfo sample deploys three versions of the reviews microservice, we need to set a default route. Otherwise, if you access the application several times, you will notice that the output sometimes contains star ratings and sometimes does not.
This is because, without an explicit default route for the application, Istio routes requests to all available versions of the service in a random fashion.

> This task assumes that you have not yet set any routes. If you have already created conflicting route rules for the sample application, you will need to use `replace` instead of `create` in the commands below.
@ -111,10 +111,10 @@ keywords: [traffic-management,routing]

Because route rules are distributed to the proxies asynchronously, you should wait a few seconds for the rules to propagate to all pods before attempting to access the application.

-1. Open the URL of the BookInfo application (`http://$GATEWAY_URL/productpage`) in your browser.
+1. Open the URL of the Bookinfo application (`http://$GATEWAY_URL/productpage`) in your browser.
    Recall that when you deployed the Bookinfo sample, you should already have set `GATEWAY_URL` following [these instructions](/docs/examples/bookinfo/#determining-the-ingress-ip-and-port).

-    You should see the productpage of the BookInfo application.
+    You should see the productpage of the Bookinfo application.
    Note that the `productpage` page is displayed without any rating stars, because the `reviews:v1` service does not access the ratings service.

1. Route requests from a specific user to `reviews:v2`.
|
|||
|
||||
## 理解原理
|
||||
|
||||
在此任务中,您首先使用 Istio 将 100% 的请求流量都路由到了 BookInfo 服务的 v1 版本。 然后再设置了一条路由规则,该路由规则在 productpage 服务中添加基于请求的 "end-user" 自定义 header 选择性地将特定的流量路由到了 reviews 服务的 v2 版本。
|
||||
在此任务中,您首先使用 Istio 将 100% 的请求流量都路由到了 Bookinfo 服务的 v1 版本。 然后再设置了一条路由规则,该路由规则在 productpage 服务中添加基于请求的 "end-user" 自定义 header 选择性地将特定的流量路由到了 reviews 服务的 v2 版本。
|
||||
|
||||
请注意,为了利用 Istio 的 L7 路由功能,Kubernetes 中的服务(如本任务中使用的 Bookinfo 服务)必须遵守某些特定限制。
|
||||
参考 [sidecar 注入文档](/docs/setup/kubernetes/spec-requirements)了解详情。
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 5
|
|||
keywords: [debug,proxy,status,config,pilot,envoy]
|
||||
---
|
||||
|
||||
此任务演示如何使用 [proxy-status](/docs/reference/commands/istioctl/#istioctl-proxy-status) 和 [proxy-config](/docs/reference/commands/istioctl/#istioctl-proxy-config) 命令。`proxy-status` 命令允许您获取网格的概述并识别导致问题的代理。然后,`proxy-config` 可用于检查 Envoy 配置并用于问题排查。
|
||||
此任务演示如何使用 [`proxy-status`](/docs/reference/commands/istioctl/#istioctl-proxy-status) 和 [`proxy-config`](/docs/reference/commands/istioctl/#istioctl-proxy-config) 命令。`proxy-status` 命令允许您获取网格的概述并识别导致问题的代理。然后,`proxy-config` 可用于检查 Envoy 配置并用于问题排查。
|
||||
|
||||
## 开始之前
|
||||
|
||||
|
|
|
@ -59,7 +59,7 @@ check_content() {
|
|||
}
|
||||
|
||||
check_content content --en-us
|
||||
check_content content_zh --zh-cn
|
||||
#check_content content_zh --zh-cn
|
||||
|
||||
grep -nr -e "ERROR: markdown" ./public
|
||||
if [ "$?" == "0" ]
|
||||
|
|