Simplify the Concepts section. (#1649)
@@ -58,7 +58,7 @@ title: Istio
 <p>
 Control traffic between services with dynamic route configuration,
 conduct A/B tests, release canaries, and gradually upgrade versions using red/black deployments.
-<a href="/docs/concepts/traffic-management/overview/">Learn more...</a>
+<a href="/docs/concepts/traffic-management/">Learn more...</a>
 </p>
 </div>
 </div>
@@ -73,7 +73,7 @@ title: Istio
 <h2>Resilience Across Languages and Platforms</h2>
 <p>
 Increase reliability by shielding applications from flaky networks and cascading failures in adverse conditions.
-<a href="/docs/concepts/traffic-management/handling-failures/">Learn more...</a>
+<a href="/docs/concepts/traffic-management/#handling-failures">Learn more...</a>
 </p>
 </div>
 </div>
@@ -89,7 +89,7 @@ title: Istio
 <p>
 Apply organizational policies to the interaction between services, ensure access policies are enforced and resources are fairly distributed
 among consumers.
-<a href="/docs/concepts/policies-and-telemetry/overview/">Learn more...</a>
+<a href="/docs/concepts/policies-and-telemetry/">Learn more...</a>
 </p>
 </div>
 </div>
@@ -104,7 +104,7 @@ title: Istio
 <h2>In-Depth Telemetry</h2>
 <p>
 Understand the dependencies between services, the nature and flow of traffic between them, and quickly identify issues with distributed tracing.
-<a href="/docs/concepts/what-is-istio/overview/">Learn more...</a>
+<a href="/docs/concepts/what-is-istio/">Learn more...</a>
 </p>
 </div>
 </div>

@@ -11,7 +11,7 @@ This is a major release for Istio on the road to 1.0. There are a great many new

 - **Streaming Envoy configuration**. By default Pilot now streams configuration to Envoy using its [ADS API](https://github.com/envoyproxy/data-plane-api/blob/master/XDS_PROTOCOL.md). This new approach increases effective scalability, reduces rollout delay and should eliminate spurious 404 errors.

-- **Gateway for Ingress/Egress**. We no longer support combining Kubernetes Ingress specs with Istio routing rules as it has led to several bugs and reliability issues. Istio now supports a platform independent [Gateway](/docs/concepts/traffic-management/rules-configuration/#gateways) model for ingress & egress proxies that works across Kubernetes and Cloud Foundry and works seamlessly with routing. The Gateway supports [Server Name Indication](https://en.wikipedia.org/wiki/Server_Name_Indication) based routing,
+- **Gateway for Ingress/Egress**. We no longer support combining Kubernetes Ingress specs with Istio routing rules as it has led to several bugs and reliability issues. Istio now supports a platform independent [Gateway](/docs/concepts/traffic-management/#gateways) model for ingress & egress proxies that works across Kubernetes and Cloud Foundry and works seamlessly with routing. The Gateway supports [Server Name Indication](https://en.wikipedia.org/wiki/Server_Name_Indication) based routing,
 as well as serving a certificate based on the server name presented by the client.

 - **Constrained Inbound Ports**. We now restrict the inbound ports in a pod to the ones declared by the apps running inside that pod.

@@ -32,7 +32,7 @@ Whether we use one deployment or two, canary management using deployment feature

 With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.

-Istio’s [routing rules](/docs/concepts/traffic-management/rules-configuration/) also provide other important advantages; you can easily control
+Istio’s [routing rules](/docs/concepts/traffic-management/#rule-configuration) also provide other important advantages; you can easily control
 fine grain traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the **helloworld** service and see how simple the problem becomes.
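The fine-grained percentage routing described in this hunk can be sketched with the v1alpha3 `VirtualService` API introduced around this release; the `helloworld` host and the `v1`/`v2` subset names here are illustrative assumptions, not part of the original post:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:        # assumed stable version, receives 99% of traffic
        host: helloworld
        subset: v1
      weight: 99
    - destination:        # canary receives 1%, independent of pod counts
        host: helloworld
        subset: v2
      weight: 1
{{< /text >}}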

 We begin by defining the **helloworld** Service, just like any other Kubernetes service, something like this:

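The Service definition itself falls outside this hunk; a minimal sketch of such a Kubernetes Service (the port and label values are assumptions for illustration):

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  selector:
    app: helloworld   # matches the pods of every deployed version
  ports:
  - port: 80
    name: http
{{< /text >}}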
@@ -27,10 +27,10 @@ Adapters are Go packages that are directly linked into the Mixer binary. It’s

 ## Philosophy

-Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes](/docs/concepts/policies-and-telemetry/config/#attributes) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.
+Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes](/docs/concepts/policies-and-telemetry/#attributes) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.

 {{< image width="60%" ratio="42.60%"
-link="/docs/concepts/policies-and-telemetry/config/machine.svg"
+link="/docs/concepts/policies-and-telemetry/machine.svg"
 caption="Attribute Machine"
 >}}

@@ -40,7 +40,7 @@ Configuration is a complex task. In fact, evidence shows that the overwhelming m

 Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. Each adapter defines the exact configuration data it needs via a [protobuf](https://developers.google.com/protocol-buffers/) message.

-You configure each adapter by creating [*handlers*](/docs/concepts/policies-and-telemetry/config/#handlers) for them. A handler is a
+You configure each adapter by creating [*handlers*](/docs/concepts/policies-and-telemetry/#handlers) for them. A handler is a
 configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.

 ## Templates: adapter input schema

@@ -55,11 +55,11 @@ Each template is specified as a [protobuf](https://developers.google.com/protoco
 ## Instances: attribute mapping

 You control which data is delivered to individual adapters by creating
-[*instances*](/docs/concepts/policies-and-telemetry/config/#instances).
-Instances control how Mixer uses the [attributes](/docs/concepts/policies-and-telemetry/config/#attributes) delivered
+[*instances*](/docs/concepts/policies-and-telemetry/#instances).
+Instances control how Mixer uses the [attributes](/docs/concepts/policies-and-telemetry/#attributes) delivered
 by the proxy into individual bundles of data that can be routed to different adapters.

-Creating instances generally requires using [attribute expressions](/docs/concepts/policies-and-telemetry/config/#attribute-expressions). The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance’s field.
+Creating instances generally requires using [attribute expressions](/docs/concepts/policies-and-telemetry/#attribute-expressions). The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance’s field.

 Every instance field has a type, as defined in the template, every attribute has a
 [type](https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto), and every attribute expression has a type.

@@ -69,7 +69,7 @@ to a string field. This kind of strong typing is designed to minimize the risk
 ## Rules: delivering data to adapters

 The last piece to the puzzle is telling Mixer which instances to send to which handler and when. This is done by
-creating [*rules*](/docs/concepts/policies-and-telemetry/config/#rules). Each rule identifies a specific handler and the set of
+creating [*rules*](/docs/concepts/policies-and-telemetry/#rules). Each rule identifies a specific handler and the set of
 instances to send to that handler. Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.

 Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it’s like the rule didn’t exist and the indicated handler isn’t invoked.

@@ -11,7 +11,7 @@ aliases:
 - /blog/mixer-spof-myth.html
 ---

-As [Mixer](/docs/concepts/policies-and-telemetry/overview/) is in the request path, it is natural to question how it impacts
+As [Mixer](/docs/concepts/policies-and-telemetry/) is in the request path, it is natural to question how it impacts
 overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is
 "Isn't this just introducing a single point of failure?"

@@ -132,7 +132,7 @@ Accessing the web page after deleting the egress rule produces the same error th

 There is a caveat to this story. In HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted, so Istio cannot know the destination domain of the encrypted requests. Well, Istio could know the destination domain by the [SNI](https://tools.ietf.org/html/rfc3546#section-3.1) (_Server Name Indication_) field. This feature, however, is not yet implemented in Istio. Therefore, currently Istio cannot perform filtering of HTTPS requests based on the destination domains.

-To allow Istio to perform filtering of egress requests based on domains, the microservices must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code of the microservices must be written differently or configured differently, according to whether the microservice runs inside or outside an Istio service mesh. This contradicts the Istio design goal of [maximizing transparency](/docs/concepts/what-is-istio/goals/). Sometimes we need to compromise...
+To allow Istio to perform filtering of egress requests based on domains, the microservices must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code of the microservices must be written differently or configured differently, according to whether the microservice runs inside or outside an Istio service mesh. This contradicts the Istio design goal of [maximizing transparency](/docs/concepts/what-is-istio/#design-goals). Sometimes we need to compromise...

 The diagram below shows how the HTTPS traffic to external services is performed. On the top, a microservice outside an Istio service mesh
 sends regular HTTPS requests, encrypted end-to-end. On the bottom, the same microservice inside an Istio service mesh must send unencrypted HTTP requests inside a pod, which are intercepted by the sidecar Envoy proxy. The sidecar proxy performs TLS origination, so the traffic between the pod and the external service is encrypted.

@@ -1,8 +0,0 @@
----
-title: Policies and Telemetry
-description: Introduces the policy control snd telemetry collection mechanisms.
-weight: 40
-type: section-index
-aliases:
-- /docs/concepts/policy-and-control/index.html
----
Binary image file: 25 KiB before, 25 KiB after.
@@ -1,17 +1,101 @@
 ---
-title: Configuration
-description: An overview of the key concepts used to configure Istio's policy enforcement and telemetry collection features.
-weight: 30
+title: Policies and Telemetry
+description: Describes the policy enforcement and telemetry mechanisms.
+weight: 40
 keywords: [policies,telemetry,control,config]
 aliases:
+- /docs/concepts/policy-and-control/mixer.html
 - /docs/concepts/policy-and-control/mixer-config.html
 - /docs/concepts/policy-and-control/attributes.html
 ---

-Istio's policy and telemetry features are configured through a common model designed to
-put operators in control of every aspect of authorization policy and telemetry collection.
-Specific focus was given to keeping the model as simple and possible, while being powerful
-enough to control Istio's many features at scale.
+Istio provides a flexible model to enforce authorization policies and collect telemetry for the
+services in a mesh.
+
+Infrastructure backends are designed to provide support functionality
+used to build services. They include such things as access control systems,
+telemetry capturing systems, quota enforcement systems, billing systems, and so
+forth. Services traditionally directly integrate with these backend systems,
+creating a hard coupling and baking-in specific semantics and usage options.
+
+Istio provides a uniform abstraction that makes it possible for Istio to interface with
+an open-ended set of infrastructure backends. This is done in such a way to provide rich
+and deep controls to the operator, while imposing no burden on service developers.
+Istio is designed to change the boundaries between layers in order to reduce
+systemic complexity, eliminate policy logic from service code and give
+control to operators.
+
+Mixer is the Istio component responsible for providing policy controls and telemetry collection:
+
+{{< image width="55%" ratio="49.26%"
+link="./topology-without-cache.svg"
+caption="Mixer Topology"
+>}}
+
+The Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry.
+The sidecar has local caching such that a large percentage of precondition checks can be performed from cache. Additionally, the
+sidecar buffers outgoing telemetry such that it only calls Mixer infrequently.
+
+At a high level, Mixer provides:
+
+* **Backend Abstraction**. Mixer insulates the rest of Istio from the implementation details of individual infrastructure backends.
+
+* **Intermediation**. Mixer allows operators to have fine-grained control over all interactions between the mesh and infrastructure backends.
+
+Beyond these purely functional aspects, Mixer also has [reliability and scalability](#reliability-and-latency) benefits as outlined below.
+
+Policy enforcement and telemetry collection are entirely driven from configuration.
+It's possible to completely disable these features and avoid the need to run a
+Mixer component in an Istio deployment.
+
+## Adapters
+
+Mixer is a highly modular and extensible component. One of its key functions is
+to abstract away the details of different policy and telemetry backend systems,
+allowing the rest of Istio to be agnostic of those backends.
+
+Mixer's flexibility in dealing with different infrastructure backends comes
+from its general-purpose plug-in model. Individual plug-ins are
+known as *adapters* and they allow Mixer to interface to different
+infrastructure backends that deliver core functionality, such as logging,
+monitoring, quotas, ACL checking, and more. The exact set of
+adapters used at runtime is determined through configuration and can easily be
+extended to target new or custom infrastructure backends.
+
+{{< image width="20%" ratio="138%"
+link="./adapters.svg"
+alt="Showing Mixer with adapters."
+caption="Mixer and its Adapters"
+>}}
+
+Learn more about the [set of supported adapters](/docs/reference/config/policy-and-telemetry/adapters/).
+
+## Reliability and latency
+
+Mixer is a highly available component whose design helps increase overall availability and reduce average latency
+of services in the mesh. Key aspects of its design deliver these benefits:
+
+* **Statelessness**. Mixer is stateless in that it doesn’t manage any persistent storage of its own.
+
+* **Hardening**. Mixer proper is designed to be a highly reliable component. The design intent is to achieve > 99.999% uptime for any individual Mixer instance.
+
+* **Caching and Buffering**. Mixer is designed to accumulate a large amount of transient ephemeral state.
+
+The sidecar proxies that sit next to each service instance in the mesh must necessarily be frugal in terms of memory consumption, which constrains the possible amount of local
+caching and buffering. Mixer, however, lives independently and can use considerably larger caches and output buffers. Mixer thus acts as a highly-scaled and highly-available second-level
+cache for the sidecars.
+
+{{< image width="65%" ratio="65.89%"
+link="./topology-with-cache.svg"
+caption="Mixer Topology"
+>}}
+
+Since Mixer’s expected availability is considerably higher than most infrastructure backends (those often have availability of perhaps 99.9%), Mixer's local
+caches and buffers not only contribute to reduce latency, they also help mask infrastructure backend failures by being able to continue operating
+even when a backend has become unresponsive.
+
+Finally, Mixer's caching and buffering helps reduce the frequency of calls to backends, and can sometimes reduce the amount of data
+sent to backends (through local aggregation). Both of these can reduce operational expense in certain cases.

 ## Attributes

@@ -35,7 +119,6 @@ source.ip: 192.168.0.1
 destination.service: example
 {{< /text >}}

-Mixer is the Istio component that implements policy and telemetry functionality.
 Mixer is in essence an attribute processing machine. The Envoy sidecar invokes Mixer for
 every request, giving Mixer a set of attributes that describe the request and the environment
 around the request. Based on its configuration and the specific set of attributes it was

@@ -53,40 +136,70 @@ The specific vocabulary is determined by the set of attribute producers being us
 in the deployment. The primary attribute producer in Istio is Envoy, although
 specialized Mixer adapters can also generate attributes.

-The common baseline set of attributes available in most Istio deployments is defined
-[here](/docs/reference/config/policy-and-telemetry/attribute-vocabulary/).
+Learn more about the [common baseline set of attributes available in most Istio deployments](/docs/reference/config/policy-and-telemetry/attribute-vocabulary/).
+
+### Attribute expressions
+
+Attribute expressions are used when configuring [instances](#instances).
+Here's an example use of expressions:
+
+{{< text yaml >}}
+destination_service: destination.service
+response_code: response.code
+destination_version: destination.labels["version"] | "unknown"
+{{< /text >}}
+
+The sequences on the right-hand side of the colons are the simplest forms of attribute expressions.
+The first two only consist of attribute names. The `response_code` label is assigned the value from the `response.code` attribute.
+
+Here's an example of a conditional expression:
+
+{{< text yaml >}}
+destination_version: destination.labels["version"] | "unknown"
+{{< /text >}}
+
+With the above, the `destination_version` label is assigned the value of `destination.labels["version"]`. However if that attribute
+is not present, the literal `"unknown"` is used.
+
+Refer to the [attribute expression reference](/docs/reference/config/policy-and-telemetry/expression-language/) for details.
+
+## Configuration model
+
+Istio's policy and telemetry features are configured through a common model designed to
+put operators in control of every aspect of authorization policy and telemetry collection.
+Specific focus was given to keeping the model simple, while being powerful
+enough to control Istio's many features at scale.

 Controlling the policy and telemetry features involves configuring three types of resources:

-- Configuring a set of *handlers*, which determine the set of adapters that
+* Configuring a set of *handlers*, which determine the set of adapters that
 are being used and how they operate. Providing a `statsd` adapter with the IP
 address for a statsd backend is an example of handler configuration.

-- Configuring a set of *instances*, which describe how to map request attributes into adapter inputs.
+* Configuring a set of *instances*, which describe how to map request attributes into adapter inputs.
 Instances represent a chunk of data that one or more adapters will operate
 on. For example, an operator may decide to generate `requestcount`
 metric instances from attributes such as `destination.service` and
 `response.code`.

-- Configuring a set of *rules*, which describe when a particular adapter is called and which instances
+* Configuring a set of *rules*, which describe when a particular adapter is called and which instances
 it is given. Rules consist of a *match* expression and *actions*. The match expression controls
-when to invoke an adapter, while the actions determine the set of instances to give to the adapter.
+when to invoke an adapter, while the actions determine the set of instances to give the adapter.
 For example, a rule might send generated `requestcount` metric instances to a `statsd` adapter.

 Configuration is based on *adapters* and *templates*:

-- **Adapters** encapsulate the logic necessary to interface Mixer with a specific infrastructure backend.
-- **Templates** define the schema for specifying request mapping from attributes to adapter inputs.
+* **Adapters** encapsulate the logic necessary to interface Mixer with a specific infrastructure backend.
+
+* **Templates** define the schema for specifying request mapping from attributes to adapter inputs.
+A given adapter may support any number of templates.

-## Handlers
+### Handlers

 Adapters encapsulate the logic necessary to interface Mixer with specific external infrastructure
 backends such as [Prometheus](https://prometheus.io) or [Stackdriver](https://cloud.google.com/logging).
 Individual adapters generally need operational parameters in order to do their work. For example, a logging adapter may require
-the IP address and port of the log sink.
+the IP address and port of the log collection backend.

 Here is an example showing how to configure an adapter of kind = `listchecker`. The listchecker adapter checks an input value against a list.
 If the adapter is configured for a whitelist, it returns success if the input value is found in the list.
@@ -102,8 +215,6 @@ spec:
 blacklist: false
 {{< /text >}}

 `{metadata.name}.{kind}.{metadata.namespace}` is the fully qualified name of a handler. The fully qualified name of the above handler is
 `staticversion.listchecker.istio-system` and it must be unique.
 The schema of the data in the `spec` stanza depends on the specific adapter being configured.

 Some adapters implement functionality that goes beyond connecting Mixer to a backend.

|
|||
bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
|
||||
{{< /text >}}
|
||||
|
||||
Each adapter defines its own particular format of configuration data. The exhaustive set of
|
||||
adapters and their specific configuration formats can be found [here](/docs/reference/config/policy-and-telemetry/adapters/).
|
||||
Each adapter defines its own particular format of configuration data. Learn more about [the full set of
|
||||
adapters and their specific configuration formats](/docs/reference/config/policy-and-telemetry/adapters/).
|
||||
|
||||
## Instances
|
||||
### Instances
|
||||
|
||||
Instance configuration specifies the request mapping from attributes to adapter inputs.
|
||||
The following is an example of a metric instance configuration that produces the `requestduration` metric.
|
||||
|
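The instance YAML itself is elided between hunks; a sketch of what such a `requestduration` metric instance could look like, with dimensions chosen to match the attribute expressions shown earlier on the page (the exact fields are assumptions):

{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestduration
  namespace: istio-system
spec:
  value: response.duration | "0ms"   # the measured value of the metric
  dimensions:
    destination_service: destination.service | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    response_code: response.code | 200
{{< /text >}}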
@@ -160,10 +271,10 @@ spec:
 {{< /text >}}

 Note that all the dimensions expected in the handler configuration are specified in the mapping.
-Templates define the specific required content of individual instances. The exhaustive set of
-templates and their specific configuration formats can be found [here](/docs/reference/config/policy-and-telemetry/templates/).
+Templates define the specific required content of individual instances. Learn more about the [set of
+templates and their specific configuration formats](/docs/reference/config/policy-and-telemetry/templates/).

-## Rules
+### Rules

 Rules specify when a particular handler is invoked with a specific instance.
 Consider an example where you want to deliver the `requestduration` metric to the prometheus handler if
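The rule itself, including the condition that completes the sentence above, is elided between hunks; a hedged sketch, reusing the `handler.prometheus` and `requestduration.metric` names that appear in the surrounding text (the rule name and `match` condition are assumptions):

{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # assumed condition; only matching requests trigger the actions
  match: destination.service == "service1.ns.svc.cluster.local"
  actions:
  - handler: handler.prometheus
    instances:
    - requestduration.metric
{{< /text >}}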
@@ -186,29 +297,5 @@ spec:
 A rule contains a `match` predicate expression and a list of actions to perform if the predicate is true.
 An action specifies the list of instances to be delivered to a handler.
 A rule must use the fully qualified names of handlers and instances.
-If the rule, handlers, and instances are all in the same namespace, the namespace suffix can be elided from the fully qualified name as seen in `handler.prometheus`.
-
-## Attribute expressions
-
-Attribute expressions are used when configuring instances.
-You have already seen a few simple attribute expressions in the previous examples:
-
-{{< text yaml >}}
-destination_service: destination.service
-response_code: response.code
-destination_version: destination.labels["version"] | "unknown"
-{{< /text >}}
-
-The sequences on the right-hand side of the colons are the simplest forms of attribute expressions.
-The first two only consist of attribute names. The `response_code` label is assigned the value from the `request.code` attribute.
-
-Here's an example of a conditional expression:
-
-{{< text yaml >}}
-destination_version: destination.labels["version"] | "unknown"
-{{< /text >}}
-
-With the above, the `destination_version` label is assigned the value of `destination.labels["version"]`. However if that attribute
-is not present, the literal `"unknown"` is used.
-
-Refer to the [attribute expression reference](/docs/reference/config/policy-and-telemetry/expression-language/) for details.
+If the rule, handlers, and instances are all in the same namespace, the namespace suffix can be elided from
+the fully qualified name as seen in `handler.prometheus`.
Binary image file: 86 KiB before, 86 KiB after.
@ -1,94 +0,0 @@
|
|||
---
|
||||
title: Overview
|
||||
description: Describes the design of the policy and telemetry mechanisms.
|
||||
weight: 5
|
||||
keywords: [policies,telemetry,control]
|
||||
aliases:
|
||||
- /docs/concepts/policy-and-control/mixer.html
|
||||
---
|
||||
|
||||
Istio provides a flexible model to enforce authorization policies and collect telemetry for the
|
||||
services in a mesh.
|
||||
|
||||
Infrastructure backends are designed to provide support functionality
|
||||
used to build services. They include such things as access control systems,
|
||||
telemetry capturing systems, quota enforcement systems, billing systems, and so
|
||||
forth. Services traditionally directly integrate with these backend systems,
|
||||
creating a hard coupling and baking-in specific semantics and usage options.
|
||||
|
||||
Istio provides a uniform abstraction that makes it possible for Istio to interface with
|
||||
an open-ended set of infrastructure backends. This is done in such a way to provide rich
|
||||
and deep controls to the operator, while imposing no burden on service developers.
|
||||
Istio is designed to change the boundaries between layers in order to reduce
|
||||
systemic complexity, eliminate policy logic from service code and give
|
||||
control to operators.
|
||||
|
||||
Mixer is the Istio component responsible for providing policy controls and telemetry collection:
|
||||
|
||||
{{< image width="75%" ratio="49.26%"
    link="./topology-without-cache.svg"
    caption="Mixer Topology"
    >}}

The Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry.
The sidecar has local caching such that a relatively large percentage of precondition checks can be performed from cache. Additionally, the
sidecar buffers outgoing telemetry such that it only actually needs to call Mixer infrequently.

At a high level, Mixer provides:

* **Backend Abstraction**. Mixer insulates the rest of Istio from the implementation details of individual infrastructure backends.

* **Intermediation**. Mixer allows operators to have fine-grained control over all interactions between the mesh and infrastructure backends.

Beyond these purely functional aspects, Mixer also has [reliability and scalability](#reliability-and-latency) benefits as outlined below.

Policy enforcement and telemetry collection are entirely driven from configuration.
It's possible to completely disable these features and avoid the need to run a
Mixer component in an Istio deployment.

## Adapters

Mixer is a highly modular and extensible component. One of its key functions is
to abstract away the details of different policy and telemetry backend systems,
allowing the rest of Istio to be agnostic of those backends.

Mixer's flexibility in dealing with different infrastructure backends comes from
its general-purpose plug-in model. Individual plug-ins are
known as *adapters*; they allow Mixer to interface with different
infrastructure backends that deliver core functionality, such as logging,
monitoring, quotas, ACL checking, and more. The exact set of
adapters used at runtime is determined through configuration and can easily be
extended to target new or custom infrastructure backends.

{{< image width="35%" ratio="138%"
    link="./adapters.svg"
    alt="Showing Mixer with adapters."
    caption="Mixer and its Adapters"
    >}}
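To make this concrete, adapters are wired up through Mixer's configuration model of handlers, instances, and rules. The following is a minimal sketch, assuming a Prometheus handler and a `requestcount` metric instance have already been defined; the names are illustrative:

```yaml
# Route HTTP request counts to the Prometheus adapter (sketch; names are illustrative).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  match: context.protocol == "http"   # only fire for HTTP traffic
  actions:
  - handler: handler.prometheus       # adapter configuration to invoke
    instances:
    - requestcount.metric             # data shape handed to the adapter
```

Swapping the handler reference is all it takes to target a different backend; the rest of the mesh is unaffected.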
## Reliability and latency

Mixer is a highly available component whose design helps increase overall availability and reduce the average latency
of services in the mesh. Key aspects of its design deliver these benefits:

* **Statelessness**. Mixer is stateless in that it doesn't manage any persistent storage of its own.

* **Hardening**. Mixer proper is designed to be a highly reliable component. The design intent is to achieve > 99.999% uptime for any individual Mixer instance.

* **Caching and Buffering**. Mixer is designed to accumulate a large amount of transient ephemeral state.

The sidecar proxies that sit next to each service instance in the mesh must necessarily be frugal in terms of memory consumption, which constrains the possible amount of local
caching and buffering. Mixer, however, lives independently and can use considerably larger caches and output buffers. Mixer thus acts as a highly-scaled and highly-available second-level
cache for the sidecars.

{{< image width="75%" ratio="65.89%"
    link="./topology-with-cache.svg"
    caption="Mixer Topology"
    >}}

Since Mixer's expected availability is considerably higher than that of most infrastructure backends (which often have availability of perhaps 99.9%), its local
caches and buffers not only help reduce latency, they also mask infrastructure backend failures by allowing Mixer to continue operating
even when a backend has become unresponsive.

Finally, Mixer's caching and buffering help reduce the frequency of calls to backends, and can sometimes reduce the amount of data
sent to backends (through local aggregation). Both of these can reduce operational expense in certain cases.
@ -13,7 +13,7 @@ Istio authentication policy enables operators to specify authentication requirem

* Origin: verifies the party, the original client, that makes the request (e.g. end users, devices, etc.). JWT is the only supported mechanism for origin authentication at the moment.

Istio configures the server side to perform authentication; however, it does not enforce the policy on the client side. For mutual TLS authentication, users can use [destination rules](/docs/concepts/traffic-management/rules-configuration/#destination-rules) to configure the client side to follow the expected protocol. For other cases, the application is responsible for acquiring and attaching the credential (e.g. JWT) to the request.
Istio configures the server side to perform authentication; however, it does not enforce the policy on the client side. For mutual TLS authentication, users can use [destination rules](/docs/concepts/traffic-management/#destination-rules) to configure the client side to follow the expected protocol. For other cases, the application is responsible for acquiring and attaching the credential (e.g. JWT) to the request.

Identities from both authentication parts, if applicable, are output to the next layer (e.g. authorization, Mixer). To simplify the authorization rules, the policy can also specify which identity (peer or origin) should be used as 'the principal'. By default, it is set to the peer's identity.
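For illustration, an origin-authentication policy of this kind could be sketched as follows. The target service name, issuer, and JWKS URI are hypothetical, and field names may vary between Istio releases:

```yaml
# Sketch: require JWT origin authentication for a hypothetical httpbin service.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-example
spec:
  targets:
  - name: httpbin                  # hypothetical target service
  origins:
  - jwt:
      issuer: "https://example.com"                         # hypothetical issuer
      jwksUri: "https://example.com/.well-known/jwks.json"  # hypothetical key set
  principalBinding: USE_ORIGIN     # use the origin identity as 'the principal'
```

With `principalBinding: USE_ORIGIN`, the authenticated end-user identity, rather than the peer's, is passed to authorization and Mixer.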
@ -95,7 +95,7 @@ pair for each of the existing and new service accounts, and sends them to the AP

1. When a pod is created, API Server mounts the key and certificate pair according to the service account using [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

1. [Pilot](/docs/concepts/traffic-management/pilot/) generates the config with the proper key, certificate, and secure naming information,
1. [Pilot](/docs/concepts/traffic-management/#pilot-and-envoy) generates the config with the proper key, certificate, and secure naming information,
which defines what service account(s) can run a certain service, and passes it to Envoy.

### Deployment phase (VM/bare-metal Machines Scenario)
@ -136,7 +136,7 @@ Let's consider a 3-tier application with three services: photo-frontend, photo-b

In this scenario, a cluster admin creates 3 namespaces: istio-citadel-ns, photo-ns, and datastore-ns. The admin has access to all namespaces, and each team only has
access to its own namespace. The photo SRE team creates 2 service accounts to run photo-frontend and photo-backend respectively in namespace photo-ns. The
datastore SRE team creates 1 service account to run the datastore service in namespace datastore-ns. Moreover, we need to enforce the service access control
in [Istio Mixer](/docs/concepts/policies-and-telemetry/overview/) such that photo-frontend cannot access datastore.
in [Istio Mixer](/docs/concepts/policies-and-telemetry/) such that photo-frontend cannot access datastore.

In this setup, Citadel is able to provide key and certificate management for all namespaces, and isolate
microservice deployments from each other.
@ -34,7 +34,7 @@ request context against the RBAC policies, and returns the authorization result

### Request context

In the current release, the Istio RBAC engine is implemented as a [Mixer adapter](/docs/concepts/policies-and-telemetry/overview/#adapters).
In the current release, the Istio RBAC engine is implemented as a [Mixer adapter](/docs/concepts/policies-and-telemetry/#adapters).
The request context is provided as an instance of the
[authorization template](/docs/reference/config/policy-and-telemetry/templates/authorization/). The request context
contains all the information about the request and the environment that an authorization module needs to know. In particular, it has two parts:
@ -1,6 +0,0 @@

---
title: Traffic Management
description: Describes the various Istio features focused on traffic routing and control.
weight: 20
type: section-index
---
@ -1,32 +0,0 @@

---
title: Fault Injection
description: Introduces the idea of systematic fault injection that can be used to uncover conflicting failure recovery policies across services.
weight: 40
keywords: [traffic-management,fault-injection]
---

While Envoy sidecar/proxy provides a host of
[failure recovery mechanisms](/docs/concepts/traffic-management/handling-failures/) to services running
on Istio, it is still
imperative to test the end-to-end failure recovery capability of the
application as a whole. Misconfigured failure recovery policies (e.g.,
incompatible/restrictive timeouts across service calls) could result in
continued unavailability of critical services in the application, resulting
in poor user experience.

Istio enables protocol-specific fault injection into the network, instead
of killing pods, delaying or corrupting packets at TCP layer. Our rationale
is that the failures observed by the application layer are the same
regardless of network level failures, and that more meaningful failures can
be injected at the application layer (e.g., HTTP error codes) to exercise
the resilience of an application.

Operators can configure faults to be injected into requests that match
specific criteria. Operators can further restrict the percentage of
requests that should be subjected to faults. Two types of faults can be
injected: delays and aborts. Delays are timing failures, mimicking
increased network latency, or an overloaded upstream service. Aborts are
crash failures that mimic failures in upstream services. Aborts usually
manifest in the form of HTTP error codes, or TCP connection failures.

Refer to [Istio's traffic management rules](/docs/concepts/traffic-management/rules-configuration/) for more details.
@ -1,80 +0,0 @@

---
title: Handling Failures
description: An overview of failure recovery capabilities in Envoy that can be leveraged by unmodified applications to improve robustness and prevent cascading failures.
weight: 30
keywords: [traffic-management,handling-failures]
---

Envoy provides a set of out-of-the-box _opt-in_ failure recovery features
that can be taken advantage of by the services in an application. Features
include:

1. Timeouts

1. Bounded retries with timeout budgets and variable jitter between retries

1. Limits on number of concurrent connections and requests to upstream services

1. Active (periodic) health checks on each member of the load balancing pool

1. Fine-grained circuit breakers (passive health checks) -- applied per
instance in the load balancing pool

These features can be dynamically configured at runtime through
[Istio's traffic management rules](/docs/concepts/traffic-management/rules-configuration/).

The jitter between retries minimizes the impact of retries on an overloaded
upstream service, while timeout budgets ensure that the calling service
gets a response (success/failure) within a predictable time frame.

A combination of active and passive health checks (4 and 5 above)
minimizes the chances of accessing an unhealthy instance in the load
balancing pool. When combined with platform-level health checks (such as
those supported by Kubernetes or Mesos), applications can ensure that
unhealthy pods/containers/VMs can be quickly weeded out of the service
mesh, minimizing the request failures and impact on latency.

Together, these features enable the service mesh to tolerate failing nodes
and prevent localized failures from cascading instability to other nodes.

## Fine tuning

Istio's traffic management rules allow
operators to set global defaults for failure recovery per
service/version. However, consumers of a service can also override
[timeout](/docs/reference/config/istio.routing.v1alpha1/#HTTPTimeout)
and
[retry](/docs/reference/config/istio.routing.v1alpha1/#HTTPRetry)
defaults by providing request-level overrides through special HTTP headers.
With the Envoy proxy implementation, the headers are `x-envoy-upstream-rq-timeout-ms` and
`x-envoy-max-retries`, respectively.

## FAQ

Q: *Do applications still handle failures when running in Istio?*

Yes. Istio improves the reliability and availability of services in the
mesh. However, **applications need to handle the failure (errors)
and take appropriate fallback actions**. For example, when all instances in
a load balancing pool have failed, Envoy will return HTTP 503. It is the
responsibility of the application to implement any fallback logic that is
needed to handle the HTTP 503 error code from an upstream service.

Q: *Will Envoy's failure recovery features break applications that already
use fault tolerance libraries (e.g. [Hystrix](https://github.com/Netflix/Hystrix))?*

No. Envoy is completely transparent to the application. A failure response
returned by Envoy would not be distinguishable from a failure response
returned by the upstream service to which the call was made.

Q: *How will failures be handled when using application-level libraries and
Envoy at the same time?*

Given two failure recovery policies for the same destination service (e.g.,
two timeouts -- one set in Envoy and another in application's library), **the
more restrictive of the two will be triggered when failures occur**. For
example, if the application sets a 5 second timeout for an API call to a
service, while the operator has configured a 10 second timeout, the
application's timeout will kick in first. Similarly, if Envoy's circuit
breaker triggers before the application's circuit breaker, API calls to the
service will get a 503 from Envoy.
@ -1,10 +1,305 @@

---
title: Rules Configuration
description: Provides a high-level overview of the configuration model used by Istio to configure traffic management rules in the service mesh.
weight: 50
keywords: [traffic-management,rules]
title: Traffic Management
description: Describes the various Istio features focused on traffic routing and control.
weight: 20
keywords: [traffic-management]
aliases:
- /docs/concepts/traffic-management/overview
- /docs/concepts/traffic-management/pilot
- /docs/concepts/traffic-management/rules-configuration
- /docs/concepts/traffic-management/fault-injection
- /docs/concepts/traffic-management/handling-failures
- /docs/concepts/traffic-management/load-balancing
- /docs/concepts/traffic-management/request-routing
---
This page provides an overview of how traffic management works
in Istio, including the benefits of its traffic management
principles. It assumes that you've already read [What is Istio?](/docs/concepts/what-is-istio/)
and are familiar with Istio's high-level architecture.

Istio's traffic management model essentially decouples traffic flow
from infrastructure scaling, letting operators specify via Pilot what
rules they want traffic to follow rather than which specific pods/VMs should
receive traffic - Pilot and the intelligent Envoy proxies look after the
rest. For example, you can specify via Pilot that you want 5%
of traffic for a particular service to go to a canary version irrespective
of the size of the canary deployment, or send traffic to a particular version
depending on the content of the request.
{{< image width="85%" ratio="69.52%"
    link="./TrafficManagementOverview.svg"
    caption="Traffic Management with Istio"
    >}}

Decoupling traffic flow from infrastructure scaling like this allows Istio
to provide a variety of traffic management features that live outside the
application code. As well as dynamic [request routing](#request-routing)
for A/B testing, gradual rollouts, and canary releases, it also handles
[failure recovery](#handling-failures) using timeouts, retries, and
circuit breakers, and finally [fault injection](#fault-injection) to
test the compatibility of failure recovery policies across services. These
capabilities are all realized through the Envoy sidecars/proxies deployed
across the service mesh.

## Pilot and Envoy

The core component used for traffic management in Istio is **Pilot**, which
manages and configures all the Envoy
proxy instances deployed in a particular Istio service mesh. It lets you
specify what rules you want to use to route traffic between Envoy proxies
and configure failure recovery features such as timeouts, retries, and
circuit breakers. It also maintains a canonical model of all the services
in the mesh and uses this to let Envoys know about the other instances in
the mesh via its discovery service.

Each Envoy instance maintains [load balancing information](#discovery-and-load-balancing)
based on the information it gets from Pilot and periodic health-checks
of other instances in its load-balancing pool, allowing it to intelligently
distribute traffic between destination instances while following its specified
routing rules.

Pilot is responsible for the lifecycle of Envoy instances deployed
across the Istio service mesh.

{{< image width="60%" ratio="72.17%"
    link="./PilotAdapters.svg"
    caption="Pilot Architecture"
    >}}

As illustrated in the figure above, Pilot maintains a canonical
representation of services in the mesh that is independent of the underlying
platform. Platform-specific adapters in Pilot are responsible for
populating this canonical model appropriately. For example, the Kubernetes
adapter in Pilot implements the necessary controllers to watch the
Kubernetes API server for changes to pod registration information, ingress
resources, and third-party resources that store traffic management rules.
This data is translated into the canonical representation, from which
Envoy-specific configuration is generated.

Pilot enables [service discovery](https://www.envoyproxy.io/docs/envoy/latest/api-v1/cluster_manager/sds),
dynamic updates to [load balancing pools](https://www.envoyproxy.io/docs/envoy/latest/configuration/cluster_manager/cds)
and [routing tables](https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/rds).

Operators can specify high-level traffic management rules through
[Pilot's Rule configuration](/docs/reference/config/istio.networking.v1alpha3/). These rules are translated into low-level
configurations and distributed to Envoy instances.
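For instance, the 5% canary split mentioned above can be expressed as a `VirtualService` routing rule. The following is a minimal sketch; the `reviews` host and the `v1`/`v2` subset names are illustrative:

```yaml
# Sketch: weighted routing sends a fixed fraction of traffic to a canary.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:        # 95% of requests stay on the stable version
        host: reviews
        subset: v1
      weight: 95
    - destination:        # 5% go to the canary, irrespective of its size
        host: reviews
        subset: v2
      weight: 5
```

The weights apply to requests, not to instances, so the split holds regardless of how many canary pods are running.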
## Request routing

As described above, the canonical representation
of services in a particular mesh is maintained by Pilot. The Istio
model of a service is independent of how it is represented in the underlying
platform (Kubernetes, Mesos, Cloud Foundry,
etc.). Platform-specific adapters are responsible for populating the
internal model representation with various fields from the metadata found
in the platform.

Istio introduces the concept of a service version, which is a finer-grained
way to subdivide service instances by version (`v1`, `v2`) or environment
(`staging`, `prod`). These variants are not necessarily different API
versions: they could be iterative changes to the same service, deployed in
different environments (prod, staging, dev, etc.). Common scenarios where
this is used include A/B testing and canary rollouts. Istio's [traffic
routing rules](#rule-configuration) can refer to service versions to provide
additional control over traffic between services.

### Communication between services

{{< image width="60%" ratio="100.42%"
    link="./ServiceModel_Versions.svg"
    alt="Showing how service versions are handled."
    caption="Service Versions"
    >}}

As illustrated in the figure above, clients of a service have no knowledge
of different versions of the service. They can continue to access the
service using its hostname/IP address. The Envoy sidecar/proxy
intercepts and forwards all requests/responses between the client and the
service.

Envoy determines its actual choice of service version dynamically
based on the routing rules specified by the operator using Pilot. This
model enables the application code to decouple itself from the evolution of its dependent
services, while providing other benefits as well (see
[Mixer](/docs/concepts/policies-and-telemetry/)). Routing
rules allow Envoy to select a version based
on criteria such as headers, tags associated with
source/destination, and/or weights assigned to each version.

Istio also provides load balancing for traffic to multiple instances of
the same service version. You can find out more about this in [Discovery
and Load Balancing](/docs/concepts/traffic-management/#discovery-and-load-balancing).

Istio does not provide a DNS. Applications can try to resolve the
FQDN using the DNS service present in the underlying platform (kube-dns,
mesos-dns, etc.).
### Ingress and egress

Istio assumes that all traffic entering and leaving the service mesh
transits through Envoy proxies. By deploying the Envoy proxy in front of
services, operators can conduct A/B testing, deploy canary services,
etc. for user-facing services. Similarly, by routing traffic to external
web services (for instance, accessing the Maps API, or a video service API)
via the sidecar Envoy, operators can add failure recovery features such as
timeouts, retries, and circuit breakers, and obtain detailed metrics on
the connections to these services.

{{< image width="60%" ratio="28.88%"
    link="./ServiceModel_RequestFlow.svg"
    alt="Ingress and Egress through Envoy."
    caption="Request Flow"
    >}}
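As a concrete illustration, ingress traffic can be admitted at the edge of the mesh by binding a `Gateway` resource to Istio's ingress proxy. This is a minimal sketch; the gateway name and catch-all host are illustrative:

```yaml
# Sketch: admit plain HTTP traffic at the mesh edge through the ingress Envoy.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-ingress-gateway      # illustrative name
spec:
  selector:
    istio: ingressgateway       # binds to the standalone ingress Envoy
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                       # accept requests for any host (illustrative)
```

A `VirtualService` can then attach routing rules to this gateway via its `gateways` field, giving edge traffic the same routing and resilience features as in-mesh traffic.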
## Discovery and load balancing

Istio load balances traffic across instances of a service in a service mesh.

Istio assumes the presence of a service registry
to keep track of the pods/VMs of a service in the application. It also
assumes that new instances of a service are automatically registered with
the service registry and unhealthy instances are automatically removed.
Platforms such as Kubernetes and Mesos already provide such functionality for
container-based applications, and a plethora of solutions exist for VM-based
applications.

Pilot consumes information from the service
registry and provides a platform-agnostic service discovery
interface. Envoy instances in the mesh perform service discovery and
dynamically update their load balancing pools accordingly.

{{< image width="55%" ratio="74.79%"
    link="./LoadBalancing.svg"
    caption="Discovery and Load Balancing"
    >}}
As illustrated in the figure above, services in the mesh access each other
using their DNS names. All HTTP traffic bound to a service is automatically
re-routed through Envoy. Envoy distributes the traffic across instances in
the load balancing pool. While Envoy supports several
[sophisticated load balancing algorithms](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing),
Istio currently allows three load balancing modes:
round robin, random, and weighted least request.
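The load balancing mode for a service can be selected in a `DestinationRule`'s traffic policy. A minimal sketch, with an illustrative `reviews` host:

```yaml
# Sketch: pick a load balancing mode for all traffic to the reviews service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM   # or ROUND_ROBIN / LEAST_CONN
```

The policy can also be set per subset, so different versions of the same service can use different modes.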
In addition to load balancing, Envoy periodically checks the health of each
instance in the pool. Envoy follows a circuit-breaker-style pattern to
classify instances as unhealthy or healthy based on their failure rates for
the health check API call. In other words, when the number of health
check failures for a given instance exceeds a pre-specified threshold, it
will be ejected from the load balancing pool. Similarly, when the number of
health checks that pass exceeds a pre-specified threshold, the instance will
be added back into the load balancing pool. You can find out more about Envoy's
failure-handling features in [Handling Failures](#handling-failures).

Services can actively shed load by responding with an HTTP 503 to a health
check. In such an event, the service instance will be immediately removed
from the caller's load balancing pool.
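Passive health checking of this kind is configured through outlier detection in a `DestinationRule`. The sketch below ejects an instance for at least 30 seconds after 5 consecutive errors; the host name and thresholds are illustrative, and the exact field layout has varied across Istio releases:

```yaml
# Sketch: eject misbehaving instances from the reviews load balancing pool.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5     # errors before an instance is ejected
      interval: 10s            # how often ejection analysis runs
      baseEjectionTime: 30s    # minimum time an instance stays ejected
```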
## Handling failures

Envoy provides a set of out-of-the-box _opt-in_ failure recovery features
that can be taken advantage of by the services in an application. Features
include:

1. Timeouts

1. Bounded retries with timeout budgets and variable jitter between retries

1. Limits on the number of concurrent connections and requests to upstream services

1. Active (periodic) health checks on each member of the load balancing pool

1. Fine-grained circuit breakers (passive health checks) -- applied per
instance in the load balancing pool

These features can be dynamically configured at runtime through
[Istio's traffic management rules](#rule-configuration).
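Timeouts and retries, for example, are set per route in a `VirtualService`. A minimal sketch; the `ratings` host, subset, and values are illustrative:

```yaml
# Sketch: give calls to ratings an overall timeout budget and bounded retries.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s          # overall budget for the request
    retries:
      attempts: 3         # bounded number of retries
      perTryTimeout: 2s   # each attempt must fit within the overall budget
```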
The jitter between retries minimizes the impact of retries on an overloaded
upstream service, while timeout budgets ensure that the calling service
gets a response (success/failure) within a predictable time frame.

A combination of active and passive health checks (4 and 5 above)
minimizes the chances of accessing an unhealthy instance in the load
balancing pool. When combined with platform-level health checks (such as
those supported by Kubernetes or Mesos), applications can ensure that
unhealthy pods/containers/VMs can be quickly weeded out of the service
mesh, minimizing the request failures and impact on latency.

Together, these features enable the service mesh to tolerate failing nodes
and prevent localized failures from cascading instability to other nodes.

### Fine tuning

Istio's traffic management rules allow
operators to set global defaults for failure recovery per
service/version. However, consumers of a service can also override
[timeout](/docs/reference/config/istio.routing.v1alpha1/#HTTPTimeout)
and
[retry](/docs/reference/config/istio.routing.v1alpha1/#HTTPRetry)
defaults by providing request-level overrides through special HTTP headers.
With the Envoy proxy implementation, the headers are `x-envoy-upstream-rq-timeout-ms` and
`x-envoy-max-retries`, respectively.
### Failure handling FAQ

Q: *Do applications still handle failures when running in Istio?*

Yes. Istio improves the reliability and availability of services in the
mesh. However, **applications need to handle failures (errors)
and take appropriate fallback actions**. For example, when all instances in
a load balancing pool have failed, Envoy will return HTTP 503. It is the
responsibility of the application to implement any fallback logic that is
needed to handle the HTTP 503 error code from an upstream service.

Q: *Will Envoy's failure recovery features break applications that already
use fault tolerance libraries (e.g. [Hystrix](https://github.com/Netflix/Hystrix))?*

No. Envoy is completely transparent to the application. A failure response
returned by Envoy would not be distinguishable from a failure response
returned by the upstream service to which the call was made.

Q: *How will failures be handled when using application-level libraries and
Envoy at the same time?*

Given two failure recovery policies for the same destination service (e.g.,
two timeouts -- one set in Envoy and another in the application's library), **the
more restrictive of the two will be triggered when failures occur**. For
example, if the application sets a 5 second timeout for an API call to a
service, while the operator has configured a 10 second timeout, the
application's timeout will kick in first. Similarly, if Envoy's circuit
breaker trips before the application's circuit breaker, API calls to the
service will get a 503 from Envoy.
## Fault injection

While the Envoy sidecar/proxy provides a host of
[failure recovery mechanisms](#handling-failures) to services running
on Istio, it is still
imperative to test the end-to-end failure recovery capability of the
application as a whole. Misconfigured failure recovery policies (e.g.,
incompatible/restrictive timeouts across service calls) could result in
continued unavailability of critical services in the application, resulting
in poor user experience.

Istio enables protocol-specific fault injection into the network, instead
of killing pods, or delaying or corrupting packets at the TCP layer. Our rationale
is that the failures observed by the application layer are the same
regardless of network-level failures, and that more meaningful failures can
be injected at the application layer (e.g., HTTP error codes) to exercise
the resilience of an application.

Operators can configure faults to be injected into requests that match
specific criteria. Operators can further restrict the percentage of
requests that should be subjected to faults. Two types of faults can be
injected: delays and aborts. Delays are timing failures that mimic
increased network latency or an overloaded upstream service. Aborts are
crash failures that mimic failures in upstream services. Aborts usually
manifest in the form of HTTP error codes or TCP connection failures.
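A delay fault, for instance, is attached to a route in a `VirtualService`. This sketch adds a fixed 5-second delay to 10% of requests; the `ratings` host and subset are illustrative:

```yaml
# Sketch: inject an artificial delay into a fraction of requests to ratings.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 10      # inject into 10% of matching requests
        fixedDelay: 5s   # timing failure mimicking network latency
    route:
    - destination:
        host: ratings
        subset: v1
```

An `abort` fault can be specified the same way to return an HTTP error code instead of delaying the request.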
## Rule configuration
|
||||
|
||||
Istio provides a simple configuration model to
|
||||
control how API calls and layer-4 traffic flow across various
|
||||
services in an application deployment. The configuration model allows an operator to
|
||||
|
@ -71,7 +366,7 @@ A few important aspects of these resources are described below.
|
|||
See [networking reference](/docs/reference/config/istio.networking.v1alpha3/)
|
||||
for detailed information.
|
||||
|
||||
## Virtual Services
|
||||
### Virtual Services
|
||||
|
||||
A [VirtualService](/docs/reference/config/istio.networking.v1alpha3/#VirtualService)
|
||||
defines the rules that control how requests for a service are routed within an Istio service mesh.
|
||||
|
@ -80,7 +375,7 @@ to a completely different service than was requested.
|
|||
Requests can be routed based on the request source and destination, HTTP paths and
|
||||
header fields, and weights associated with individual service versions.
|
||||
|
||||
### Rule destinations
|
||||
#### Rule destinations
|
||||
|
||||
Routing rules correspond to one or more request destination hosts that are specified in
|
||||
a `VirtualService` configuration. These hosts may or may not be the same as the actual
|
||||
|
@ -101,7 +396,7 @@ expand to an implementation specific FQDN. For example, in a Kubernetes environm
|
|||
the full name is derived from the cluster and namespace of the `VirtualSevice`
|
||||
(e.g., `reviews.default.svc.cluster.local`).
|
||||
|
||||
### Qualify rules by source/headers
|
||||
#### Qualify rules by source/headers
|
||||
|
||||
Rules can optionally be qualified to only apply to requests that match some
|
||||
specific criteria such as the following:
|
||||
|
@ -220,7 +515,7 @@ spec:
|
|||
...
|
||||
{{< /text >}}

### Split traffic between service versions
#### Splitting traffic between versions

Each route rule identifies one or more weighted backends to call when the rule is activated.
Each backend corresponds to a specific version of the destination service,

@ -252,7 +547,7 @@ spec:

      weight: 25
{{< /text >}}
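
Put together, a hypothetical 75/25 split across two subsets of a `reviews` service could be expressed as follows (names and weights are illustrative):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
{{< /text >}}

The weights must sum to 100; traffic is split irrespective of how many pods back each subset.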

### Timeouts and retries
#### Timeouts and retries

By default, the timeout for HTTP requests is 15 seconds,
but this can be overridden in a route rule as follows:

@ -296,11 +591,11 @@ spec:

{{< /text >}}
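
As a sketch, a rule that both shortens the timeout and adds retries might look like this (the service name and values are illustrative):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s
    retries:
      attempts: 3
      perTryTimeout: 2s
{{< /text >}}

Note that the overall `timeout` still bounds the request even when retries are configured.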

Note that request timeouts and retries can also be
[overridden on a per-request basis](/docs/concepts/traffic-management/handling-failures#fine-tuning).
[overridden on a per-request basis](#fine-tuning).

See the [request timeouts task](/docs/tasks/traffic-management/request-timeouts/) for a demonstration of timeout control.
See the [request timeouts task](/docs/tasks/traffic-management/request-timeouts) for a demonstration of timeout control.

### Injecting faults in the request path
#### Injecting faults

A route rule can specify one or more faults to inject
while forwarding HTTP requests to the rule's corresponding request destination.
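
For example, a rule might inject a fixed delay for a fraction of requests; the sketch below uses illustrative values against a hypothetical `ratings` service:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 10
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1
{{< /text >}}

Aborts (returning a fixed HTTP error code) can be injected the same way via the `abort` field.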

@ -383,7 +678,7 @@ spec:

To see fault injection in action, see the [fault injection task](/docs/tasks/traffic-management/fault-injection/).

### HTTP route rules have precedence
#### Precedence

When there are multiple rules for a given destination,
they are evaluated in the order they appear

@ -441,7 +736,7 @@ request, it will be executed and the rule-evaluation process will

terminate. That's why it's very important to carefully consider the
priorities of each rule when there is more than one.

## Destination Rules
### Destination rules

A [DestinationRule](/docs/reference/config/istio.networking.v1alpha3/#DestinationRule)
configures the set of policies to be applied to a request after `VirtualService` routing has occurred. They are

@ -480,7 +775,7 @@ spec:

Notice that multiple policies (e.g., default and v2-specific) can be
specified in a single `DestinationRule` configuration.

### Circuit breakers
#### Circuit breakers

A simple circuit breaker can be set based on a number of criteria, such as connection and request limits.
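
For instance, a connection-pool-based breaker could be sketched as follows (the host and limits are illustrative):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
{{< /text >}}

Requests beyond these limits are failed fast instead of queuing against an overloaded backend.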

@ -506,7 +801,7 @@ spec:

See the [circuit-breaking task](/docs/tasks/traffic-management/circuit-breaking/) for a demonstration of circuit breaker control.

### DestinationRule evaluation
#### Rule evaluation

Similar to route rules, policies are associated with a
particular *host*; however, if they are subset specific,

@ -595,7 +890,7 @@ rules are going to be needed.

Therefore, setting a default rule for every service, right from the
start, is generally considered a best practice in Istio.

## Service Entries
### Service entries

A [ServiceEntry](/docs/reference/config/istio.networking.v1alpha3/#ServiceEntry)
is used to add additional entries into the service registry that Istio maintains internally.
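
As a sketch, an entry that makes an external HTTP service addressable from inside the mesh might look like this (the name and host are illustrative):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
{{< /text >}}

Once registered, the external host can be targeted by virtual services and destination rules like any in-mesh service.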

@ -663,7 +958,7 @@ of multiple versions of an external service.

See the [egress task](/docs/tasks/traffic-management/egress/) for more
about accessing external services.

## Gateways
### Gateways

A [Gateway](/docs/reference/config/istio.networking.v1alpha3/#Gateway)
configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the

@ -1,49 +0,0 @@
---
title: Discovery & Load Balancing
description: Describes how traffic is load balanced across instances of a service in the mesh.
weight: 25
keywords: [traffic-management,load-balancing]
---

This page describes how Istio load balances traffic across instances of a
service in a service mesh.

**Service registration:** Istio assumes the presence of a service registry
to keep track of the pods/VMs of a service in the application. It also
assumes that new instances of a service are automatically registered with
the service registry and unhealthy instances are automatically removed.
Platforms such as Kubernetes and Mesos already provide such functionality for
container-based applications, and a plethora of solutions exist for VM-based
applications.

**Service discovery:** Pilot consumes information from the service
registry and provides a platform-agnostic service discovery
interface. Envoy instances in the mesh perform service discovery and
dynamically update their load balancing pools accordingly.

{{< image width="80%" ratio="74.79%"
link="./LoadBalancing.svg"
caption="Discovery and Load Balancing"
>}}

As illustrated in the figure above, services in the mesh access each other
using their DNS names. All HTTP traffic bound to a service is automatically
re-routed through Envoy. Envoy distributes the traffic across instances in
the load balancing pool. While Envoy supports several
[sophisticated load balancing algorithms](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing),
Istio currently allows three load balancing modes:
round robin, random, and weighted least request.
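
The mode is selected per destination via a `DestinationRule` traffic policy; a minimal sketch, with an illustrative service name:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
{{< /text >}}

Other values of `simple` cover the remaining modes (`ROUND_ROBIN`, `LEAST_CONN` for least request).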

In addition to load balancing, Envoy periodically checks the health of each
instance in the pool. Envoy follows a circuit-breaker-style pattern to
classify instances as unhealthy or healthy based on their failure rates for
the health check API call. In other words, when the number of health
check failures for a given instance exceeds a pre-specified threshold, it
will be ejected from the load balancing pool. Similarly, when the number of
health checks that pass exceeds a pre-specified threshold, the instance will
be added back into the load balancing pool. You can find out more about Envoy's
failure-handling features in [Handling Failures](/docs/concepts/traffic-management/handling-failures/).
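
This ejection behavior can be tuned with outlier detection in a `DestinationRule`. The sketch below uses illustrative thresholds, and the exact field layout varies between Istio releases:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 10s
      baseEjectionTime: 30s
{{< /text >}}

Here an instance returning five consecutive errors within a scan interval is ejected for at least 30 seconds.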

Services can actively shed load by responding with an HTTP 503 to a health
check. In such an event, the service instance will be immediately removed
from the caller's load balancing pool.

@ -1,57 +0,0 @@
---
title: Overview
description: Provides a conceptual overview of traffic management in Istio and the features it enables.
weight: 1
keywords: [traffic-management]
---

This page provides an overview of how traffic management works
in Istio, including the benefits of its traffic management
principles. It assumes that you've already read [What Is Istio?](/docs/concepts/what-is-istio/overview/)
and are familiar with Istio's high-level architecture. You can
find out more about individual traffic management features in the other
guides in this section.

## Pilot and Envoy

The core component used for traffic management in Istio is
[Pilot](/docs/concepts/traffic-management/pilot/), which manages and configures all the Envoy
proxy instances deployed in a particular Istio service mesh. It lets you
specify what rules you want to use to route traffic between Envoy proxies
and configure failure recovery features such as timeouts, retries, and
circuit breakers. It also maintains a canonical model of all the services
in the mesh and uses this to let Envoys know about the other instances in
the mesh via its discovery service.

Each Envoy instance maintains [load balancing information](/docs/concepts/traffic-management/load-balancing/)
based on the information it gets from Pilot and periodic health-checks
of other instances in its load-balancing pool, allowing it to intelligently
distribute traffic between destination instances while following its specified
routing rules.

## Traffic management benefits

Using Istio's traffic management model essentially decouples traffic flow
and infrastructure scaling, letting operators specify via Pilot what
rules they want traffic to follow rather than which specific pods/VMs should
receive traffic - Pilot and intelligent Envoy proxies look after the
rest. So, for example, you can specify via Pilot that you want 5%
of traffic for a particular service to go to a canary version irrespective
of the size of the canary deployment, or send traffic to a particular version
depending on the content of the request.

{{< image width="85%" ratio="69.52%"
link="./TrafficManagementOverview.svg"
caption="Traffic Management with Istio"
>}}

Decoupling traffic flow from infrastructure scaling like this allows Istio
to provide a variety of traffic management features that live outside the
application code. As well as dynamic [request routing](/docs/concepts/traffic-management/request-routing/)
for A/B testing, gradual rollouts, and canary releases, it also handles
[failure recovery](/docs/concepts/traffic-management/handling-failures/) using timeouts, retries, and
circuit breakers, and finally [fault injection](/docs/concepts/traffic-management/fault-injection/) to
test the compatibility of failure recovery policies across services. These
capabilities are all realized through the Envoy sidecars/proxies deployed
across the service mesh.

@ -1,36 +0,0 @@
---
title: Pilot
description: Introduces Pilot, the component responsible for managing a distributed deployment of Envoy proxies in the service mesh.
weight: 10
keywords: [traffic-management,pilot]
aliases:
- /docs/concepts/traffic-management/manager.html
---

Pilot is responsible for the lifecycle of Envoy instances deployed
across the Istio service mesh.

{{< image width="60%" ratio="72.17%"
link="./PilotAdapters.svg"
caption="Pilot Architecture"
>}}

As illustrated in the figure above, Pilot maintains a canonical
representation of services in the mesh that is independent of the underlying
platform. Platform-specific adapters in Pilot are responsible for
populating this canonical model appropriately. For example, the Kubernetes
adapter in Pilot implements the necessary controllers to watch the
Kubernetes API server for changes to pod registration information, ingress
resources, and third-party resources that store traffic management rules.
This data is translated into the canonical representation. Envoy-specific
configuration is generated based on the canonical representation.

Pilot exposes APIs for [service discovery](https://www.envoyproxy.io/docs/envoy/latest/api-v1/cluster_manager/sds),
dynamic updates to [load balancing pools](https://www.envoyproxy.io/docs/envoy/latest/configuration/cluster_manager/cds),
and [routing tables](https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/rds).
These APIs decouple Envoy from platform-specific nuances, simplifying the
design and increasing portability across platforms.

Operators can specify high-level traffic management rules through
[Pilot's Rules API](/docs/reference/config/istio.routing.v1alpha1/). These rules are translated into low-level
configurations and distributed to Envoy instances via the discovery API.

@ -1,76 +0,0 @@
---
title: Request Routing
description: Describes how requests are routed between services in an Istio service mesh.
weight: 20
keywords: [traffic-management,routing]
---

This page describes how requests are routed between services in an Istio service mesh.

## Service model and service versions

As described in [Pilot](/docs/concepts/traffic-management/pilot/), the canonical representation
of services in a particular mesh is maintained by Pilot. The Istio
model of a service is independent of how it is represented in the underlying
platform (Kubernetes, Mesos, Cloud Foundry,
etc.). Platform-specific adapters are responsible for populating the
internal model representation with various fields from the metadata found
in the platform.

Istio introduces the concept of a service version, which is a finer-grained
way to subdivide service instances by version (`v1`, `v2`) or environment
(`staging`, `prod`). These variants are not necessarily different API
versions: they could be iterative changes to the same service, deployed in
different environments (prod, staging, dev, etc.). Common scenarios where
this is used include A/B testing and canary rollouts. Istio's [traffic
routing rules](/docs/concepts/traffic-management/rules-configuration/) can refer to service versions to provide
additional control over traffic between services.

## Communication between services

{{< image width="60%" ratio="100.42%"
link="./ServiceModel_Versions.svg"
alt="Showing how service versions are handled."
caption="Service Versions"
>}}

As illustrated in the figure above, clients of a service have no knowledge
of different versions of the service. They can continue to access the
services using the hostname/IP address of the service. The Envoy sidecar/proxy
intercepts and forwards all requests/responses between the client and the
service.

Envoy determines its actual choice of service version dynamically
based on the routing rules specified by the operator using Pilot. This
model enables the application code to decouple itself from the evolution of its dependent
services, while providing other benefits as well (see
[Mixer](/docs/concepts/policies-and-telemetry/overview/)). Routing
rules allow Envoy to select a version based
on criteria such as headers, tags associated with
source/destination, and/or weights assigned to each version.

Istio also provides load balancing for traffic to multiple instances of
the same service version. You can find out more about this in [Discovery
and Load Balancing](/docs/concepts/traffic-management/load-balancing/).

Istio does not provide a DNS. Applications can try to resolve the
FQDN using the DNS service present in the underlying platform (kube-dns,
mesos-dns, etc.).

## Ingress and egress

Istio assumes that all traffic entering and leaving the service mesh
transits through Envoy proxies. By deploying the Envoy proxy in front of
services, operators can conduct A/B testing, deploy canary services,
etc. for user-facing services. Similarly, by routing traffic to external
web services (for instance, accessing the Maps API or a video service API)
via the sidecar Envoy, operators can add failure recovery features such as
timeouts, retries, and circuit breakers, and obtain detailed metrics on
the connections to these services.

{{< image width="60%" ratio="28.88%"
link="./ServiceModel_RequestFlow.svg"
alt="Ingress and Egress through Envoy."
caption="Request Flow"
>}}

@ -1,6 +0,0 @@
---
title: What is Istio?
description: A broad overview of the Istio system.
weight: 10
type: section-index
---

Before Width: | Height: | Size: 112 KiB After Width: | Height: | Size: 112 KiB |

@ -1,34 +0,0 @@
---
title: Design Goals
description: Describes the core principles that Istio's design adheres to.
weight: 20
---

This page outlines the core principles that guide Istio's design.

Istio’s architecture is informed by a few key design goals that are essential to making the system capable of dealing with services at scale and with high
performance.

- **Maximize Transparency**.
To adopt Istio, an operator or developer should be required to do the minimum amount of work possible to get real value from the system. To this end, Istio
can automatically inject itself into all the network paths between services. Istio uses sidecar proxies to capture traffic and, where possible, automatically
programs the networking layer to route traffic through those proxies without any changes to the deployed application code. In Kubernetes, the proxies are
injected into pods and traffic is captured by programming iptables rules. Once the sidecar proxies are injected and traffic routing is programmed, Istio is
able to mediate all traffic. This principle also applies to performance. When applying Istio to a deployment, operators should see a minimal increase in
resource costs for the functionality being provided. Components and APIs must all be designed with performance and scale in mind.

- **Incrementality**.
As operators and developers become more dependent on the functionality that Istio provides, the system must grow with their needs. While we expect to
continue adding new features ourselves, we expect the greatest need will be the ability to extend the policy system, to integrate with other sources of policy and control, and to propagate signals about mesh behavior to other systems for analysis. The policy runtime supports a standard extension mechanism for plugging in other services. In addition, it allows for the extension of its vocabulary so that policies can be enforced based on new signals that the mesh produces.

- **Portability**.
The ecosystem in which Istio will be used varies along many dimensions. Istio must run on any cloud or on-prem environment with minimal effort. The task of
porting Istio-based services to new environments should be trivial, and it should be possible to operate a single service deployed into multiple
environments (on multiple clouds for redundancy, for example) using Istio.

- **Policy Uniformity**.
The application of policy to API calls between services provides a great deal of control over mesh behavior, but it can be equally important to apply
policies to resources which are not necessarily expressed at the API level. For example, applying quota to the amount of CPU consumed by an ML training task
is more useful than applying quota to the call which initiated the work. To this end, the policy system is maintained as a distinct service with its own API
rather than being baked into the proxy/sidecar, allowing services to directly integrate with it as needed.

@ -1,15 +1,16 @@
---
title: Overview
description: Provides a conceptual introduction to Istio, including the problems it solves and its high-level architecture.
title: What is Istio?
description: Introduces Istio, the problems it solves, its high-level architecture and design goals.
weight: 15
aliases:
- /docs/concepts/what-is-istio/overview
- /docs/concepts/what-is-istio/goals
---

This document introduces Istio: an open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio's control plane functionality.

Istio currently supports service deployment on Kubernetes, as well as services registered with Consul or Eureka and services running on individual VMs.

For detailed conceptual information about Istio components see our other [Concepts](/docs/concepts/) guides.

## Why use Istio?

Istio addresses many of the challenges faced by developers and operators as monolithic applications transition towards a distributed microservice architecture. The term **service mesh** is often used to describe the network of

@ -69,17 +70,17 @@ Istio uses an extended version of the [Envoy](https://envoyproxy.github.io/envoy

Istio leverages Envoy’s many built-in features such as dynamic service discovery, load balancing, TLS termination, HTTP/2 & gRPC proxying, circuit breakers,
health checks, staged rollouts with %-based traffic split, fault injection, and rich metrics.

Envoy is deployed as a **sidecar** to the relevant service in the same Kubernetes pod. This allows Istio to extract a wealth of signals about traffic behavior as [attributes](/docs/concepts/policies-and-telemetry/config/#attributes), which in turn it can use in [Mixer](/docs/concepts/policies-and-telemetry/overview/) to enforce policy decisions, and be sent to monitoring systems to provide information about the behavior of the entire mesh. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why we chose this approach in our [Design Goals](/docs/concepts/what-is-istio/goals/).
Envoy is deployed as a **sidecar** to the relevant service in the same Kubernetes pod. This allows Istio to extract a wealth of signals about traffic behavior as [attributes](/docs/concepts/policies-and-telemetry/#attributes), which in turn it can use in [Mixer](/docs/concepts/policies-and-telemetry/) to enforce policy decisions, and be sent to monitoring systems to provide information about the behavior of the entire mesh. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why we chose this approach in our [Design Goals](/docs/concepts/what-is-istio/#design-goals).

### Mixer

[Mixer](/docs/concepts/policies-and-telemetry/overview/) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other
services. The proxy extracts request level [attributes](/docs/concepts/policies-and-telemetry/config/#attributes), which are sent to Mixer for evaluation. More information on this attribute extraction and policy
evaluation can be found in [Mixer Configuration](/docs/concepts/policies-and-telemetry/config/). Mixer includes a flexible plugin model enabling it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.
[Mixer](/docs/concepts/policies-and-telemetry/) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other
services. The proxy extracts request level [attributes](/docs/concepts/policies-and-telemetry/#attributes), which are sent to Mixer for evaluation. More information on this attribute extraction and policy
evaluation can be found in [Mixer Configuration](/docs/concepts/policies-and-telemetry/#configuration-model). Mixer includes a flexible plugin model enabling it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.

### Pilot

[Pilot](/docs/concepts/traffic-management/pilot/) provides
[Pilot](/docs/concepts/traffic-management/#pilot-and-envoy) provides
service discovery for the Envoy sidecars, traffic management capabilities
for intelligent routing (e.g., A/B tests, canary deployments, etc.),
and resiliency (timeouts, retries, circuit breakers, etc.). It converts

@ -98,3 +99,32 @@ interface for traffic management.

credential management. It can be used to upgrade unencrypted traffic in the service mesh, and provides operators the ability to enforce
policy based on service identity rather than network controls. Starting from release 0.5, Istio supports
[role-based access control](/docs/concepts/security/rbac/) to control who can access your services.

## Design Goals

Istio’s architecture is informed by a few key design goals that are essential to making the system capable of dealing with services at scale and with high
performance.

- **Maximize Transparency**.
To adopt Istio, an operator or developer should be required to do the minimum amount of work possible to get real value from the system. To this end, Istio
can automatically inject itself into all the network paths between services. Istio uses sidecar proxies to capture traffic and, where possible, automatically
programs the networking layer to route traffic through those proxies without any changes to the deployed application code. In Kubernetes, the proxies are
injected into pods and traffic is captured by programming iptables rules. Once the sidecar proxies are injected and traffic routing is programmed, Istio is
able to mediate all traffic. This principle also applies to performance. When applying Istio to a deployment, operators should see a minimal increase in
resource costs for the functionality being provided. Components and APIs must all be designed with performance and scale in mind.

- **Incrementality**.
As operators and developers become more dependent on the functionality that Istio provides, the system must grow with their needs. While we expect to
continue adding new features ourselves, we expect the greatest need will be the ability to extend the policy system, to integrate with other sources of policy and control, and to propagate signals about mesh behavior to other systems for analysis. The policy runtime supports a standard extension mechanism for plugging in other services. In addition, it allows for the extension of its vocabulary so that policies can be enforced based on new signals that the mesh produces.

- **Portability**.
The ecosystem in which Istio will be used varies along many dimensions. Istio must run on any cloud or on-prem environment with minimal effort. The task of
porting Istio-based services to new environments should be trivial, and it should be possible to operate a single service deployed into multiple
environments (on multiple clouds for redundancy, for example) using Istio.

- **Policy Uniformity**.
The application of policy to API calls between services provides a great deal of control over mesh behavior, but it can be equally important to apply
policies to resources which are not necessarily expressed at the API level. For example, applying quota to the amount of CPU consumed by an ML training task
is more useful than applying quota to the call which initiated the work. To this end, the policy system is maintained as a distinct service with its own API
rather than being baked into the proxy/sidecar, allowing services to directly integrate with it as needed.

@ -199,7 +199,7 @@ version routing.

You can now use this sample to experiment with Istio's features for
traffic routing, fault injection, rate limiting, etc.
To proceed, refer to one or more of the [Istio Guides](/docs/guides),
To proceed, refer to one or more of the [Istio Examples](/docs/examples),
depending on your interest. [Intelligent Routing](/docs/examples/intelligent-routing/)
is a good place to start for beginners.

@ -7,7 +7,7 @@ aliases:
---

Attributes are a central concept used throughout Istio. You can find a description of what attributes are
and what they are used for [here](/docs/concepts/policies-and-telemetry/config/#attributes).
and what they are used for [here](/docs/concepts/policies-and-telemetry/#attributes).

A given Istio deployment has a fixed vocabulary of attributes that it understands. The specific vocabulary is
determined by the set of attribute producers being used in the deployment. The primary attribute producer in Istio

@ -10,7 +10,7 @@ This page describes how to use the Mixer config expression language (CEXL).

## Background

Mixer configuration uses an expression language (CEXL) to specify match expressions and [mapping expressions](/docs/concepts/policies-and-telemetry/config/#attribute-expressions). CEXL expressions map a set of typed [attributes](/docs/concepts/policies-and-telemetry/config/#attributes) and constants to a typed
Mixer configuration uses an expression language (CEXL) to specify match expressions and [mapping expressions](/docs/concepts/policies-and-telemetry/#attribute-expressions). CEXL expressions map a set of typed [attributes](/docs/concepts/policies-and-telemetry/#attributes) and constants to a typed
[value](https://github.com/istio/api/blob/master/policy/v1beta1/value_type.proto).
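
For instance, CEXL expressions typically look like the following; the attribute names are drawn from Istio's default vocabulary, and the exact set available depends on the deployment:

{{< text plain >}}
request.size | 0
source.labels["app"] == "reviews"
request.headers["x-user"] != ""
{{< /text >}}

The first expression uses the `|` default operator: it evaluates to `request.size` when the attribute is present, and to `0` otherwise.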

## Syntax

@ -46,7 +46,7 @@ caption="GKE-IAM Role"

[Istio GKE Deployment Manager](https://accounts.google.com/signin/v2/identifier?service=cloudconsole&continue=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/{{<branch_name>}}/install/gcp/deployment_manager/istio-cluster.jinja&followup=https://console.cloud.google.com/launcher/config?templateurl=https://raw.githubusercontent.com/istio/istio/master/install/gcp/deployment_manager/istio-cluster.jinja&flowName=GlifWebSignIn&flowEntry=ServiceLogin)

We recommend that you leave the default settings, as the rest of this tutorial shows how to access the installed features. By default the tool creates a
GKE alpha cluster with the specified settings, then installs the Istio [control plane](/docs/concepts/what-is-istio/overview/#architecture), the
GKE alpha cluster with the specified settings, then installs the Istio [control plane](/docs/concepts/what-is-istio/#architecture), the
[Bookinfo](/docs/examples/bookinfo/) sample app,
[Grafana](/docs/tasks/telemetry/using-istio-dashboard/) with
[Prometheus](/docs/tasks/telemetry/querying-metrics/),

@@ -142,7 +142,7 @@ sidecars injected in the future.

 ## Migrating per-service mutual TLS enablement via annotations to authentication policy

-If you use service annotations to override global mutual TLS enablement for a service, you need to replace it with [authentication policy](/docs/concepts/security/authn-policy/) and [destination rules](/docs/concepts/traffic-management/rules-configuration/#destination-rules).
+If you use service annotations to override global mutual TLS enablement for a service, you need to replace it with [authentication policy](/docs/concepts/security/authn-policy/) and [destination rules](/docs/concepts/traffic-management/#destination-rules).

 For example, if you install Istio with mutual TLS enabled, and disable it for service `foo` using a service annotation like below:
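As a reference for the migration described above, the annotation's replacement might look like the following sketch. The service name `foo`, the namespace, and the resource names are illustrative, not taken from the release notes:

```yaml
# Hypothetical replacement for a per-service "disable mutual TLS" annotation.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: foo-no-mtls
  namespace: default
spec:
  targets:
  - name: foo          # no "peers" section, so mutual TLS is not required
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo-no-mtls
  namespace: default
spec:
  host: foo
  trafficPolicy:
    tls:
      mode: DISABLE    # clients stop using mutual TLS when calling foo
```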
@@ -275,6 +275,7 @@ spec:

 Create the resources:

+<div class="workaround_for_hugo_bug">
 {{< text bash >}}
 $ kubectl apply -f logging-stack.yaml
 namespace "logging" created
@@ -286,6 +287,7 @@ configmap "fluentd-es-config" created
 service "kibana" created
 deployment "kibana" created
 {{< /text >}}
+</div>

 ## Configure Istio
@@ -109,7 +109,7 @@ from within your Istio cluster. In this task we will use

 ### Setting route rules on an external service

 Similar to inter-cluster requests, Istio
-[routing rules](/docs/concepts/traffic-management/rules-configuration/)
+[routing rules](/docs/concepts/traffic-management/#rule-configuration)
 can also be set for external services that are accessed using `ServiceEntry` configurations.
 To illustrate we will use [istioctl](/docs/reference/commands/istioctl/)
 to set a timeout rule on calls to the httpbin.org service.
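The timeout rule mentioned above can be sketched as a `VirtualService`, assuming a `ServiceEntry` for httpbin.org is already in place; the resource name and the 3-second value are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s        # fail calls to httpbin.org that take longer than 3s
    route:
    - destination:
        host: httpbin.org
      weight: 100
```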
@@ -167,7 +167,7 @@ to set a timeout rule on calls to the httpbin.org service.

 If you want to completely bypass Istio for a specific IP range,
 you can configure the Envoy sidecars to prevent them from
-[intercepting](/docs/concepts/traffic-management/request-routing/#communication-between-services)
+[intercepting](/docs/concepts/traffic-management/#communication-between-services)
 the external requests. This can be done by setting the `global.proxy.includeIPRanges` variable of
 [Helm](/docs/setup/kubernetes/helm-install/#customization-with-helm) and updating the `ConfigMap` _istio-sidecar-injector_ by `kubectl apply`. After _istio-sidecar-injector_ is updated, the value of `global.proxy.includeIPRanges` will affect all the future deployments of the application pods.
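As a rough sketch, the Helm-based `ConfigMap` update described above could look like this; the IP range, release path, and template name are illustrative, so check your cluster's actual service CIDR and chart layout before using real values:

```
$ helm template install/kubernetes/helm/istio --namespace istio-system \
    --set global.proxy.includeIPRanges="10.0.0.1/24" \
    -x templates/sidecar-injector-configmap.yaml | kubectl apply -f -
```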
@@ -122,7 +122,7 @@ microservice also has its own application-level timeout (3 seconds) for calls to

 Notice that in this task we used an Istio route rule to set the timeout to 1 second.
 Had you instead set the timeout to something greater than 3 seconds (e.g., 4 seconds) the timeout
 would have had no effect since the more restrictive of the two will take precedence.
-More details can be found [here](/docs/concepts/traffic-management/handling-failures/#faq).
+More details can be found [here](/docs/concepts/traffic-management/#failure-handling-faq).

 One more thing to note about timeouts in Istio is that in addition to overriding them in route rules,
 as you did in this task, they can also be overridden on a per-request basis if the application adds
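The per-request override that the last paragraph refers to works by attaching Envoy's timeout header to an individual request. A hedged sketch, run from a pod inside the mesh — the service name and port come from the Bookinfo sample, and the header name from Envoy's documented `x-envoy-upstream-rq-timeout-ms` convention:

```
$ curl -H "x-envoy-upstream-rq-timeout-ms: 3000" http://reviews:9080/reviews/0
```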
@@ -3,5 +3,5 @@ title: Istio doesn't work - what do I do?
 weight: 90
 ---

-Check out the [troubleshooting guide](/help/troubleshooting/) for finding solutions and our
+Check out the [operations guide](/help/ops/) for finding solutions and our
 [bug reporting](/help/bugs/) page for filing bugs.
@@ -34,6 +34,7 @@ EOF

 However, the following rules will not work because they use regular
 expressions in the path and `ingress.kubernetes.io` annotations:

+<div class="workaround_for_hugo_bug">
 {{< text bash >}}
 $ cat <<EOF | kubectl create -f -
 apiVersion: extensions/v1beta1

@@ -55,5 +56,4 @@ rules:
         servicePort: grpc
 EOF
 {{< /text >}}
-<i class="a hack needed to prevent Hugo from inserting a spurious paragraph around the previous code block"></i>
+</div>
@@ -8,4 +8,4 @@ monitoring, quotas, ACL checking, and more.
 The exact set of adapters used at runtime is determined through configuration and can easily be
 extended to target new or custom infrastructure backends.

-[Learn more about adapters](/docs/concepts/policies-and-telemetry/overview/#adapters).
+[Learn more about adapters](/docs/concepts/policies-and-telemetry/#adapters).
@@ -4,4 +4,4 @@ title: Mixer

 The Istio component responsible for enforcing access control and usage policies across the [service mesh](#service-mesh) and collecting telemetry data
 from [Envoy](#envoy) and other services.
-[Learn more about Mixer](/docs/concepts/policies-and-telemetry/overview/).
+[Learn more about Mixer](/docs/concepts/policies-and-telemetry/).
@@ -21,7 +21,7 @@ title: Istio
 <h2>Intelligent Routing and Load Balancing</h2>
 <p>
 Control traffic between services with dynamic route configuration, conduct A/B tests and canary releases, and gradually upgrade versions using red/black deployments.
-<a href="/docs/concepts/traffic-management/overview/">Learn more...</a>
+<a href="/docs/concepts/traffic-management/">Learn more...</a>
 </p>
 </div>
 </div>
@@ -36,7 +36,7 @@ title: Istio
 <h2>Resilience Across Languages and Platforms</h2>
 <p>
 Increase reliability by shielding applications from flaky networks and cascading failures in adverse conditions.
-<a href="/docs/concepts/traffic-management/handling-failures/">Learn more...</a>
+<a href="/docs/concepts/traffic-management/#handling-failures">Learn more...</a>
 </p>
 </div>
 </div>
@@ -51,7 +51,7 @@ title: Istio
 <h2>Fleet-Wide Policy Enforcement</h2>
 <p>
 Apply organizational policies to the interaction between services, ensuring that access policies are enforced and resources are fairly distributed among consumers.
-<a href="/docs/concepts/policies-and-telemetry/overview/">Learn more...</a>
+<a href="/docs/concepts/policies-and-telemetry/">Learn more...</a>
 </p>
 </div>
 </div>
@@ -66,7 +66,7 @@ title: Istio
 <h2>In-Depth Telemetry</h2>
 <p>
 Understand the dependencies between services, the nature and flow of traffic between them, and quickly identify issues with distributed tracing.
-<a href="/docs/concepts/what-is-istio/overview/">Learn more...</a>
+<a href="/docs/concepts/what-is-istio/">Learn more...</a>
 </p>
 </div>
 </div>
@@ -2,6 +2,7 @@
 title: About Istio
 linktitle: About
 description: Information about Istio.
+sidebar_singlecard: true
 weight: 15
 type: section-index
 ---
@@ -8,7 +8,7 @@ weight: 30

 ## Before you begin

-First fork the Istio documentation repository, as explained in [Creating a Doc Pull Request](/about/contribute/creating-a-pull-request/).
+First fork the Istio documentation repository, as explained in [Working with GitHub](/about/contribute/github/).

 ## Choosing a page type
@@ -201,7 +201,7 @@ $ kubectl get pods -n istio-system1
 Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list pods in the namespace "istio-system1"
 {{< /text >}}

-The tenant administrator can deploy applications in the tenant's designated application namespaces. For example, you can modify the [Bookinfo](/docs/guides/bookinfo/) YAML and deploy it to the tenant namespace `ns-0`, after which the tenant administrator can list the pods in that namespace:
+The tenant administrator can deploy applications in the tenant's designated application namespaces. For example, you can modify the [Bookinfo](/docs/examples/bookinfo/) YAML and deploy it to the tenant namespace `ns-0`, after which the tenant administrator can list the pods in that namespace:

 {{< text bash >}}
 $ kubectl get pods -n ns-0
@@ -107,7 +107,7 @@ A Gateway can be used to model an edge proxy or a purely internal proxy, as shown in the first diagram

 In practice, what has changed is this: in the previous model, a set of mutually independent configuration rules was needed to set routing rules for a particular destination service, ordered with the precedence field; in the new API, you configure the (virtual) service directly, and all of the rules for that virtual service are an ordered list in the corresponding [VirtualService](/docs/reference/config/istio.networking.v1alpha3/#VirtualService) resource.

-For example, there were previously two `RouteRule` resources for the reviews service of the [Bookinfo](/docs/guides/bookinfo/) application, as shown below:
+For example, there were previously two `RouteRule` resources for the reviews service of the [Bookinfo](/docs/examples/bookinfo/) application, as shown below:

 {{< text yaml >}}
 apiVersion: config.istio.io/v1alpha2
@@ -12,5 +12,5 @@ type: section-index
 - The latest Istio monthly release is {{< istio_version >}}: [download {{< istio_version >}}](https://github.com/istio/istio/releases), [release notes](/about/notes/{{< istio_version >}}/).
 - Miss an earlier version? We keep [documentation for prior releases](https://archive.istio.io/).
-- We're always looking for help improving our documentation, so please don't hesitate to [file an issue](https://github.com/istio/istio.github.io/issues/new) if you see a problem. Or better yet, submit your own [contributions](/about/contribute/editing/) to help make our docs better.
+- We're always looking for help improving our documentation, so please don't hesitate to [file an issue](https://github.com/istio/istio.github.io/issues/new) if you see a problem. Or better yet, submit your own [contributions](/about/contribute/) to help make our docs better.
@@ -14,7 +14,7 @@ Istio provides a uniform abstraction that lets Istio interact with a set of open infrastructure backends
 Mixer is the Istio component responsible for providing policy controls and telemetry collection:

 {{< image width="75%" ratio="49.26%"
-link="/docs/concepts/policies-and-telemetry/overview/topology-without-cache.svg"
+link="/docs/concepts/policies-and-telemetry/topology-without-cache.svg"
 caption="Mixer topology"
 >}}
@@ -38,7 +38,7 @@ Mixer is a highly modular and extensible component. One of its key functions is
 Mixer's flexibility in dealing with different infrastructure backends comes from a general-purpose plug-in model. Each plug-in is known as an **adapter**; adapters let Mixer interface with different infrastructure backends that deliver core functionality, such as logging, monitoring, quotas, and ACL checking. Configuration determines the exact set of adapters used at runtime, and the set can easily be extended to target new or custom infrastructure backends.

 {{< image width="35%" ratio="138%"
-link="/docs/concepts/policies-and-telemetry/overview/adapters.svg"
+link="/docs/concepts/policies-and-telemetry/adapters.svg"
 alt="Showing Mixer with its adapters"
 caption="Mixer and its adapters"
 >}}
@@ -54,7 +54,7 @@ Mixer is a highly available component whose design helps increase overall availability
 The sidecar proxies that sit next to each service instance in the mesh must be frugal in terms of memory consumption, which constrains the possible amount of local caching and buffering. Mixer, however, lives independently and can use considerably larger caches and output buffers. Mixer thus acts as a highly scaled and highly available second-level cache for the sidecars.

 {{< image width="75%" ratio="65.89%"
-link="/docs/concepts/policies-and-telemetry/overview/topology-with-cache.svg"
+link="/docs/concepts/policies-and-telemetry/topology-with-cache.svg"
 caption="Mixer topology"
 >}}
@@ -5,10 +5,10 @@ weight: 40
 keywords: [traffic-management,fault-injection]
 ---

-While the Envoy sidecar/proxy provides a host of [failure recovery mechanisms](/docs/concepts/traffic-management/handling-failures/) to services running on Istio, it is still imperative to test the end-to-end failure recovery capability of the application as a whole. Misconfigured failure recovery policies (for example, incompatible or overly restrictive timeouts across service calls) could result in continued unavailability of critical services in the application, ruining the user experience.
+While the Envoy sidecar/proxy provides a host of [failure recovery mechanisms](/docs/concepts/traffic-management/#handling-failures) to services running on Istio, it is still imperative to test the end-to-end failure recovery capability of the application as a whole. Misconfigured failure recovery policies (for example, incompatible or overly restrictive timeouts across service calls) could result in continued unavailability of critical services in the application, ruining the user experience.

 Istio lets you inject protocol-specific faults into the network instead of killing pods or delaying or corrupting packets at the TCP layer. The rationale is that the failures observed by the application layer are the same regardless of network-level failures, and that more meaningful failures (for example, HTTP error codes) can be injected at the application layer to exercise and improve the resilience of an application.

 Operators can configure faults for requests that match specific criteria and can further restrict the percentage of requests subjected to faults. Two types of faults can be injected: delays and aborts. Delays are timing failures that mimic increased network latency or an overloaded upstream service. Aborts are crash failures that mimic failures in upstream services, usually manifesting as HTTP error codes or TCP connection failures.

-For more details, see [Istio's traffic management rules](/docs/concepts/traffic-management/rules-configuration/).
+For more details, see [Istio's traffic management rules](/docs/concepts/traffic-management/#rule-configuration).
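The two fault types described above can be sketched in a single v1alpha3 route; the service and subset names are borrowed from the Bookinfo sample, and the percentages and values are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        fixedDelay: 7s   # timing failure: add 7s of latency...
        percent: 10      # ...to 10% of matching requests
      abort:
        httpStatus: 400  # crash failure: return HTTP 400
        percent: 10
    route:
    - destination:
        host: ratings
        subset: v1
```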
@@ -13,7 +13,7 @@ Envoy provides a set of out-of-the-box, **opt-in** failure recovery features that
 1. Active (periodic) health checks on each member of the load balancing pool
 1. Fine-grained circuit breakers (passive health checks) - applied per instance in the load balancing pool

-These features can be dynamically configured at runtime using [Istio's traffic management rules](/docs/concepts/traffic-management/rules-configuration/).
+These features can be dynamically configured at runtime using [Istio's traffic management rules](/docs/concepts/traffic-management/#rule-configuration).

 The jitter between retries minimizes the impact of retries on an overloaded upstream service, while timeout budgets ensure that the calling service gets a response (success or failure) within a predictable time frame.
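The opt-in features above are expressed as `DestinationRule` settings. A sketch of a passive health check (circuit breaker) follows; the resource name, host, and thresholds are illustrative, and the field layout is based on the 0.8-era v1alpha3 schema, so verify it against the API reference for your release:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-cb
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100   # cap concurrent connections to the service
    outlierDetection:
      http:
        consecutiveErrors: 5  # eject an instance after 5 consecutive errors
        interval: 30s         # scan interval for ejection decisions
        baseEjectionTime: 3m  # minimum time an instance stays ejected
```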
@@ -12,11 +12,11 @@ keywords: [traffic-management,load-balancing]
 **Service discovery**: Pilot consumes information from the service registry and provides a platform-agnostic service discovery interface. Envoy instances in the mesh perform service discovery and dynamically update their load balancing pools accordingly.

 {{<image width="80%" ratio="74.79%"
-link="/docs/concepts/traffic-management/load-balancing/LoadBalancing.svg"
+link="/docs/concepts/traffic-management/LoadBalancing.svg"
 caption="Discovery and load balancing">}}

 As shown in the figure above, services in the mesh access each other using their DNS names. All HTTP traffic bound to a service is automatically re-routed through Envoy, which distributes the traffic across the instances in the load balancing pool. While Envoy supports several [sophisticated load balancing algorithms](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing), Istio currently allows only three load balancing modes: round robin, random, and weighted least request.

-In addition to load balancing, Envoy periodically checks the health of each instance in the pool. Envoy follows a circuit-breaker style pattern and classifies instances as unhealthy or healthy based on their failure rates for the health check API call. In other words, when the number of health check failures for a given instance exceeds a pre-specified threshold, it is ejected from the load balancing pool; similarly, when the number of passing health checks exceeds a pre-specified threshold, the instance is added back into the load balancing pool. You can find out more about Envoy's failure-handling features in [Handling failures](/docs/concepts/traffic-management/handling-failures/).
+In addition to load balancing, Envoy periodically checks the health of each instance in the pool. Envoy follows a circuit-breaker style pattern and classifies instances as unhealthy or healthy based on their failure rates for the health check API call. In other words, when the number of health check failures for a given instance exceeds a pre-specified threshold, it is ejected from the load balancing pool; similarly, when the number of passing health checks exceeds a pre-specified threshold, the instance is added back into the load balancing pool. You can find out more about Envoy's failure-handling features in [Handling failures](/docs/concepts/traffic-management/#handling-failures).

 A service can actively shed load by responding with HTTP 503 to a health check. In that event, the service instance is immediately removed from the caller's load balancing pool.
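Choosing among the three load balancing modes mentioned above is a one-line `DestinationRule` setting; the service and resource names here are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-lb
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: RANDOM   # or ROUND_ROBIN / LEAST_CONN
```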
@@ -5,20 +5,20 @@ weight: 1
 keywords: [traffic-management]
 ---

-This page provides an overview of how traffic management works in Istio, including the benefits of its traffic management principles. It assumes that you've already read [What is Istio?](/docs/concepts/what-is-istio/overview/) and are familiar with Istio's high-level architecture. You can find out more about individual traffic management features in the other guides in this section.
+This page provides an overview of how traffic management works in Istio, including the benefits of its traffic management principles. It assumes that you've already read [What is Istio?](/docs/concepts/what-is-istio/) and are familiar with Istio's high-level architecture. You can find out more about individual traffic management features in the other guides in this section.

 ## Pilot and Envoy

-The core component used for traffic management in Istio is [Pilot](/docs/concepts/traffic-management/pilot/), which manages and configures all the Envoy proxy instances deployed in a particular Istio service mesh. It lets you specify which rules you want to use to route traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and uses this model to let Envoys know about the other instances in the mesh via its discovery service.
+The core component used for traffic management in Istio is [Pilot](/docs/concepts/traffic-management/#pilot-and-envoy), which manages and configures all the Envoy proxy instances deployed in a particular Istio service mesh. It lets you specify which rules you want to use to route traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and uses this model to let Envoys know about the other instances in the mesh via its discovery service.

-Each Envoy instance maintains [load balancing information](/docs/concepts/traffic-management/load-balancing/) based on the information it gets from Pilot, along with periodic health checks of the other instances in its load balancing pool, allowing it to intelligently distribute traffic between destination instances while following its specified routing rules.
+Each Envoy instance maintains [load balancing information](/docs/concepts/traffic-management/#discovery-and-load-balancing) based on the information it gets from Pilot, along with periodic health checks of the other instances in its load balancing pool, allowing it to intelligently distribute traffic between destination instances while following its specified routing rules.

 ## Benefits of traffic management

 Using Istio's traffic management model essentially decouples traffic flow from infrastructure scaling, letting operators specify via Pilot the rules they want traffic to follow rather than which specific pods/VMs should receive traffic - Pilot and the intelligent Envoy proxies do the rest. So, for example, you can specify via Pilot that you want 5% of traffic for a particular service to go to a canary version irrespective of the size of the canary deployment, or send traffic to a particular version depending on the content of the request.

 {{< image width="85%" ratio="69.52%"
-link="/docs/concepts/traffic-management/overview/TrafficManagementOverview.svg"
+link="/docs/concepts/traffic-management/TrafficManagementOverview.svg"
 caption="Traffic management with Istio">}}

-Decoupling traffic flow from infrastructure scaling like this allows Istio to provide a variety of traffic management features that live outside the application code. As well as dynamic [request routing](/docs/concepts/traffic-management/request-routing/) for A/B testing, gradual rollouts, and canary releases, it also handles [failure recovery](/docs/concepts/traffic-management/handling-failures/) using timeouts, retries, and circuit breakers, and [fault injection](/docs/concepts/traffic-management/fault-injection/) to test the compatibility of failure recovery policies across services. These capabilities are all provided by the Envoy sidecars/proxies deployed throughout the mesh.
+Decoupling traffic flow from infrastructure scaling like this allows Istio to provide a variety of traffic management features that live outside the application code. As well as dynamic [request routing](/docs/concepts/traffic-management/#request-routing) for A/B testing, gradual rollouts, and canary releases, it also handles [failure recovery](/docs/concepts/traffic-management/#handling-failures) using timeouts, retries, and circuit breakers, and [fault injection](/docs/concepts/traffic-management/#fault-injection) to test the compatibility of failure recovery policies across services. These capabilities are all provided by the Envoy sidecars/proxies deployed throughout the mesh.
@@ -8,7 +8,7 @@ keywords: [traffic-management,pilot]
 Pilot is responsible for the lifecycle management of the Envoy instances deployed across the Istio service mesh.

 {{<image width="60%" ratio="72.17%"
-link="/docs/concepts/traffic-management/pilot/PilotAdapters.svg"
+link="/docs/concepts/traffic-management/PilotAdapters.svg"
 caption="Pilot architecture">}}

 As illustrated in the figure above, Pilot maintains a canonical representation of the services in the mesh that is independent of the underlying platform. Platform-specific adapters in Pilot are responsible for populating this canonical model appropriately. For example, the Kubernetes adapter in Pilot implements the necessary controllers to watch the Kubernetes API server for changes to pod registration information, ingress resources, and third-party resources that store traffic management rules. This data is translated into the canonical representation, and the Envoy-specific configuration is generated from that canonical representation.
@@ -9,23 +9,23 @@ keywords: [traffic-management,routing]

 ## Service model and service versions

-As described in [Pilot](/docs/concepts/traffic-management/pilot/), the canonical representation of the services in a particular mesh is maintained by Pilot. The Istio model of a service is independent of how the service is expressed in the underlying platform (Kubernetes, Mesos, Cloud Foundry, and so on). Platform-specific adapters are responsible for populating the service model with the various metadata fields obtained from their respective platforms.
+As described in [Pilot](/docs/concepts/traffic-management/#pilot-and-envoy), the canonical representation of the services in a particular mesh is maintained by Pilot. The Istio model of a service is independent of how the service is expressed in the underlying platform (Kubernetes, Mesos, Cloud Foundry, and so on). Platform-specific adapters are responsible for populating the service model with the various metadata fields obtained from their respective platforms.

-Istio introduces the concept of a service version, which subdivides services further by version (`v1`, `v2`) or environment (`staging`, `prod`). These versions are not necessarily different API versions: they could be different iterations of the same service deployed in different environments (prod, staging, dev, and so on). Common scenarios for this include A/B testing and canary deployments. Istio's [traffic routing rules](/docs/concepts/traffic-management/rules-configuration/) can refer to service versions to provide additional control over traffic between services.
+Istio introduces the concept of a service version, which subdivides services further by version (`v1`, `v2`) or environment (`staging`, `prod`). These versions are not necessarily different API versions: they could be different iterations of the same service deployed in different environments (prod, staging, dev, and so on). Common scenarios for this include A/B testing and canary deployments. Istio's [traffic routing rules](/docs/concepts/traffic-management/#rule-configuration) can refer to service versions to provide additional control over traffic between services.

 ## Communication between services

 {{< image width="60%" ratio="100.42%"
-link="/docs/concepts/traffic-management/request-routing/ServiceModel_Versions.svg"
+link="/docs/concepts/traffic-management/ServiceModel_Versions.svg"
 alt="Showing how service versions are handled."
 caption="Service versions"
 >}}

 As illustrated in the figure above, clients of a service have no knowledge of the different versions of the service. They can continue to access the service using its hostname or IP address. The Envoy sidecar/proxy intercepts and forwards all requests and responses between the clients and the service.

-Operators specify routing rules using Pilot, and Envoy dynamically determines the actual choice of version for its service based on these rules. This model lets application code decouple itself from the evolution of its dependent services, while providing other benefits as well (see [Mixer](/docs/concepts/policies-and-telemetry/overview/)). Routing rules let Envoy select a version based on criteria such as headers, tags associated with the source/destination, and/or weights assigned to each version.
+Operators specify routing rules using Pilot, and Envoy dynamically determines the actual choice of version for its service based on these rules. This model lets application code decouple itself from the evolution of its dependent services, while providing other benefits as well (see [Mixer](/docs/concepts/policies-and-telemetry/)). Routing rules let Envoy select a version based on criteria such as headers, tags associated with the source/destination, and/or weights assigned to each version.

-Istio also provides traffic load balancing across multiple instances of the same service version. More information can be found in [Discovery and load balancing](/docs/concepts/traffic-management/load-balancing/).
+Istio also provides traffic load balancing across multiple instances of the same service version. More information can be found in [Discovery and load balancing](/docs/concepts/traffic-management/#discovery-and-load-balancing).

 Istio does not provide a DNS. Applications can try to resolve FQDNs by using the DNS service present in the underlying platform (kube-dns, mesos-dns, and so on).
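Version selection by header and by weight, as described above, can be sketched in one v1alpha3 `VirtualService`; the names come from the Bookinfo sample, and the match criteria and weights are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason     # header-based selection: this user gets v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # weight-based selection for everyone else
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
```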
@@ -34,7 +34,7 @@ Istio does not provide a DNS. Applications can try to resolve FQDNs by using the
 Istio assumes that all traffic entering and leaving the service mesh transits through Envoy proxies. By deploying an Envoy proxy in front of a service, operators can conduct A/B tests, deploy canary services, and so on for user-facing services. Similarly, by routing traffic to external web services (for example, a maps API or a video service API) via Envoy, operators can add failure recovery features such as timeouts, retries, and circuit breakers for these services, and obtain detailed metrics on the connections to them.

 {{< image width="60%" ratio="28.88%"
-link="/docs/concepts/traffic-management/request-routing/ServiceModel_RequestFlow.svg"
+link="/docs/concepts/traffic-management/ServiceModel_RequestFlow.svg"
 alt="Ingress and egress through Envoy."
 caption="Request flow"
 >}}
@@ -236,7 +236,7 @@ spec:
         perTryTimeout: 2s
 ~~~

-Note that request retries and timeouts can also be [set on a per-request basis](/docs/concepts/traffic-management/handling-failures#fine-tuning).
+Note that request retries and timeouts can also be [set on a per-request basis](/docs/concepts/traffic-management/#fine-tuning).

 See the [request timeouts task](/docs/tasks/traffic-management/request-timeouts/) for an example of timeout control.
@@ -38,7 +38,7 @@ An Istio service mesh is logically split into a **data plane** and a **control plane**.
 The following diagram shows the different components that make up each plane:

 {{< image width="80%" ratio="56.25%"
-link="/docs/concepts/what-is-istio/overview/arch.svg"
+link="/docs/concepts/what-is-istio/arch.svg"
 alt="Overview of an Istio-based application architecture"
 caption="Istio architecture"
 >}}
@@ -47,15 +47,15 @@ An Istio service mesh is logically split into a **data plane** and a **control plane**.

 Istio uses an extended version of the [Envoy](https://www.envoyproxy.io/) proxy, a high-performance proxy developed in C++, to mediate all inbound and outbound traffic for all services in the service mesh. Istio leverages many of Envoy's built-in features, such as dynamic service discovery, load balancing, TLS termination, HTTP/2 & gRPC proxying, circuit breakers, health checks, staged rollouts with percentage-based traffic splits, fault injection, and rich metrics.

-Envoy is deployed as a **sidecar** in the same Kubernetes pod as the corresponding service. This allows Istio to extract a wealth of signals about traffic behavior as [attributes](/docs/concepts/policies-and-telemetry/config/#attributes), which it can in turn use in [Mixer](/docs/concepts/policies-and-telemetry/overview/) to enforce policy decisions and send to monitoring systems to provide information about the behavior of the entire mesh. The sidecar proxy model also lets you add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why we chose this approach in our [design goals](/docs/concepts/what-is-istio/goals/).
+Envoy is deployed as a **sidecar** in the same Kubernetes pod as the corresponding service. This allows Istio to extract a wealth of signals about traffic behavior as [attributes](/docs/concepts/policies-and-telemetry/#attributes), which it can in turn use in [Mixer](/docs/concepts/policies-and-telemetry/) to enforce policy decisions and send to monitoring systems to provide information about the behavior of the entire mesh. The sidecar proxy model also lets you add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why we chose this approach in our [design goals](/docs/concepts/what-is-istio/#design-goals).

 ### Mixer

-[Mixer](/docs/concepts/policies-and-telemetry/overview) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other services. The proxy extracts request-level [attributes](/docs/concepts/policies-and-telemetry/config/#attributes), which are sent to Mixer for evaluation. More information on attribute extraction and policy evaluation can be found in [Mixer configuration](/docs/concepts/policies-and-telemetry/config). Mixer includes a flexible plugin model that enables it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.
+[Mixer](/docs/concepts/policies-and-telemetry/) is a platform-independent component responsible for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other services. The proxy extracts request-level [attributes](/docs/concepts/policies-and-telemetry/#attributes), which are sent to Mixer for evaluation. More information on attribute extraction and policy evaluation can be found in [Mixer configuration](/docs/concepts/policies-and-telemetry/#configuration-model). Mixer includes a flexible plugin model that enables it to interface with a variety of host environments and infrastructure backends, abstracting the Envoy proxy and Istio-managed services from these details.

 ### Pilot

-[Pilot](/docs/concepts/traffic-management/pilot/) provides service discovery for the Envoy sidecars, and traffic management capabilities for intelligent routing (for example, A/B tests and canary deployments) and resiliency (timeouts, retries, circuit breakers, and so on). It converts high-level routing rules that control traffic behavior into Envoy-specific configurations and propagates them to the sidecars at runtime. Pilot abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming to the [Envoy data plane APIs](https://github.com/envoyproxy/data-plane-api) can consume. This loose coupling lets Istio run in multiple environments (for example, Kubernetes or Consul/Nomad) while maintaining the same operator interface for traffic management.
+[Pilot](/docs/concepts/traffic-management/#pilot-and-envoy) provides service discovery for the Envoy sidecars, and traffic management capabilities for intelligent routing (for example, A/B tests and canary deployments) and resiliency (timeouts, retries, circuit breakers, and so on). It converts high-level routing rules that control traffic behavior into Envoy-specific configurations and propagates them to the sidecars at runtime. Pilot abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming to the [Envoy data plane APIs](https://github.com/envoyproxy/data-plane-api) can consume. This loose coupling lets Istio run in multiple environments (for example, Kubernetes or Consul/Nomad) while maintaining the same operator interface for traffic management.

 ### Citadel
@@ -63,7 +63,7 @@ Envoy is deployed as a **sidecar** in the same Kubernetes pod as the corresponding service.

 ## What's next

-- Learn about Istio's [design goals](/docs/concepts/what-is-istio/goals/).
+- Learn about Istio's [design goals](/docs/concepts/what-is-istio/#design-goals).
 - Explore our [examples](/docs/examples/).
 - Learn more about Istio components in detail in our other [concepts](/docs/concepts/) guides.
 - See how to deploy your own services to Istio with our [tasks](/docs/tasks/) guides.
@@ -3,6 +3,7 @@ title: Help
 description: A bunch of resources to help you deploy, configure, and use Istio.
 weight: 10
+type: section-index
 sidebar_singlecard: true
 ---

 Don't forget that our vibrant [community](/community/) is always ready to lend a helping hand with thorny problems.