There is currently no telemetry from the controller client.
This change adds a new scope of metrics (prefixed with `control_`),
including HTTP metrics for the client to the proxy-api.
When the inbound proxy receives requests, these requests may have
relative `:authority` values like _web:8080_. Because these requests can
come from hosts with a variety of DNS configurations, the inbound proxy
cannot reliably infer the fully qualified name (e.g.
_web.ns.svc.cluster.local._).
In order for the inbound proxy to discover inbound service profiles, we
need to establish some means for the inbound proxy to determine the
"canonical" name of the service for each request.
This change introduces a new `l5d-dst-canonical` header that is set by
the outbound proxy and used by the remote inbound proxy to determine
which profile should be used.
The outbound proxy determines the canonical destination by performing
DNS resolution as requests are routed and uses this name for profile and
address discovery. This change removes the proxy's hardcoded Kubernetes
dependency.
The `LINKERD2_PROXY_DESTINATION_GET_SUFFIXES` and
`LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES` environment variables
control which domains may be discovered via the destination service.
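A minimal sketch of that suffix gating, assuming names and suffixes are
dot-separated label sequences (function names are hypothetical):

```rust
/// Does `name` fall under the dot-separated `suffix`? A suffix of "."
/// matches every name.
fn in_suffix(name: &str, suffix: &str) -> bool {
    if suffix == "." {
        return true;
    }
    let name: Vec<&str> = name.trim_end_matches('.').split('.').collect();
    let sfx: Vec<&str> = suffix.trim_matches('.').split('.').collect();
    name.len() >= sfx.len() && name[name.len() - sfx.len()..] == sfx[..]
}

/// Only names under a configured suffix are resolved via the
/// destination service; other names fall back to DNS.
fn discoverable(name: &str, suffixes: &[String]) -> bool {
    suffixes.iter().any(|s| in_suffix(name, s))
}
```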
Finally, HTTP settings detection has been moved into a dedicated routing
layer at the "bottom" of the stack. This is done so that
canonicalization and discovery need not be performed redundantly for
each set of HTTP settings. Now, HTTP settings only configure the HTTP
client stack within an endpoint.
Fixes linkerd/linkerd2#1798
Since these stack pieces will never error, we can mark their `Error`s
with a type that can "never" be created. When seeing an `Error = ()`, it
could mean either that the error never happens or that the detailed
error is dealt with elsewhere and only a unit is passed on. When seeing
`Error = Never`, it is clear that the error case never happens.
Besides helping humans, this lets LLVM remove the error branches
entirely.
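The idea, sketched minimally (the proxy's actual `Never` may differ in
its trait implementations):

```rust
use std::{error::Error, fmt};

/// An uninhabited type: no `Never` value can be constructed, so any
/// `Result<T, Never>` is statically known to be `Ok`.
#[derive(Debug)]
pub enum Never {}

impl fmt::Display for Never {
    fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result {
        // With no variants, this empty match is exhaustive: this code
        // path can never be reached.
        match *self {}
    }
}

impl Error for Never {}
```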
Signed-off-by: Sean McArthur <sean@buoyant.io>
The Destination Profile API---provided by linkerd2-proxy-api v0.1.3---
allows the proxy to discover route information for an HTTP service. As
the proxy processes outbound requests, in addition to doing address
resolution through the Destination service, the proxy may also discover
profiles including route patterns and labels.
When the proxy has route information for a destination, it applies the
RequestMatch for each route to find the first-matching route. The
route's labels are used to expose `route_`-prefixed HTTP metrics (and
each label is prefixed with `rt_`).
Furthermore, if a route includes ResponseMatches, they are used to
perform classification (i.e. for the `response_total` and
`route_response_total` metrics).
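A minimal sketch of the selection logic, with stand-in `Route` and
`RequestMatch` types (the real profile types carry more structure):

```rust
struct Route {
    condition: RequestMatch,
    labels: Vec<(String, String)>, // exposed as `rt_`-prefixed labels
}

enum RequestMatch {
    PathPrefix(String),
    Method(http::Method),
}

impl RequestMatch {
    fn is_match<B>(&self, req: &http::Request<B>) -> bool {
        match self {
            RequestMatch::PathPrefix(prefix) => req.uri().path().starts_with(prefix),
            RequestMatch::Method(method) => req.method() == method,
        }
    }
}

/// The first route whose condition matches wins; `None` means the
/// default route (and default classification) applies.
fn select_route<'r, B>(routes: &'r [Route], req: &http::Request<B>) -> Option<&'r Route> {
    routes.iter().find(|r| r.condition.is_match(req))
}
```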
A new `proxy::http::profiles` module implements a router that consumes
routes from an infinite stream of route lists.
The `app::profiles` module implements a client that repeatedly tries to
establish a watch for the destination's routes (with some backoff).
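Sketched as a synchronous loop for clarity (the real client is
asynchronous; `establish`, `RouteList`, and `apply` are hypothetical):

```rust
use std::{thread, time::Duration};

struct RouteList; // stand-in for a list of profile routes

fn watch_with_backoff<E, U>(mut establish: E, apply: fn(RouteList), backoff: Duration) -> !
where
    E: FnMut() -> Result<U, ()>,
    U: Iterator<Item = RouteList>,
{
    loop {
        if let Ok(updates) = establish() {
            // Push each new route list to the router as it arrives.
            for routes in updates {
                apply(routes);
            }
        }
        // The watch failed or ended; back off before re-establishing.
        thread::sleep(backoff);
    }
}
```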
Route discovery does not _block_ routing; that is, the first request to
a destination will likely be processed before the route information is
retrieved from the controller (i.e. on the default route). Route
configuration is applied in a best-effort fashion.
As described in https://github.com/linkerd/linkerd2/issues/1832, our eager
classification is too complicated.
This changes the `classification` label so that it is only used on the
`response_total` metrics.
The following changes have been made:
1. response_latency metrics only include a status_code and not a classification.
2. response_total metrics include classification labels.
3. transport metrics no longer expose a `classification` label (since it's
misleading); the `errno` label is now set to be empty when there is no
error.
4. Only gRPC classification applies when the request's content type starts
with `application/grpc+`.
The `proxy::http::classify` APIs have been changed so that classifiers cannot
return a classification before the classifier is fully consumed.
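The shape of that constraint, sketched (the real traits differ in
detail):

```rust
/// Classification can only be produced by methods that consume the
/// classifier, so a class cannot be emitted mid-stream.
trait ClassifyResponse {
    type Class;

    /// Observe a body frame; no classification is possible yet.
    fn frame(&mut self, frame: &[u8]);

    /// End-of-stream consumes the classifier, yielding the final class
    /// (e.g. from a `grpc-status` trailer).
    fn eos(self, grpc_status: Option<u32>) -> Self::Class;

    /// Stream errors also consume the classifier.
    fn error(self, error: &dyn std::error::Error) -> Self::Class;
}
```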
The controller's client is instantiated in the
`control::destination::background` module and is tightly coupled to its
use for address resolution.
In order to share this client across different modules---and to bring it
into line with the rest of the proxy's modular layout---the controller
client is now configured and instantiated in `app::main`. The
`app::control` module includes additional stack modules needed to
configure this client.
Our dependency on tower-buffer has been updated so that buffered
services may be cloned.
The `proxy::reconnect` module has been extended to support a fixed
reconnect backoff, and this backoff delay has been made configurable via
the environment.
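For illustration, the delay might be parsed roughly as follows (the
variable name here is hypothetical):

```rust
use std::time::Duration;

/// Parse a fixed reconnect backoff from the environment, falling back
/// to a default when unset or malformed.
fn reconnect_backoff() -> Duration {
    std::env::var("LINKERD2_PROXY_RECONNECT_BACKOFF_MS") // hypothetical name
        .ok()
        .and_then(|ms| ms.parse::<u64>().ok())
        .map(Duration::from_millis)
        .unwrap_or_else(|| Duration::from_millis(100))
}
```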
Previously, stacks were built with `Layer::and_then`. This pattern
severely impacts compile-times as stack complexity grows.
In order to ameliorate this, `app::main` has been changed to build
stacks from the "bottom" (endpoint client) to the "top" (serverside
connection) by _push_-ing Layers onto a concrete stack, rather than
composing layers for an abstract stack.
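A minimal sketch of the push-based pattern (types simplified):

```rust
/// A `Layer` wraps an inner stack in additional behavior.
trait Layer<S> {
    type Stack;
    fn layer(self, inner: S) -> Self::Stack;
}

/// `push` applies a layer to a concrete stack, producing another
/// concrete stack; the full type is known at every step, so the
/// compiler never has to solve for a deeply abstract composition.
trait Push: Sized {
    fn push<L: Layer<Self>>(self, layer: L) -> L::Stack {
        layer.layer(self)
    }
}

impl<S> Push for S {}
```

Usage then reads bottom-to-top, e.g.
`connect.push(client).push(reconnect).push(buffer)` (layer names
illustrative).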
While doing this, we take the opportunity to remove a ton of
now-unnecessary `PhantomData`. A new, dedicated `phantom_data` stack
module can be used to aid type inference as needed.
Other stack utilities like `map_target` and `map_err` have been
introduced to assist this transition.
Furthermore, all instances of `Layer::new` have been changed to a free
`fn layer` to improve readability.
This change sets up two upcoming changes: a stack-oriented `controller`
client and, subsequently, service-profile-based routing.
* Prepare HTTP metrics for per-route classification
In order to support Service Profiles, the proxy will add a new scope of
HTTP metrics prefixed with `route_`, i.e. so that the proxy exposes
`request_total` and `route_request_total` independently.
Furthermore, the proxy must be able to use different
response-classification logic for each route, and this classification
logic should apply to both metrics scopes.
This alters the `proxy::http::metrics` module so that:
1. HTTP metrics may be scoped with a prefix (as the stack is described).
2. The HTTP metrics layer now discovers the classifier by trying to
extract it from each request's extensions, falling back to a `Default`
implementation. Only the default implementation is used presently.
3. It was too easy to use the `Classify` trait API incorrectly:
non-default classify implementations could cause a runtime panic!
The API has been changed so that the type system ensures correct
usage.
4. The HTTP classifier must be configurable per-request. In order to do
this, we expect a higher stack layer will add response classifiers to
request extensions when appropriate (i.e., in a follow-up); a sketch of
that handoff follows below.
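A sketch of that extensions-based handoff (the `Classify` type here is a
stand-in):

```rust
#[derive(Clone, Default)]
struct Classify; // stand-in for route-specific classification rules

/// A higher layer (e.g. a per-route layer) attaches the classifier.
fn insert_classifier<B>(req: &mut http::Request<B>, classify: Classify) {
    req.extensions_mut().insert(classify);
}

/// The metrics layer extracts it, falling back to the default.
fn classifier_for<B>(req: &http::Request<B>) -> Classify {
    req.extensions()
        .get::<Classify>()
        .cloned()
        .unwrap_or_default()
}
```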
Finally, the `telemetry::Report` type requires updating every time a new
set of metrics is added. We don't need a struct to represent this.
`FmtMetrics::and_then` has been added as a combinator so that a fixed
type is not necessary.
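The combinator's shape, sketched (the real trait has more to it):

```rust
use std::fmt;

trait FmtMetrics {
    fn fmt_metrics(&self, f: &mut fmt::Formatter) -> fmt::Result;

    /// Chain another reporter after this one; the combined reporter
    /// formats both, so no fixed struct must enumerate every set.
    fn and_then<B: FmtMetrics>(self, b: B) -> AndThen<Self, B>
    where
        Self: Sized,
    {
        AndThen(self, b)
    }
}

struct AndThen<A, B>(A, B);

impl<A: FmtMetrics, B: FmtMetrics> FmtMetrics for AndThen<A, B> {
    fn fmt_metrics(&self, f: &mut fmt::Formatter) -> fmt::Result {
        self.0.fmt_metrics(f)?;
        self.1.fmt_metrics(f)
    }
}
```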
The `proxy::http::balance` module uses the `proxy::resolve::Resolve`
trait to implement a `Discover`.
This coupling between the balance and resolve modules prevents
integrating the destination profile API such that there is a per-route,
per-endpoint stack.
This change makes the `balance` stack generic over a stack that produces
a `Discover`. The `resolve` module now implements a stack that produces
a `Discover` and is generic over a per-endpoint stack.
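Sketching the decoupling with simplified signatures (names
approximate):

```rust
/// All the balancer needs: something that produces a `Discover` for a
/// target. `Discover` here abstracts a stream of endpoint updates.
trait MakeDiscover<Target> {
    type Discover;
    fn make(&self, target: &Target) -> Self::Discover;
}

/// The `resolve` module's implementation pairs a `Resolve` (the lookup
/// source) with a per-endpoint stack that builds a client per address.
struct FromResolve<R, E> {
    resolve: R,
    make_endpoint: E,
}
```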
The TLS-configuration-watching logic in `app::outbound::tls_config` need
not be specific to the outbound types, or even TLS configuration.
Instead, this change extends the `watch` stack module with a Stack type
that can satisfy the TLS use case independently of the concrete types at
play.
Previously, the `client` module was responsible for instrumenting
reconnects. Now, the reconnect module becomes its own stack layer that
composes over NewService stacks.
Additionally, the `proxy::http::client` module can now layer over an
underlying Connect stack.
As the proxy's functionality has grown, the HTTP routing functionality
has become complex. Module boundaries have become ill-defined, which
leads to tight coupling--especially around the `ctx` metadata types and
`Service` type signatures.
This change introduces a `Stack` type (and subcrate) that is used as the
base building block for proxy functionality. The `proxy` module now
exposes generic components--stack layers--that are configured and
instantiated in the `app::main` module.
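The building block, sketched with simplified signatures (the real
traits differ in detail):

```rust
/// A `Stack` lazily makes a value (typically a `Service`) for a target.
trait Stack<Target> {
    type Value;
    type Error;
    fn make(&self, target: &Target) -> Result<Self::Value, Self::Error>;
}

/// A `Layer` wraps one stack in another; `app::main` composes layers
/// from the `proxy` module into the full proxy behavior.
trait Layer<Target, S: Stack<Target>> {
    type Stack: Stack<Target>;
    fn bind(&self, inner: S) -> Self::Stack;
}
```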
This change reorganizes the repo as follows:
- Several auxiliary crates have been split out from the `src/` directory
into `lib/fs-watch`, `lib/stack` and `lib/task`.
- All logic specific to configuring and running the linkerd2 sidecar
proxy has been moved into `src/app`. The `Main` type has been moved
from `src/lib.rs` to `src/app/main.rs`.
- The `src/proxy` directory contains reusable, generic components useful
for building proxies in terms of `Stack`s.
The logic contained in `lib/bind.rs`, pertaining to per-endpoint service
behavior, has almost entirely been moved into `app::main`.
`control::destination` has changed so that it is not responsible for
building services. (It used to take a clone of `Bind` and use it to
create per-endpoint services). Instead, the destination service
implements the new `proxy::Resolve` trait, which produces an infinite
`Resolution` stream for each lookup. This allows the `proxy::balance`
module to be generic over the service discovery source.
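The shape of the new abstraction, sketched with simplified, synchronous
signatures (the real trait is poll-based):

```rust
/// A `Resolve` starts a `Resolution` per lookup; each resolution yields
/// an unbounded sequence of endpoint updates for the balancer.
trait Resolve<Target> {
    type Endpoint;
    type Resolution: Resolution<Endpoint = Self::Endpoint>;
    fn resolve(&self, target: &Target) -> Self::Resolution;
}

trait Resolution {
    type Endpoint;
    /// Blocks (or, in the real proxy, polls) for the next update.
    fn next(&mut self) -> Update<Self::Endpoint>;
}

enum Update<E> {
    Add(std::net::SocketAddr, E),
    Remove(std::net::SocketAddr),
}
```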
Furthermore, the `router::Recognize` API has changed to only expose a
`recognize()` method and not a `bind_service()` method. The
`bind_service` logic is now modeled as a `Stack`.
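The narrowed trait, sketched with simplified types:

```rust
/// `Recognize` now only maps a request to a routing target; building
/// the service for that target is the job of a `Stack`.
trait Recognize<Body> {
    type Target;
    fn recognize(&self, req: &http::Request<Body>) -> Option<Self::Target>;
}
```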
The `telemetry::http` module has been replaced by a
`proxy::http::metrics` module that is generic over its metadata types
and does not rely on the old telemetry event system. These events are
now a local implementation detail of the `tap` module.
There are no user-facing changes in the proxy's behavior.