This untangles some of the HTTP/gRPC glue, providing services/stacks
with narrower responsibilities. The `HyperServerSvc` now *only*
converts to a `tower::Service`, and the HTTP/1.1 and Upgrade pieces were
moved into a dedicated `proxy::http::upgrade::Service`.
Several stack modules were added to `proxy::grpc`, which can map request
and response bodies into `Payload`, or into `grpc::Body`, as needed.
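For illustration, a minimal sketch of the body-mapping idea, using today's
`tower_service::Service` signature and a hypothetical `GrpcBody` wrapper (the
real modules adapt to hyper's `Payload` and `grpc::Body` traits):
```
use std::task::{Context, Poll};

use http::Request;
use tower_service::Service;

/// Hypothetical wrapper adapting an HTTP body `B` to whatever body type the
/// inner client expects.
pub struct GrpcBody<B>(B);

/// Middleware that maps each request body into `GrpcBody` before calling the
/// inner service.
pub struct MapBody<S>(pub S);

impl<S, B> Service<Request<B>> for MapBody<S>
where
    S: Service<Request<GrpcBody<B>>>,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), S::Error>> {
        self.0.poll_ready(cx)
    }

    fn call(&mut self, req: Request<B>) -> Self::Future {
        // `http::Request::map` replaces the body, preserving URI and headers.
        self.0.call(req.map(GrpcBody))
    }
}
```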
Signed-off-by: Sean McArthur <sean@buoyant.io>
It turns out that increasing the recursion limit for the `tap` test
crate _actually_ fixes the compiler error that's broken the last several
builds on master.
Since I'm now able to locally reproduce the error (which occurs only
when running the tests in release mode), I've verified that this
actually does fix the issue. Thus, I've also reverted the previous
commit (7c35f27ad3) which claimed to fix
this issue, as it turns out that was not actually necessary.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This branch replaces the `export CARGO_VERBOSE=1` on CI release-mode
builds with the `travis_wait` script. Verbose mode was previously being
set to prevent long release-mode builds from timing out. However, there
appears to be a bug in `rustc` 1.31.0, which causes the compiler to
crash when building the proxy with verbose mode enabled. Hopefully, this
will fix the build on master.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This branch changes the proxy's accept logic so that the proxy will no
longer attempt to terminate TLS on ports which are configured to skip
protocol detection. This means that a Linkerd deployment with
`--tls optional` will no longer break server-speaks-first protocols like
MySQL (although that traffic will not be encrypted).
Since it's necessary to get the connection's original destination to
determine if it's on a port which should skip protocol detection, I've
moved the SO_ORIGINAL_DST call down the stack from `Server` to
`BoundPort`. However, to avoid making an additional unnecessary syscall,
the original destination is propagated to the server, along with the
information about whether or not protocol detection is enabled. This is
the approach described in
https://github.com/linkerd/linkerd2/issues/1270#issuecomment-406124236.
I've also written a new integration test for server-speaks-first
protocols with TLS enabled. This test is essentially the same as the
existing `transparency::tcp_server_first` test, but with TLS enabled for
the test proxy. I've confirmed that this fails against master.
Furthermore, I've validated this change by deploying the `booksapp` demo
with MySQL with TLS enabled, which [previously didn't work](https://github.com/linkerd/linkerd2/issues/1648#issuecomment-432867702).
Closes linkerd/linkerd2#1270
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The profile router is currently responsible for driving the state of
profile discovery, but this means that, if a service is not polled for
traffic, the proxy may not drive discovery (so that requests may
time out, etc.).
This change moves this discovery onto a daemon task that sends profile
updates to the service over an mpsc with capacity of 1.
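A minimal sketch of the daemon-task pattern (names are hypothetical, and it
uses today's tokio APIs rather than the futures 0.1 code in the proxy):
```
use tokio::sync::mpsc;

#[derive(Clone, Debug)]
struct Profile; // stand-in for the discovered routes

/// Spawns a task that drives discovery regardless of whether the service is
/// polled, publishing updates over a channel with capacity 1 so that stale
/// profiles are never buffered.
fn spawn_profile_daemon(
    updates: impl Iterator<Item = Profile> + Send + 'static,
) -> mpsc::Receiver<Profile> {
    let (tx, rx) = mpsc::channel(1);
    tokio::spawn(async move {
        for profile in updates {
            // The receiver half lives in the service; when the service is
            // dropped, the send fails and the daemon exits.
            if tx.send(profile).await.is_err() {
                return;
            }
        }
    });
    rx
}
```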
When debugging issues that users believe are related to discovery, it's
helpful to get a narrow set of logs out to determine whether the proxy
is observing discovery updates.
With this change, a user can inject the proxy with
```
LINKERD2_PROXY_LOG='warn,linkerd2_proxy=info,linkerd2_proxy::app::outbound::discovery=debug'
```
and the proxy's logs will include messages like:
```
DBUG voting-svc.emojivoto.svc.cluster.local:8080 linkerd2_proxy::app::outbound::discovery adding 10.233.70.98:8080 to voting-svc.emojivoto.svc.cluster.local:8080
DBUG voting-svc.emojivoto.svc.cluster.local:8080 linkerd2_proxy::app::outbound::discovery removing 10.233.66.36:8080 from voting-svc.emojivoto.svc.cluster.local:8080
```
This change also turns down some overly chatty INFO logging in main.
As we well know, gRPC responses may include the `grpc-status` header
when there is no response payload.
This change ensures that tap response end events include this value when
it is set on response headers, since grpc-status is handled specially
in the Tap API.
* 80b4ec5 (tag: v0.1.13) Bump version to v0.1.13 (#324)
* 6b23542 Add client support for server push (#314)
* 6d8554a Reassign capacity from reset streams. (#320)
* b116605 Check whether the send side is not idle, not the recv side (#313)
* a4ed615 Check minimal versions (#322)
* ea8b8ac Avoid prematurely unlinking streams in `send_reset`, in some cases. (#319)
* 9bbbe7e Disable length_delimited deprecation warning. (#321)
* 00ca534 Update examples to use new Tokio (#316)
* 12e0d26 Added functions to access io::Error in h2::Error (#311)
* 586106a Fix push promise frame parsing (#309)
* 2b960b8 Add Reset::INTERNAL_ERROR helper to test support (#308)
* d464c6b set deny(warnings) only when cfg(test) (#307)
* b0db515 fix some autolinks that weren't resolving in docs (#305)
* 66a5d11 Shutdown the stream along with connection (#304)
@adleong suggested that profile matching should always be anchored
so that users must be explicit about unexpected path components.
This change modifies the Profile client to always build anchored
regular expressions.
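A small example of the difference, assuming the `regex` crate:
```
use regex::Regex;

/// Anchors a user-supplied path pattern at both ends, so the user must be
/// explicit about any extra path components.
fn anchored(pattern: &str) -> Result<Regex, regex::Error> {
    Regex::new(&format!("^{}$", pattern))
}

fn main() -> Result<(), regex::Error> {
    let re = anchored(r"/books/\d+")?;
    assert!(re.is_match("/books/123"));
    // Unanchored, this would also match; anchored, it does not.
    assert!(!re.is_match("/v1/books/123/reviews"));
    Ok(())
}
```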
Route labels are not queryable by tap, nor are they exposed in tap
events.
This change uses the newly-added fields in linkerd/linkerd2-proxy-api#17
to make Tap route-aware.
canonicalize: Drive resolution on a background task
canonicalize::Service::poll_ready may not be called enough to drive
resolution, so a background task must be spawned to watch DNS.
Updates are published to the service over an mpsc, so the task exits
gracefully when the service is dropped.
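The consuming side of that channel might look like the following sketch
(hypothetical names, today's tokio API): the service drains updates without
blocking and keeps the most recent name, and the daemon stops once the
receiver is gone.
```
use std::task::{Context, Poll};

use tokio::sync::mpsc;

/// Hypothetical service state holding the most recently resolved name.
struct Canonicalize {
    updates: mpsc::Receiver<String>,
    canonical: Option<String>,
}

impl Canonicalize {
    /// Called from `poll_ready`: drain any pending DNS updates without
    /// blocking, keeping only the newest one.
    fn poll_update(&mut self, cx: &mut Context<'_>) {
        while let Poll::Ready(Some(name)) = self.updates.poll_recv(cx) {
            self.canonical = Some(name);
        }
        // `Ready(None)` means the daemon exited; the last known name is kept.
    }
}
```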
Previously, as the proxy processed requests, it would:
* obtain the taps mutex ~4x per request to determine whether taps are active; and
* construct an "event" ~4x per request, regardless of whether any taps were
  active.
Furthermore, this relied on fragile caching logic, where the grpc server
manages individual stream states in a Map to determine when all streams have
been completed. And, beyond the complexity of caching, this approach makes it
difficult to expand Tap functionality (for instance, to support tapping of
payloads).
This change entirely rewrites the proxy's Tap logic to (1) prevent the need
to acquire mutexes in the request path, (2) only produce events as needed to
satisfy tap requests, and (3) provide clear (private) API boundaries between
the Tap server and Stack, completely hiding gRPC details from the tap service.
The tap::service module now provides a middleware that is generic over a
way to discover Taps; and the tap::grpc module (previously
control::observe) implements a gRPC service that advertises Taps such that
their lifetimes are managed properly, leveraging RAII instead of hand-rolled
map-based caching.
There is one user-facing change: tap stream IDs are now calculated relative to
the tap server. The base id is assigned from the count of tap requests that have
been made to the proxy; and the stream ID corresponds to an integer on [0, limit).
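A sketch of the RAII idea (registry and tap types are illustrative; the real
change also keeps locking out of the request path):
```
use std::sync::{Arc, Mutex, Weak};

/// Match criteria, event sender, remaining limit, ...
struct Tap;

/// Shared registry of active taps. The request path only ever sees weak
/// references, so a tap vanishes as soon as its gRPC stream drops it.
#[derive(Clone, Default)]
struct Registry(Arc<Mutex<Vec<Weak<Tap>>>>);

impl Registry {
    /// The returned `Arc` is owned by the tap gRPC response stream; dropping
    /// it (client disconnect, limit reached) is the only cleanup needed.
    fn register(&self, tap: Tap) -> Arc<Tap> {
        let tap = Arc::new(tap);
        self.0.lock().unwrap().push(Arc::downgrade(&tap));
        tap
    }

    /// Periodically drop registry entries whose streams have gone away.
    fn gc(&self) {
        self.0.lock().unwrap().retain(|t| t.upgrade().is_some());
    }
}
```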
When developing, it's convenient to use `unimplemented!` as a
placeholder for work that has not yet been done. However, we also use
`unimplemented!` in various tests in stubbed methods; so searching the
project for `unimplemented` produces false positives.
This change replaces these with `unreachable!`, which is functionally
equivalent, but better indicates that the current usage does not reach
these methods and disambiguates usage of `unimplemented!`.
This implements Prometheus reset semantics for counters, in order to
preserve precision when deriving rate of increase.
Wrapping is based on the fact that Prometheus models counters as `f64`
(52-bit mantissa); thus, integer values over 2^53 are not guaranteed to
be correctly exposed.
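A minimal sketch of the wrapping behavior:
```
/// Values above 2^53 cannot be represented exactly by an `f64` (52-bit
/// mantissa), so the counter wraps below that bound. Prometheus interprets
/// the wrap as a counter reset when computing rates.
const MAX_PRECISE: u64 = 1 << 53;

#[derive(Debug, Default)]
struct Counter(u64);

impl Counter {
    fn incr(&mut self, by: u64) {
        self.0 = (self.0 + by) % MAX_PRECISE;
    }

    /// The value exposed on the metrics endpoint.
    fn value(&self) -> f64 {
        self.0 as f64
    }
}
```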
Signed-off-by: Luca Bruno <luca.bruno@coreos.com>
Our test artifacts end up consuming several GB of disk space, largely
due to debug symbols. This can prevent CI from passing, as CI hosts only
have about 9G of real estate.
By disabling debug symbols, we reduce artifact size by >90% (and total
target directory size from 14G to 4G).
There is no telemetry from the controller client currently.
This change adds a new metrics scope (`control_`) that includes HTTP
metrics for the proxy-api client.
When the inbound proxy receives requests, these requests may have
relative `:authority` values like _web:8080_. Because these requests can
come from hosts with a variety of DNS configurations, the inbound proxy
can't make a sufficient guess about the fully qualified name (e.g.
_web.ns.svc.cluster.local._).
In order for the inbound proxy to discover inbound service profiles, we
need to establish some means for the inbound proxy to determine the
"canonical" name of the service for each request.
This change introduces a new `l5d-dst-canonical` header that is set by
the outbound proxy and used by the remote inbound proxy to determine
which profile should be used.
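A sketch of how such a header might be set and read with the `http` crate
(helper names are illustrative):
```
use http::header::HeaderValue;
use http::Request;

/// Header used by the outbound proxy to tell the inbound proxy which fully
/// qualified name its discovery was performed against.
const L5D_DST_CANONICAL: &str = "l5d-dst-canonical";

/// Outbound side: record the canonicalized destination on the request.
fn set_canonical_dst<B>(req: &mut Request<B>, canonical: &str) {
    if let Ok(v) = HeaderValue::from_str(canonical) {
        req.headers_mut().insert(L5D_DST_CANONICAL, v);
    }
}

/// Inbound side: prefer the canonical name, if present, for profile lookup.
fn canonical_dst<B>(req: &Request<B>) -> Option<&str> {
    req.headers()
        .get(L5D_DST_CANONICAL)
        .and_then(|v| v.to_str().ok())
}
```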
The outbound proxy determines the canonical destination by performing
DNS resolution as requests are routed and uses this name for profile and
address discovery. This change removes the proxy's hardcoded Kubernetes
dependency.
The `LINKERD2_PROXY_DESTINATION_GET_SUFFIXES` and
`LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES` environment variables
control which domains may be discovered via the destination service.
Finally, HTTP settings detection has been moved into a dedicated routing
layer at the "bottom" of the stack. This is done so that
canonicalization and discovery need not be done redundantly for each set
of HTTP settings. Now, HTTP settings only configure the HTTP client
stack within an endpoint.
Fixes linkerd/linkerd2#1798
Since these stack pieces will never error, we can mark their
`Error`s with a type that can "never" be created. When seeing an `Error
= ()`, it can either mean that the error never happens, or that the detailed
error is dealt with elsewhere and only a unit is passed on. When seeing
`Error = Never`, it is clearer that the error case never happens.
Besides helping humans, LLVM can also remove the error branches entirely.
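A `Never`-style type is just an uninhabited enum; a minimal sketch:
```
use std::{error, fmt};

/// An error type that cannot be constructed: a `Service` whose
/// `Error = Never` signals, in the type system, that it never fails.
#[derive(Debug)]
pub enum Never {}

impl fmt::Display for Never {
    fn fmt(&self, _: &mut fmt::Formatter<'_>) -> fmt::Result {
        // There are no values of this type, so this is never called.
        match *self {}
    }
}

impl error::Error for Never {}
```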
Signed-off-by: Sean McArthur <sean@buoyant.io>
The router's `Recognize` trait is now essentially a function.
This change provides an implementation of `Recognize` over a `Fn` so
that it's possible to implement routers without defining zero-sized marker
types that implement `Recognize`.
The `linkerd2_stack::Either` type is used to implement Layer, Stack, and
Service for alternate underlying implementations. However, the Service
implementation requires that both inner services emit the same type of
Error.
In order to allow the underlying types to emit different errors, this
change uses `Either` to wrap the underlying errors, and implements
`Error` for `Either`.
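A sketch of the error-wrapping idea (the real `Either` also implements
`Layer`, `Stack`, and `Service`):
```
use std::{error, fmt};

/// Wraps the errors of two alternate service implementations so that a
/// stack can use either one while still exposing a single error type.
#[derive(Debug)]
pub enum Either<A, B> {
    A(A),
    B(B),
}

impl<A: fmt::Display, B: fmt::Display> fmt::Display for Either<A, B> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Either::A(a) => a.fmt(f),
            Either::B(b) => b.fmt(f),
        }
    }
}

impl<A: error::Error, B: error::Error> error::Error for Either<A, B> {}
```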
It was possible for a metrics scope to be deregistered for active
routes. This could cause metrics to disappear and never be recorded in
some situations.
This change ensures that metrics are only evicted for scopes that are not
active (i.e. in a router, load balancer, etc).
With the introduction of profile-based classification, the proxy would
not perform normal gRPC classification in some cases when it could &
should.
This change simplifies our default classifier logic and falls back to
the default grpc-aware behavior whenever another classification cannot
be performed.
Furthermore, this change moves the `proxy::http::classify` module to
`proxy::http::metrics::classify`, as these modules should only be relied
on for metrics classification. Other modules (for instance, retries)
should provide their own abstractions.
Finally, this change fixes a test error-formatting issue.
Currently, the proxy uses a variety of types to represent the logical
destination of a request. Outbound destinations use a `NameAddr` type
which may be either a `DnsNameAndPort` or a `SocketAddr`. Other parts of
the code used a `HostAndPort` enum that always contained a port and also
contained a `Host` which could either be a `dns::Name` or an `IpAddr`.
Furthermore, we coerce these types into a `http::uri::Authority` in many
cases.
All of these types represent the same thing; and it's not clear when/why
it's appropriate to use a given variant.
In order to simplify the situation, a new `addr` module has been
introduced with `Addr` and `NameAddr` types. An `Addr` may
contain either a `NameAddr` or a `SocketAddr`.
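Roughly (the real types live in the new `addr` module and use a proper
`dns::Name` rather than a plain `String`):
```
use std::net::SocketAddr;

/// A DNS name paired with a port.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct NameAddr {
    name: String,
    port: u16,
}

/// A logical destination: either a named address or a socket address.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum Addr {
    Name(NameAddr),
    Socket(SocketAddr),
}

impl Addr {
    pub fn port(&self) -> u16 {
        match self {
            Addr::Name(n) => n.port,
            Addr::Socket(sa) => sa.port(),
        }
    }
}
```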
The `Host` value has been removed from the `Settings::Http1` type,
replaced by a boolean, as it's redundant information stored elsewhere in
the route key.
There is one small change in behavior: The `authority` metrics label is
now omitted only for requests that include an `:authority` or `Host`
with a _name_ (i.e., not an IP address).
The Destination Profile API (provided by linkerd2-proxy-api v0.1.3)
allows the proxy to discover route information for an HTTP service. As
the proxy processes outbound requests, in addition to doing address
resolution through the Destination service, the proxy may also discover
profiles including route patterns and labels.
When the proxy has route information for a destination, it applies the
RequestMatch for each route to find the first-matching route. The
route's labels are used to expose `route_`-prefixed HTTP metrics (and
each label is prefixed with `rt_`).
Furthermore, if a route includes ResponseMatches, they are used to
perform classification (i.e. for the `response_total` and
`route_response_total` metrics).
A new `proxy::http::profiles` module implements a router that consumes
routes from an infinite stream of route lists.
The `app::profiles` module implements a client that continually and
repeatedly tries to establish a watch for the destination's routes (with
some backoff).
Route discovery does not _block_ routing; that is, the first request to
a destination will likely be processed before the route information is
retrieved from the controller (i.e. on the default route). Route
configuration is applied in a best-effort fashion.
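A simplified sketch of first-match route selection (the real `RequestMatch`
supports paths, methods, etc.; here it is just a path prefix):
```
use http::Request;

/// Simplified stand-in for the API's `RequestMatch`.
struct RequestMatch {
    path_prefix: String,
}

struct Route {
    matches: Vec<RequestMatch>,
    labels: Vec<(String, String)>,
}

/// Returns the first route whose `RequestMatch` applies to the request, if
/// any; otherwise the caller falls back to the default route.
fn select_route<'r, B>(routes: &'r [Route], req: &Request<B>) -> Option<&'r Route> {
    routes.iter().find(|route| {
        route
            .matches
            .iter()
            .any(|m| req.uri().path().starts_with(&m.path_prefix))
    })
}
```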
As described in https://github.com/linkerd/linkerd2/issues/1832, our eager
classification is too complicated.
This changes the `classification` label to only be used on the `response_total` metric.
The following changes have been made:
1. `response_latency` metrics only include a `status_code` label and not a classification.
2. `response_total` metrics include classification labels.
3. Transport metrics no longer expose a `classification` label (since it was misleading);
   the `errno` label is now set to empty when there is no error.
4. Only gRPC classification applies when the request's content type starts
   with `application/grpc+`.
The `proxy::http::classify` APIs have been changed so that classifiers cannot
return a classification before the classifier is fully consumed.
The controller's client is instantiated in the
`control::destination::background` module and is tightly coupled to its
use for address resolution.
In order to share this client across different modules---and to bring it
into line with the rest of the proxy's modular layout---the controller
client is now configured and instantiated in `app::main`. The
`app::control` module includes additional stack modules needed to
configure this client.
Our dependency on tower-buffer has been updated so that buffered
services may be cloned.
The `proxy::reconnect` module has been extended to support a
configurable fixed reconnect backoff; and this backoff delay has been
made configurable via the environment.
When a gRPC service fails a request eagerly, before it begins sending a
response, a `grpc-status` header is simply added to the initial response
headers (rather than to the trailers).
This change ensures that classification honors these status codes.
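A sketch of the header check (status `0` is a success; anything else is a
failure):
```
use http::Response;

/// gRPC failures that occur before any payload is written carry `grpc-status`
/// in the response headers rather than in the trailers, so classification
/// must look in both places.
fn grpc_status_in_headers<B>(rsp: &Response<B>) -> Option<u32> {
    rsp.headers()
        .get("grpc-status")
        .and_then(|v| v.to_str().ok())
        .and_then(|s| s.parse().ok())
}
```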
Fixes linkerd/linkerd2#1819
Previously, stacks were built with `Layer::and_then`. This pattern
severely impacts compile-times as stack complexity grows.
In order to ameliorate this, `app::main` has been changed to build
stacks from the "bottom" (endpoint client) to "top" (serverside
connection) by _push_-ing Layers onto a concrete stack, rather than
composing layers for an abstract stack.
While doing this, we take the opportunity to remove a ton of
now-unnecessary `PhantomData`. A new, dedicated `phantom_data` stack
module can be used to aid type inference as needed.
Other stack utilities like `map_target` and `map_err` have been
introduced to assist this transition.
Furthermore, all instances of `Layer::new` have been changed to a free
`fn layer` to improve readability.
This change sets up two upcoming changes: a stack-oriented `controller`
client and, subsequently, service-profile-based routing.
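A sketch of the push-based construction (trait and type names are
illustrative):
```
/// A layer wraps an inner service type `S` in some middleware.
trait Layer<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

/// A stack is just a concrete value; layers are pushed onto it one at a
/// time, from the endpoint client at the bottom to the server-side
/// connection at the top.
struct Stack<S>(S);

impl<S> Stack<S> {
    fn push<L: Layer<S>>(self, layer: L) -> Stack<L::Service> {
        Stack(layer.layer(self.0))
    }

    fn into_inner(self) -> S {
        self.0
    }
}
```
With this shape, `app::main` reads bottom-up, e.g. something like
`Stack(client).push(a::layer()).push(b::layer())`, and every intermediate
type stays concrete.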
* Prepare HTTP metrics for per-route classification
In order to support Service Profiles, the proxy will add a new scope of
HTTP metrics prefixed with `route_`, i.e. so that the proxy exposes
`request_total` and `route_request_total` independently.
Furthermore, the proxy must be able to use different
response-classification logic for each route, and this classification
logic should apply to both metrics scopes.
This alters the `proxy::http::metrics` module so that:
1. HTTP metrics may be scoped with a prefix (as the stack is described).
2. The HTTP metrics layer now discovers the classifier by trying to
extract it from each request's extensions, falling back to a `Default`
implementation. Only a default implementation is used presently.
3. It was too easy to use the `Classify` trait API incorrectly;
non-default classify implementations could cause a runtime panic!
The API has been changed so that the type system ensures correct
usage.
4. The HTTP classifier must be configurable per-request. In order to do
this, we expect a higher stack layer will add response classifiers to
request extensions when appropriate (i.e., in a follow-up).
Finally, the `telemetry::Report` type required updating every time a new
set of metrics was added; we don't need a struct to represent this.
`FmtMetrics::and_then` has been added as a combinator so that a fixed
type is not necessary.
The `proxy::http::balance` module uses the `proxy::resolve::Resolve`
trait to implement a `Discover`.
This coupling between the balance and resolve modules prevents
integrating the destination profile API such that there is a per-route,
per-endpoint stack.
This change makes the `balance` stack generic over a stack that produces
a `Discover`. The `resolve` module now implements a stack that produces
a `Discover` and is generic over a per-endpoint stack.
The control client implements a backoff service that dampens reconnect
attempts to the control plane by waiting a fixed period of time after a
failure.
Furthermore, the control client logs errors each time a reconnect
attempt fails.
This change moves backoff logic from
control::destination::background::client to proxy::reconnect.
Because the reconnect module handles connection errors uniformly, muting
repeated errors, it also has enough context to know when a backoff
should be applied -- when the underlying NewService cannot produce a
Service.
If polling the inner service fails once the Service has been
established, we do not want to apply a backoff, since this may
just be the result of a connection being terminated, a process being
restarted, etc.
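A sketch of that policy (today's tokio API; the proxy's actual implementation
lives in `proxy::reconnect`):
```
use std::future::Future;
use std::time::Duration;
use tokio::time::sleep;

/// A fixed backoff is applied only when the connect step (the `NewService`)
/// fails; errors from an already-established service trigger an immediate
/// reconnect, since they may just reflect a peer restarting.
async fn reconnect_loop<C, CF, Svc, D, DF, E>(mut connect: C, mut drive: D, backoff: Duration)
where
    C: FnMut() -> CF,
    CF: Future<Output = Result<Svc, E>>,
    D: FnMut(Svc) -> DF,
    DF: Future<Output = Result<(), E>>,
{
    loop {
        match connect().await {
            // Established: run the service until it errors, then reconnect
            // right away (no backoff).
            Ok(svc) => {
                let _ = drive(svc).await;
            }
            // Could not produce a service at all: dampen the retry.
            Err(_) => sleep(backoff).await,
        }
    }
}
```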
The TLS-configuration-watching logic in `app::outbound::tls_config` need
not be specific to the outbound types, or even to TLS configuration.
Instead, this change extends the `watch` stack module with a Stack type
that can satisfy the TLS use case independently of the concrete types at
play.