This PR removes the `Arc`s from the various label types in the proxy's
`metrics` modules. This should make the write side of the metrics code
much more efficient (and makes the code much simpler! :D).
This change was particularly easy to implement for the TCP `TransportLabels`
and `TransportCloseLabels`, which consisted of only `struct`s and `enum`s,
and could easily be changed to derive `Copy`.
For protocol-level `RequestLabels`, the request's authority was a `String`,
which still needs to be reference-counted, as the overhead of cloning `String`s
is almost certainly worse than that added by ref-counting. However, rather than
adding an additional `Arc<str>`, I changed `RequestLabels` to store the
authority as a `http::uri::Authority`, which is backed by a `ByteStr` and thus
already ref-counted. Now, when constructing `RequestLabels`, we just take
another reference to the `Authority` already stored in the request context.
Since `Authority` implements `fmt::Display` already, formatting the labels
still works.
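For illustration, the shape of the change looks roughly like this (a minimal sketch; the field set and `Display` output here are hypothetical, not the proxy's exact definitions):
```
use http::uri::Authority;

// `Authority` is backed by a ref-counted `ByteStr`, so cloning it only bumps
// a reference count rather than copying the string data.
#[derive(Clone, Debug)]
struct RequestLabels {
    authority: Authority,
    // ... the other request labels ...
}

impl std::fmt::Display for RequestLabels {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        // `Authority` implements `Display`, so label formatting is unchanged.
        write!(f, "authority=\"{}\"", self.authority)
    }
}
```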
`ResponseLabels` already stores the `DstLabels` string in an `Arc`, so no
additional changes were necessary there. By removing the outer `Arc` around
`ResponseLabels`, we now only have to ref-count the portion of the label type
that would actually be inefficient to clone.
@olix0r ran the benchmarks from #874 against this branch, and it seems to be
a small but noticeable improvement.
after:
```
test record_many_dsts ... bench: 151,076 ns/iter (+/- 182,151)
test record_one_conn_request ... bench: 1,599 ns/iter (+/- 209)
test record_response_end ... bench: 676 ns/iter (+/- 144)
```
before:
```
test record_many_dsts ... bench: 158,403 ns/iter (+/- 130,241)
test record_one_conn_request ... bench: 1,823 ns/iter (+/- 1,408)
test record_response_end ... bench: 547 ns/iter (+/- 70)
```
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Before changing the telemetry implementation, we should have a means to
understand the impacts of such changes.
To run, you must use a nightly toolchain:
```
rustup run nightly cargo bench -p conduit-proxy -- record
```
This PR adds the unit tests for the proxy metrics module's Histogram
implementation that I wrote in #775 to @olix0r's Histogram implementation
added in #868. The tests weren't too difficult to adapt for the new code,
and everything seems to work correctly!
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
In order to support histograms measured in, for instance, microseconds,
`Histogram` should blindly store integers without being aware of the unit.
In order to accomplish this, we make `Histogram` generic over a `V:
Into<u64>`, such that all values added to the histogram must be of type
`V`.
In doing this, we also make the histogram buckets configurable, though
we maintain the same defaults used for latency values.
The `Histogram` type has been moved to a new module, and the `Bucket`
and `Bounds` helper types have been introduced to help make histogram
logic clearer and latency-agnostic.
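A minimal sketch of the resulting design (illustrative names, not the exact `Bucket`/`Bounds` API):
```
use std::marker::PhantomData;

// A unit-agnostic histogram: any value convertible to `u64` can be recorded,
// and the bucket bounds are plain integers with no implied unit.
struct Histogram<V: Into<u64>> {
    bounds: &'static [u64], // upper bound of each bucket
    counts: Vec<u64>,       // one counter per bucket, plus a +Inf bucket
    sum: u64,
    _marker: PhantomData<V>,
}

impl<V: Into<u64>> Histogram<V> {
    fn new(bounds: &'static [u64]) -> Self {
        Histogram {
            bounds,
            counts: vec![0; bounds.len() + 1],
            sum: 0,
            _marker: PhantomData,
        }
    }

    fn add(&mut self, v: V) {
        let v = v.into();
        let idx = self
            .bounds
            .iter()
            .position(|&b| v <= b)
            .unwrap_or(self.bounds.len()); // past the last bound: +Inf bucket
        self.counts[idx] += 1;
        self.sum = self.sum.wrapping_add(v);
    }
}
```
A latency histogram is then just a `Histogram<V>` where `V` is a latency type implementing `Into<u64>`, constructed with the default latency bounds.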
In case there are any errors while peeking the connection to do protocol
detection, the sensors will now be in place to detect them. Besides just
errors, this will also allow reporting about connections that are
accepted, but then immediately closed.
Additionally:
- add a write_buf implementation for the Transport sensor, which can help
performance for http1/http2
- add better logs for TCP connection errors
- add printlns for when tests fail
Signed-off-by: Sean McArthur <sean@seanmonstar.com>
In preparation for a larger metrics refactor, this change splits the
Counter and Gauge types into their own modules.
Furthermore, this makes a minor change to these types: incr() and
decr() no longer return `self`. We were not actually ever using the
returned self references, and I find the unit return type to more
obviously indicate the side-effecty-ness of these calls. #smpfy
Previously, the proxy exposed separate _accept_ and _connect_ metrics
for some metric types, but not for all. This leads to confusing
aggregations, particularly for read and write totals.
This change primarily introduces the `peer` prometheus label (with
possible values _src_ or _dst_) to indicate which side of the proxy the
metric reflects.
Additionally, the `received_bytes` and `sent_bytes` metrics have been
renamed as `tcp_read_bytes_total` and `tcp_write_bytes_total`,
respectively. This more naturally fits into existing idioms. Stream
classification is not applied to these metrics, as we plan to increment
them throughout stream lifetime and not only on close.
The `tcp_connections_open` metric has also been renamed to
`tcp_open_connections` to reflect Prometheus idioms.
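Under the new scheme, exported lines look something like this (made-up label sets and values, shown only to illustrate the `peer` label and the renamed metrics):
```
tcp_open_connections{direction="inbound",peer="src"} 2
tcp_read_bytes_total{direction="inbound",peer="src"} 1024
tcp_write_bytes_total{direction="inbound",peer="dst"} 4096
```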
Finally, `msg1` and `msg2` have been constified in telemetry test
fixtures so that tests are somewhat easier to read.
trust-dns-resolver is a more complete implementation. In particular,
it supports CNAMEs correctly, which is needed for PR #764. It also
supports /etc/hosts, which will help with issue #62.
Use the 0.8.2 pre-release since it hasn't been released yet. It was
created at our request.
Signed-off-by: Brian Smith <brian@briansmith.org>
Fixes #846
The proxy `metrics_compression` test contained an assertion that a compressed scrape contained the `request_duration_ms_count` metric. This was chosen completely arbitrarily, and was only intended as an assertion that metrics were updated between compressed scrapes. Unfortunately, that metric was removed in d9112abc93, so when #665 merged to master, this test broke. CI didn't catch this since we don't build merges for PRs --- we should probably (re)enable this in Travis?
This PR fixes the test to assert on a metric that wasn't removed. Sorry for the ❌s!
Closes #598.
According to the Prometheus documentation, metrics export endpoints should support serving metrics compressed using GZIP. I've modified the proxy's `/metrics` endpoint to serve metrics compressed with GZIP when an `Accept-Encoding: gzip` request header is sent.
I've also added a new unit test that attempts to get the proxy's metrics endpoint as GZIP, and asserts that the metrics are decompressed successfully.
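The negotiation itself is small; a minimal sketch of the idea (using the `flate2` crate for illustration; not necessarily the crate or code the proxy uses):
```
use std::io::Write;

use flate2::{write::GzEncoder, Compression};
use http::{header, Response};

// Serve the rendered metrics gzipped only when the client advertises
// support via `Accept-Encoding`.
fn metrics_response(accept_encoding: Option<&str>, body: String) -> Response<Vec<u8>> {
    let gzip_ok = accept_encoding
        .map(|v| v.contains("gzip"))
        .unwrap_or(false);
    if !gzip_ok {
        return Response::builder().body(body.into_bytes()).unwrap();
    }
    let mut gz = GzEncoder::new(Vec::new(), Compression::default());
    gz.write_all(body.as_bytes()).expect("gzip write");
    Response::builder()
        .header(header::CONTENT_ENCODING, "gzip")
        .body(gz.finish().expect("gzip finish"))
        .unwrap()
}
```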
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The `controller` part of the proxy will now use a default, removing the
need to pass the exact same `controller::new().run()` in every test
case.
The TCP server and client will include their socket addresses in some
panics.
Signed-off-by: Sean McArthur <sean@seanmonstar.com>
This PR removes the unused `request_duration_ms` and `response_duration_ms` histogram metrics from the proxy. It also removes them from the `simulate-proxy` script's output, and from `docs/proxy-metrics.md`.
Closes #821
Fixes #831.
Proxy metrics tests `transport::inbound_tcp_accept` and `transport::inbound_tcp_duration` are known to be flaky and should be ignored on CI. Note that the outbound versions of these tests were already marked as flaky, so this was almost certainly either an oversight or the result of an incorrect merge.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The refactoring of how metrics are formatted in 674ce87588 inadvertently introduced a bug that caused the `process_start_time_seconds` metric to be formatted as just a number without the metric name. This causes Prometheus to fail with a parse error rather than accepting the metrics.
I've fixed this issue, and added a unit test to detect regressions in the future.
This PR adds a `classification` label to transport level metrics collected on transport close. Like the `classification` label on HTTP response metrics, the value may be either `"success"` or `"failure"`. The label value is determined based on the `clean` field on the `TransportClose` event, which indicates whether a transport closed cleanly or due to an error.
I've updated the tests for transport-level metrics to reflect the addition of the new label. I'd like to also modify the test support code to allow us to close transports with errors, in order to test that the errors are correctly classified as failures.
Now, the tap server may specify that requests should be matched by destination
label.
For example, if the controller's Destination service returns the labels:
`{"service": "users", "namespace": "prod"}` for an endpoint, then tap would be
able to specify a match like `namespace=prod` to match requests destined to
that namespace.
This branch adds all the transport-level Prometheus metrics as described in #742, with the exception of the `tcp_connections_open` gauge (to be added in a subsequent branch).
A brief description of the metrics added in this branch:
* `tcp_accept_open_total`: counter of the number of connections accepted by the proxy
* `tcp_accept_close_total`: counter of the number of accepted connections that have closed
* `tcp_connect_open_total`: counter of the number of connections opened by the proxy
* `tcp_connect_close_total`: counter of the number of connections opened by the proxy that have been closed.
* `tcp_connection_duration_ms`: histogram of the total duration of each TCP connection (incremented on connection close)
* `sent_bytes`: counter of the total number of bytes sent on TCP connections (incremented on connection close)
* `received_bytes`: counter of the total number of bytes received on TCP connections (incremented on connection close)
These metrics are labeled with the direction (inbound or outbound) and whether the connection was proxied as raw TCP or corresponds to an HTTP request.
Additionally, I've added several proxy tests for these metrics. Note that there are some cases which are currently untested; in particular, while there are tests for the `tcp_accept_close_total` counter, it's more difficult to test the `tcp_connect_close_total` counter, due to connection pooling. I'd like to improve the tests for this code in additional branches.
The Tap API supports key-value labels on endpoint metadata. The proxy was not
setting these labels previously.
In order to add these labels onto tap events, we store the original set of
labels in an `Arc<HashMap>` on `DstLabels`. When tap events are emitted, the
destination's labels are copied from the `DstLabels` into each event.
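A sketch of the storage described above (field names illustrative, not the proxy's exact definition):
```
use std::collections::HashMap;
use std::sync::Arc;

// Keep both a pre-rendered form for Prometheus and the original key-value
// map for tap events; cloning either is just a reference-count bump.
#[derive(Clone, Debug)]
struct DstLabels {
    formatted: Arc<str>,
    original: Arc<HashMap<String, String>>,
}

impl DstLabels {
    fn new(labels: HashMap<String, String>) -> Self {
        let formatted = labels
            .iter()
            .map(|(k, v)| format!("dst_{}=\"{}\"", k, v))
            .collect::<Vec<_>>()
            .join(",");
        DstLabels {
            formatted: formatted.into(),
            original: Arc::new(labels),
        }
    }
}
```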
The `Labeled` middleware is used to add `DstLabels` to each request. Now that
each client context maintains a watch on its endpoint's `DstLabels`, the
`Labeled` middleware can safely be removed.
This has one subtle behavior change: labels are associated with requests
_lazily_, whereas before they were determined _eagerly_. This means that if an
endpoint's labels are updated before the telemetry system captures the labels
for the request, it may use the newer labels. Previously, it would only use the
labels at the time that the request originated.
Currently, only the request context holds destination labels. However,
destination labels are more accurately associated with the client context,
since the client context is what tracks the remote peer address (and
destination labels are associated with this address).
No functional changes.
Building on #796, this creates a new `Endpoint` type that wraps `SocketAddr`.
Still, no functional change has been introduced, but this sets up to move
destination labels into the bind stack directly (by adding the labels watch to
the `Endpoint` type).
Currently, the mock controller, which is used in tests, takes all of its
updates a priori, which makes it hard to control when an update occurs within a
test.
Now, the controller exposes a `DstSender`, which wraps an unbounded channel of
destination updates. This allows tests to trigger updates at a specific point
in the test.
In order to accomplish this, the controller's hand-rolled gRPC server
implementation has been discarded in favor of a real gRPC destination service.
This requires that the `controller-grpc` project now builds both clients
and servers for the destination service. Additionally, we now build a tap
client as well (assuming that we'll want to write tests against our tap
server).
Previously, `Bind` required that it bind to `SocketAddr` (and `SocketAddr`
only). This makes it hard to pass additional information from service discovery
into the client's stack.
To resolve this, `Bind` now has an additional `Endpoint` trait-generic type,
and `Bind::bind` accepts an `Endpoint` rather than a `SocketAddr`.
No additional endpoints have been introduced yet. There are no functional
changes in this refactor.
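The shape of the refactor, heavily simplified (the real `Bind` carries far more context; names here are illustrative):
```
use std::marker::PhantomData;
use std::net::SocketAddr;

// Anything that can tell us where to connect can now act as an endpoint,
// which leaves room to attach discovery metadata later.
trait Endpoint {
    fn address(&self) -> SocketAddr;
}

impl Endpoint for SocketAddr {
    fn address(&self) -> SocketAddr {
        *self
    }
}

struct Bind<E: Endpoint> {
    _marker: PhantomData<E>,
}

impl<E: Endpoint> Bind<E> {
    fn bind(&self, endpoint: &E) {
        let _addr = endpoint.address();
        // ... build the client service stack for this endpoint ...
    }
}
```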
This changes the public api to have a new rpc type, `TapByResource`.
This api supersedes the Tap api. `TapByResource` is richer, more closely
reflecting the proxy's capabilities.
The proxy's Tap api is extended to select over destination labels,
corresponding with those returned by the Destination api.
Now both `Tap` and `TapByResource`'s responses may include destination
labels.
This change avoids breaking backwards compatibility by:
* introducing the new `TapByResource` rpc type, opting not to change Tap
* extending the proxy's Match type with a new, optional, `destination_label` field.
* `TapEvent` is extended with a new, optional, `destination_meta`.
Currently, the request open timestamp, which is used for calculating latency, is captured in the `sensor::http::Http` middleware. However, the sensor middleware is placed fairly low in the stack, below some of the proxy's components that can add measurable latency (e.g. the router).
This PR moves the request_open timestamp out of the `Http` middleware and into a new `TimestampRequestOpen` middleware, which is installed at the top of the stack (before the router). The `TimestampRequestOpen` middleware adds the timestamp as a request extension, so that it can later be consumed by the `Http` sensor to generate the request stats.
By moving the timestamping to the top of the stack, the timestamp should more accurately cover the overhead of the proxy, but a majority of the telemetry work can still be done where it was previously.
I'd like to have included unit tests for this change, but since the expected improvement is in the accuracy of latency measurements, there's no easy way to test this programmatically.
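For illustration, the core of the idea is just stamping and later reading a request extension (a sketch with hypothetical names, not the middleware itself):
```
use std::time::Instant;

use http::Request;

#[derive(Copy, Clone, Debug)]
struct RequestOpen(Instant);

// At the top of the stack: record when the request was first seen.
fn timestamp_request_open<B>(mut req: Request<B>) -> Request<B> {
    req.extensions_mut().insert(RequestOpen(Instant::now()));
    req
}

// In the `Http` sensor, further down the stack: recover the timestamp.
fn request_open<B>(req: &Request<B>) -> Option<Instant> {
    req.extensions().get::<RequestOpen>().map(|t| t.0)
}
```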
This is a fairly minor refactor to the proxy telemetry tests. b07b554d2b added a `Fixture` in the Destination service labeling tests added in #661 to reduce the repetition of copied and pasted code in those tests. I've refactored most of the other telemetry tests to also use the test fixture. Significantly less code is copied and pasted now.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The proxy `telemetry::metrics::prometheus` module was initially added in order to give the Prometheus metrics export code a separate namespace from the controller push metrics. Since the controller push metrics code was removed from the proxy in #616, we no longer need a separate module for the Prometheus-specific metrics code. Therefore, I've moved that code to the root `telemetry::metrics` module, which should hopefully make the proxy source tree structure a little simpler.
This is a fairly trivial refactor.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Closes #713. This is a follow-up from #688.
This PR makes a number of refactorings to the proxy's `control::Cache` module and removes all but one of the `clone` calls.
The `CacheChange` enum now contains the changed key and a reference to the changed value when applicable. This simplifies `on_change` functions, which no longer have to take both a tuple of `(K, V)` and a `CacheChange` and can now simply destructure the `CacheChange`, and since the changed value is passed as a reference, the `on_change` function can now decide whether or not it should be cloned. This means that we can remove a majority of the clones previously present here.
I've also rewritten `Cache::update_union` so that it no longer clones values (twice, if the cache was invalidated). There's still one `clone` call in `Cache::update_intersection`, but it seems like it will be fairly tricky to remove. However, I've moved the `V: Clone` bound to that function specifically, and changed `Cache::clear` and `Cache::update_union` so that they no longer call `Cache::update_intersection` internally, so they don't need a `V: Clone` bound.
In addition, I've added some unit tests that test that `on_change` is called with the correct `CacheChange`s when key/value pairs are modified.
This reverts commit d38a2acff8.
The change being reverted here did reduce downloads that occur when
Cargo.lock is updated. However, it had the unwanted side-effect of
invalidating at least part of the Cargo download cache when other
files, including in particular files under proto/, were modified.
Signed-off-by: Brian Smith <brian@briansmith.org>
Reduce the dependencies on files under proto/ to eliminate Docker
detecting false dependencies that trigger rebuilds.
Signed-off-by: Brian Smith <brian@briansmith.org>
The tests for label metadata updates from the control plane are flaky on CI. This is likely due to the CI containers not having enough cores to execute the test proxy thread, the test proxy's controller client thread, the mock controller thread, and the test server thread simultaneously --- see #751 for more information.
For now, I'm ignoring these on CI. Eventually, I'd like to change the mock controller code in test support so that we can trigger it to send a second metadata update only after the request has finished.
I think this issue also makes merging #738 a higher priority, so that we can still have some tests running on CI that exercise some part of the label update behaviour.
PR #654 adds pod-based metric labels to the Destination API responses for cluster-local services.
This PR modifies the proxy to actually add these labels to reported Prometheus metrics for outbound requests to local services.
It enhances the proxy's `control::discovery` module to track these labels and add a `LabelRequest` middleware to the service stack built in `Bind` for labeled services. Requests transiting `LabelRequest` are given an `Extension` which contains these labels, which are then added to events produced by the `Sensors` for these requests. When these events are aggregated to Prometheus metrics, the labels are added.
I've also added some tests in `test/telemetry.rs` ensuring that these metrics are added correctly when the Destination service provides labels.
Closes #660
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
- The listener is immediately closed on receipt of a shutdown signal.
- All in-progress server connections are now counted, and the process will
not shut down until the connection count has dropped to zero.
- In the case of HTTP1, idle connections are closed. In the case of HTTP2,
the HTTP2 graceful shutdown steps are followed, sending the various
GOAWAYs.
Previously, when the proxy could tell, by parsing, that the request-target
was not in the cluster, it would not override the destination. That is,
load balancing would be disabled for such destinations.
With this change, the proxy will do L7 load balancing for all HTTP
services as long as the request-target has a DNS name.
Signed-off-by: Brian Smith <brian@briansmith.org>
No change in behavior is intended here.
Split poll_destination() into two parts, one that operates locally
on the DestinationSet, and the other that operates on data that isn't
wholly local to the DestinationSet. This makes the code easier to
understand. This is being done in preparation for adding DNS fallback
polling to poll_destination().
Signed-off-by: Brian Smith <brian@briansmith.org>
Only the destination service needs normalized names (and even then,
that's just temporary). The rest of the code needs the name as it was
given, except case-normalized (lowercased). Because DNS fallback isn't
implemented in service discovery yet, Outbound still has a temporary
workaround using FullyQualifiedName to keep things working; that will
be removed once DNS fallback is implemented in service discovery.
Signed-off-by: Brian Smith <brian@briansmith.org>
This PR changes the proxy's `control::Cache` module from a set to a key-value map.
This change is made in order to use the values in the map to store metadata from the Destination API, but allow evictions and insertions to be based only on the `SocketAddr` of the destination entry. This will make code in PR #661 much simpler, by removing the need to wrap `SocketAddr`s in the cache in a `Labeled` struct for storing metadata, and the need for custom `Borrow` implementations on that type.
Furthermore, I've changed from using a standard library `HashSet`/`HashMap` as the underlying collection to using `IndexMap`, as we suspect that this will result in performance improvements.
Currently, as `master` has no additional metadata to associate with cache entries, the type of the values in the map is `()`. When #661 merges, the values will actually contain metadata.
If we suspect that there are many other use-cases for `control::Cache` where it will be treated as a set rather than a map, we may want to provide a separate set of impls for `Cache<T, ()>` (like `std::HashSet`) to make the API more ergonomic in this case.
This PR adds the pretty-printing for durations I added in #676 to the panic message from the `assert_eventually!` macro added in #669.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This branch adds simple pretty-printing for durations in log timeout messages. If the duration is >= 1 second, it's printed in seconds with a fractional part. If the duration is less than 1 second, it is printed in milliseconds. This simple formatting may not be sufficient as a formatting rule for all cases, but should be sufficient for printing our relatively small timeouts.
Log messages now look something like this:
```
ERROR 2018-04-04T20:05:49Z: conduit_proxy: turning operation timed out after 100 ms into 500
```
Previously, they looked like this:
```
ERROR 2018-04-04T20:07:26Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
```
I made this change partially because I wanted to make the panics from the `assert_eventually!` macro added in #669 more readable.
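A sketch of the formatting rule described above (illustrative, not necessarily the exact code):
```
use std::fmt;
use std::time::Duration;

struct HumanDuration(Duration);

impl fmt::Display for HumanDuration {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let secs = self.0.as_secs();
        let millis = self.0.subsec_millis();
        if secs > 0 {
            // >= 1 second: print seconds with a fractional part.
            write!(f, "{}.{:03} s", secs, millis)
        } else {
            // < 1 second: print milliseconds, e.g. "100 ms".
            write!(f, "{} ms", millis)
        }
    }
}
```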
The proxy's `telemetry/metrics/prometheus.rs` file was starting to get long and hard to find one's way around in. I split the prometheus labels code out into a separate submodule and made `RequestLabels` and `ResponseLabels` public. This seems like a reasonable division of the code, and the resultant files are much easier to read.
The proxy's control::discovery module is becoming a bit dense in terms
of what it implements.
In order to make this code more understandable, and to be able to use a
similar caching strategy in other parts of the controller, the
`control::cache` module now holds discovery's cache implementation.
This module is only visible within the `control` module, and it now
exposes two new public methods: `values()` and
`set_reset_on_next_modification()`.
* Extracted logic from destination server
* Make tests follow style used elsewhere in the code
* Extract single interface for resolvers
* Add tests for k8s and ipv4 resolvers
* Fix small usability issues
* Update dep
* Act on feedback
* Add pod-based metric_labels to destinations response
* Add documentation on running control plane to BUILD.md
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Fix mock controller in proxy tests (#656)
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Address review feedback
* Rename files in the destination package
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
- Adds environment variables to configure a set of ports that, when an
incoming connection has an SO_ORIGINAL_DST with a matching port, will
disable protocol detection for that connection and immediately start a
TCP proxy.
- Adds a default list of well known ports: SMTP and MySQL.
Closes #339
Previously, when the proxy was disconnected from the Destination
service and then reconnected, the proxy would not forget old, outdated
entries in its cache of endpoints. If those endpoints had been removed
while the proxy was disconnected then the proxy would never become
aware of that.
Instead, on the first message after a reconnection, replace the entire
set of cached entries with the new set, which may be empty.
Prior to this change, the new test
outbound_destinations_reset_on_reconnect_followed_by_no_endpoints_exists
already passed, but
outbound_destinations_reset_on_reconnect_followed_by_add_none
and outbound_destinations_reset_on_reconnect_followed_by_remove_none
failed. Now all of these tests pass.
Fixes #573
Signed-off-by: Brian Smith <brian@briansmith.org>
* Proxy: Factor out Destination service connection logic
Centralize the connection initiation logic for the Destination service
to make it easier to maintain. Clarify that the `rx` field isn't needed
prior to a (re)connect.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Rename `rx` to `query`.
Signed-off-by: Brian Smith <brian@briansmith.org>
* "recoonect" -> "reconnect"
Signed-off-by: Brian Smith <brian@briansmith.org>
This PR adds a `classification` label to proxy response metrics, as @olix0r described in https://github.com/runconduit/conduit/issues/634#issuecomment-376964083. The label is either "success" or "failure", depending on the following rules (sketched in code after the list):
+ **if** the response had a gRPC status code, *then*
- gRPC status code 0 is considered a success
- all others are considered failures
+ **else if** the response had an HTTP status code, *then*
- status codes < 500 are considered success,
- status codes >= 500 are considered failures
+ **else if** the response stream failed **then**
- the response is a failure.
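A minimal sketch of these rules (hypothetical function signature; the proxy derives these values from its own event types):
```
enum Classification {
    Success,
    Failure,
}

fn classify(
    grpc_status: Option<u32>,
    http_status: Option<u16>,
    stream_failed: bool,
) -> Classification {
    match (grpc_status, http_status) {
        (Some(0), _) => Classification::Success,
        (Some(_), _) => Classification::Failure,
        (None, Some(code)) if code < 500 => Classification::Success,
        (None, Some(_)) => Classification::Failure,
        (None, None) if stream_failed => Classification::Failure,
        // No status at all and no stream failure: assumed successful here.
        (None, None) => Classification::Success,
    }
}
```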
I've also added end-to-end tests for the classification of HTTP responses (with some work towards classifying gRPC responses as well). Additionally, I've updated `doc/proxy_metrics.md` to reflect the added `classification` label.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
In #602, @olix0r suggested that telemetry counters should wrap on overflows, as "most timeseries systems (like prometheus) are designed to handle this case gracefully."
This PR changes counters to use explicitly wrapping arithmetic.
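The change in miniature (illustrative, not the proxy's exact `Counter` type):
```
#[derive(Copy, Clone, Debug, Default)]
struct Counter(u64);

impl Counter {
    fn incr(&mut self) {
        // Wrap on overflow instead of panicking (in debug builds) or
        // saturating; Prometheus-style counters handle wraps gracefully.
        self.0 = self.0.wrapping_add(1);
    }
}
```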
Closes #602.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Have the controller tell the client whether the service exists, not
just which endpoints are available. This way we can implement fallback
logic to alternate service discovery mechanisms for ambiguous names.
Signed-off-by: Brian Smith <brian@briansmith.org>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
As described in #619. `process_start_time_seconds` is the idiomatic way of reporting to Prometheus the uptime of a process. It should contain the time in seconds since the beginning of the Unix epoch.
The proxy now exports this metric:
```
➜ http get localhost:4191/metrics
HTTP/1.1 200 OK
Content-Length: 902
Content-Type: text/plain; charset=utf-8
Date: Mon, 26 Mar 2018 22:09:55 GMT
# HELP request_total A counter of the number of requests the proxy has received.
# TYPE request_total counter
# HELP request_duration_ms A histogram of the duration of a request. This is measured from when the request headers are received to when the request stream has completed.
# TYPE request_duration_ms histogram
# HELP response_total A counter of the number of responses the proxy has received.
# TYPE response_total counter
# HELP response_duration_ms A histogram of the duration of a response. This is measured from when theresponse headers are received to when the response stream has completed.
# TYPE response_duration_ms histogram
# HELP response_latency_ms A histogram of the total latency of a response. This is measured from whenthe request headers are received to when the response stream has completed.
# TYPE response_latency_ms histogram
process_start_time_seconds 1522102089
```
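The value itself is straightforward to compute; a minimal sketch (an assumption about the approach, not necessarily the proxy's code):
```
use std::time::{SystemTime, UNIX_EPOCH};

// Capture once at startup and render verbatim in the scrape output.
fn process_start_time_seconds() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the Unix epoch")
        .as_secs()
}
```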
Closes #619
Flaky proxy tests were not actually being ignored properly. This is due to our use of a Cargo workspace; as it turns out, Cargo doesn't propagate feature flags from the workspace to the crates in the workspace (see rust-lang/cargo#4753).
If I run `cargo test --no-default-features` in the root directory, the `flaky_tests` feature is still passed, and the flaky tests still run:
```
➜ cargo test --no-default-features
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running target/debug/deps/conduit_proxy-0e0ab2829c6b743f
running 13 tests
test fully_qualified_authority::tests::test_normalized_authority ... ok
test ctx::transport::tests::same_addr_ip6_compat_ipv4 ... ok
test ctx::transport::tests::same_addr_ipv4 ... ok
test ctx::transport::tests::same_addr_ip6_mapped_ipv4 ... ok
test ctx::transport::tests::same_addr_ipv6 ... ok
test telemetry::tap::match_::tests::http_from_proto ... ok
test inbound::tests::recognize_default_no_ctx ... ok
test telemetry::tap::match_::tests::tcp_from_proto ... ok
test telemetry::tap::match_::tests::tcp_matches ... ok
test inbound::tests::recognize_default_no_loop ... ok
test transparency::tcp::tests::duplex_doesnt_hang_when_one_half_finishes ... ok
test inbound::tests::recognize_default_no_orig_dst ... ok
test inbound::tests::recognize_orig_dst ... ok
test result: ok. 13 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/conduit_proxy-74584a35ef749a60
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/discovery-73cd0b65bd7a45ae
running 16 tests
test http1::absolute_uris::outbound_reconnects_if_controller_stream_ends ... ok
test http1::outbound_reconnects_if_controller_stream_ends ... ok
test http1::absolute_uris::outbound_uses_orig_dst_if_not_local_svc ... ok
test http1::outbound_asks_controller_without_orig_dst ... ok
test http1::absolute_uris::outbound_asks_controller_api ... ok
test http1::outbound_asks_controller_api ... ok
test http1::absolute_uris::outbound_asks_controller_without_orig_dst ... ok
test http2::outbound_reconnects_if_controller_stream_ends ... ok
test http2::outbound_asks_controller_api ... ok
test http2::outbound_asks_controller_without_orig_dst ... ok
test http1::outbound_uses_orig_dst_if_not_local_svc ... ok
server h1 error: invalid HTTP version specified
test http2::outbound_uses_orig_dst_if_not_local_svc ... ok
ERROR 2018-03-26T20:54:09Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: frame with invalid size into 500
test outbound_updates_newer_services ... ok
ERROR 2018-03-26T20:54:09Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http1::absolute_uris::outbound_times_out ... ok
ERROR 2018-03-26T20:54:09Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http2::outbound_times_out ... ok
ERROR 2018-03-26T20:54:09Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http1::outbound_times_out ... ok
test result: ok. 16 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/telemetry-cb5bee2d2b94332c
running 12 tests
test metrics_endpoint_inbound_request_count ... ok
test metrics_endpoint_inbound_request_duration ... ok
test metrics_endpoint_outbound_request_count ... ok
test records_latency_statistics ... ignored
test telemetry_report_errors_are_ignored ... ok
test metrics_endpoint_outbound_request_duration ... ok
test metrics_have_no_double_commas ... ok
test http1_inbound_sends_telemetry ... ok
test inbound_sends_telemetry ... ok
test inbound_aggregates_telemetry_over_several_requests ... ok
test metrics_endpoint_inbound_response_latency ... ok
test metrics_endpoint_outbound_response_latency ... ok
test result: ok. 11 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out
Running target/debug/deps/transparency-9d14bf92d8ba3700
running 19 tests
ERROR 2018-03-26T20:54:10Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: unexpected internal error encountered into 500
test http11_upgrade_not_supported ... ok
test http11_absolute_uri_differs_from_host ... ok
test http10_without_host ... ok
test http1_head_responses ... ok
test http10_with_host ... ok
test http1_connect_not_supported ... ok
test http1_bodyless_responses ... ok
test http1_content_length_zero_is_preserved ... ok
test http1_removes_connection_headers ... ok
test http1_one_connection_per_host ... ok
test inbound_http1 ... ok
test inbound_tcp ... ok
test http1_requests_without_body_doesnt_add_transfer_encoding ... ok
test http1_response_end_of_file ... ok
test http1_requests_without_host_have_unique_connections ... ok
test outbound_tcp ... ok
test tcp_with_no_orig_dst ... ok
test tcp_connections_close_if_client_closes ... ok
test outbound_http1 ... ok
test result: ok. 19 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/conduit_proxy_controller_grpc-7fdac3528475b1dc
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/conduit_proxy_router-024926cac5d328ee
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/convert-ae9bd3b8fee21c85
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/futures_mpsc_lossy-4afd31454ff77b40
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests conduit-proxy
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests conduit-proxy-controller-grpc
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests convert
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests conduit-proxy-router
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests futures-mpsc-lossy
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
This also happens if the `-p` flag is used to run tests only in the `conduit-proxy` crate:
```
➜ cargo test -p conduit-proxy --no-default-features
Compiling conduit-proxy v0.3.0 (file:///Users/eliza/Code/go/src/github.com/runconduit/conduit/proxy)
Finished dev [unoptimized + debuginfo] target(s) in 17.27 secs
Running target/debug/deps/conduit_proxy-0e0ab2829c6b743f
running 13 tests
test fully_qualified_authority::tests::test_normalized_authority ... ok
test ctx::transport::tests::same_addr_ip6_mapped_ipv4 ... ok
test ctx::transport::tests::same_addr_ipv6 ... ok
test ctx::transport::tests::same_addr_ipv4 ... ok
test ctx::transport::tests::same_addr_ip6_compat_ipv4 ... ok
test inbound::tests::recognize_default_no_loop ... ok
test telemetry::tap::match_::tests::http_from_proto ... ok
test inbound::tests::recognize_default_no_orig_dst ... ok
test inbound::tests::recognize_default_no_ctx ... ok
test transparency::tcp::tests::duplex_doesnt_hang_when_one_half_finishes ... ok
test telemetry::tap::match_::tests::tcp_from_proto ... ok
test inbound::tests::recognize_orig_dst ... ok
test telemetry::tap::match_::tests::tcp_matches ... ok
test result: ok. 13 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/conduit_proxy-74584a35ef749a60
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/discovery-73cd0b65bd7a45ae
running 16 tests
test http1::absolute_uris::outbound_reconnects_if_controller_stream_ends ... ok
test http1::outbound_reconnects_if_controller_stream_ends ... ok
test http1::absolute_uris::outbound_asks_controller_without_orig_dst ... ok
test http1::absolute_uris::outbound_uses_orig_dst_if_not_local_svc ... ok
test http1::outbound_asks_controller_without_orig_dst ... ok
test http1::absolute_uris::outbound_asks_controller_api ... ok
test http1::outbound_asks_controller_api ... ok
test http1::outbound_uses_orig_dst_if_not_local_svc ... ok
test http2::outbound_reconnects_if_controller_stream_ends ... ok
test http2::outbound_asks_controller_without_orig_dst ... ok
test http2::outbound_asks_controller_api ... ok
test http2::outbound_uses_orig_dst_if_not_local_svc ... ok
server h1 error: invalid HTTP version specified
ERROR 2018-03-26T20:56:50Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: frame with invalid size into 500
test outbound_updates_newer_services ... ok
ERROR 2018-03-26T20:56:50Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http1::absolute_uris::outbound_times_out ... ok
ERROR 2018-03-26T20:56:50Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http1::outbound_times_out ... ok
ERROR 2018-03-26T20:56:50Z: conduit_proxy: turning operation timed out after Duration { secs: 0, nanos: 100000000 } into 500
test http2::outbound_times_out ... ok
test result: ok. 16 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/telemetry-cb5bee2d2b94332c
running 12 tests
test metrics_endpoint_inbound_request_duration ... ok
test metrics_endpoint_inbound_request_count ... ok
test metrics_endpoint_outbound_request_count ... ok
test metrics_endpoint_outbound_request_duration ... ok
test telemetry_report_errors_are_ignored ... ok
test metrics_have_no_double_commas ... ok
test inbound_sends_telemetry ... ok
test http1_inbound_sends_telemetry ... ok
test inbound_aggregates_telemetry_over_several_requests ... ok
test metrics_endpoint_inbound_response_latency ... ok
test metrics_endpoint_outbound_response_latency ... ok
test records_latency_statistics ... ok
test result: ok. 12 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/transparency-9d14bf92d8ba3700
running 19 tests
ERROR 2018-03-26T20:56:55Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: unexpected internal error encountered into 500
test http1_connect_not_supported ... ok
test http11_upgrade_not_supported ... ok
test http10_without_host ... ok
test http11_absolute_uri_differs_from_host ... ok
test http1_head_responses ... ok
test http10_with_host ... ok
test http1_bodyless_responses ... ok
test http1_content_length_zero_is_preserved ... ok
test http1_removes_connection_headers ... ok
test http1_one_connection_per_host ... ok
test http1_response_end_of_file ... ok
test http1_requests_without_host_have_unique_connections ... ok
test inbound_http1 ... ok
test inbound_tcp ... ok
test http1_requests_without_body_doesnt_add_transfer_encoding ... ok
test outbound_tcp ... ok
test tcp_with_no_orig_dst ... ok
test tcp_connections_close_if_client_closes ... ok
test outbound_http1 ... ok
test result: ok. 19 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests conduit-proxy
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
However, if I `cd` into the `proxy` directory (so that Cargo treats the `conduit-proxy` crate as the root project, rather than the workspace) and pass the `--no-default-features` flag, the flaky tests are skipped as expected:
```
➜ (cd proxy && exec cargo test --no-default-features)
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running /Users/eliza/Code/go/src/github.com/runconduit/conduit/target/debug/deps/conduit_proxy-ac198a96228a056e
running 13 tests
test fully_qualified_authority::tests::test_normalized_authority ... ok
test ctx::transport::tests::same_addr_ipv4 ... ok
test ctx::transport::tests::same_addr_ip6_compat_ipv4 ... ok
test ctx::transport::tests::same_addr_ipv6 ... ok
test ctx::transport::tests::same_addr_ip6_mapped_ipv4 ... ok
test telemetry::tap::match_::tests::tcp_from_proto ... ok
test telemetry::tap::match_::tests::http_from_proto ... ok
test transparency::tcp::tests::duplex_doesnt_hang_when_one_half_finishes ... ok
test telemetry::tap::match_::tests::tcp_matches ... ok
test inbound::tests::recognize_default_no_ctx ... ok
test inbound::tests::recognize_default_no_loop ... ok
test inbound::tests::recognize_default_no_orig_dst ... ok
test inbound::tests::recognize_orig_dst ... ok
test result: ok. 13 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running /Users/eliza/Code/go/src/github.com/runconduit/conduit/target/debug/deps/conduit_proxy-41e0f900f97e194b
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running /Users/eliza/Code/go/src/github.com/runconduit/conduit/target/debug/deps/discovery-7ba7fe16345a347a
running 16 tests
test http1::absolute_uris::outbound_times_out ... ignored
test http1::outbound_times_out ... ignored
test http1::absolute_uris::outbound_reconnects_if_controller_stream_ends ... ok
test http1::outbound_reconnects_if_controller_stream_ends ... ok
test http1::absolute_uris::outbound_uses_orig_dst_if_not_local_svc ... ok
test http1::outbound_uses_orig_dst_if_not_local_svc ... ok
test http1::absolute_uris::outbound_asks_controller_without_orig_dst ... ok
test http1::outbound_asks_controller_without_orig_dst ... ok
test http1::outbound_asks_controller_api ... ok
test http1::absolute_uris::outbound_asks_controller_api ... ok
test http2::outbound_times_out ... ignored
server h1 error: invalid HTTP version specified
ERROR 2018-03-26T21:48:32Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: frame with invalid size into 500
test http2::outbound_reconnects_if_controller_stream_ends ... ok
test http2::outbound_uses_orig_dst_if_not_local_svc ... ok
test http2::outbound_asks_controller_api ... ok
test http2::outbound_asks_controller_without_orig_dst ... ok
test outbound_updates_newer_services ... ok
test result: ok. 13 passed; 0 failed; 3 ignored; 0 measured; 0 filtered out
Running /Users/eliza/Code/go/src/github.com/runconduit/conduit/target/debug/deps/telemetry-b0763b64edd8fc68
running 12 tests
test metrics_endpoint_inbound_request_count ... ignored
test metrics_endpoint_inbound_request_duration ... ignored
test metrics_endpoint_inbound_response_latency ... ignored
test metrics_endpoint_outbound_request_count ... ignored
test metrics_endpoint_outbound_request_duration ... ignored
test metrics_endpoint_outbound_response_latency ... ignored
test records_latency_statistics ... ignored
test telemetry_report_errors_are_ignored ... ok
test metrics_have_no_double_commas ... ok
test http1_inbound_sends_telemetry ... ok
test inbound_sends_telemetry ... ok
test inbound_aggregates_telemetry_over_several_requests ... ok
test result: ok. 5 passed; 0 failed; 7 ignored; 0 measured; 0 filtered out
Running /Users/eliza/Code/go/src/github.com/runconduit/conduit/target/debug/deps/transparency-300fd801daa85ccf
running 19 tests
ERROR 2018-03-26T21:48:32Z: conduit_proxy: turning Error caused by underlying HTTP/2 error: protocol error: unexpected internal error encountered into 500
test http1_connect_not_supported ... ok
test http11_upgrade_not_supported ... ok
test http10_without_host ... ok
test http10_with_host ... ok
test http11_absolute_uri_differs_from_host ... ok
test http1_head_responses ... ok
test http1_bodyless_responses ... ok
test http1_removes_connection_headers ... ok
test http1_content_length_zero_is_preserved ... ok
test http1_one_connection_per_host ... ok
test http1_response_end_of_file ... ok
test http1_requests_without_body_doesnt_add_transfer_encoding ... ok
test inbound_tcp ... ok
test inbound_http1 ... ok
test http1_requests_without_host_have_unique_connections ... ok
test outbound_tcp ... ok
test tcp_connections_close_if_client_closes ... ok
test tcp_with_no_orig_dst ... ok
test outbound_http1 ... ok
test result: ok. 19 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests conduit-proxy
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
I'm wrapping the `cd` and `cargo test` command in a subshell so that the CWD on Travis is still in the repo root when the command exits, but the return value from `cargo test` is propagated.
Closes #625
Use `VecDeque` to make the queue structure clear. Follow good practice
by minimizing the amount of time the lock is held. Clarify how
defaulting logic works.
Signed-off-by: Brian Smith <brian@briansmith.org>
The metrics endpoint tests are flaky because there are no guarantees
that the metrics pipeline has processed events before the metrics
endpoint is read. This can cause CI to fail spuriously.
Disable these tests from running in CI until #613 is resolved.
The inject code detects the object it is being injected into, and writes
self-identifying information into the CONDUIT_PROMETHEUS_LABELS
environment variable, so that conduit-proxy may read this information
and report it to Prometheus at collection time.
This change puts the self-identifying information directly into
Kubernetes labels, which Prometheus already collects, removing the need
for conduit-proxy to be aware of this information. The resulting label
in Prometheus is recorded in the form `k8s_deployment`.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This PR adds the `request_duration_ms` metric to the Prometheus metrics exported by the proxy. It also modifies the `request_total` metric so that it is incremented when a request stream finishes, rather than when it opens, for consistency with how the `response_total` metric is generated.
Making this change required modifying `telemetry::sensors::http` to generate a `StreamRequestEnd` event similar to the `StreamResponseEnd` event. This is done similarly to how sensors are added to response bodies, by generalizing the `ResponseBody` type into a `MeasuredBody` type that can wrap a request or response body. Since this changed the type of request bodies, it necessitated changing request types pretty much everywhere else in the proxy codebase in order to fix the resulting type errors, which is why the diff for this PR is so large.
Closes #570
Fixes #600
The proxy metrics endpoint has a bug where metrics recorded in the outbound direction can contain two commas in a row when no outbound label is present. This occurs because the code for formatting the outbound direction label mistakenly assumed that there would always be a destination pod owner label as well, but the proxy isn't currently aware of the destination's pod owner (waiting for #429).
I've fixed this issue by moving the place where the comma is output from the `fmt::Display` impl for `RequestLabels` to the `fmt::Display` impl for `OutboundLabels`. This way, the comma between the `direction` and `dst_*` labels is only output when the `dst_*` label is present.
This bug made it to master since all of the proxy end-to-end tests for metrics only test the inbound router. I've rectified this issue by adding tests on the outbound router as well (which would fail against the current master due to the double comma bug). I've also added a test that asserts there are no double commas in exported metrics, to protect against regressions to this bug.
This PR adds an endpoint to the proxy that serves metrics in Prometheus' text exposition format. The endpoint currently serves the `request_total`, `response_total`, `response_latency_ms`, and `response_duration_ms` metrics, as described in #536. The endpoint's port and address are configurable with the `CONDUIT_PROXY_METRICS_LISTENER` environment variable.
Tests have been added in `tests/telemetry.rs`.
`cargo fetch` doesn't consider the target platform and downloads
all crates needed to build for any target.
Stop using `cargo fetch` and instead use the implicit fetch done
by `cargo build`, which *does* consider the target platform.
This change results in 12 (soon 15) fewer crates downloaded.
This is a non-trivial savings in build time for a full rebuild
since cargo downloads crates in parallel.
```diff
- Downloading bitflags v1.0.1
- Downloading fuchsia-zircon v0.3.3
- Downloading fuchsia-zircon-sys v0.3.3
- Downloading miow v0.2.1
- Downloading redox_syscall v0.1.37
- Downloading redox_termios v0.1.1
- Downloading termion v1.5.1
- Downloading winapi v0.3.4
- Downloading winapi-i686-pc-windows-gnu v0.4.0
- Downloading winapi-x86_64-pc-windows-gnu v0.4.0
- Downloading wincolor v0.1.6
- Downloading ws2_32-sys v0.2.1
```
I verified that no downloads are done during an incremental
build.
Signed-off-by: Brian Smith <brian@briansmith.org>
When the proxy is run in a Docker container, it runs as PID 1, with
no default signal handlers setup. In order to react to signals from
Kubernetes about shutting down, we need to set up explicit handlers.
This adds handlers for SIGTERM and SIGINT.
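A minimal sketch of the idea using today's tokio API (the crates the proxy used at the time differ, so treat this purely as an illustration):
```
use tokio::signal::unix::{signal, SignalKind};

// Wait for SIGTERM or SIGINT, whichever arrives first, then begin shutdown.
async fn shutdown_signal() {
    let mut term = signal(SignalKind::terminate()).expect("SIGTERM handler");
    let mut int = signal(SignalKind::interrupt()).expect("SIGINT handler");
    tokio::select! {
        _ = term.recv() => {}
        _ = int.recv() => {}
    }
    // ... trigger graceful shutdown of the listeners ...
}
```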
Closes #549
In order to ensure we catch discovery and routing issues arising from different logic for HTTP/1 and HTTP/2 requests, I've modified tests/discovery.rs to run all applicable tests with both HTTP/1 and HTTP/2 requests. The tests themselves are largely unchanged, but now there are separate modules containing HTTP/1 and HTTP/2 versions of a majority of the tests.
Commit 569d6939a7 introduced a regression that caused the proxy to stop using the Destination service for outbound HTTP/1 requests with no authority in the request URI but a valid authority in the `Host:` header.
The bug is due to some code in `Outbound::recognize` which assumed that a request had already been passed through `normalize_our_view_of_uri`. This was valid at one point while I was writing #492: URIs were normalized prior to `recognize`, a request `Extension` was used to mark that they had been rewritten, and the host header and request URI could be assumed to be in agreement. However, after merging #514 into the dev branch for #492, this behaviour changed and I forgot to update the logic in `recognize`.
I've fixed the issue by adding the logic for routing on `Host:` headers back into `Outbound::recognize`.
@seanmonstar added a test in `discovery.rs`, `outbound_http1_asks_controller_about_host`, which should exercise this case. I've added a couple more unit tests in that file to try and ensure we cover more of the different cases that can occur here.
Fixes #552
In some cases, we would adjust an existing Host header, or add one. And in all cases, when an HTTP/1 request was received with an absolute-form target, that form was not passed on.
Now, the Host header is never changed, and if the URI was in absolute-form, it is sent on in the same form.
Closes #518
An infinite loop exists in the TCP proxy, which could be triggered by any raw TCP connection (including HTTPS requests). The connection will be proxied successfully, but instead of closing, it will remain open, and the proxy's CPU usage will remain extremely high indefinitely.
`Duplex::poll` calls `half_in.copy_into()`/`half_out.copy_into()` repeatedly, even after they return `Async::Ready`, because it waits until _both_ halves have returned `Ready`; so when one half has shut down and returned ready, it may still be polled again. Because of the guard that `!dst.is_shutdown`, intended to prevent the destination from shutting down twice, the function would never return if polled again after returning `Async::Ready` once.
I've fixed this by moving the guard against double shutdowns out of the loop, so that the function will return `Async::Ready` again if it is polled after shutting down the destination.
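In code, the essence of the fix looks like this (a simplified sketch with hypothetical types; the real code is a futures-based `Duplex` over two halves):
```
// The key point: the "already shut down" guard must run *before* the copy
// loop. Before the fix, it lived inside the loop, so a half polled again
// after finishing never reached a `return` and spun forever.
struct HalfDuplex {
    is_shutdown: bool,
}

enum Async {
    Ready,
    NotReady,
}

impl HalfDuplex {
    fn copy_into(&mut self) -> Async {
        if self.is_shutdown {
            // Polled again after completion: report Ready again immediately.
            return Async::Ready;
        }
        loop {
            // ... copy bytes from this half into the destination ...
            let source_eof = true; // stand-in for the real read result
            if source_eof {
                // ... shut down the destination's write side, exactly once ...
                self.is_shutdown = true;
                return Async::Ready;
            }
        }
    }
}
```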
I've also included a unit test against regressions to this bug. The unit test fails against master.
Fixes #519
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Co-Authored-By: Andrew Seigner <andrew@sig.gy>
* Proxy: Don't resolve absolute names outside zone using Destinations service
Many absolute names were being resolved using the Destinations service due to a logic error
in the proxy's matching of the zone to the default zone.
Fix that bug.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Temporarily stop trying to support configurable zones in the proxy.
None of the zone configuration is tested and lots of things assume the cluster
zone is `cluster.local`. Further, how exactly the proxy will actually learn the
cluster zone hasn't been decided yet.
Just hard-code the zone as "cluster.local" in the proxy until configurable zones
are fully implemented and tested to be working correctly.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Remove the CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN setting
The way that Kubernetes configures DNS search suffixes has some negative
consequences as some names like "example.com" are ambiguous: depending on
whether there is a service "example" in the "com" namespace, "example.com"
may refer to an external service or an internal service, and this can
fluctuate over time. In recognition of that we added the
CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN setting, thinking this would
be part of a solution for users to opt out of the unfortunate behavior
if their applications didn't depend on the DNS search suffix feature.
It turns out similar effects can be achieved using a custom dnsConfig,
starting in Kubernetes 1.10 when dnsConfig reaches the beta stability level.
Now any CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN-based setting seems duplicative.
Further, attempting to support it optionally made the code complex and hard
to read.
Therefore, let's just remove it. If/when somebody actually requests this
functionality then we can add it back, if dnsConfig isn't a valid alternative
for them.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Further hard-code "cluster.local" as the zone, temporarily.
Addresses review feedback.
Signed-off-by: Brian Smith <brian@briansmith.org>
We will be able to simplify service discovery in the near future if we
can rely on the namespace being available.
Signed-off-by: Brian Smith <brian@briansmith.org>
This PR ensures that the mapping of requests to outbound connections is segregated by `Host:` header values. In most cases, the desired behavior is provided by Hyper's connection pooling. However, Hyper does not handle the case where a request had no `Host:` header and the request URI had no authority part, and the request was routed based on the SO_ORIGINAL_DST in the desired manner. We would like these requests to each have their own outbound connection, but Hyper will reuse the same connection for such requests.
Therefore, I have modified `conduit_proxy_router::Recognize` to allow implementations of `Recognize` to indicate whether the service for a given key can be cached, and to only cache the service when it is marked as cacheable. I've also changed the `reconstruct_uri` function, which rewrites HTTP/1 requests, to mark when a request had no authority and no `Host:` header, and the authority was rewritten to be the request's ORIGINAL_DST. When this is the case, the `Recognize` implementations for `Inbound` and `Outbound` will mark these requests as non-cacheable.
I've also added unit tests ensuring that A, connections are created per `Host:` header, and B, that requests with no `Host:` header each create a new connection. The first test passes without any additional changes, but the second only passes on this branch. The tests were added in PR #489, but this branch supersedes that branch.
Fixes #415. Closes #489.
As a goal of being a transparent proxy, we want to proxy requests and
responses with as little modification as possible. Basically, servers
and clients should see messages that look the same whether the proxy was
injected or not.
With that goal in mind, we want to make sure that body headers (things
like `Content-Length`, `Transfer-Encoding`, etc) are left alone. Prior
to this commit, we at times were changing behavior. Sometimes
`Transfer-Encoding` was added to requests, or `Content-Length: 0` may
have been removed. While RFC 7230 defines that differences are
semantically the same, implementations may not handle them correctly.
Now, we've added some fixes to prevent any of these header changes
from occurring, along with tests to make sure library updates don't
regress.
For requests:
- With no message body, `Transfer-Encoding: chunked` should no longer be
added.
- With `Content-Length: 0`, the header is forwarded untouched.
For responses:
- Tests were added that responses not allowed to have bodies (to HEAD
requests, 204, 304) did not have `Transfer-Encoding` added.
- Tests that `Content-Length: 0` is preserved.
- Tests that HTTP/1.0 responses with no body headers do not have
`Transfer-Encoding` added.
- Tests that `HEAD` responses forward `Content-Length` headers (but not
an actual body).
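As a rough illustration, the invariant these tests pin down looks like this (a sketch using the `http` crate's `HeaderMap`, not the actual test code):
```rust
use http::header::{HeaderMap, CONTENT_LENGTH, TRANSFER_ENCODING};

/// True if the body-framing headers were forwarded untouched.
fn framing_preserved(original: &HeaderMap, forwarded: &HeaderMap) -> bool {
    original.get(CONTENT_LENGTH) == forwarded.get(CONTENT_LENGTH)
        && original.get(TRANSFER_ENCODING) == forwarded.get(TRANSFER_ENCODING)
}

fn main() {
    let mut original = HeaderMap::new();
    original.insert(CONTENT_LENGTH, "0".parse().unwrap());

    // The proxied message must still carry `Content-Length: 0` and must
    // not have gained `Transfer-Encoding: chunked`.
    let forwarded = original.clone();
    assert!(framing_preserved(&original, &forwarded));
}
```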
Closes #447
Signed-off-by: Sean McArthur <sean@seanmonstar.com>
Currently, the `Reconnect` middleware does not reconnect on connection errors (see #491) and treats them as request errors. This means that when a connection timeout is wrapped in a `Reconnect`, timeout errors are treated as request errors, and the request returns HTTP 500. Since this is not the desired behavior, the connection timeouts should be removed, at least until their errors can be handled differently.
This PR removes the connect timeouts from `Bind`, as described in https://github.com/runconduit/conduit/pull/483#issuecomment-369380003.
It removes the `CONDUIT_PROXY_PUBLIC_CONNECT_TIMEOUT_MS` environment variable, but _not_ the `CONDUIT_PROXY_PRIVATE_CONNECT_TIMEOUT_MS` variable, since the latter is also used for the TCP connect timeouts. If we also want to remove the TCP connection timeouts, I can do that as well.
Closes #483. Fixes #491.
This PR changes the proxy to log error messages using `fmt::Display` whenever possible, which should lead to much more readable and meaningful error messages.
This is part of the work I started last week on issue #442. While I haven't finished everything for that issue (all errors still are mapped to HTTP 500 error codes), I wanted to go ahead and open a PR for the more readable error messages. This is partially because I found myself merging these changes into other branches to aid in debugging, and because I figured we may as well have the nicer logging on master.
We previously `join`ed on piping data from both sides, meaning
that the future didn't complete until **both** sides had disconnected.
Even if the client disconnected, it was possible the server never knew,
and we "leaked" this future.
To fix this, the `join` is replaced with a `Duplex` future, which pipes
from both ends into the other, while also detecting when one side shuts
down. When a side does shutdown, a write shutdown is forwarded to the
other side, to allow draining to occur for deployments that half-close
sockets.
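A minimal sketch of the idea, using today's tokio APIs rather than the proxy's own `Duplex` future:
```rust
use tokio::io::{self, AsyncWriteExt};
use tokio::net::TcpStream;

// Sketch only: pipe both directions, forwarding a write shutdown when one
// side reaches EOF, so half-closing peers can drain.
async fn duplex(client: TcpStream, server: TcpStream) -> io::Result<()> {
    let (mut client_rx, mut client_tx) = client.into_split();
    let (mut server_rx, mut server_tx) = server.into_split();

    let client_to_server = async {
        // When the client half-closes, forward a write shutdown to the
        // server so it can drain and finish.
        io::copy(&mut client_rx, &mut server_tx).await?;
        server_tx.shutdown().await
    };
    let server_to_client = async {
        io::copy(&mut server_rx, &mut client_tx).await?;
        client_tx.shutdown().await
    };

    // Unlike a plain `join` of two copies with no shutdown forwarding,
    // each side's EOF is propagated to the other side.
    tokio::try_join!(client_to_server, server_to_client)?;
    Ok(())
}
```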
Closes #434
As requested by @briansmith in https://github.com/runconduit/conduit/issues/415#issuecomment-369026560 and https://github.com/runconduit/conduit/issues/415#issuecomment-369032059, I've refactored `FullyQualifiedAuthority::normalize` to _always_ return a `FullyQualifiedAuthority`, along with a boolean value indicating whether or not the Destination service should be used for that authority.
This is in contrast to returning an `Option<FullyQualifiedAuthority>` where `None` indicated that the Destination service should not be used, which is what this function did previously.
This is required for further progress on #415.
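For illustration only, the shape of the change (type and field names here are assumptions, not the proxy's actual definitions):
```rust
#[derive(Debug)]
struct FullyQualifiedAuthority(String);

// Before, `None` meant "don't ask the Destination service". Now the
// normalized authority is always returned, and the flag carries the
// routing decision explicitly.
fn normalize(name: &str, is_local_service: bool) -> (FullyQualifiedAuthority, bool) {
    (FullyQualifiedAuthority(name.to_string()), is_local_service)
}

fn main() {
    let (authority, use_destination) = normalize("web.default.svc.cluster.local", true);
    println!("{:?} use_destination={}", authority, use_destination);
}
```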
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Currently we have to download and build two different versions of
the ordermap crate.
I will submit similar PRs for the dependent crates so that we will
eventually all be using the same version of indexmap.
Signed-off-by: Brian Smith <brian@briansmith.org>
This was caused by the fact that a new instance of `env_logger::init()`
was added after the PR that rewrote them all to `env_logger::try_init()`
was added.
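The pattern in the tests is simply (a sketch; the test name is illustrative):
```rust
#[test]
fn logs_are_available() {
    // try_init() returns an Err if a logger is already set; ignoring the
    // result makes it safe to call from every test, whereas init() panics
    // the second time it runs in the same process.
    let _ = env_logger::try_init();
    // ... test body ...
}
```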
Fixes #469
Signed-off-by: Brian Smith <brian@briansmith.org>
Stop initializing env_logger in every test. In env_logger 0.5, it
may only be initialized once per process.
Also, Prost will soon upgrade to env_logger 0.5 and this will
(eventually) help reduce the number of versions of env_logger we
have to build. Turning off the regex feature will (eventually) also
reduce the number of dependencies we have to build. Unfortunately,
as it is now, the number of dependencies has increased because
env_logger increased its dependencies in 0.5.
Signed-off-by: Brian Smith <brian@briansmith.org>
Turning off the default features of quickcheck removes its
`env_logger` and `log` dependencies. It uses older versions of
those packages than conduit-proxy will use, so this will
(eventually) reduce the number of versions of those packages that
get downloaded and built.
Signed-off-by: Brian Smith <brian@briansmith.org>
Hyper depends on tokio-proto with a default feature. By turning off
its default features, we can avoid that dependency. That reduces the
number of dependencies by 4.
Signed-off-by: Brian Smith <brian@briansmith.org>
Version 1.7.0 of the url crate seems to be broken which means we cannot
`cargo update` the proxy without locking url to version 1.6. Since we only
use it in a very limited way anyway, and since we use http::uri for parsing
much more, just switch all uses of the url crate to use http::uri for parsing
instead.
This eliminates some build dependencies.
Signed-off-by: Brian Smith <brian@briansmith.org>
Closes #403.
When the Destination service does not return a result for a service, the proxy connection for that service will hang indefinitely waiting for a result from Destination. If, for example, the requested name doesn't exist, this means that the proxy will wait forever, rather than responding with an error.
I've added a timeout wrapping the service returned from `<Outbound as Recognize>::bind_service`. The timeout can be configured by setting the `CONDUIT_PROXY_BIND_TIMEOUT` environment variable, and defaults to 10 seconds (because that's the default value for [a similar configuration in Linkerd](https://linkerd.io/config/1.3.5/linkerd/index.html#router-parameters)).
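For illustration, the defaulting might look roughly like this (a sketch; the real parsing lives in the proxy's config module, and the accepted value format may differ):
```rust
use std::env;
use std::time::Duration;

// Sketch: read the bind timeout from the environment, defaulting to 10s.
fn bind_timeout() -> Duration {
    env::var("CONDUIT_PROXY_BIND_TIMEOUT")
        .ok()
        .and_then(|s| s.parse::<u64>().ok())
        .map(Duration::from_secs)
        .unwrap_or_else(|| Duration::from_secs(10)) // default: 10 seconds
}

fn main() {
    println!("bind timeout: {:?}", bind_timeout());
}
```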
Testing with @klingerf's reproduction from #403:
```
curl -sIH 'Host: httpbin.org' $(minikube service proxy-http --url)/get | head -n1
HTTP/1.1 500 Internal Server Error
```
proxy logs:
```
proxy-5698f79b66-8rczl conduit-proxy INFO conduit_proxy using controller at HostAndPort { host: Domain("proxy-api.conduit.svc.cluster.local"), port: 8086 }
proxy-5698f79b66-8rczl conduit-proxy INFO conduit_proxy routing on V4(127.0.0.1:4140)
proxy-5698f79b66-8rczl conduit-proxy INFO conduit_proxy proxying on V4(0.0.0.0:4143) to None
proxy-5698f79b66-8rczl conduit-proxy INFO conduit_proxy::transport::connect "controller-client", DNS resolved proxy-api.conduit.svc.cluster.local to 10.0.0.240
proxy-5698f79b66-8rczl conduit-proxy ERR! conduit_proxy::map_err turning service error into 500: Inner(Timeout(Duration { secs: 10, nanos: 0 }))
```
This PR adds a `flaky_tests` cargo feature to control whether or not to ignore tests that are timing-dependent. This feature is enabled by default in local builds, but disabled on CI and in all Docker builds.
Closes #440
Currently, the max number of in-flight requests in the proxy is
unbounded. This is due to the `Buffer` middleware being unbounded.
This is resolved by adding an instance of `InFlightLimit` around
`Buffer`, capping the max number of in-flight requests for a given
endpoint.
Currently, the limit is hardcoded to 10,000. However, this will
eventually become a configuration value.
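Conceptually, the cap works like this (a sketch using a semaphore; the actual middleware is tower's `InFlightLimit` wrapped around `Buffer`):
```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

struct InFlightLimited {
    permits: Arc<Semaphore>,
}

impl InFlightLimited {
    fn new(max_in_flight: usize) -> Self {
        InFlightLimited {
            permits: Arc::new(Semaphore::new(max_in_flight)),
        }
    }

    async fn call(&self, request: &str) -> String {
        // Each request holds a permit for its lifetime; once all permits
        // are taken, callers wait instead of growing an unbounded buffer.
        let _permit = self.permits.acquire().await.expect("semaphore closed");
        format!("handled: {}", request)
    }
}

#[tokio::main]
async fn main() {
    let svc = InFlightLimited::new(10_000);
    println!("{}", svc.call("GET /").await);
}
```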
Fixes #287
Signed-off-by: Carl Lerche <me@carllerche.com>
The proxy will check that the requested authority looks like a local service, and if it doesn't, it will no longer ask the Destination service about the request, instead just using the SO_ORIGINAL_DST, enabling egress naturally.
The rules used to determine if it looks like a local service come from this comment:
> If default_zone.is_none() and the name is in the form $a.$b.svc, or if !default_zone.is_none() and the name is in the form $a.$b.svc.$default_zone, for some a and some b, then use the Destination service. Otherwise, use the IP given.
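A sketch of that rule in isolation (the helper name and parsing here are illustrative, not the proxy's actual code):
```rust
// Does `name` match $a.$b.svc (no zone) or $a.$b.svc.$default_zone?
fn looks_like_local_service(name: &str, default_zone: Option<&str>) -> bool {
    let parts: Vec<&str> = name.trim_end_matches('.').split('.').collect();
    match default_zone {
        // $a.$b.svc
        None => parts.len() == 3 && parts[2] == "svc",
        // $a.$b.svc.$default_zone
        Some(zone) => {
            let zone_parts: Vec<&str> = zone.split('.').collect();
            parts.len() == 3 + zone_parts.len()
                && parts[2] == "svc"
                && parts[3..] == zone_parts[..]
        }
    }
}

fn main() {
    assert!(looks_like_local_service("web.default.svc", None));
    assert!(looks_like_local_service("web.default.svc.cluster.local", Some("cluster.local")));
    assert!(!looks_like_local_service("example.com", Some("cluster.local")));
}
```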
Removed the `method` label from Prometheus, and removed HTTP methods from reports. Removed `StreamSummary` from reports and replaced it with a `u32` count of streams.
Closes #266
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Follow-up from #315.
Now that the UIs don't report per-path metrics, we can remove the path label from Prometheus, the path aggregation and filtering options from the telemetry API, and the path field from the proxy report API.
I've modified the tests to no longer expect the removed fields, and manually verified that Conduit still works after making these changes.
Closes #265
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The proxy currently stores latency values in an `OrderMap` and reports every observed latency value to the controller's telemetry API since the last report. The telemetry API then sends each individual value to Prometheus. This doesn't scale well when there are a large number of proxies making reports.
I've modified the proxy to use a fixed-size histogram that matches the histogram buckets in Prometheus. Each report now includes an array indicating the histogram bounds, and each response scope contains a set of counts corresponding to each index in the bounds array, indicating the number of times a latency in that bucket was observed. The controller then reports the upper bound of each bucket to Prometheus, and can use the proxy's reported set of bucket bounds so that the observed values will be correct even if the bounds in the control plane are changed independently of those set in the proxy.
I've also modified `simulate-proxy` to generate the new report structure, and added tests in the proxy's telemetry test suite validating the new behaviour.
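A minimal sketch of fixed-bucket observation (the bounds below are illustrative, not the proxy's actual Prometheus-matching buckets):
```rust
struct Histogram {
    bounds: &'static [u64], // upper bound of each bucket, in ms
    counts: Vec<u64>,       // one count per bound, plus one for +Inf
}

impl Histogram {
    fn new(bounds: &'static [u64]) -> Self {
        Histogram { bounds, counts: vec![0; bounds.len() + 1] }
    }

    fn observe(&mut self, latency_ms: u64) {
        // Find the first bucket whose upper bound contains the value;
        // values above every bound land in the implicit +Inf bucket.
        let idx = self
            .bounds
            .iter()
            .position(|&max| latency_ms <= max)
            .unwrap_or(self.bounds.len());
        self.counts[idx] += 1;
    }
}

fn main() {
    let mut h = Histogram::new(&[10, 100, 1_000]);
    h.observe(7);
    h.observe(250);
    h.observe(9_999);
    assert_eq!(h.counts, vec![1, 0, 1, 1]);
}
```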
Currently, the conduit proxy uses a simplistic Round-Robin load
balancing algorithm. This strategy degrades severely when individual
endpoints exhibit abnormally high latency.
This change improves this situation somewhat by making the load balancer
aware of the number of outstanding requests to each endpoint. When nodes
exhibit high latency, they should tend to have more pending requests
than faster nodes; and the Power-of-Two-Choices node selector can be
used to distribute requests to lesser-loaded instances.
From the finagle guide:
The algorithm randomly picks two nodes from the set of ready endpoints
and selects the least loaded of the two. By repeatedly using this
strategy, we can expect a manageable upper bound on the maximum load of
any server.
The maximum load variance between any two servers is bound by
`ln(ln(n))` where `n` is the number of servers in the cluster.
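A minimal sketch of the selection step (using the `rand` crate; the proxy's balancer tracks pending requests per endpoint as the load metric):
```rust
use rand::Rng;

// Power-of-two-choices: pick two endpoints at random, route to the one
// with fewer outstanding requests.
fn pick_p2c(loads: &[usize]) -> usize {
    let mut rng = rand::thread_rng();
    let a = rng.gen_range(0..loads.len());
    let b = rng.gen_range(0..loads.len());
    if loads[a] <= loads[b] { a } else { b }
}

fn main() {
    let loads = [3, 0, 7, 2];
    println!("selected endpoint: {}", pick_p2c(&loads));
}
```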
Signed-off-by: Oliver Gould <ver@buoyant.io>
The current proxy Dockerfile configuration does not cache dependencies
well, which can increase build times substantially.
By carefully splitting proxy/Dockerfile into several stages that mock
parts of the project, dependencies may be built and cached in Docker
such that changes to the proxy only require building the conduit-proxy
crate.
Furthermore, proxy/Dockerfile now runs the proxy's tests before
producing an artifact, unless the `PROXY_SKIP_TESTS` build-arg is set
and non-empty.
The `PROXY_UNOPTIMIZED` build-arg has been added to support quicker,
debug-friendly builds.
The proxy depends on `protoc`-generated gRPC bindings to communicate
with the controller. In order to generate these bindings, build-time
dependencies must be compiled.
In order to support a more granular, cacheable build scheme, a new crate
has been created to house these gRPC bindings,
`conduit-proxy-controller-grpc`.
Because `TryFrom` and `TryInto` conversions are implemented for
protobuf-defined types, the `convert` module also had to be moved into a
dedicated crate.
Furthermore, because the proxy's tests require that
`quickcheck::Arbitrary` be implemented for protobuf types, the
`conduit-proxy-controller-grpc` crate supports an _arbitrary_ feature
flag.
While we're moving these libraries around, the `tower-router` crate has
been moved to `proxy/router` and renamed to `conduit-proxy-router`.
`futures-mpsc-lossy` has been moved into the proxy directory but has not
been renamed.
Finally, the `proxy/Dockerfile-deps` image has been updated to avoid the
wasteful building of dependency artifacts, as they are not actually used
by `proxy/Dockerfile`.
The conduit.io/* k8s labels and annotations were redundant in some
cases, and not flexible enough in others.
This change modifies the labels in the following ways:
`conduit.io/plane: control` => `conduit.io/controller-component: web`
`conduit.io/controller: conduit` => `conduit.io/controller-ns: conduit`
`conduit.io/plane: data` => (remove, redundant with `conduit.io/controller-ns`)
It also centralizes all k8s labels and annotations into
pkg/k8s/labels.go, and adds tests for the install command.
Part of #201
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The conduit repo includes several library projects that have since been
moved into external repos, including `tower-grpc` and `tower-h2`.
This change removes these vendored libraries in favor of using the new
external crates.
Response End events were only triggered after polling the trailers of
a response, but when the Response is given to a hyper h1 server, it
doesn't know about trailers, so they were never polled!
The fix is that the `BodyStream` glue will now poll the wrapped body for
trailers after it sees the end of the data, before telling hyper the
stream is over. This ensures a ResponseEnd event is emitted.
Includes a proxy telemetry test over h1 connections.
If Docker image tags were out of date, CI would not fail until the
docker-deploy stage (master merge).
Modify CI to validate tags as part of the default CI run.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The cargo commands in our docker and ci scripts were at risk for
modifying Cargo.lock and cache.
Using cargo's --frozen flag (and --locked during fetch) ensures our
build is consistent with what's defined across Cargo.toml, Cargo.lock,
and cached build artifacts.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Make Eos optional in TapEvent
In protobuf, grpc_status not being set is indistinguishable from being
set to zero, which is also status OK
Modify TapEvent to include an optional EOS struct
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Part of #198
* Add Eos to proto & proxy tap end-of-stream events
The proxy now outputs `Eos` instead of `grpc_status` in all end-of-stream tap events. The EOS value is set to `grpc_status_code` when the response ended with a `grpc_status` trailer, `http_reset_code` when the response ended with a reset, and no `Eos` when the response ended gracefully without a `grpc_status` trailer.
This PR updates the proxy. The proto and controller changes are in PR #204.
Part of #198. Closes #202
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The proxy will now try to detect what protocol new connections are
using, and route them accordingly. Specifically:
- HTTP/2 stays the same.
- HTTP/1 is now accepted, and will try to send an HTTP/1 request
to the target.
- If neither HTTP/1 nor 2, assume a TCP stream and simply forward
between the source and destination.
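A simplified sketch of the routing decision (the HTTP/2 client connection preface is a fixed byte string; the HTTP/1 check below is a crude method-prefix match, much cruder than the proxy's real detection):
```rust
const H2_PREFACE: &[u8] = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";

#[derive(Debug, PartialEq)]
enum Protocol {
    Http2,
    Http1,
    Tcp,
}

fn detect(peeked: &[u8]) -> Protocol {
    let h1_methods: [&[u8]; 4] = [b"GET ", b"POST ", b"PUT ", b"HEAD "];
    if peeked.starts_with(H2_PREFACE) {
        Protocol::Http2
    } else if h1_methods.iter().any(|&m| peeked.starts_with(m)) {
        Protocol::Http1
    } else {
        // Neither HTTP/1 nor HTTP/2: forward bytes as an opaque TCP stream.
        Protocol::Tcp
    }
}

fn main() {
    assert_eq!(detect(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n\x00"), Protocol::Http2);
    assert_eq!(detect(b"GET / HTTP/1.1\r\n"), Protocol::Http1);
    assert_eq!(detect(&[0x16, 0x03, 0x01]), Protocol::Tcp);
}
```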
* tower-h2: fix Server Clone bounds
* proxy: implement Async{Read,Write} extra methods for Connection
Closes #130. Closes #131.
Previously, proxy-deps and go-deps included the source tree for local
projects. This can cause build conflicts when files are renamed.
By adopting a multi-stage build for the proxy-deps image, we can be sure
that we only preserve essential dependencies & manifests in the
proxy-deps and go-deps images.
Furthermore, `bin/update-go-deps-shas` and `bin/update-proxy-deps-shas` have
been added to ease maintenance when files are changed.
Fixes #159
Signed-off-by: Oliver Gould <ver@buoyant.io>
As @seanmonstar noticed, the build script will currently re-compile all the protobufs regardless of whether or not they have changed, making the build much slower.
This PR modifies it to emit `cargo:rerun-if-changed=` for all the protobuf files, so they will only be regenerated if one of them changes.
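A sketch of the mechanism in a `build.rs` (the paths are illustrative; `cargo:rerun-if-changed` is the real cargo directive):
```rust
// build.rs sketch: emit one rerun-if-changed line per protobuf so cargo
// only reruns codegen when one of them actually changes.
fn main() {
    let protos = ["proto/common.proto", "proto/telemetry.proto"];
    for proto in &protos {
        println!("cargo:rerun-if-changed={}", proto);
    }
    // ... invoke the protobuf codegen here ...
}
```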
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
See #132. This PR adds a protocol field to the ClientTransport and ServerTransport messages, and modifies the proxy to report a value for this field (currently, it's only ever HTTP).
Currently, HTTP/1 and HTTP/2 are collapsed into one Protocol variant, see #132 (comment). I expect that we can treat H1 as a subset of H2 as far as metrics goes.
Note that after discussing it with @klingerf, I learned that the control plane telemetry API currently does not do anything with the ClientTransport and ServerTransport messages, so beyond regenerating the protobuf-generated code, no controller changes were actually necessary. As we actually add metrics to TCP transports, we'll want to make some additions to the telemetry API to ingest these metrics. If any metrics are shared between HTTP and raw TCP transports (say, bytes sent), we'll want to differentiate between them in Prometheus. All the metrics that the control plane currently ingests from telemetry reports are likely to be HTTP-specific (requests, responses, response latencies), or at least, do not apply to raw TCP.
Actually adding metrics to raw TCP transports will probably have to wait until there are raw TCP transports implemented in the proxy...
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The image tags for gcr.io/runconduit/go-deps and
gcr.io/runconduit/proxy-deps were not updating to account for all
changes in those images.
Modify SHA generation to include all files that affect the base
dependency images. Also add instructions to README.md for updating
hard-coded SHAs in Dockerfile's.
Fixes #115
Signed-off-by: Andrew Seigner <andrew@sig.gy>
Because whether or not to build a new deps image is based on the SHA of Cargo.lock, changes to the deps Dockerfile will not cause a new deps image to be built. Because of this, the current proxy deps Docker image is based on the wrong Rust version, breaking the build. See #115 for details on this issue.
I've appended a newline to Cargo.lock to change the lockfile's SHA and trigger a rebuild of the deps Docker image on CI. I've also added a comment in the Dockerfile noting that it is necessary to do this when changing that file.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
After merging #104, Conduit will not build against pre-1.23 Rust versions. This PR updates the Dockerfile to require this version. This should fix the build on master.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Since the methods on this trait were moved to direct implementations on the
implementing types, this produces an unused import warning with the latest
(1.23) Rust standard library. As we set `deny(warnings)`, this breaks the build.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Previously there was a default controller URL in the proxy. This
default was never used for any proxy injected by `conduit inject` and
it was the wrong default when using the proxy outside of Kubernetes.
Also more generally this is such an important setting in terms of
correctness and security that it was dangerous to let it be implied in
any context.
Remove the default, requiring that it be set in order for the proxy to
start.
* Proxy: Map unqualified/partially-qualified names to FQDN
Previously we required the service to fully qualify all service names
for outbound traffic. Many services are written assuming that
Kubernetes will complete names using its DNS search path, and those
services weren't working with Conduit.
Now add an option, enabled by default, to fully qualify domain names.
Currently only Kubernetes-like name completion for services is
supported, but the configuration syntax is open-ended to allow for
alternatives in the future. Also, the auto-completion can be disabled
for applications that prefer to ensure they're always using unambiguous
names. Once routing is implemented, it is likely that (default)
routing rules will replace these hard-coded rules.
Unit tests for the name completion logic are included.
Part of the solution for #9. The changes to `conduit inject` to
actually use this facility will be in another PR.
Previously `connection::Connection` was only being used for inbound
connections, not outbound connections. This led to some duplicate
logic and also made it difficult to adapt that code to enable TLS.
Now outbound connections use `connection::Connection` too. This will
allow the upcoming TLS logic to guarantee that `TCP_NODELAY` is
enabled at the right time, and to control access to the underlying
plaintext socket for security reasons.
Previously every use of `BoundPort` repeated a bunch of logic.
Move the repeated logic to `BoundPort` itself. Just remove the no-op
handshaking logic; new handshaking logic will be added to `BoundPort`
when TLS is added.
Previously the default value of this setting was in lib.rs instead of
being automatically set in `Config` like all the other defaults, which
was inconsistent and confusing.
Fix this by moving the defaulting logic to `Config`.
Validated by running the test suite.
Previously the logic related to listening for incoming TCP connections
was duplicated in several places.
Begin centralizing this logic. Future commits will centralize it
further.
No validation was done other than running the test suite.
Previously `Process` did its own environment variable parsing and did
not benefit from the improved error handling that `config` now has.
Additionally, future changes will need access to these same environment
variables in other parts of the proxy.
Move `Process`'s environment variable parsing to `config` to address
both of these issues. Now there are no uses of `env::var` outside of
`config` except for logging, which is the final desired state.
I validated this manually.
* Proxy: Use production config parsing in tests
Previously the testing code for the proxy was sensitive to the values
of environment variables unintentionally, because `Config` looked at
the environment variables. Also, the tests were largely avoiding
testing the production configuration parsing code since they were
doing their own parsing.
Now the tests avoid looking at environment variables other than
`ENV_LOG`, which makes them more resilient. Also, the tests now parse
the settings using the same code as production uses.
I validated this manually.
Previously, as soon as we encountered one environment variable with
an invalid value, we would exit. This is frustrating behavior when
deploying to Kubernetes and there are multiple problems, because the
edit-compile-test cycle is so slow.
Fix this by parsing all the environment variables and logging error
messages before exiting.
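A sketch of the parse-everything-then-fail approach (variable and helper names here are illustrative, not the proxy's actual config code):
```rust
use std::env;

// Parse one u64 setting, recording (rather than returning) any error so
// a single deploy surfaces every misconfigured variable at once.
fn parse_u64(key: &str, errors: &mut Vec<String>) -> Option<u64> {
    match env::var(key) {
        Ok(v) => match v.parse() {
            Ok(n) => Some(n),
            Err(_) => {
                errors.push(format!("{}: invalid value {:?}", key, v));
                None
            }
        },
        Err(_) => None, // unset is fine; a default applies
    }
}

fn main() {
    let mut errors = Vec::new();
    let _timeout = parse_u64("CONDUIT_PROXY_BIND_TIMEOUT", &mut errors);
    let _limit = parse_u64("CONDUIT_PROXY_SOME_LIMIT", &mut errors);
    if !errors.is_empty() {
        for e in &errors {
            eprintln!("config error: {}", e);
        }
        std::process::exit(64);
    }
}
```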
I validated this manually.