This branch adds buckets for latencies below 10 ms to the proxy's latency
histograms, and removes the buckets for 100, 200, 300, 400, and 500
seconds, so the largest non-infinity bucket is 50,000 ms. It also removes
comments that claimed that these buckets were the same as those created
by the control plane, as this is no longer true (the metrics are now scraped
by Prometheus from the proxy directly).
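For illustration, the resulting bucket layout might look roughly like the sketch below. The sub-10 ms bounds shown are hypothetical; the actual values are whatever this branch defines.
```
// Sketch only: illustrative latency bucket bounds, in milliseconds.
const LATENCY_BUCKET_BOUNDS_MS: &[f64] = &[
    1.0, 2.0, 3.0, 4.0, 5.0,                          // new fine-grained buckets below 10 ms
    10.0, 20.0, 30.0, 40.0, 50.0,
    100.0, 200.0, 300.0, 400.0, 500.0,
    1_000.0, 2_000.0, 3_000.0, 4_000.0, 5_000.0,
    10_000.0, 20_000.0, 30_000.0, 40_000.0, 50_000.0, // largest non-infinity bucket
    // Prometheus histograms add an implicit +Inf bucket after the last bound.
];
```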
Closes #1208
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Update dest service with a different tls identity strategy
* Send controller namespace as separate field
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
PR #1128 introduced new proxy process stats.
Introduce Grafana graphs that expose these new proxy process stats.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
#1203 introduced a bug in the implementation of `Future` for
`connection::ConditionallyUpgradeServerToTls`. If the attempt to match
the current peek buffer was incomplete, the `Future` implementation
would return `Ok(Async::NotReady)`. This results in the task yielding.
However, in this case the task would not be notified again, as the
`NotReady` state wasn't from an underlying IO resource. Instead, the task
would _never_ be ready.
This branch fixes this issue by simply continuing the loop, so that
we instead try to read more bytes from the socket and try to match
again, until the match is successful or the _socket_ returns `NotReady`.
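As a rough sketch of the shape of the fix (hypothetical names, not the proxy's actual types), the poll loop now keeps reading on an incomplete match, and only yields when the socket itself is not ready:
```
use futures::{try_ready, Async, Poll};
use std::io;

enum Match { Matched, NotTls, Incomplete }

trait PeekSocket {
    fn poll_peek(&mut self, buf: &mut Vec<u8>) -> Poll<usize, io::Error>;
}

fn poll_detect<S: PeekSocket>(socket: &mut S, buf: &mut Vec<u8>) -> Poll<Match, io::Error> {
    loop {
        match try_match_client_hello(buf) {
            Match::Incomplete => {
                // Previously this arm returned Ok(Async::NotReady) with nothing
                // registered to wake the task. Instead, read more bytes:
                // `try_ready!` only yields NotReady when the *socket* does, and
                // the socket will notify the task when it becomes readable.
                let n = try_ready!(socket.poll_peek(buf));
                if n == 0 {
                    return Err(io::ErrorKind::UnexpectedEof.into());
                }
            }
            decided => return Ok(Async::Ready(decided)),
        }
    }
}

// Placeholder for the real ClientHello matcher.
fn try_match_client_hello(_buf: &[u8]) -> Match {
    Match::Incomplete
}
```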
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
When the proxy receives a `CONNECT` request, the HTTP Upgrade pieces
are used since a CONNECT is very similar to an Upgrade. If the CONNECT
response back from the proxied client request is successful, the
connection is converted into a TCP proxy, just like with Upgrades.
There are currently two issues which can lead to false positives (changes being
reported when files have not actually changed) in the polling-based filesystem
watch implementation.
The first issue is that when checking each watched file for changes, the loop
iterating over each path currently short-circuits as soon as it detects a
change. This means that if two or more files have changed, the first time we
poll the fs, we will see the first change, then if we poll again, we will see
the next change, and so on.
This branch fixes that issue by always hashing all the watched files, even if a
change has already been detected. This way, if several files change between one
poll and the next, we generate a single change event for that poll, and no
additional events until a file actually changes again.
The other issue is that the old implementation would treat any instance of a
"file not found" error as indicating that the file had been deleted, and
generate a change event. This leads to changes repeatedly being detected as
long as a file does not exist, rather than a single time when the file's
existence state actually changes.
This branch fixes that issue as well, by only generating change events on
"file not found" errors if the file existed the last time it was polled.
Otherwise, if a file did not previously exist, we no longer generate a new
event.
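A simplified sketch of the two fixes together (hypothetical code, using file length as a stand-in for the real content hash):
```
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;

// Returns true if any watched file changed since the last poll. Every path is
// checked on every poll (no short-circuit), and a missing file only counts as
// a change if it existed the previous time we looked.
fn poll_for_changes(
    files: &[PathBuf],
    last_seen: &mut HashMap<PathBuf, Option<u64>>,
) -> bool {
    let mut changed = false;
    for path in files {
        // Stand-in fingerprint; the real implementation hashes file contents.
        let current = fs::metadata(path).ok().map(|m| m.len());
        let previous = last_seen.insert(path.clone(), current);
        match (previous, current) {
            // Seen before and the fingerprint (including existence) differs.
            (Some(prev), cur) if prev != cur => changed = true,
            // Never seen before and it exists now: record it as a change.
            (None, Some(_)) => changed = true,
            _ => {}
        }
    }
    changed
}
```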
I've verified both of these fixes through manual testing, as well as a new
test for the second issue. The new test fails on master but passes on this
branch.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
On the server (accept) side of TLS, if the traffic isn't targeting the
proxy (as determined by the TLS ClientHello SNI), or if the traffic
isn't TLS, then pass it through.
Signed-off-by: Brian Smith <brian@briansmith.org>
- The current test setup requires a NODE_ENV variable to be set for the tests to work; this is not yet documented, so following the test docs as written will cause the tests to fail.
- The env is set either through a newly added test script or by setting it manually.
- This commit fixes the documentation.
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
Copy most of the implementation of `connection::Connection` to create
a way to prefix a `TcpStream` with some previously-read bytes. This
will allow us to read and parse a TLS ClientHello message to see if it
is intended for the proxy to process, and then "rewind" and feed it
back into the TLS implementation if so.
This must be in the `transport` submodule in order for it to implement
the private `Io` trait.
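The core idea, sketched here with a hypothetical blocking `Read` wrapper rather than the proxy's async `Io` types:
```
use std::io::{self, Read};

// Serves previously-read bytes first, then reads from the underlying socket.
struct Prefixed<S> {
    prefix: Vec<u8>,
    pos: usize,
    io: S,
}

impl<S: Read> Read for Prefixed<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.pos < self.prefix.len() {
            // "Rewind": hand back the bytes we already peeked at.
            let remaining = &self.prefix[self.pos..];
            let n = remaining.len().min(buf.len());
            buf[..n].copy_from_slice(&remaining[..n]);
            self.pos += n;
            return Ok(n);
        }
        self.io.read(buf)
    }
}
```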
Signed-off-by: Brian Smith <brian@briansmith.org>
* Proxy: Add parser to distinguish proxy TLS traffic from other traffic.
Distinguish incoming TLS traffic intended for the proxy to terminate
from TLS traffic intended for the proxied service to terminate and from
non-TLS traffic.
The new version of `untrusted` is required for this to work.
Signed-off-by: Brian Smith <brian@briansmith.org>
* More tests
Signed-off-by: Brian Smith <brian@briansmith.org>
* Stop abusing `futures::Async`.
Signed-off-by: Brian Smith <brian@briansmith.org>
As the TLS client config watch stored in `ctx::Process` is used only in
`Bind`, it's not necessary for it to be part of the process context.
Instead, it can be explicitly passed into `Bind`.
The resultant code is simpler, and resolves a potential cyclic
dependency caused when adding `Sensors` to the watch (see
https://github.com/runconduit/conduit/pull/1141#issuecomment-400082357).
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Add Sidebar links to Pods, Deployments, and Replication Controllers
In #1016 we removed the sidebar links to individual resource pages in favour of a namespace
page that lists all resources. These resource pages require no additional code so they're still
in our UI (accessible under /pods, /deployments etc), just not easily findable. I find them
useful to check when in development mode, or when debugging something, so I'd like to
re-add links.
If we don't want them in the sidebar permanently, we can gate them behind `NODE_ENV=development`.
This branch adds the rebinding logic added to outbound clients in #1185
to the controller client used in the proxy's `control::destination::background`
module. Now, if we are communicating with the control plane over TLS, we will
rebind the controller client stack if the TLS client configuration changes,
using the `WatchService` added in #1177.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Signed-off-by: Brian Smith <brian@briansmith.org>
Co-authored-by: Brian Smith <brian@briansmith.org>
control/mod.rs contains a variety of miscellaneous utilities. In
preparation for adding other types into the root of `control`, this
change creates a `control::util` module that holds them.
Rearrange the TLS configuration loading tests to enable them to be
extended outside the tls::config submodule.
Signed-off-by: Brian Smith <brian@briansmith.org>
Simplify the code and make it easier to report finer-grained
reasoning about what part(s) of the TLS configuration are
missing.
This is based on Eliza's PR #1186.
Signed-off-by: Brian Smith <brian@briansmith.org>
This branch adds process stats to the proxy's metrics, as described in
https://prometheus.io/docs/instrumenting/writing_clientlibs/#process-metrics.
In particular, it adds metrics for the process's total CPU time, number of
open file descriptors and max file descriptors, virtual memory size, and
resident set size.
This branch adds a dependency on the `procinfo` crate. Since this crate and the
syscalls it wraps are Linux-specific, these stats are only reported on Linux.
On other operating systems, they aren't reported.
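A sketch of how the Linux-only reporting might be gated; `procinfo::pid::stat_self()` is the kind of call the crate exposes, but the formatting and page-size handling here are illustrative:
```
#[cfg(target_os = "linux")]
fn write_process_memory_metrics(out: &mut String) {
    use std::fmt::Write;
    if let Ok(stat) = procinfo::pid::stat_self() {
        // `rss` is reported in pages; convert to bytes. The real code reads
        // the page size from the system rather than assuming 4096.
        let page_size: u64 = 4096;
        let _ = writeln!(out, "process_virtual_memory_bytes {}", stat.vsize);
        let _ = writeln!(out, "process_resident_memory_bytes {}", stat.rss as u64 * page_size);
    }
}

#[cfg(not(target_os = "linux"))]
fn write_process_memory_metrics(_out: &mut String) {
    // Not supported off Linux; report nothing.
}
```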
Manual testing
Metrics scrape:
```
eliza@ares:~$ curl http://localhost:4191/metrics
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 19
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 45252608
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 12132352
# HELP process_start_time_seconds Time that the process started (in seconds since the UNIX epoch)
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1529017536
```
Note that the `process_cpu_seconds_total` stat is 0 because I just launched this conduit instance and it's not seeing any load; it does go up after I send a few requests to it.
Confirm RSS & virtual memory stats w/ `ps`, and get Conduit's pid so we can check the fd stats
(note that `ps` reports virt/rss in kB while Conduit's metrics report them in bytes):
```
eliza@ares:~$ ps aux | grep conduit | grep -v grep
eliza 16766 0.0 0.0 44192 12956 pts/2 Sl+ 16:05 0:00 target/debug/conduit-proxy
```
Count conduit process's open fds:
```
eliza@ares:~$ cd /proc/16766/fd
eliza@ares:/proc/16766/fd$ ls -l | wc -l
18
```
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This branch changes the proxy's `Bind` module to add a middleware layer
which watches for TLS client configuration changes and rebinds the
endpoint stacks of any endpoints with which it is able to communicate over
TLS (i.e. those with `TlsIdentity` metadata) when the client config changes. The
rebinding is done at the level of individual endpoint stacks, rather than for the
entire service stack for the destination.
This obsoletes my previous PRs #1169 and #1175.
Closes #1161
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
WatchService is a middleware that rebinds its inner service
each time a Watch updates.
This is planned to be used to rebind endpoint stacks when TLS
configuration changes. Later, it should probably be moved into
the tower repo.
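A condensed sketch of the idea (not the real implementation, which is a proper `Service` middleware): hold onto the last-seen value from the watch, and rebuild the inner service whenever it changes.
```
use std::sync::{Arc, Mutex};

// Stand-in for the real Watch: a shared (version, value) cell.
struct WatchService<T, S, F> {
    watch: Arc<Mutex<(u64, T)>>,
    seen_version: u64,
    rebind: F,
    inner: S,
}

impl<T: Clone, S, F: FnMut(&T) -> S> WatchService<T, S, F> {
    // Called before dispatching each request: if the watched value has
    // changed since we last looked, build a fresh inner service from it.
    fn poll_rebind(&mut self) {
        let (version, value) = {
            let guard = self.watch.lock().unwrap();
            (guard.0, guard.1.clone())
        };
        if version != self.seen_version {
            self.inner = (self.rebind)(&value);
            self.seen_version = version;
        }
    }
}
```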
PR #978 introduced usage of parallel in docker-build. Unfortunately this
breaks if the system has non-GNU parallel.
Remove usage of parallel until we can do at least one of the following:
- detect version of parallel installed
- make usage of parallel optional and off by default
- confirm this speeds up builds for a majority of use cases
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add CA certificate bundle distributor to conduit install
* Update ca-distributor to use shared informers
* Only install CA distributor when --enable-tls flag is set
* Only copy CA bundle into namespaces where inject pods have the same controller
* Update API config to only watch pods and configmaps
* Address review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
While investigating TLS configuration, I found myself wanting a
docstring on `tls::config::watch_for_config_changes`.
This has one minor change in functionality: now, `future::empty()`
is returned instead of `future::ok(())` so that the task never completes.
It seems that, ultimately, we'll want to treat it as an error if we lose
the ability to receive configuration updates.
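A minimal sketch of the change in return value, assuming the futures 0.1 API in use at the time:
```
use futures::{future, Future};

// When there is no TLS configuration to watch, keep the task alive forever
// instead of letting it complete immediately.
fn no_updates_task() -> Box<dyn Future<Item = (), Error = ()> + Send> {
    // Previously the equivalent path returned `future::ok(())`.
    Box::new(future::empty())
}
```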
* Proxy: Implement TLS conditional accept more like TLS conditional connect.
Clean up the accept side of the TLS configuration logic.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Add controller admin servers and readiness probes
* Tweak readiness probes to be more sane
* Refactor based on review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Any HTTP/1.1 request seen by the proxy is automatically set up so that,
if the proxied response agrees to an upgrade, the two connections are
converted into a standard TCP proxy duplex.
Implementation
-----------------
This adds a new type, `transparency::Http11Upgrade`, which is a sort of rendezvous type for triggering HTTP/1.1 upgrades. In the h1 server service, if a request looks like an upgrade (`h1::wants_upgrade`), the request body is decorated with this new `Http11Upgrade` type. It is actually a pair, and so the second half is put into the request extensions, so that the h1 client service may look for it right before serialization. If it finds the half in the extensions, it decorates the *response* body with that half (if it looks like a response upgrade (`h1::is_upgrade`)).
The `HttpBody` type now has a `Drop` impl, which will look to see if it has been decorated with an `Http11Upgrade` half. If so, it will check for hyper's new `Body::on_upgrade()` future, and insert that into the half.
When both `Http11Upgrade` halves are dropped, the internal `Drop` will check whether both halves have supplied an upgrade. If so, the two `OnUpgrade` futures from hyper are joined, and when they succeed, a `transparency::tcp::duplex()` future is created. This chain is spawned onto the default executor.
The `drain::Watch` signal is carried along, to ensure upgraded connections still count towards active connections when the proxy wants to shutdown.
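Greatly simplified sketch of the rendezvous mechanism (hypothetical names; the real halves carry hyper's `OnUpgrade` futures and the drain signal):
```
use std::sync::{Arc, Mutex};

// Shared rendezvous state: each half may deposit its upgraded-IO handle here.
struct Shared<T> {
    server: Option<T>,
    client: Option<T>,
}

struct UpgradeHalf<T> {
    is_server: bool,
    shared: Arc<Mutex<Shared<T>>>,
}

impl<T> UpgradeHalf<T> {
    fn insert(&self, handle: T) {
        let mut shared = self.shared.lock().unwrap();
        if self.is_server {
            shared.server = Some(handle);
        } else {
            shared.client = Some(handle);
        }
    }
}

impl<T> Drop for UpgradeHalf<T> {
    fn drop(&mut self) {
        // Only the last half to be dropped sees both handles and can trigger
        // the upgrade. In the real code this spawns a future that joins
        // hyper's two OnUpgrade futures and then runs a TCP duplex over them.
        if Arc::strong_count(&self.shared) == 1 {
            let mut shared = self.shared.lock().unwrap();
            if let (Some(_server_io), Some(_client_io)) =
                (shared.server.take(), shared.client.take())
            {
                // spawn(duplex(_server_io, _client_io))
            }
        }
    }
}
```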
Closes #195
This adds `Io::write_buf_erased`, which doesn't require `Self: Sized`, so
it can be called on trait objects. By using this method, specialized
methods of `TcpStream` (and others) can use their `write_buf` to do
vectored writes.
Since it can be easy to forget to call `Io::write_buf_erased` instead of
`Io::write_buf`, the concept of making a `Box<Io>` has been made
private. A new type, `BoxedIo`, implements all the super traits of `Io`,
while making the `Io` trait private to the `transport` module. Anything
hoping to use a `Box<Io>` can use a `BoxedIo` instead, and know that
the write buf erase dance is taken care of.
Adds a test to `transport::io` checking that the dance we've done does
indeed call the underlying specialized `write_buf` method.
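Sketched shape of the trait change (trimmed, with hypothetical details):
```
use bytes::Buf;
use std::io::{self, Cursor};

trait Io {
    // Generic over the buffer type, so it needs `Self: Sized` and cannot be
    // called through a trait object such as `Box<Io>`.
    fn write_buf<B: Buf>(&mut self, buf: &mut B) -> io::Result<usize>
    where
        Self: Sized;

    // Object-safe twin taking a concrete buffer type; callable on trait
    // objects. Each concrete impl forwards it to its own `write_buf`,
    // preserving specialized (e.g. vectored) writes.
    fn write_buf_erased(&mut self, buf: &mut Cursor<Vec<u8>>) -> io::Result<usize>;
}

// The public wrapper: holds the trait object and keeps the `Io` trait private
// to the module, so callers can't accidentally reach the non-erased method.
struct BoxedIo(Box<dyn Io + Send>);
```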
Closes #1162
* Enable optional parallel build of docker images
By default, docker does image builds in a single thread. For our containers, this is a little slow on my system. Using `parallel` allows for *optional* improvements in speed there.
Before: 41s
After: 22s
* Move parallel help text to stderr
Don't allow the CLI or Web UI to request named resources if --all-namespaces is used.
This follows kubectl, which also does not allow requesting named resources
over all namespaces.
This PR also updates the Web API's behaviour to be in line with the CLI's.
Both will now default to the default namespace if no namespace is specified.
* Proxy: More carefully keep track of the reason TLS isn't used.
There is only one case where we dynamically don't know whether we'll
have an identity to construct a TLS connection configuration. Refactor
the code with that in mind, better documenting all the reasons why an
identity isn't available.
Signed-off-by: Brian Smith <brian@briansmith.org>
Move TLS cipher suite configuration to tls::config.
Use the same configuration to act as a client and a server.
Signed-off-by: Brian Smith <brian@briansmith.org>
Problem
`conduit stat` would cause a panic for any resource that wasn't in the list
of StatAllResourceTypes
This bug was introduced by https://github.com/runconduit/conduit/pull/1088/files
Solution
Fix writeStatsToBuffer to not depend on what resources are in StatAllResourceTypes
Also adds a unit test and integration test for `conduit stat ns`
* dest service: close open streams on shutdown
* Log instead of print in pkg packages
* Convert ServerClose to a receive-only channel
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The comments in Outbound::recognize had become somewhat stale as the
logic changed. Furthermore, this implementation may be easier to
understand if broken into smaller pieces.
This change reorganizes the `Outbound::recognize` method into helper
methods (`destination`, `host_port`, and `normalize`), each with
accompanying docstrings that more accurately reflect the current
implementation.
This also has the side-effect benefit of eliminating a string clone on
every request.
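Purely illustrative skeleton of the resulting shape (the real helpers take the proxy's own request and name types, and the docstrings in the code are more precise):
```
struct Outbound;

impl Outbound {
    /// Decide where a request should be sent.
    fn destination(&self /*, req: &Request */) { /* ... */ }

    /// Extract a host/port pair from the request.
    fn host_port(&self /*, req: &Request */) { /* ... */ }

    /// Turn a host into a fully-qualified name where possible.
    fn normalize(&self /*, host: &str */) { /* ... */ }
}
```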
- If error messages are very long, truncate them and display a toggle to show the full message
- Tweak the headings: remove Pod, Container and Image; instead show them as titles
- Also move over from using Ant's Modal.method to the plain Modal component, which is a
little simpler to hook into our other renders.
Depends on #1047.
This PR adds a `tls="true"` label to metrics produced by TLS connections and
requests/responses on those connections, and a `tls="no_config"` label on
connections where TLS was enabled but the proxy has not been able to load
a valid TLS configuration.
Currently, these labels are only set on accepted connections, as we are not yet
opening encrypted connections, but I wired through the `tls_status` field on
the `Client` transport context as well, so when we start opening client
connections with TLS, the label will be applied to their metrics as well.
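A rough sketch of how such a label might be rendered, assuming a `TlsStatus`-like enum; the type and variant names here are illustrative:
```
use std::fmt;

enum TlsStatus {
    Tls,
    NoConfig,
}

impl fmt::Display for TlsStatus {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            TlsStatus::Tls => f.write_str("tls=\"true\""),
            TlsStatus::NoConfig => f.write_str("tls=\"no_config\""),
        }
    }
}
```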
Closes #1046
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Proxy: Make TLS server aware of its own identity.
When validating the TLS configuration, make sure the certificate is
valid for the current pod. Make the pod's identity available at that
point in time so it can do so. Since the identity is available now,
simplify the validation of our own certificate by using Rustls's API
instead of dropping down to the lower-level webpki API.
This is a step towards the server differentiating between TLS
handshakes it is supposed to terminate vs. TLS handshakes it is
supposed to pass through.
This is also a step toward the client side (connect) of TLS, which will
reuse much of the configuration logic.
Signed-off-by: Brian Smith <brian@briansmith.org>