Refactor `conduit inject` code to make it unit-testable.
Refactor the conduit inject code to make it easier to add unit tests. This work was done by @deebo91 in #365. This is the same PR without the conduit install changes, so that it can land ahead of #365. In particular, this will be used for testing the fix for high-priority bug #366.
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Signed-off-by: Brian Smith <brian@briansmith.org>
File sizes (in bytes) before and after this change:
          conduit-darwin  conduit-linux  conduit-windows
Before:       27,056,288     27,282,364       27,359,744
After:        20,023,456     18,080,576       18,262,528
---------------------------------------------------------
Diff:          7,032,832      9,201,788        9,097,216
Fixes #352.
Signed-off-by: Brian Smith <brian@briansmith.org>
The instance cache that powers the ListPods API is stored in memory in the telemetry service. This means that when there are multiple replicas of the telemetry service, each replica has a distinct, incomplete view of the added pods, based on which pods report to that replica. As a result, some of the data plane bubbles on the dashboard are left unfilled, and they flicker with each data refresh.
We create a Prometheus counter called reports_total which has pod as a label. Whenever a telemetry service instance receives a report from a pod, it increments reports_total for that pod. This allows us to remove the in-memory instance cache and instead query Prometheus to see if each pod has had a report in the last 30 seconds.
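As a rough sketch of the counter half of this (using prometheus/client_golang; the names and help text here are illustrative, not the repo's actual code):

```go
package telemetry

import "github.com/prometheus/client_golang/prometheus"

// reportsTotal counts telemetry reports received, labeled by pod, so that
// pod liveness can be derived from Prometheus instead of in-process state.
var reportsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "reports_total",
		Help: "Total number of telemetry reports received, by pod.",
	},
	[]string{"pod"},
)

func init() {
	prometheus.MustRegister(reportsTotal)
}

// recordReport is called whenever this telemetry instance receives a
// report from a proxy.
func recordReport(podName string) {
	reportsTotal.With(prometheus.Labels{"pod": podName}).Inc()
}
```

A pod's liveness can then be checked with a PromQL expression along the lines of `increase(reports_total{pod="foo"}[30s]) > 0`.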
Fixes #337
Signed-off-by: Alex Leong <alex@buoyant.io>
In PR #298 we moved time window parsing (e.g. 10s => (time.now - 10s,
time.now)) down the stack to immediately before the query. This had the
unintended effect of creating parallel latency quantile requests with
slightly different timestamps.
This change parses the time window prior to latency quantile fan out,
ensuring all requests have the same timestamp.
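Illustratively, the parse-once step might look like this (a minimal Go sketch; the function name and signature are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

// parseWindow resolves a window string like "10s" into one (start, end)
// pair. Computing "now" exactly once, before the fan-out, means every
// latency quantile request shares the same timestamps.
func parseWindow(window string, now time.Time) (time.Time, time.Time, error) {
	d, err := time.ParseDuration(window)
	if err != nil {
		return time.Time{}, time.Time{}, err
	}
	return now.Add(-d), now, nil
}

func main() {
	start, end, err := parseWindow("10s", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println(start, end)
}
```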
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Test the proxy in release mode in Docker in CI on the master branch.
Previously we were not running the proxy tests in the release configuration.
Run the proxy tests in the release configuration through Docker.
Docker builds with tests in release mode are too slow to run on every
pull request, so release-mode tests will only be run on the master
branch.
Signed-off-by: Brian Smith <brian@briansmith.org>
PR #298 moved summary (non-timeseries) requests to Prometheus' Query
endpoint, with no timestamp provided. This Query endpoint returns a
single data point with whatever timestamp was provided in the request.
In the absence of a timestamp, it uses current server time. This causes
the Public API to return discrete data points with slightly different
timestamps, which is unexpected behavior.
Modify the Public API -> Telemetry -> Prometheus request path to always
require a timestamp for single data point requests.
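With prometheus/client_golang's v1 API, pinning the timestamp looks roughly like this (recent client versions also return warnings; older ones return only two values):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://127.0.0.1:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := promv1.NewAPI(client)

	// Pin one timestamp for the whole request path so Prometheus evaluates
	// every single-point query at the same instant, instead of at whatever
	// "now" happens to be when each query arrives.
	ts := time.Now()
	val, _, err := promAPI.Query(context.Background(), `sum(requests_total)`, ts)
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```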
Fixes #340
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
On my system (i9-7960x running Docker natively in Linux) this regularly saves
over 11 seconds of build time when a file under pkg/ changes and over 1.5
seconds of build time when a file under controller/ changes. Since most
contributors are running Docker in a VM on less powerful computers, the
savings for most contributors should be significantly greater.
I imagine the savings for web/, cli/, and proxy-init/ are similar, but I
did not measure them.
Signed-off-by: Brian Smith <brian@briansmith.org>
Conduit has been on Prometheus 1.8.1. Prometheus 2.x promises better
performance.
Upgrade Conduit to Prometheus 2.1.0
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Precompiling pkg/ in an earlier layer saves ~10 seconds of wall clock
time on an incremental build on my machine (i9-7960x) when I update a
file in controller/ such as controller/destination/server.go. This
makes a significant difference in the edit-build-test loop.
Signed-off-by: Brian Smith <brian@briansmith.org>
Previously Dockerfile-go-deps would run `dep ensure` whenever anything in the
source tree changed. Also, because it was a multi-stage Dockerfile, it did not
work well with Docker's `--cache-from` feature.
Change Dockerfile-go-deps to only re-run `dep ensure` when Gopkg.{toml,lock}
and/or bin/dep change. Simplify it to a single stage so that it works better
with Docker's `--cache-from` feature.
Signed-off-by: Brian Smith <brian@briansmith.org>
bin/dep verifies the digest of the downloaded `dep` executable, whereas
previously Dockerfile-go-deps didn't.
Signed-off-by: Brian Smith <brian@briansmith.org>
UI cleanups. Remove repetitive labels in the UI, remove unused elements,
remove graphs until we improve their utility.
- remove “Deployment” from the headers of the Deployment Detail Page
- remove Routes in sidebar
- kill leftmost 100px of sidebar
- remove the word "controller" from the first table on the service mesh page
- add Twitter, GitHub, and Slack links
- kill the graphs, replace with one large header (request rate, success rate, latency top bar)
- put upstream/downstream diagram before the upstream/downstream tables
* Clean up DeploymentList page (#321)
- remove "Most active deployments" graphs from the Deployments List page
- remove the scatterplot sections of the page as I don't think we'll be using them for a while
The Public API's Stat endpoint copies a slice of values to a slice of
pointers prior to the gRPC response. Go's range clause re-uses the same
loop variable on each iteration, so taking its address yields the same
pointer every time, turning a slice of {1,2,3} into {3,3,3}.
Fix the range loop to take pointers directly into the slice of
values, ignoring the range variable. Also add tests to catch this case.
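A self-contained illustration of the bug and the fix (note this relies on the pre-Go 1.22 loop-variable semantics that were current at the time):

```go
package main

import "fmt"

type Row struct{ V int }

func main() {
	rows := []Row{{1}, {2}, {3}}

	// Buggy: r is a single variable reused across iterations (pre-Go 1.22),
	// so every appended pointer aliases it and the result is {3} {3} {3}.
	var buggy []*Row
	for _, r := range rows {
		buggy = append(buggy, &r)
	}

	// Fixed, as described above: point directly into the slice of values,
	// ignoring the range variable.
	var fixed []*Row
	for i := range rows {
		fixed = append(fixed, &rows[i])
	}

	fmt.Println(*buggy[0], *buggy[1], *buggy[2]) // {3} {3} {3} on Go <1.22
	fmt.Println(*fixed[0], *fixed[1], *fixed[2]) // {1} {2} {3}
}
```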
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Removed the `method` label from Prometheus, and removed HTTP methods from reports. Removed `StreamSummary` from reports and replaced it with a `u32` count of streams.
Closes #266
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
All requests from the public API service to the Telemetry service were
done serially. In some cases a single request to the public API's Stat
endpoint resulted in 5 serial requests to the Telemetry service.
Make all requests from the Public API to Telemetry concurrent.
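The fan-out pattern looks roughly like this (a sketch using golang.org/x/sync/errgroup; the real code may be structured differently):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// queryTelemetry stands in for one RPC from the Public API to Telemetry.
func queryTelemetry(ctx context.Context, q string) (string, error) {
	return "result for " + q, nil
}

func main() {
	queries := []string{"requests", "success_rate", "p50", "p95", "p99"}
	results := make([]string, len(queries))

	g, ctx := errgroup.WithContext(context.Background())
	for i, q := range queries {
		i, q := i, q // per-iteration copies (needed before Go 1.22)
		g.Go(func() error {
			r, err := queryTelemetry(ctx, q)
			if err != nil {
				return err
			}
			results[i] = r // each goroutine writes a distinct index
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
	fmt.Println(results)
}
```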
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Part of #299
Follow-up from #315.
Now that the UIs don't report per-path metrics, we can remove the path label from Prometheus, the path aggregation and filtering options from the telemetry API, and the path field from the proxy report API.
I've modified the tests to no longer expect the removed fields, and manually verified that Conduit still works after making these changes.
Closes #265
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
I've removed per-path metrics from the web dashboard and from the `conduit stat` command.
Manually validated that these metrics are no longer displayed.
Closes #263
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Prometheus queries from the Telemetry service were taking seconds or
tens of seconds.
Optimize these queries:
- Move all summary queries that need a single data point off of Prometheus'
  QueryRange() endpoint and onto Query()
- Set `defaultVectorRange` to 30s, and use it regardless of the time
  window
Also add tests for grpc_server and telemetry server
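For illustration, a single-point summary query built with the fixed vector range might look like this (metric and label names are invented for the example):

```go
package main

import "fmt"

// defaultVectorRange is the range-vector window baked into rate queries,
// used regardless of the requested time window.
const defaultVectorRange = "30s"

// requestRateQuery builds a query suitable for Prometheus' Query()
// endpoint, which returns a single data point.
func requestRateQuery(deployment string) string {
	return fmt.Sprintf(`sum(irate(requests_total{deployment=%q}[%s]))`,
		deployment, defaultVectorRange)
}

func main() {
	fmt.Println(requestRateQuery("emoji"))
}
```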
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #260
This PR updates the web UI to remove the pod detail page, and to remove the links to that page from pod names in metrics tables. It also removes the `pods` option from `conduit stat`, and the `sourcePod` and `targetPod` fields from the controller API proto's `MetricMetadata` message.
I've updated the `conduit stat` tests to reflect these changes, and manually verified the web UI changes.
Closes #261
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Travis CI now installs Docker 17.09 or later, which is good enough for
us, so avoid installing Docker manually.
Signed-off-by: Brian Smith <brian@briansmith.org>
The proxy currently stores latency values in an `OrderMap` and reports every observed latency value to the controller's telemetry API since the last report. The telemetry API then sends each individual value to Prometheus. This doesn't scale well when there are a large number of proxies making reports.
I've modified the proxy to use a fixed-size histogram that matches the histogram buckets in Prometheus. Each report now includes an array indicating the histogram bounds, and each response scope contains a set of counts corresponding to each index in the bounds array, indicating the number of times a latency in that bucket was observed. The controller then reports the upper bound of each bucket to Prometheus, and can use the proxy's reported set of bucket bounds so that the observed values will be correct even if the bounds in the control plane are changed independently of those set in the proxy.
I've also modified `simulate-proxy` to generate the new report structure, and added tests in the proxy's telemetry test suite validating the new behaviour.
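The proxy itself is written in Rust; this Go sketch only illustrates the bucketing idea, and the bounds shown are invented:

```go
package main

import "fmt"

// latencyBounds mirrors the fixed upper bounds (ms) of the Prometheus
// histogram buckets; the proxy's actual bounds differ.
var latencyBounds = []uint32{1, 5, 10, 50, 100, 500, 1000, 5000, 10000}

// Histogram keeps one count per bucket: counts[i] is how many observed
// latencies fell at or below latencyBounds[i] (and above the bound before).
type Histogram struct {
	counts []uint32
}

func NewHistogram() *Histogram {
	return &Histogram{counts: make([]uint32, len(latencyBounds))}
}

// Observe increments a bucket count instead of recording the raw value,
// keeping every report a fixed size no matter how many requests occurred.
func (h *Histogram) Observe(ms uint32) {
	for i, bound := range latencyBounds {
		if ms <= bound {
			h.counts[i]++
			return
		}
	}
	h.counts[len(h.counts)-1]++ // overflow lands in the last bucket
}

func main() {
	h := NewHistogram()
	h.Observe(3)
	h.Observe(42)
	h.Observe(99999)
	fmt.Println(h.counts)
}
```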
* Consolidate latency, success, request metrics into one tab
on the SortableMetricsTable
- removes sparklines from the table
- makes tables sortable by default
- move pod table in DeploymentDetail to its own row
* remove request distribution column, reorder columns
Currently, the conduit proxy uses a simplistic Round-Robin load
balancing algorithm. This strategy degrades severely when individual
endpoints exhibit abnormally high latency.
This change improves this situation somewhat by making the load balancer
aware of the number of outstanding requests to each endpoint. When nodes
exhibit high latency, they should tend to have more pending requests
than faster nodes, so the Power-of-Two-Choices node selector can
distribute requests to less-loaded instances.
From the Finagle guide:
The algorithm randomly picks two nodes from the set of ready endpoints
and selects the least loaded of the two. By repeatedly using this
strategy, we can expect a manageable upper bound on the maximum load of
any server.
The maximum load variance between any two servers is bound by
`ln(ln(n))`, where `n` is the number of servers in the cluster.
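The selection step is small; here is a Go sketch of the idea (the proxy's real implementation is in Rust and more involved):

```go
package main

import (
	"fmt"
	"math/rand"
)

// endpoint tracks how many requests are currently outstanding.
type endpoint struct {
	addr    string
	pending int
}

// pickP2C is power-of-two-choices: sample two ready endpoints at random
// and route to whichever has fewer outstanding requests.
func pickP2C(ready []*endpoint) *endpoint {
	a := ready[rand.Intn(len(ready))]
	b := ready[rand.Intn(len(ready))]
	if b.pending < a.pending {
		return b
	}
	return a
}

func main() {
	eps := []*endpoint{{"10.0.0.1", 3}, {"10.0.0.2", 0}, {"10.0.0.3", 7}}
	chosen := pickP2C(eps)
	chosen.pending++ // account for the request we are about to send
	fmt.Println("routing to", chosen.addr)
}
```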
Signed-off-by: Oliver Gould <ver@buoyant.io>
* Stop running "cargo check" in CI
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Attempt to clear cargo cache
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove cache clearing step
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The current proxy Dockerfile configuration does not cache dependencies
well, which can increase build times substantially.
By carefully splitting proxy/Dockerfile into several stages that mock
parts of the project, dependencies may be built and cached in Docker
such that changes to the proxy only require building the conduit-proxy
crate.
Furthermore, proxy/Dockerfile now runs the proxy's tests before
producing an artifact, unless the `PROXY_SKIP_TESTS` build-arg is set
and non-empty.
The `PROXY_UNOPTIMIZED` build-arg has been added to support quicker,
debug-friendly builds.
We now create a new test HTTP server per test case instead of sharing one across all of them.
This should solve the data races we have experienced on Travis.
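The pattern looks roughly like this (a self-contained example, not the repo's actual tests):

```go
package main

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

// Each case spins up its own server, so per-case handler state can't race
// across test cases under `go test -race` the way shared-server state could.
func TestEachCaseGetsItsOwnServer(t *testing.T) {
	cases := []string{"alpha", "beta"}
	for _, body := range cases {
		body := body
		t.Run(body, func(t *testing.T) {
			srv := httptest.NewServer(http.HandlerFunc(
				func(w http.ResponseWriter, r *http.Request) {
					io.WriteString(w, body)
				}))
			defer srv.Close()

			resp, err := http.Get(srv.URL)
			if err != nil {
				t.Fatal(err)
			}
			defer resp.Body.Close()
			got, _ := io.ReadAll(resp.Body)
			if string(got) != body {
				t.Fatalf("got %q, want %q", got, body)
			}
		})
	}
}
```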
Signed-off-by: Phil Calcado <phil@buoyant.io>
The proxy depends on `protoc`-generated gRPC bindings to communicate
with the controller. In order to generate these bindings, build-time
dependencies must be compiled.
In order to support a more granular, cacheable build scheme, a new crate
has been created to house these gRPC bindings,
`conduit-proxy-controller-grpc`.
Because `TryFrom` and `TryInto` conversions are implemented for
protobuf-defined types, the `convert` module also had to be moved
into a dedicated crate.
Furthermore, because the proxy's tests require that
`quickcheck::Arbitrary` be implemented for protobuf types, the
`conduit-proxy-controller-grpc` crate supports an _arbitrary_ feature
flag.
While we're moving these libraries around, the `tower-router` crate has
been moved to `proxy/router` and renamed to `conduit-proxy-router`.
`futures-mpsc-lossy` has been moved into the proxy directory but has not
been renamed.
Finally, the `proxy/Dockerfile-deps` image has been updated to avoid the
wasteful building of dependency artifacts, as they are not actually used
by `proxy/Dockerfile`.
Previously we didn't verify that the downloaded dep binary is the right
binary.
Verify that the downloaded binary is correct.
Signed-off-by: Brian Smith <brian@briansmith.org>
The logic for choosing the 32-bit vs. 64-bit version of dep was
inverted.
Fix this by simply always using the 64-bit version.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Control metricsWindow from root of app
- Add buttons [currently hidden] on metrics pages to control window of metrics requests
- Consolidate metricsWindow usage (stop passing it around)
- Add a ConduitLink component so we can stop passing around pathPrefix
- Add tests for ApiHelpers
* Hide the time window buttons; fix bug in absolute links
* Add a note explaining why metricWindow buttons are disabled
* Convert ConduitLink in to a component that wraps another
We previously did not have race detection enabled because our tests
would fail. Following #249, this is no longer the case.
Enable race detection in CI and the build instructions. This change also
fixes client_test.go attempting to allocate a 2GB buffer due to bad test
input.
Fixes #173
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The conduit dashboard command asynchronously shells out and runs "kubectl
proxy".
This change replaces the shelling out with calls to Kubernetes proxy
APIs. It also allows us to enable race detection in our Go tests, as the
tests for the shell-out code did not pass race detection.
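One way to run the proxy in-process, sketched with client-go primitives (an assumption about the approach; the repo's actual code may use different helpers):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newK8sProxyHandler builds an in-process reverse proxy to the Kubernetes
// API server, standing in for the "kubectl proxy" subprocess.
func newK8sProxyHandler(kubeconfig string) (http.Handler, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	target, err := url.Parse(cfg.Host)
	if err != nil {
		return nil, err
	}
	// rest.TransportFor wires up the TLS and auth settings from kubeconfig.
	transport, err := rest.TransportFor(cfg)
	if err != nil {
		return nil, err
	}
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.Transport = transport
	return proxy, nil
}

func main() {
	handler, err := newK8sProxyHandler("/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	http.ListenAndServe("127.0.0.1:8001", handler)
}
```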
Fixes #173
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add `bin/dep` which fetches a fixed version of `dep` to be used.
* Upgrade from dep 0.3.1 to 0.4.1
* Fix inconsistent Gopkg.lock by checking in the result of `bin/dep ensure`
Signed-off-by: Alex Leong <alex@buoyant.io>
The conduit.io/* k8s labels and annotations were redundant in some
cases, and not flexible enough in others.
This change modifies the labels in the following ways:
`conduit.io/plane: control` => `conduit.io/controller-component: web`
`conduit.io/controller: conduit` => `conduit.io/controller-ns: conduit`
`conduit.io/plane: data` => (remove, redundant with `conduit.io/controller-ns`)
It also centralizes all k8s labels and annotations into
pkg/k8s/labels.go, and adds tests for the install command.
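For illustration, the centralized constants might look like the following (the Go constant names are assumptions; the label keys follow the mapping above):

```go
package k8s

// Keeping every conduit.io/* key in one file means install, inject, and
// the dashboard all agree on label names.
const (
	// ControllerComponentLabel identifies a control-plane component,
	// e.g. "web" (replaces conduit.io/plane: control).
	ControllerComponentLabel = "conduit.io/controller-component"

	// ControllerNSLabel records which namespace's control plane manages
	// the resource (replaces conduit.io/controller: conduit).
	ControllerNSLabel = "conduit.io/controller-ns"
)
```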
Part of #201
Signed-off-by: Andrew Seigner <siggy@buoyant.io>