When attempting to tap N pods where N is greater than the target rps, a rounding error causes 0 rps to be requested from each pod, so no tap data is returned.
Ensure that tap requests at least 1 rps from each target pod.
Tested in Kubernetes on docker-for-desktop with a 15 replica deployment and a maxRps of 10.
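A minimal sketch of the rounding issue and the guard, using hypothetical names (`maxRps`, `podCount`) rather than the actual tap code:

```go
package main

import "fmt"

// perPodRps splits a total requests-per-second budget across podCount pods.
// Integer division alone yields 0 when podCount > maxRps, so clamp to a
// minimum of 1 rps per pod.
func perPodRps(maxRps, podCount uint64) uint64 {
	rps := maxRps / podCount
	if rps < 1 {
		rps = 1
	}
	return rps
}

func main() {
	// 10 rps across 15 pods: naive division gives 0; the guard gives 1.
	fmt.Println(perPodRps(10, 15)) // 1
	fmt.Println(perPodRps(10, 5))  // 2
}
```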
Signed-off-by: Alex Leong <alex@buoyant.io>
The control plane is proxied through the Conduit proxy. The Conduit
proxy is based on the base image, and the control plane containers
and the proxy share a networking namespace. This means we don't
need the extra base utilities in the controller images since we can
use the utilities in the proxy image.
This is a step towards building the initial no-networking Conduit CA
pod. Since the Conduit CA will not do any networking of its own,
networking debugging utilities are not helpful for it. They are
actually an unnecessary risk because they could facilitate the
exfiltration of the private key of the CA. (The Conduit CA pod won't
have the Conduit Proxy injected into it either.)
This also simplifies & slightly speeds up the building of the
controller images. This is a stepping stone towards being able to
build the controller images without `docker build` to improve build
times.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Use Go 1.10.0 to build Go components.
Take advantage of the new build cache in Go 1.10. Future work on improving
build performance will utilize the build cache further.
Signed-off-by: Brian Smith <brian@briansmith.org>
Previously Dockerfile-go-deps was converted from a multi-stage Dockerfile
to a single-stage Dockerfile in anticipation of enabling efficient use
of `--cache-from` in CI. However, that resulted in the image ballooning
in size because it contained the Git repo for every package downloaded
by `dep ensure`.
Bring the image back down to the proper size by removing the temporary
files created.
Signed-off-by: Brian Smith <brian@briansmith.org>
The instance cache that powers the ListPods API is stored in memory in the telemetry service. This means that when there are multiple replicas of the telemetry service, each replica will have a distinct, incomplete view of the added pods based on which pods report to that telemetry replica. This causes the data plane bubbles on the dashboard to not all be filled in, and to flicker with each data refresh.
We create a Prometheus counter called reports_total which has pod as a label. Whenever a telemetry service instance receives a report from a pod, it increments reports_total for that pod. This allows us to remove the in-memory instance cache and instead query Prometheus to see if each pod has had a report in the last 30 seconds.
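A rough sketch of the counter, assuming the metric and label names above (the surrounding wiring is illustrative, not the actual telemetry service code):

```go
package telemetry

import "github.com/prometheus/client_golang/prometheus"

// reportsTotal counts telemetry reports received, labelled by reporting pod.
var reportsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "reports_total",
		Help: "Total number of telemetry reports received, by pod.",
	},
	[]string{"pod"},
)

func init() {
	prometheus.MustRegister(reportsTotal)
}

// recordReport is called whenever a report arrives from a pod.
func recordReport(pod string) {
	reportsTotal.With(prometheus.Labels{"pod": pod}).Inc()
}
```

The ListPods handler can then ask Prometheus which pods have reported recently with a query along the lines of `sum(increase(reports_total[30s])) by (pod)`, rather than consulting a per-replica in-memory cache.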
Fixes #337
Signed-off-by: Alex Leong <alex@buoyant.io>
In PR #298 we moved time window parsing (10s => (time.now - 10s,
time.now)) down the stack to immediately before the query. This had the
unintended effect of creating parallel latency quantile requests with
slightly different timestamps.
This change parses the time window prior to latency quantile fan out,
ensuring all requests have the same timestamp.
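An illustrative sketch of the idea (names are hypothetical): resolve the window to a concrete (start, end) pair once, then reuse it for every quantile query.

```go
package main

import (
	"fmt"
	"time"
)

// parseWindow turns a window like "10s" into a fixed (start, end) pair,
// anchored at a single time.Now() call.
func parseWindow(window string) (start, end time.Time, err error) {
	d, err := time.ParseDuration(window)
	if err != nil {
		return time.Time{}, time.Time{}, err
	}
	end = time.Now()
	start = end.Add(-d)
	return start, end, nil
}

func main() {
	start, end, _ := parseWindow("10s")
	// Each latency-quantile query (p50, p95, p99) is issued with the same
	// start/end, instead of calling time.Now() per request.
	for _, q := range []string{"0.5", "0.95", "0.99"} {
		fmt.Printf("quantile %s: %v -> %v\n", q, start, end)
	}
}
```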
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
PR #298 moved summary (non-timeseries) requests to Prometheus' Query
endpoint, with no timestamp provided. This Query endpoint returns a
single data point with whatever timestamp was provided in the request.
In the absence of a timestamp, it uses current server time. This causes
the Public API to return discrete data points with slightly different
timestamps, which is unexpected behavior.
Modify the Public API -> Telemetry -> Prometheus request path to always
require a timestamp for single data point requests.
Fixes #340
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
On my system (i9-7960x running Docker natively in Linux) this regularly saves
over 11 seconds of build time when a file under pkg/ changes and over 1.5
seconds of build time when a file under controller/ changes. Since most
contributors are running Docker in a VM on less powerful computers, the
savings for most contributors should be significantly greater.
I imagine the savings for web/ and cli/ and proxy-init/ are similar, but I
did not measure them.
Signed-off-by: Brian Smith <brian@briansmith.org>
Precompiling pkg/ in an earlier layer saves ~10 seconds of wall clock
time on an incremental build on my machine (i9-7960x) when I update a
file in controller/ such as controller/destination/server.go. This
makes a significant difference in the edit-build-test loop.
Signed-off-by: Brian Smith <brian@briansmith.org>
Previously Dockerfile-go-deps would run `dep ensure` whenever anything in the
source tree changed. Also, because it was a multi-stage Dockerfile it did not
work well with Docker's `--cache-from` feature.
Change Dockerfile-go-deps to only re-run `dep ensure` when Gopkg.{toml,lock}
and/or bin/dep change. Simplify it to a single stage so that it works better
with Docker's `--cache-from` feature.
Signed-off-by: Brian Smith <brian@briansmith.org>
bin/dep verifies the digest of the downloaded `dep` executable, whereas
previously Dockerfile-go-deps did not.
Signed-off-by: Brian Smith <brian@briansmith.org>
The Public API's stat endpoint copies a slice of values to a slice of
pointers prior to the gRPC response. Go's range clause reuses the same
loop variable on each iteration, so taking its address yields the same
pointer every time, causing a slice of {1,2,3} to become {3,3,3}.
Fix the range loop to directly reference pointers in the slice of
values, ignoring the range variable. Also add tests to catch this case.
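A self-contained illustration of the bug and the fix, with the loop-variable semantics of the Go versions used here:

```go
package main

import "fmt"

type row struct{ v int }

func main() {
	values := []row{{1}, {2}, {3}}

	// Buggy: &val is the address of the single loop variable, which is
	// overwritten on every iteration, so every pointer ends up at {3}.
	bad := make([]*row, 0, len(values))
	for _, val := range values {
		bad = append(bad, &val)
	}

	// Fixed: index into the backing slice and ignore the range variable.
	good := make([]*row, 0, len(values))
	for i := range values {
		good = append(good, &values[i])
	}

	fmt.Println(*bad[0], *bad[1], *bad[2])    // {3} {3} {3}
	fmt.Println(*good[0], *good[1], *good[2]) // {1} {2} {3}
}
```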
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Removed the `method` label from Prometheus, and removed HTTP methods from reports. Removed `StreamSummary` from reports and replaced it with a `u32` count of streams.
Closes #266
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
All requests from the public API service to the Telemetry service were
done serially. In some cases a single request to the public API's Stat
endpoint resulted in 5 serial requests to the Telemetry service.
Make all requests from the Public API to Telemetry concurrent.
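As a rough illustration of the fan-out pattern (the query function and result types are placeholders, not the actual Telemetry client):

```go
package main

import (
	"fmt"
	"sync"
)

// queryTelemetry stands in for a single Telemetry service request.
func queryTelemetry(q string) string {
	return "result for " + q
}

func main() {
	queries := []string{"requests", "success_rate", "p50", "p95", "p99"}
	results := make([]string, len(queries))

	// Issue all Telemetry requests concurrently instead of serially.
	var wg sync.WaitGroup
	for i, q := range queries {
		wg.Add(1)
		go func(i int, q string) {
			defer wg.Done()
			results[i] = queryTelemetry(q)
		}(i, q)
	}
	wg.Wait()

	for _, r := range results {
		fmt.Println(r)
	}
}
```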
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Part of #299
Follow-up from #315.
Now that the UIs don't report per-path metrics, we can remove the path label from Prometheus, the path aggregation and filtering options from the telemetry API, and the path field from the proxy report API.
I've modified the tests to no longer expect the removed fields, and manually verified that Conduit still works after making these changes.
Closes #265
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Prometheus queries from the Telemetry service were taking seconds or tens
of seconds.
Optimize these queries:
- Move all summary queries requiring a single data point off of Prometheus'
QueryRange() endpoint, onto Query()
- Set `defaultVectorRange` to 30s, and also use it regardless of time
window
Also add tests for grpc_server and telemetry server
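For illustration, here is the difference at the Prometheus HTTP API level; the endpoint paths and parameters are Prometheus' real query API, while the server address and query string are placeholders:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
	"time"
)

func main() {
	base := "http://prometheus.example:9090" // placeholder address
	query := `sum(rate(responses_total[30s]))`
	now := time.Now()

	// Single data point: /api/v1/query with an explicit evaluation time.
	instant := url.Values{}
	instant.Set("query", query)
	instant.Set("time", strconv.FormatInt(now.Unix(), 10))
	fmt.Println(base + "/api/v1/query?" + instant.Encode())

	// Timeseries: /api/v1/query_range with start, end, and step.
	rng := url.Values{}
	rng.Set("query", query)
	rng.Set("start", strconv.FormatInt(now.Add(-time.Hour).Unix(), 10))
	rng.Set("end", strconv.FormatInt(now.Unix(), 10))
	rng.Set("step", "10s")
	fmt.Println(base + "/api/v1/query_range?" + rng.Encode())
}
```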
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #260
This PR updates the web UI to remove the pod detail page, and to remove the links to that page from pod names in metrics tables. It also removes the `pods` option from `conduit stat`, and the `sourcePod` and `targetPod` fields from the controller API proto's `MetricMetadata` message.
I've updated the `conduit stat` tests to reflect these changes, and manually verified the web UI changes.
Closes #261
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The proxy currently stores latency values in an `OrderMap` and reports every observed latency value to the controller's telemetry API since the last report. The telemetry API then sends each individual value to Prometheus. This doesn't scale well when there are a large number of proxies making reports.
I've modified the proxy to use a fixed-size histogram that matches the histogram buckets in Prometheus. Each report now includes an array indicating the histogram bounds, and each response scope contains a set of counts corresponding to each index in the bounds array, indicating the number of times a latency in that bucket was observed. The controller then reports the upper bound of each bucket to Prometheus, and can use the proxy's reported set of bucket bounds so that the observed values will be correct even if the bounds in the control plane are changed independently of those set in the proxy.
I've also modified `simulate-proxy` to generate the new report structure, and added tests in the proxy's telemetry test suite validating the new behaviour.
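A simplified Go sketch of the fixed-bucket scheme (the bucket bounds and types here are illustrative, not the proxy's actual values):

```go
package main

import (
	"fmt"
	"sort"
)

// bounds are the upper edges (in ms) of each latency bucket; the final
// bucket is unbounded (+Inf).
var bounds = []float64{1, 5, 10, 50, 100, 500, 1000}

// histogram holds one count per bucket: len(bounds) bounded buckets plus
// one overflow bucket.
type histogram struct {
	counts []uint64
}

func newHistogram() *histogram {
	return &histogram{counts: make([]uint64, len(bounds)+1)}
}

// observe records a latency by incrementing the first bucket whose upper
// bound is >= the observed value.
func (h *histogram) observe(latencyMs float64) {
	i := sort.SearchFloat64s(bounds, latencyMs)
	h.counts[i]++
}

func main() {
	h := newHistogram()
	for _, l := range []float64{0.3, 7, 7, 42, 950, 2500} {
		h.observe(l)
	}
	// A report would carry the bounds once, plus per-response-scope counts.
	fmt.Println(bounds, h.counts)
}
```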
We now create a new test HTTP server per test case instead of sharing it across them all.
This should solve the data races we have experienced on Travis.
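Roughly, the pattern now looks like this (the handler and assertions are placeholders):

```go
package example

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestEachCaseGetsItsOwnServer(t *testing.T) {
	cases := []string{"/a", "/b"}
	for _, path := range cases {
		// A fresh server per test case avoids shared state (and the data
		// races seen when one server was reused across all cases).
		srv := httptest.NewServer(http.HandlerFunc(
			func(w http.ResponseWriter, r *http.Request) {
				w.WriteHeader(http.StatusOK)
			}))

		resp, err := http.Get(srv.URL + path)
		if err != nil {
			t.Fatal(err)
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			t.Errorf("unexpected status: %d", resp.StatusCode)
		}

		srv.Close()
	}
}
```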
Signed-off-by: Phil Calcado <phil@buoyant.io>
We previously did not have race detection enabled because our tests
would fail. Following #249, this is no longer the case.
Enable race detection in ci and build instructions. This change also
fixes client_test.go attempting to allocate a 2GB buffer due to bad test
input.
Fixes #173
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The conduit dashboard command asynchronously shells out and runs "kubectl
proxy".
This change replaces the shelling out with calls to the Kubernetes proxy
APIs. It also allows us to enable race detection in our Go tests, as the
tests for the shell-out code did not pass under race detection.
Fixes #173
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add `bin/dep` which fetches a fixed version of `dep` to be used.
* Upgrade from dep 0.3.1 to 0.4.1
* Fix inconsistent Gopkg.lock by checking in the result of `bin/dep ensure`
Signed-off-by: Alex Leong <alex@buoyant.io>
The conduit.io/* k8s labels and annotations were redundant in some
cases, and not flexible enough in others.
This change modifies the labels in the following ways:
`conduit.io/plane: control` => `conduit.io/controller-component: web`
`conduit.io/controller: conduit` => `conduit.io/controller-ns: conduit`
`conduit.io/plane: data` => (remove, redundant with `conduit.io/controller-ns`)
It also centralizes all k8s labels and annotations into
pkg/k8s/labels.go, and adds tests for the install command.
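For reference, a hedged sketch of what the centralized constants could look like (values follow the mapping above; the actual identifiers in pkg/k8s/labels.go may differ):

```go
package k8s

// Labels and annotations applied to Conduit-managed resources.
const (
	// ControllerComponentLabel identifies which control-plane component a
	// resource belongs to, e.g. "web".
	ControllerComponentLabel = "conduit.io/controller-component"

	// ControllerNSLabel names the namespace of the controller that manages
	// (or injected) the resource.
	ControllerNSLabel = "conduit.io/controller-ns"
)
```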
Part of #201
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add -log-level flag for install and inject commands
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Turn off all CLI logging by default, rename inject and install flags
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Re-enable color logging
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
We added basic Prometheus instrumentation, but this only covered basic Go metrics and
request counts. This adds latency and response size metric exporting as well, to the
public-api server, the web server, and the telemetry server.
Since the util function in grpc.go was basically used to wrap the server creation in a prometheus handler, I added the other prometheus constants in there and renamed the file to prometheus.go.
- Add request duration and response size instrumentation to web and public api
- Also add latency monitoring to telemetry service requests
- Rename util/grpc.go to util/prometheus.go
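A hedged sketch of the kind of instrumentation added (metric names, labels, and buckets are illustrative, not the actual constants in util/prometheus.go):

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	requestDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "HTTP request latency.",
		Buckets: prometheus.DefBuckets,
	}, []string{"handler"})

	responseSize = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_response_size_bytes",
		Help:    "HTTP response sizes.",
		Buckets: prometheus.ExponentialBuckets(100, 10, 6),
	}, []string{"handler"})
)

// instrument wraps a handler to record latency and response size.
func instrument(name string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &sizeRecorder{ResponseWriter: w}
		h(rec, r)
		requestDuration.WithLabelValues(name).Observe(time.Since(start).Seconds())
		responseSize.WithLabelValues(name).Observe(float64(rec.bytes))
	}
}

type sizeRecorder struct {
	http.ResponseWriter
	bytes int
}

func (s *sizeRecorder) Write(b []byte) (int, error) {
	n, err := s.ResponseWriter.Write(b)
	s.bytes += n
	return n, err
}

func main() {
	prometheus.MustRegister(requestDuration, responseSize)
	http.Handle("/metrics", promhttp.Handler())
	http.HandleFunc("/hello", instrument("hello", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```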
* Set conduit version to match conduit docker tags
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove --skip-inbound-ports for emojivoto
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Rename git_sha => git_sha_head
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Switch to using the go linker for setting the version
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
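Roughly, the go-linker approach above is a package-level variable overridden at link time; the package path and flag below are illustrative only:

```go
// Package version holds the build-time version string.
package version

// Version is overridden at link time, e.g.:
//
//	go build -ldflags "-X <module>/pkg/version.Version=v0.3.0"
//
// If the flag is omitted it stays at this placeholder value.
var Version = "undefined"
```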
* Log conduit version when go servers start
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Cleanup conduit script
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Add --short flag to head sha command
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Set CONDUIT_VERSION in docker-compose env
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Previously, running `$conduit tap` would return an `Unexpected EOF` error when the server wasn't available. This was due to a few problems with the way we were handling errors all the way down the tap server. This change fixes that and cleans up some of the protobuf-over-HTTP code.
- first step towards #49
- closes #106
* Make Eos optional in TapEvent
grpc_status not being set in protobuf is the same as being set to zero,
which is also status OK
Modify TapEvent to include an optional EOS struct
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Part of #198
* Add Eos to proto & proxy tap end-of-stream events
The proxy now outputs `Eos` instead of `grpc_status` in all end-of-stream tap events. The EOS value is set to `grpc_status_code` when the response ended with a `grpc_status` trailer, `http_reset_code` when the response ended with a reset, and no `Eos` when the response ended gracefully without a `grpc_status` trailer.
This PR updates the proxy. The proto and controller changes are in PR #204.
Part of #198. Closes #202
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Currently, all "success"/"failure" classifications in the telemetry API are made based on the `grpc-status` trailer. If the trailer is not present, then a request is assumed to have failed. As we start proxying non-gRPC traffic, the controller needs to also be aware of HTTP status codes, so that non-gRPC requests are not assumed to always fail.
I've modified the telemetry API server to classify requests based on their HTTP status codes when the `grpc-status` trailer is not present.
I've also modified the `simulate-proxy` script to generate fake HTTP/2 traffic without the `grpc-status` trailer.
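A rough sketch of the classification rule (helper names and the exact status threshold are illustrative):

```go
package main

import "fmt"

// success classifies a response: prefer the grpc-status trailer when it is
// present; otherwise fall back to the HTTP status code.
func success(grpcStatus *uint32, httpStatus int) bool {
	if grpcStatus != nil {
		return *grpcStatus == 0 // 0 == OK
	}
	// Without a grpc-status trailer, treat non-5xx responses as successes.
	return httpStatus < 500
}

func main() {
	ok := uint32(0)
	unavailable := uint32(14)
	fmt.Println(success(&ok, 200))          // true: grpc-status OK
	fmt.Println(success(&unavailable, 200)) // false: grpc-status UNAVAILABLE
	fmt.Println(success(nil, 200))          // true: plain HTTP 200
	fmt.Println(success(nil, 503))          // false: plain HTTP 503
}
```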
Closes #196
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Stop ignoring the most significant labels of Destination names
Previously the destinations service was ignoring all the labels in a
destination name after the first two labels. Thus, for example,
"name.ns.another.domain.example.com" would be
considered the same as "name.ns.svc.cluster.local". This was very
wrong.
Match destination names taking into consideration every label in the
destination name.
Provisions have been made for the case where the controller and the
proxies are configured with the zone name to use. However, currently neither the
controller nor the proxies are actually configured with the zone, so
the implementation was made to work in the current configuration too,
as long as fully-qualified names are not used.
A negative consequence of this change is that a name like
"name.ns.svc.cluster.local" won't resolve in the current configuration,
because the controller doesn't know the zone is "cluster.local".
Unit tests are included for the new mapping rules.
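An illustrative sketch of label-by-label matching (the function and its behavior are simplified; the real rules live in the destinations service):

```go
package main

import (
	"fmt"
	"strings"
)

// localKubernetesService returns (name, namespace, true) when host refers to
// a service in the local cluster, considering every label rather than just
// the first two. zone may be empty when the controller isn't configured
// with one.
func localKubernetesService(host, zone string) (string, string, bool) {
	trimmed := strings.TrimSuffix(host, ".")
	labels := strings.Split(trimmed, ".")

	switch {
	// Strip a trailing "svc.<zone>" suffix when the zone is known...
	case zone != "" && strings.HasSuffix(trimmed, ".svc."+zone):
		labels = labels[:len(labels)-1-len(strings.Split(zone, "."))]
	// ...or a bare trailing "svc" when it is not.
	case len(labels) == 3 && labels[2] == "svc":
		labels = labels[:2]
	}

	if len(labels) != 2 {
		return "", "", false // not a name.namespace service reference
	}
	return labels[0], labels[1], true
}

func main() {
	fmt.Println(localKubernetesService("name.ns", ""))                            // name ns true
	fmt.Println(localKubernetesService("name.ns.svc", ""))                        // name ns true
	fmt.Println(localKubernetesService("name.ns.svc.cluster.local", ""))          // false: zone unknown
	fmt.Println(localKubernetesService("name.ns.another.domain.example.com", "")) // false
}
```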
Signed-off-by: Brian Smith <brian@briansmith.org>
* Allow external controller public api clients that don't rely on a kubeconfig to interact with Conduit CLI
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
simulate-proxy sends HTTP example data.
Modify this test script to also send TCP example data.
Part of #132
Signed-off-by: Andrew Seigner <andrew@sig.gy>
Previously, proxy-deps and go-deps included the source tree for local
projects. This can cause build conflicts when files are renamed.
By adopting a multi-stage build for the proxy-deps image, we can be sure
that we only preserve essential dependencies & manifests in the
proxy-deps and go-deps images.
Furthermore, `bin/update-go-deps-shas` and `bin/update-proxy-deps-shas` have
been added to ease maintenance when files are changed.
Fixes #159
Signed-off-by: Oliver Gould <ver@buoyant.io>
* Move healthcheck proto to separate file, use throughout
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove Check message from healthcheck.proto
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Standardize healthcheck protobuf import name
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Use stdout as writer for tap command
fixes #136
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add --log-level to command line
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Abstract Conduit API client from protobuf interface to add new features
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Consolidate mock api clients
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add simple implementation of healthcheck for conduit api
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Change NextSteps to FriendlyMessageToUser
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add grpc check for status on the client
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add simple server-side check for Conduit API
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Fix feedback from PR
Signed-off-by: Phil Calcado <phil@buoyant.io>
See #132. This PR adds a protocol field to the ClientTransport and ServerTransport messages, and modifies the proxy to report a value for this field (currently, it's only ever HTTP).
Currently, HTTP/1 and HTTP/2 are collapsed into one Protocol variant, see #132 (comment). I expect that we can treat H1 as a subset of H2 as far as metrics are concerned.
Note that after discussing it with @klingerf, I learned that the control plane telemetry API currently does not do anything with the ClientTransport and ServerTransport messages, so beyond regenerating the protobuf-generated code, no controller changes were actually necessary. As we actually add metrics to TCP transports, we'll want to make some additions to the telemetry API to ingest these metrics. If any metrics are shared between HTTP and raw TCP transports (say, bytes sent), we'll want to differentiate between them in Prometheus. All the metrics that the control plane currently ingests from telemetry reports are likely to be HTTP-specific (requests, responses, response latencies), or at least, do not apply to raw TCP.
Actually adding metrics to raw TCP transports will probably have to wait until there are raw TCP transports implemented in the proxy...
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Sort imports
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Upgrade k8s.io/client-go to v6.0.0
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Make k8s store initialization blocking with timeout
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The existing startup/shutdown log info messages had spacing issues and
used fmt.
Update the log messages to use logrus for consistency, and fix spacing
issues.
Signed-off-by: Andrew Seigner <andrew@sig.gy>
The image tags for gcr.io/runconduit/go-deps and
gcr.io/runconduit/proxy-deps were not updating to account for all
changes in those images.
Modify SHA generation to include all files that affect the base
dependency images. Also add instructions to README.md for updating
hard-coded SHAs in Dockerfiles.
Fixes #115
Signed-off-by: Andrew Seigner <andrew@sig.gy>
* Rename constructor functions from MakeXyz to NewXyz
As that is more commonly used in the codebase
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make Conduit client depend on KubernetesAPI
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Move Conduit client and k8s logic to standard go package dir for internal libs
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Move dependencies to /pkg
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make conduit client more testable
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Remove unused config object
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add more test cases for marshalling
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Move client back to controller
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Sort imports
Signed-off-by: Phil Calcado <phil@buoyant.io>