* Add validation to CRD for service profiles
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Use properties instead of oneof
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Proxy grafana requests through web service
* Fix -grafana-addr default, clarify -api-addr flag
* Fix version check in grafana dashboards
* Fix comment typo
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Fix reporting of injected resources
When reporting on resources provided as a list, inject was picking just one
item from the list and ignoring the rest.
Also improved the test for injecting a list of resources.
Fixes #1676
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Create new service accounts for linkerd-web and linkerd-grafana. Change 'serviceAccount:' to 'serviceAccountName:'
* Use dynamic namespace name
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Allow input of a volume name for prometheus and grafana
* Make Prometheus and Grafana volume names 'data' by default and disallow user editing via cli flags
* Remove volume name from options
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Add securityContext with runAsUser: {{.ProxyUID}} to the various containers in the install template
* Update golden files to reflect new additions
* Changed to a different user ID than the proxy user ID
* Added a controller-uid install option
* Change the port that the proxy-injector runs on
* The initContainers need to run as the root user.
* Move security contexts to the container level
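A minimal sketch of the resulting container-level security context; `{{.ControllerUID}}` is a hypothetical template variable standing in for the value of the controller-uid option (only `{{.ProxyUID}}` appears verbatim above):
```
containers:
- name: public-api                  # illustrative container name
  securityContext:
    runAsUser: {{.ControllerUID}}   # hypothetical variable; distinct from {{.ProxyUID}}
```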
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
Add support for service profiles created on external (non-service) authorities. For example, this allows you to create a service profile named `linkerd.io` which will apply to calls made to `linkerd.io`.
This is done by changing `LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES` to `.` so that the proxy will attempt to look up a service profile for any authority. We provide the `--disable-external-profiles` proxy flag to revert this behavior in case it is a problem.
We also refactor the proxy-api implementation of GetProfiles so that it performs the profile lookup regardless of whether the authority looks like a Kubernetes service name. To simplify this, support for multiple resolvers (which was unused) was removed.
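For illustration, the two configurations look roughly like this (the restricted suffix shown for the disabled case is an assumption about the install template, not taken from the text above):
```
# New default: attempt a profile lookup for any authority.
LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES=.
# With --disable-external-profiles: restrict lookups to cluster-local names (assumed value).
LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES=svc.cluster.local.
```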
Signed-off-by: Alex Leong <alex@buoyant.io>
The proxy-api service _always_ suggests that two meshed pods communicate
via HTTP/2 (i.e. via transparent protocol upgrading, if necessary).
This can complicate debugging and diagnostics at times, so it's
important that we have a way to deploy linkerd without this auto-upgrade
behavior.
This change adds a `-disable-h2-upgrade` flag to the `linkerd install`
command that disables transparent upgrading for the whole cluster.
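For example, a cluster-wide opt-out of protocol upgrading might look like:
```
linkerd install -disable-h2-upgrade | kubectl apply -f -
```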
We rework the routes command so that it can accept any Kubernetes resource, making it act much more similarly to the stat command.
Signed-off-by: Alex Leong <alex@buoyant.io>
Filtering by Kubernetes job was not supported. Also filtering by any unknown
type caused a panic.
Add filtering support by Kubernetes job, with a special case mapping `job` to
`k8s_job`, so as not to conflict with Prometheus's `job` label.
Fix a panic when an unknown type is specified via the `--from` or `--to` flags.
Fix `job` label from `linkerd-proxy` overwriting Prometheus `job` label at
collection time. This caused all metrics collected by proxy sidecars in
Kubernetes jobs to be collected into an incorrect Prometheus job, rather than
the expected `linkerd-proxy` Prometheus job.
Fix `unsupported resource type` tap error message incorrectly printing the
target resource rather than the destination.
Set `--controller-log-level debug` in `install_test.go` for easier debugging.
Expose `slow-cooker`'s metrics via a k8s service in the tap integration test, to
validate proxy requests with a job as destination.
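For example (the job name is hypothetical):
```
# `job` is mapped to the `k8s_job` Prometheus label internally.
linkerd stat deploy --from job/my-load-generator
```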
Fixes #1872
Part of #627
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Adjust proxy, Prometheus, and Grafana probes
High `readinessProbe.initialDelaySeconds` values delayed the controller's
readiness by up to 30s, preventing cli commands from succeeding shortly after
control plane deployment.
Decrease `readinessProbe.initialDelaySeconds` in the proxy, Prometheus, and
Grafana to the default 0s. Also change `linkerd check` controller pod ordering
to: controller, prometheus, web, grafana.
Detailed probe changes:
- proxy
- decrease `readinessProbe.initialDelaySeconds` from 10s to 0s
- prometheus
- decrease `readinessProbe.initialDelaySeconds` from 30s to 0s
- decrease `readinessProbe.timeoutSeconds` from 30s to 1s
- decrease `livenessProbe.timeoutSeconds` from 30s to 1s
- grafana
- decrease `readinessProbe.initialDelaySeconds` from 30s to 0s
- decrease `readinessProbe.timeoutSeconds` from 30s to 1s
- decrease `readinessProbe.failureThreshold` from 10 to 3
- increase `livenessProbe.initialDelaySeconds` from 0s to 30s
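As a sketch, the Prometheus probes end up shaped roughly like this (endpoints per Prometheus's documented `/-/ready` and `/-/healthy` handlers; the exact fields in the install template may differ):
```
readinessProbe:
  httpGet:
    path: /-/ready
    port: 9090
  initialDelaySeconds: 0   # the Kubernetes default; may simply be omitted
  timeoutSeconds: 1
livenessProbe:
  httpGet:
    path: /-/healthy
    port: 9090
  timeoutSeconds: 1
```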
Fixes #1804
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This change allows some recommended production configuration to be applied when installing the control plane.
Currently this runs 3 replicas of the controller and adds sensible resource requests to each of the components and containers of the control plane.
Fixes #1101
Signed-off-by: Ben Lambert <ben@blam.sh>
Add a routes command which displays per-route stats for services that have service profiles defined.
This change has three parts:
* A new public-api RPC called `TopRoutes` which serves per-route stat data about a service
* An implementation of TopRoutes in the public-api service. This implementation reads per-route data from Prometheus and is very similar to how the StatSummaries RPC is implemented; much of the code was refactored and shared.
* A new CLI command called `routes` which displays the per-route data in tabular or JSON format. This is very similar to the `stat` command, and much of the code was refactored and shared.
Note that as of the currently targeted proxy version, only outbound route stats are supported, so the `--from` flag must be included in order to see data. This restriction will be lifted in an upcoming change once we add support for inbound route stats as well.
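For example (the resource names are hypothetical):
```
# Per-route stats for a profiled service, as seen from a client deployment:
linkerd routes svc/books --from deploy/webapp
```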
Signed-off-by: Alex Leong <alex@buoyant.io>
* Refactor util.BuildResource so it can deal with multiple resources
First step to address #1487: Allow stat summary to query for multiple
resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Update the stat cli help text to explain the new multi resource querying ability
Proposal for #1487: Allow stat summary to query for multiple resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Allow stat summary to query for multiple resources
Implement this ability by issuing parallel requests to requestStatsFromAPI()
Proposal for #1487
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Update tests as part of multi-resource support in `linkerd stat` (#1487)
- Refactor stat_test.go to reuse the same logic in multiple tests, and
add cases and files for json output.
- Add a couple of cases to api_utils_test.go to test multiple resources
validation.
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* `linkerd stat` called with multiple resources should keep an ordering (#1487)
Add SortedRes holding the order of resources to be followed when
querying `linkerd stat` with multiple resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Extra validations for `linkerd stat` with multiple resources (#1487)
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* `linkerd stat` resource grouping, ordering and name prefixing (#1487)
- Group together stats per resource type.
- When more than one resource, prepend name with type.
- Make sure tables always appear in the same order.
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Allow `linkerd stat` to be called with multiple resources
A few final refactorings as per code review.
Fixes #1487
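A sketch of the resulting invocation (resource names are hypothetical):
```
# One table per resource type; names are prefixed with their type:
linkerd stat deploy/emoji po/vote-bot
```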
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
A container called `proxy-api` runs in the Linkerd2 controller pod. This container listens on port 8086 and serves the proxy-api but does nothing other than forward gRPC requests to the destination container which listens on port 8089.
We remove the proxy-api container altogether and change the destination container to listen on port 8086 instead of 8089. The result is that clients still use the proxy-api by connecting to `proxy-api.<ns>.svc.cluster.local:8086`, but the controller has one fewer container. This results in a simpler system that is easier to reason about.
Signed-off-by: Alex Leong <alex@buoyant.io>
We implement the getProfiles method in the destination service. This method returns a stream of destination profiles for a given authority. It does this by looking up the ServiceProfile resource in the controller namespace named `<svc>.<ns>` where `<svc>` is the name of the service and `<ns>` is the namespace of the service.
This PR includes:
* Adding a ServiceProfile Custom Resource Definition to linkerd install
* A watch based implementation of the getProfiles method in the destination service, similar to the implementation of get.
* An update to the destination client script that allows querying the getProfiles method.
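A sketch of such a resource for a service `books` in namespace `default` (the route fields are illustrative of the CRD, not exhaustive, and may differ in this early version of the schema):
```
apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  name: books.default   # <svc>.<ns>
  namespace: linkerd    # the controller namespace
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
```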
Signed-off-by: Alex Leong <alex@buoyant.io>
Added support for JSON output in `linkerd stat` through a new `(-o|--output)=json` option.
Fixes #1417
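For example:
```
linkerd stat deploy -o json
```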
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
linkerd.io/auto-inject: disabled or linkerd.io/auto-inject: completed
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
Proposed implementation for #561.
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the command
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug with the kubectl run command, where the deployment
is also assigned the auto-inject label. The mutation caused the pod's auto-inject
label to no longer match the deployment's label, causing kubectl run to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignored inbound and outbound ports are changed to string type to
avoid broken YAML caused by the string conversion of the uint slice.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests, so this adds the ability to set CPU and memory requests for the proxy in the install and inject commands, via the options --proxy-cpu and --proxy-memory.
Fixes #1480
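For example (the values are illustrative):
```
linkerd inject --proxy-cpu 100m --proxy-memory 128Mi app.yml | kubectl apply -f -
```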
Signed-off-by: Ben Lambert <ben@blam.sh>
In linkerd/linkerd2-proxy#99, several proxy configuration variables were
deprecated.
This change updates the CLI to use the updated names to avoid
deprecation warnings during startup.
`linkerd inject` was not checking its input for known sidecars and
initContainers.
Modify `linkerd inject` to check for existing sidecars and
initContainers, specifically, Linkerd, Istio, and Contour.
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
linkerd only routes TCP data, but `linkerd inject` does not warn when it
injects into pods with ports set to `protocol: UDP`.
Modify `linkerd inject` to warn when injecting into a pod with
`protocol: UDP` ports. The Linkerd sidecar will still be injected, but the
stderr output will include a warning.
Also add stderr checking on all inject unit tests.
Part of #1516.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
If an input file is un-injectable, existing inject behavior is to simply
output a copy of the input.
Introduce a report, printed to stderr, that communicates the end state
of the inject command. Currently this includes checking for hostNetwork
and unsupported resources.
Malformed YAML documents will continue to cause no YAML output, and return
error code 1.
This change also modifies integration tests to handle stdout and stderr separately.
example outputs...
some pods injected, none with host networking:
```
hostNetwork: pods do not use host networking...............................[ok]
supported: at least one resource injected..................................[ok]
Summary: 4 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
deploy/vote-bot
```
some pods injected, one host networking:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/vote-bot uses "hostNetwork: true"
supported: at least one resource injected..................................[ok]
Summary: 3 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
```
no pods injected:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/emoji, deploy/voting, deploy/web, deploy/vote-bot use "hostNetwork: true"
supported: at least one resource injected..................................[warn] -- no supported objects found
Summary: 0 of 8 YAML document(s) injected
```
TODO: check for UDP and other init containers
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Previously, we would tap any resource's pods, regardless of whether the pods
were meshed or not. We can't actually tap non-meshed pods, so I'm adding a check
that will filter out non-meshed pods from the pods that tap watches.
Previous behaviour:
When attempting to tap a non-meshed pod, tap would establish
a watch on the pods, but then never return any results. In the CLI you could
just cancel it with Ctrl-C. In the web, clicking Stop would send a
WebSocket.close(1000) but wouldn't actually close the connection.
Behaviour after change:
If no pods under the specified resource are meshed, tap will
return an error that no pods were found to tap.
Adds basic probes to the linkerd-proxy containers injected by linkerd inject.
- Currently the Readiness and Liveness probes are configured to be the same.
- I haven't supplied a periodSeconds, but the default is 10.
- I also set the initialDelaySeconds to 10, but that might be a bit high.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
Closes #1170.
This branch adds a `-o wide` (or `--output wide`) flag to the Tap CLI.
Passing this flag adds `src_res` and `dst_res` elements to the Tap
output, as described in #1170. These use the metadata labels in the tap
event to describe what Kubernetes resource the source and destination
peers belong to, based on what resource type is being tapped, and fall
back to pods if either peer is not a member of the specified resource
type.
In addition, when the resource type is not `namespace`, `src_ns` and
`dst_ns` elements are added, which show what namespaces the source
and destination peers are in. For peers which are not in the Kubernetes
cluster, none of these labels are displayed.
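For example (the resource name is hypothetical):
```
linkerd tap deploy/web -o wide
```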
The source metadata added in #1434 is used to populate the `src_res` and
`src_ns` fields.
Also, this branch includes some refactoring to how tap output is
formatted.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Currently, when a cluster has over 100 pods injected with the Linkerd2
proxy, Prometheus metrics are not collected correctly. This is because
Prometheus appears to be making more concurrent requests than its
proxy's outbound router cache can handle. See issue #1322 for further
details.
This branch introduces a workaround for this issue, by increasing the
outbound router cache capacity to 10000 routes for the Prometheus pod's
proxy only. The router capacity limit of 100 active routes is primarily
due to the limitation of the number of active Destination service
lookups, so increasing the capacity for the Prometheus pod specifically
is probably okay, as the scrape requests are made to IP addresses
directly and therefore will not cause service discovery lookups.
This change was originally implemented and tested in @siggy's PR #1228.
I've rebased his branch onto the current `master`, and updated the code
to reflect the project name change.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Andrew Seigner <siggy@buoyant.io>
This change is a simplified implementation of the Builder.Path() and
Visitor().ExpandPathsToFileVisitors() functions used by kubectl to parse files
and directories. The filepath.Walk() function is used to recursively traverse
directories. Every .yaml or .json resource file in the directory is read
into its own io.Reader. All the readers are then passed to the YAMLDecoder in the
InjectYAML() function.
Fixes #1376
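A simplified, self-contained sketch of that approach (names are illustrative, not the actual linkerd2 code):
```
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"
)

// readResourceFiles recursively walks root and returns one io.Reader per
// .yaml/.json resource file, mirroring the behavior described above.
func readResourceFiles(root string) ([]io.Reader, error) {
	var readers []io.Reader
	err := filepath.Walk(root, func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		switch filepath.Ext(p) {
		case ".yaml", ".json":
			b, err := ioutil.ReadFile(p)
			if err != nil {
				return err
			}
			// Each file gets its own reader; in the real code, all readers
			// are then passed to the YAML decoder in InjectYAML().
			readers = append(readers, bytes.NewReader(b))
		}
		return nil
	})
	return readers, err
}

func main() {
	root := "."
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	readers, err := readResourceFiles(root)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("found %d resource files\n", len(readers))
}
```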
Signed-off-by: ihcsim <ihcsim@gmail.com>
`ca-bundle-distributor` described the original role of the program but
`ca` ("Certificate Authority") better describes its current role.
Signed-off-by: Brian Smith <brian@briansmith.org>
These files were created with the executable bit set accidentally due
to the way my network file system setup was configured.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Update Grafana dashboards to remove Conduit references and replace them with Linkerd instances
* Update test install fixtures to reflect the changes
Fixes: #1315
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
The control-plane's `ClusterRole` and `ClusterRoleBinding` objects are
global. Because their names did not vary across multiple control-plane
deployments, it prevented multiple control-planes from coexisting (when
RBAC is enabled).
Modify the `ClusterRole` and `ClusterRoleBinding` objects to include the
control-plane's namespace in their names. Also modify the integration
test to first install two control-planes, and then perform its full
suite of tests, to prevent regression.
Fixes #1292.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_` environment variables
* Go paths have been updated to reflect the new (future) repo location.
Create an ephemeral, in-memory TLS certificate authority and integrate it into the certificate distributor.
Remove the re-creation of deleted ConfigMaps; this will be added back later in #1248.
Signed-off-by: Brian Smith <brian@briansmith.org>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The proxy's metrics are instrumented with a `tls` label that describes
the state of TLS for each connection and associated messages.
The same level of detail is useful in `tap` output as well.
This change updates Tap in the following ways:
* `TapEvent` protobuf updated:
* Added `source_meta` field including source labels
* `proxy_direction` enum indicates which proxy server was used.
* The proxy adds a `tls` label to both source and destination meta indicating the state of each peer's connection
* The CLI uses the `proxy_direction` field to determine which `tls` label should be rendered.
If the controller address has a loopback host then don't use TLS to connect
to it. TLS isn't needed for security in that case. In normal configurations
the proxy isn't terminating TLS for loopback connections anyway.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Add TLS support to `conduit inject`.
Add the settings needed to enable TLS when `--tls=optional` is passed on the
command line. Later the requirement to add `--tls` will be removed.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Add CA certificate bundle distributor to conduit install
* Update ca-distributor to use shared informers
* Only install CA distributor when --enable-tls flag is set
* Only copy CA bundle into namespaces where injected pods have the same controller
* Update API config to only watch pods and configmaps
* Address review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Add controller admin servers and readiness probes
* Tweak readiness probes to be more sane
* Refactor based on review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The `kubernetes-nodes-cadvisor` Prometheus scrape job queries node-level data via
the Kubernetes API server. In some configurations of Kubernetes, namely
minikube and at least one baremetal kubespray cluster, this API call
requires the `get` verb on the `nodes/proxy` resource.
Enable `get` for `nodes/proxy` for the `conduit-prometheus` service
account.
Fixes #912
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
- It would be nice to display container errors in the UI. This PR gets the pod's container
statuses and returns them in the public api
- Also add a terminationMessagePolicy to conduit's inject so that we can capture the
proxy's error messages if it terminates
* Add readiness/liveness checks for third party components
Any possible issues with the third-party control plane components can wedge the services.
Take the best practices for Prometheus/Grafana and add them to our template. See #1116
* Update test fixtures for new output
* conduit inject: Add flag to set proxy bind timeout (#863)
* fix test
* fix flag to get it working with #909
* Add time parsing
* Use the variable to set the default value
Signed-off-by: Sacha Froment <sfroment42@gmail.com>
Running `conduit install --api-port xxx` where xxx != 8086 would yield a
broken install.
Fix the install command to correctly propagate the `api-port` flag,
setting it as the serve address in the proxy-api container.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Grafana provides default dashboards for Prometheus and Grafana health.
The community also provides Kubernetes-specific dashboards. Conduit was
not taking advantage of these.
Introduce new Grafana dashboards focused on Grafana, Kubernetes, and
Prometheus health. Tag all Conduit dashboards for easier UI navigation.
Also fix layout in Conduit Health dashboard.
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The TapByResource command now has access to destination labels from the
proxy, but was not outputting them on the cli.
Modify the TapByResource output to print the destination pod label,
rather than the ip, when available.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
There have been a number of performance improvements and bug fixes since
v2.1.0.
Bump our Prometheus container to v2.2.1.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
When prometheus queries the proxy for data, these requests are reported
as inbound traffic to the pod. This leads to misleading stats when a pod
otherwise receives little/no traffic.
In order to prevent these requests being proxied, the metrics port is
now added to the default inbound skip-ports list (as is already case for
the tap server).
Fixes #769
* Add namespace as a resource type in public-api
The cli and public-api only supported deployments as a resource type.
This change adds support for namespace as a resource type in the cli and
public-api. This also change includes:
- cli statsummary now prints `-`'s when objects are not in the mesh
- cli statsummary prints `No resources found.` when applicable
- removed `out-` from cli statsummary flags, and analogous proto changes
- switched public-api to use native prometheus label types
- misc error handling and logging fixes
Part of #627
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Refactor filter and groupby label formulation
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Rename stat_summary.go to stat.go in cli
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Update rbac privileges for namespace stats
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove the telemetry service
The telemetry service is no longer needed, now that prometheus scrapes
metrics directly from proxies, and the public-api talks directly to
prometheus. In this branch I'm removing the service itself as well as
all of the telemetry protobuf, and updating the conduit install command
to no longer install the service. I'm also removing the old version of
the stat command, which required the telemetry service, and renaming the
statsummary command to stat.
* Fix time window tests
* Remove deprecated controller scrape config
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Start implementing new conduit stat summary endpoint.
Changes the public-api to call prometheus directly instead of the
telemetry service. Wired through to `api/stat` on the web server,
as well as `conduit statsummary` on the CLI. Works for deployments only.
Current implementation just retrieves requests and mesh/total pod count
(so latency stats are always 0).
Uses API defined in #663
Example queries that the stat endpoint will eventually satisfy are in #627
This branch includes commits from @klingerf
* run ./bin/dep ensure
* run ./bin/update-go-deps-shas
The Destination service used slightly different labels than the
telemetry pipeline expected, specifically, prefixed with `k8s_*`.
Make all Prometheus labels consistent by dropping `k8s_*`. Also rename
`pod_name` to `pod` for consistency with `deployment`, etc. Also update
and reorganize `proxy-metrics.md` to reflect the new labelling.
Fixes #655
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Using a vanilla Grafana Docker image as part of `conduit install`
avoided maintaining a conduit-specific Grafana Docker image, but made
packaging dashboard json files cumbersome.
Roll our own Grafana Docker image, that includes conduit-specific
dashboard json files. This significantly decreases the `conduit install`
output size, and enables dashboard integration in the docker-compose
environment.
Fixes #567
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The Grafana dashboards were displaying all proxy-enabled pods, including
conduit controller pods. In the old telemetry pipeline filtering these
out required knowledge of the controller's namespace, which the
dashboards are agnostic to.
This change leverages the new `conduit_io_control_plane_component`
prometheus label to filter out proxy-enabled controller components.
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The Top-line and Deployment Grafana dashboards relied on the
soon-to-be-removed telemetry pipeline metrics.
Update the Grafana dashboards to query for the new, proxy-based metrics.
Grafana dashboard layouts have not changed.
Depends on #635 to render metrics.
Part of #420.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Previously we were using the instance label to uniquely identify a pod.
This meant that getting stats by pod name would require extra queries to
Kubernetes to map pod name to instance.
This change adds a pod_name label to metrics at collection time. This
should not affect cardinality as pod_name is invariant with respect to
instance.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This is a known issue with Grafana in Kubernetes. grafana/grafana:5.0.4 was just released; update the image from 5.0.3 to 5.0.4.
Fixes #582
Signed-off-by: Deshi Xiao <xiaods@gmail.com>
The Prometheus scrape config collects from Conduit proxies, and maps
Kubernetes labels to Prometheus labels, prefixing them with "k8s_".
This change keeps the resultant Prometheus labels consistent with their
source Kubernetes labels. For example: "deployment" and
"pod_template_hash".
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The inject code detects the object it is being injected into, and writes
self-identifying information into the CONDUIT_PROMETHEUS_LABELS
environment variable, so that conduit-proxy may read this information
and report it to Prometheus at collection time.
This change puts the self-identifying information directly into
Kubernetes labels, which Prometheus already collects, removing the need
for conduit-proxy to be aware of this information. The resulting label
in Prometheus is recorded in the form `k8s_deployment`.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
In addition to dashboards displaying service health, we need a dashboard to
display the health of the Conduit service mesh itself.
This change introduces a conduit-health dashboard. It currently only
displays health metrics for the control plane components. Proxy health
will come later.
Fixes #502
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Update the inject command to set a CONDUIT_PROMETHEUS_LABELS proxy environment variable with the name of the pod spec that the proxy is injected into. This will later be used as a label value when the proxy is exposing metrics.
Fixes: #426
Signed-off-by: Alex Leong <alex@buoyant.io>
The existing telemetry pipeline relies on Prometheus scraping the
Telemetry service, which will soon be removed.
This change configures Prometheus to scrape the conduit proxies directly
for telemetry data, and the control plane components for control-plane
health information. This affects the output of both conduit install
and conduit inject.
Fixes #428, #501
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The existing `conduit dashboard` command supported opening the conduit
dashboard, or displaying the conduit dashboard URL, via a `url` boolean
flag.
Replace the `url` boolean flag with a `show` string flag, with three
modes:
`conduit dashboard --show conduit`: default, open conduit dashboard
`conduit dashboard --show grafana`: open grafana dashboard
`conduit dashboard --show url`: display dashboard URLs
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Successful `conduit check` commands now take into account `[ok]`
and `\n` tokens when constraining line length.
Fixes #554
Signed-off-by: Andy Hume <andyhume@gmail.com>
Existing Grafana configuration contained no dashboards, just a skeleton
for testing.
Introduce two Grafana dashboards:
1) Top Line: Overall health of all Conduit-enabled services
2) Deployment: Health of a specific Conduit-enabled deployment
Fixes #500
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Most controller listeners should only bind on localhost
* Use default listening addresses in controller components
* Review feedback
* Revert test_helper change
* Revert use of absolute domains
Signed-off-by: Alex Leong <alex@buoyant.io>
* Temporarily stop trying to support configurable zones in the proxy.
None of the zone configuration is tested and lots of things assume the cluster
zone is `cluster.local`. Further, how exactly the proxy will actually learn the
cluster zone hasn't been decided yet.
Just hard-code the zone as "cluster.local" in the proxy until configurable zones
are fully implemented and tested to be working correctly.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Remove the CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN setting
The way that Kubernetes configures DNS search suffixes has some negative
consequences as some names like "example.com" are ambiguous: depending on
whether there is a service "example" in the "com" namespace, "example.com"
may refer to an external service or an internal service, and this can
fluctuate over time. In recognition of that we added the
CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN setting, thinking this would
be part of a solution for users to opt out of the unfortunate behavior
if their applications didn't depend on the DNS search suffix feature.
It turns out similar effects can be achieved using a custom dnsConfig,
starting in Kubernetes 1.10, when dnsConfig reaches the beta stability level.
Now any CONDUIT_PROXY_DESTINATIONS_AUTOCOMPLETE_FQDN-based solution seems duplicative.
Further, attempting to support it optionally made the code complex and hard
to read.
Therefore, let's just remove it. If/when somebody actually requests this
functionality then we can add it back, if dnsConfig isn't a valid alternative
for them.
Signed-off-by: Brian Smith <brian@briansmith.org>
* Further hard-code "cluster.local" as the zone, temporarily.
Addresses review feedback.
Signed-off-by: Brian Smith <brian@briansmith.org>
Kubernetes will do multiple DNS lookups for a name like
`proxy-api.conduit.svc.cluster.local` based on the default search settings
in /etc/resolv.conf for each container:
1. proxy-api.conduit.svc.cluster.local.conduit.svc.cluster.local. IN A
2. proxy-api.conduit.svc.cluster.local.svc.cluster.local. IN A
3. proxy-api.conduit.svc.cluster.local.cluster.local. IN A
4. proxy-api.conduit.svc.cluster.local. IN A
We do not need or want this search to be done, so avoid it by making each
name absolute by appending a period so that the first three DNS queries
are skipped for each name.
The case for `localhost` is even worse because we expect that `localhost` will
always resolve to 127.0.0.1 and/or ::1, but this is not guaranteed if the default
search is done:
1. localhost.conduit.svc.cluster.local. IN A
2. localhost.svc.cluster.local. IN A
3. localhost.cluster.local. IN A
4. localhost. IN A
Avoid these unnecessary DNS queries by making each name absolute, so that the
first three DNS queries are skipped for each name.
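For example:
```
# Subject to resolv.conf search suffixes (up to four queries):
proxy-api.conduit.svc.cluster.local
# Absolute (trailing dot): resolved with a single query:
proxy-api.conduit.svc.cluster.local.
```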
Signed-off-by: Brian Smith <brian@briansmith.org>
Grafana by default calls out to grafana.com to check for updates. As
users of Conduit do not have direct control over updating Grafana,
this update check is not needed.
Disable Grafana's update check via grafana.ini.
This is also a workaround for #155, root cause of #519.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
When the conduit proxy is injected into the controller pod, we observe that controller pod proxy stats show up as an "outbound" deployment for an unrelated upstream deployment. This may cause confusion when monitoring deployments in the service mesh.
This PR filters out this "misleading" stat in the public api whenever the dashboard requests metric information for a specific deployment.
* Exclude telemetry generated by the control plane when requesting deployment metrics
Fixes #370
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
In the initial review for this code (preceding the creation of the
runconduit/conduit repository), it was noted that this container is not
actually used, so this is actually dead code.
Further, this container actually causes a minor problem: as it doesn't
implement any retry logic, it will often cause errors to
be logged. See
https://github.com/runconduit/conduit/issues/496#issuecomment-370105328.
Further, this is a "buoyantio/" branded container. IF we actually need
such a container then it should be a Conduit-branded container.
See https://github.com/runconduit/conduit/issues/478 for additional
context.
Signed-off-by: Brian Smith <brian@briansmith.org>
`conduit install` deploys prometheus, but lacks a general-purpose way to
visualize that data.
This change adds a Grafana container to the `conduit install` command. It
includes two sample dashboards, viz and health, in their own respective
source files.
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The telemetry service in the controller pod uses a non-fully-qualified URL to connect to the Prometheus pod in the control plane. This PR changes the telemetry service's Prometheus URL to be fully qualified, to be consistent with other URLs in the control plane. This change was tested in minikube. The logs report no errors, and the Prometheus dashboard shows that stats are being recorded from all conduit proxies.
Fixes #414
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Refactor `conduit install` test into a data-driven test. Then add a test of the actual default output of `conduit install`.
This test is useful to make it clear when we change the default
settings of `conduit install`.
Signed-off-by: Brian Smith <brian@briansmith.org>
The `app` label should be reserved for end-user applications and we
shouldn't use it ourselves. We already have a Conduit-specific label
that is prefixed with the `conduit.io/` prefix to avoid naming
collisions with users' labels, so just use that one instead.
Signed-off-by: Brian Smith <brian@briansmith.org>
In order to take advantage of the benefits the conduit proxy gives to deployments, this PR injects the conduit proxy into the control plane pod. This helps us lay the groundwork for future work such as TLS, control plane observability etc.
Fixes #311
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
* Update go-run to set version equal to root-tag
* Fix inject tests for undefined version change
* Pass inject version explicitly as arg
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove kubectl dependency, validate k8s server version via api
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Remove unused MockKubectl
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Rename kubectl.go to version.go
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
When the `inject` command is used on a YAML file that is invalid, it prints out an invalid YAML file with the injected proxy. This may give a false indication to the user that the inject was successful, even though the inject command prints an error message further down the terminal window. This PR fixes #303 and contains test input and output files that indicate what should be shown.
This PR also fixes #390.
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
- reduce row spacing on tables to make them more compact
- Rename TabbedMetricsTable to MetricsTable since it's not tabbed any more
- Format latencies greater than 1000ms as seconds
- Make sidebar collapsible
- poll the /pods endpoint from the sidebar in order to refresh the list of deployments in the autocomplete
- display the conduit namespace in the service mesh details table
- Use floats rather than Col for more responsive layout (fixes #224)
The init container injected by conduit inject rewrites the iptables configuration for its network namespace. This causes havoc when the network namespace isn't restricted to the pod, i.e. when hostNetwork=true.
Skip pods with hostNetwork=true to avoid this problem.
Fixes #366.
Signed-off-by: Brian Smith <brian@briansmith.org>
Refactor `conduit inject` code to make it unit-testable.
Refactor the conduit inject code to make it easier to add unit tests. This work was done by @deebo91 in #365. This is the same PR without the conduit install changes, so that it can land ahead of #365. In particular, this will be used for testing the fix for high-priority bug #366.
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Signed-off-by: Brian Smith <brian@briansmith.org>
Conduit has been on Prometheus 1.8.1. Prometheus 2.x promises better
performance.
Upgrade Conduit to Prometheus 2.1.0
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This PR updates the web UI to remove the pod detail page, and to remove the links to that page from pod names in metrics tables. It also removes the `pods` option from `conduit stat`, and the `sourcePod` and `targetPod` fields from the controller API proto's `MetricMetadata` message.
I've updated the `conduit stat` tests to reflect these changes, and manually verified the web UI changes.
Closes #261
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The conduit.io/* k8s labels and annotations were redundant in some
cases, and not flexible enough in others.
This change modifies the labels in the following ways:
`conduit.io/plane: control` => `conduit.io/controller-component: web`
`conduit.io/controller: conduit` => `conduit.io/controller-ns: conduit`
`conduit.io/plane: data` => (remove, redundant with `conduit.io/controller-ns`)
It also centralizes all k8s labels and annotations into
pkg/k8s/labels.go, and adds tests for the install command.
Part of #201
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Abstract Conduit API client from protobuf interface to add new features
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Consolidate mock api clients
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add simple implementation of healthcheck for conduit api
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Change NextSteps to FriendlyMessageToUser
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add grpc check for status on the client
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add simple server-side check for Conduit API
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Fix feedback from PR
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add framework for healthcheck in CLI
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add self-check for kubectl
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Clear formatting code
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Removed unused objects from status
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Removed unused parameter
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Ignore errored self checkers
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make the check error by default
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Log error, format changes
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add func to resolve kubectl-like names to canonical names
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Refactor API instantiation
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make version command testable
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make get command testable
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Add tests for api utils
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make stat command testable
Signed-off-by: Phil Calcado <phil@buoyant.io>
* Make tap command testable
Signed-off-by: Phil Calcado <phil@buoyant.io>