The control-plane components relied on a `--single-namespace` flag,
passed from `linkerd install` into each individual component, to
determine which namespaces they were authorized to access and whether
to support ServiceProfiles. This flag was redundant, since the same
authorization rules were already encoded in the `linkerd install`
output via [Cluster]Role[Binding]s.
Modify the control-plane components to query Kubernetes at startup to
determine which namespaces they are authorized to access, and whether
ServiceProfile support is available. This allows removal of the
`--single-namespace` flag on the components.
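For illustration, a minimal sketch of how a component might probe its own permissions at startup with a `SelfSubjectAccessReview` (the helper name and wiring are hypothetical, and the pre-context client-go API of this era is assumed):

```go
package k8s

import (
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	"k8s.io/client-go/kubernetes"
)

// canListServiceProfiles asks the API server whether the current service
// account may list ServiceProfiles in the given namespace (an empty
// namespace means cluster-wide). Hypothetical helper, for illustration.
func canListServiceProfiles(client kubernetes.Interface, namespace string) (bool, error) {
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "list",
				Group:     "linkerd.io",
				Resource:  "serviceprofiles",
			},
		},
	}
	// Pre-context client-go Create signature, matching this era.
	result, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(review)
	if err != nil {
		return false, fmt.Errorf("SelfSubjectAccessReview failed: %s", err)
	}
	return result.Status.Allowed, nil
}
```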
Also update `bin/test-cleanup` to clean up the ServiceProfile CRD.
TODO:
- Remove `--single-namespace` flag on `linkerd install`, part of #2164
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
In #2195 we introduced `linkerd endpoints` on the CLI. I would like similar
information to be available on the web.
This PR adds an API endpoint at `/api/endpoints` and introduces a new debugging
page, available at `/debug`, that shows a table of endpoints.
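As a rough sketch of the handler's shape (the `endpointsLister` interface and handler wiring below are hypothetical stand-ins for the repo's actual types):

```go
package web

import (
	"encoding/json"
	"net/http"
)

// endpointsLister is a stand-in for the client that queries the
// Destination API for endpoint state; the real interface differs.
type endpointsLister interface {
	Endpoints() (interface{}, error)
}

type handler struct{ api endpointsLister }

// handleAPIEndpoints backs the new /debug page, rendering endpoint
// state as JSON at /api/endpoints.
func (h *handler) handleAPIEndpoints(w http.ResponseWriter, req *http.Request) {
	endpoints, err := h.api.Endpoints()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(endpoints)
}
```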
Fixes #2077
When looking up service profiles, Linkerd always looks for service profile objects in the Linkerd control namespace. This is limiting, because service owners who wish to create service profiles may not have write access to that namespace.
Instead, we have the control plane look for the service profile in both the client namespace (as read from the proxy's `proxy_id` field in the GetProfiles request) and the service's namespace. If a service profile exists in both namespaces, the client namespace takes priority. In this way, clients may override the behavior dictated by the service.
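A minimal sketch of that precedence, assuming a simplified in-memory view of profiles (all names here are hypothetical):

```go
package destination

// ServiceProfile stands in for the real CRD type; routes, retries,
// etc. are elided.
type ServiceProfile struct{}

// profilesByNamespace maps namespace -> profile name -> profile.
type profilesByNamespace map[string]map[string]*ServiceProfile

// getProfile returns the profile for name, preferring the client's
// namespace (from proxy_id) over the service's namespace.
func getProfile(profiles profilesByNamespace, clientNS, serviceNS, name string) *ServiceProfile {
	if sp, ok := profiles[clientNS][name]; ok {
		return sp // client namespace takes priority
	}
	if sp, ok := profiles[serviceNS][name]; ok {
		return sp // otherwise fall back to the service's namespace
	}
	return nil // no profile found; default behavior applies
}
```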
Signed-off-by: Alex Leong <alex@buoyant.io>
JavaScript assets could be cached across Linkerd releases, showing an
out-of-date UI or a broken page.
Modify the webpack build pipeline to add a hash to the JS bundle
filename. Move all logic around webpack-dev-server state from Go into
JS, via a templatized index_bundle.js file, generated at build time.
Disable caching of index_bundle.js in Go, via a `Cache-Control` header.
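A sketch of the `Cache-Control` piece, assuming a simple wrapping handler (the repo's actual wiring differs):

```go
package web

import "net/http"

// noCacheBundle forces browsers to re-fetch index_bundle.js on every
// load, while hashed assets remain safely cacheable. Illustrative.
func noCacheBundle(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/index_bundle.js" {
			w.Header().Set("Cache-Control", "no-store, must-revalidate")
		}
		next.ServeHTTP(w, r)
	})
}
```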
Fixes #1996
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Proxy Grafana requests through the web service (see the sketch below)
* Fix -grafana-addr default, clarify -api-addr flag
* Fix version check in grafana dashboards
* Fix comment typo
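A sketch of the proxying piece using Go's standard reverse proxy (the function name and address handling are illustrative):

```go
package web

import (
	"net/http/httputil"
	"net/url"
)

// newGrafanaProxy forwards dashboard requests to the in-cluster
// Grafana; in practice the address comes from the -grafana-addr flag.
func newGrafanaProxy(grafanaAddr string) (*httputil.ReverseProxy, error) {
	target, err := url.Parse("http://" + grafanaAddr)
	if err != nil {
		return nil, err
	}
	return httputil.NewSingleHostReverseProxy(target), nil
}
```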
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Adds an endpoint at /profiles/new that allows you to input a service name and
namespace, and download a service profile YAML template.
This will enable future work, where we can add more of the YAML customization via
a form in the dashboard, and use that data to help the user configure routes.
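A sketch of how such a template might be rendered (the scaffold's contents and names are illustrative, assuming the `v1alpha1` ServiceProfile API):

```go
package main

import (
	"os"
	"text/template"
)

// profileTmpl is a hypothetical scaffold served for download; the
// real template is richer.
var profileTmpl = template.Must(template.New("sp").Parse(
	`apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  name: {{.Service}}.{{.Namespace}}.svc.cluster.local
  namespace: {{.Namespace}}
spec:
  routes: []
`))

func main() {
	// Render a template for an example service.
	data := struct{ Service, Namespace string }{"books", "default"}
	if err := profileTmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```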
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_`-prefixed environment variables.
* Go paths have been updated to reflect the new (future) repo location.
Problem
If you navigate directly to (or do a hard refresh on) a path with more than one segment,
e.g. http://localhost:8084/namespaces/conduit, the dashboard JS is not served.
Pages with multi-segment paths can only be reached by loading the dashboard at a
shallower path and then clicking through.
When accessing the dashboard via `conduit dashboard`, we append a path prefix so that
we can connect using the k8s proxy. This means that moving the dashboard to serve
images off relative paths won't work, because we need to serve images whether the
dashboard is loaded from http://localhost:8084/namespaces/conduit or
from http://localhost:8084/namespaces.
Solution
Check whether we're serving the dashboard behind the proxy URL, and if we are, adjust
the URL from which we serve the index bundle.
I've also added a manual fallback for when the conduit logo can't be found at the usual URL.
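A sketch of the prefix detection, assuming the k8s proxy path contains a known marker (the marker string and helper are hypothetical):

```go
package web

import (
	"net/http"
	"strings"
)

// proxyPathPrefix guesses the prefix added by the k8s proxy so assets
// can be served from absolute URLs under that prefix.
func proxyPathPrefix(req *http.Request) string {
	const marker = "/services/web:http/proxy/"
	if i := strings.Index(req.URL.Path, marker); i >= 0 {
		return req.URL.Path[:i+len(marker)]
	}
	return "/" // not behind the proxy; serve from the root
}
```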
- Reduce row spacing on tables to make them more compact
- Rename TabbedMetricsTable to MetricsTable, since it's no longer tabbed
- Format latencies greater than 1000ms as seconds
- Make the sidebar collapsible
- Poll the /pods endpoint from the sidebar to refresh the list of deployments in the autocomplete
- Display the conduit namespace in the service mesh details table
- Use floats rather than Col for a more responsive layout (fixes #224)
* Sort imports
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Upgrade k8s.io/client-go to v6.0.0
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Make k8s store initialization blocking, with a timeout (see the sketch below)
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
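A sketch of blocking initialization with a timeout, using client-go's `cache.WaitForCacheSync` (the helper is illustrative, not the repo's actual code):

```go
package k8s

import (
	"errors"
	"time"

	"k8s.io/client-go/tools/cache"
)

// waitForCaches blocks until the given informers have synced or the
// timeout elapses, so callers never read from an empty store.
func waitForCaches(timeout time.Duration, synced ...cache.InformerSynced) error {
	stop := make(chan struct{})
	time.AfterFunc(timeout, func() { close(stop) })
	if !cache.WaitForCacheSync(stop, synced...) {
		return errors.New("timed out waiting for k8s caches to sync")
	}
	return nil
}
```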
We’ve built Conduit from the ground up to be the fastest, lightest,
simplest, and most secure service mesh in the world. It features an
incredibly fast and safe data plane written in Rust, a simple yet
powerful control plane written in Go, and a design that’s focused on
performance, security, and usability. Most importantly, Conduit
incorporates the many lessons we’ve learned from over 18 months of
production service mesh experience with Linkerd.
This repository contains a few tightly-related components:
- `proxy` -- an HTTP/2 proxy written in Rust;
- `controller` -- a control plane written in Go with gRPC;
- `web` -- a UI written in React, served by Go.