To support reading and writing of the ServiceProfile custom resource, we add a codegen'd Kubernetes client for this resource. This change includes:
* Adding the ServiceProfile type and related boilerplate to /controller/gen/apis/serviceprofile. This boilerplate also contains directives that control how codegen works.
* A script in /hack that invokes codegen to generate Kubernetes client machinery for interacting with ServiceProfile resources. The majority of the generated code lives in /controller/gen/client.
* The above-mentioned generated code.
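For illustration, the type boilerplate and its codegen directives look roughly like this (the package name and the ServiceProfileSpec fields shown here are placeholders, not the exact generated API):

```go
// Sketch of the boilerplate in /controller/gen/apis/serviceprofile.
// The +genclient and deepcopy-gen markers are the directives that drive codegen.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ServiceProfile is the custom resource the generated client reads and writes.
type ServiceProfile struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ServiceProfileSpec `json:"spec"`
}

// ServiceProfileSpec is a placeholder for the profile's actual schema.
type ServiceProfileSpec struct {
	Routes []string `json:"routes,omitempty"`
}
```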
Signed-off-by: Alex Leong <alex@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
linkerd.io/auto-inject: disabled or linkerd.io/auto-inject: completed
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
Proposed implementation for #561.
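A minimal sketch of the skip logic from item 5; the constant and helper names are illustrative, not the actual webhook code:

```go
// shouldInject decides whether a namespace/pod should be mutated, based on
// its linkerd.io/auto-inject label.
const autoInjectLabel = "linkerd.io/auto-inject"

func shouldInject(labels map[string]string) bool {
	switch labels[autoInjectLabel] {
	case "disabled", "completed":
		return false // explicitly opted out, or proxy already injected
	default:
		return true
	}
}
```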
* Resolve missing package errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the CLI command
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we no longer need to use the auto-inject label as
an indicator.
This resolves a bug that occurs with the `kubectl run` command, where the deployment
is also assigned the auto-inject label. The mutation caused the pod's auto-inject
label to no longer match the deployment's label, causing `kubectl run` to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignore-inbound-ports and ignore-outbound-ports values are changed to string type to
avoid broken YAML caused by the string conversion of the uint port slice.
Also, parameterized the proxy bind timeout option in template.go.
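A sketch of the kind of conversion involved (the helper name is hypothetical):

```go
import (
	"strconv"
	"strings"
)

// portsToString renders a port list as a comma-separated string so the
// template emits e.g. "4190,4191" instead of a Go-formatted uint slice.
func portsToString(ports []uint) string {
	strs := make([]string, len(ports))
	for i, p := range ports {
		strs[i] = strconv.FormatUint(uint64(p), 10)
	}
	return strings.Join(strs, ",")
}
```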
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests, so this adds the ability to set CPU and memory requests on the install and inject commands by providing the proxy options --proxy-cpu and --proxy-memory.
Fixes #1480
Signed-off-by: Ben Lambert <ben@blam.sh>
Add checks to `linkerd check --pre` to verify that the user has permission to create:
* namespaces
* serviceaccounts
* clusterroles
* clusterrolebindings
* services
* deployments
* configmaps
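Each of these checks can be expressed as a SelfSubjectAccessReview against the API server; a rough sketch (the helper is illustrative and uses current client-go signatures, which may differ from the vendored version):

```go
import (
	"context"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canCreate asks the API server whether the current user may create the
// given resource type, e.g. "namespaces" or "clusterroles".
func canCreate(ctx context.Context, k kubernetes.Interface, resource string) (bool, error) {
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "create",
				Resource: resource,
			},
		},
	}
	resp, err := k.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```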
Signed-off-by: Alex Leong <alex@buoyant.io>
* Upgrade to dep 0.5.0, go 1.10.3
* Remove existing dep binary if it's the wrong version
* Add version in filename of dep binary to prevent version conflicts
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
This is an initial implementation of the `linkerd top` command. This command launches an ncurses-style tabular view of current requests (using data from tap). Most of the command-line arguments are the same as tap's, and allow selecting the resource to inspect and filtering which requests to view.
Fixes #1283
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds a tap endpoint in the web api that communicates with the dashboard
via websockets.
I've moved a bunch of code from the cli tap.go into utils so that the code
can be shared between web and CLI. I think we should consider making the
display more suited to web, but in the short term, reusing the CLI's
rendering of tap events works.
Adds a Tap page in the Web UI that you can use to make tap requests.
The form currently only allows you to enter a resource and namespace;
other filters are coming in a follow-up branch.
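For reference, the server side of such a websocket endpoint looks roughly like this, assuming gorilla/websocket; the handler shape is illustrative, not the actual web API code:

```go
import (
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

// handleTap upgrades the request to a websocket and streams pre-rendered
// tap events (produced by the shared CLI rendering code) to the dashboard.
func handleTap(w http.ResponseWriter, r *http.Request, events <-chan string) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()

	for ev := range events {
		if err := conn.WriteMessage(websocket.TextMessage, []byte(ev)); err != nil {
			return // client went away
		}
	}
}
```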
This PR begins to migrate Conduit to Linkerd2:
* The proxy has been completely removed from this repo, and is now located at
github.com/linkerd/linkerd2-proxy.
* A `Dockerfile-proxy` has been added to fetch the most-recently published proxy
binary from build.l5d.io.
* Proxy-specific protobuf bindings have been moved to
github.com/linkerd/linkerd2-proxy-api.
* All docker images now use the gcr.io/linkerd-io registry.
* `inject` now uses `LINKERD2_PROXY_` environment variables.
* Go paths have been updated to reflect the new (future) repo location.
Create an ephemeral, in-memory TLS certificate authority and integrate it into the certificate distributor.
Remove the re-creation of deleted ConfigMaps; this will be added back later in #1248.
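A minimal sketch of creating such an in-memory CA with the standard library (not the actual certificate code in this repo):

```go
import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// newInMemoryCA generates an ephemeral ECDSA key and a self-signed CA
// certificate, both held only in memory.
func newInMemoryCA() (*x509.Certificate, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Cluster-local CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}
```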
Signed-off-by: Brian Smith <brian@briansmith.org>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Add CA certificate bundle distributor to conduit install
* Update ca-distributor to use shared informers
* Only install CA distributor when --enable-tls flag is set
* Only copy CA bundle into namespaces where injected pods have the same controller
* Update API config to only watch pods and configmaps
* Address review feedback
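The shared-informer wiring mentioned above looks roughly like this (hedged sketch; handler bodies and names are not the actual distributor code):

```go
import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchConfigMaps registers a ConfigMap event handler on a shared informer
// factory so the distributor reacts to adds/updates without polling.
func watchConfigMaps(k kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(k, 10*time.Minute)
	informer := factory.Core().V1().ConfigMaps().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* copy CA bundle if relevant */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* re-distribute bundle */ },
	})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```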
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Update destination service to use shared informer instead of custom endpoints informer
* Add additional tests for dst svc endpoints watcher
* Remove service ports when all listeners unsubscribed
* Update go deps
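The unsubscribe bookkeeping amounts to reference counting per service port; a generic sketch (names are illustrative, not the actual watcher code):

```go
import "sync"

// portListeners tracks subscribers per service port and reports when the
// last one goes away so the port entry can be removed.
type portListeners struct {
	mu        sync.Mutex
	listeners map[int32]int
}

func newPortListeners() *portListeners {
	return &portListeners{listeners: make(map[int32]int)}
}

func (p *portListeners) subscribe(port int32) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.listeners[port]++
}

// unsubscribe returns true when the final listener for the port is gone.
func (p *portListeners) unsubscribe(port int32) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.listeners[port]--
	if p.listeners[port] <= 0 {
		delete(p.listeners, port)
		return true
	}
	return false
}
```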
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
After this was implemented we found that ExternalName services are
represented in DNS as CNAMEs, which means that the proxy's DNS
fallback logic can be used instead of doing DNS in the control
plane. Besides simplifying the controller, this will also increase
fidelity with the proxied pods' DNS configuration (improve
transparency).
Signed-off-by: Brian Smith <brian@briansmith.org>
The `conduit tap` command is now deprecated.
Replace `conduit tap` with `conduit tapByResource`. Rename tapByResource
to tap. The underlying protobuf for tap remains, but the tap gRPC endpoint now
returns Unimplemented.
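The deprecated endpoint now just rejects calls with the standard gRPC status, roughly as follows (the `pb` request and stream type names are approximations of the generated protobuf types, not the exact identifiers):

```go
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Tap is the deprecated RPC; it only returns Unimplemented.
func (s *grpcServer) Tap(req *pb.TapRequest, stream pb.Api_TapServer) error {
	return status.Error(codes.Unimplemented, "Tap is deprecated, use TapByResource")
}
```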
Fixes #804
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
public-api and tap were both using their own implementations of
the Kubernetes Informer/Lister APIs.
This change factors out all Informer/Lister usage into the Lister
module. This also introduces a new `Lister.GetObjects` method.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The TapByResource endpoint was previously a stub.
Implement end-to-end tapByResource functionality, with support for
specifying any kubernetes resource(s) as target and destination.
Fixes #803, #49
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The Destination service does not provide ReplicaSet information to the
proxy.
The `pod-template-hash` label approximates selecting over all pods in a
ReplicaSet or ReplicationController. Modify the Destination service to
provide this label to the proxy.
Relates to #508 and #741
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Conduit was relying on apps/v1 for the Deployment and ReplicaSet APIs.
apps/v1 is not available on Kubernetes 1.8. This prevented the
public-api from starting.
Switch Conduit to use apps/v1beta2. Also increase the Kubernetes API
cache sync timeout from 10 to 60 seconds, as it was taking 11 seconds on
a test cluster.
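The bounded cache sync looks roughly like this (sketch only; the 60-second value mirrors the new timeout described above):

```go
import (
	"errors"
	"time"

	"k8s.io/client-go/tools/cache"
)

// waitForCacheSync blocks until the given informer caches sync, or fails
// after 60 seconds (previously 10).
func waitForCacheSync(synced ...cache.InformerSynced) error {
	stopCh := make(chan struct{})
	time.AfterFunc(60*time.Second, func() { close(stopCh) })
	if !cache.WaitForCacheSync(stopCh, synced...) {
		return errors.New("timed out waiting for k8s caches to sync")
	}
	return nil
}
```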
Fixes #761
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The new StatSummary endpoint was only providing request volume and
success rate information.
Add support for retrieving latency stats via StatSummary. Also make
all prometheus calls in parallel, and implement kubernetes test
fixtures.
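The parallel query pattern is roughly the following (sketch only; it assumes current client_golang and errgroup APIs and placeholder query strings):

```go
import (
	"context"
	"time"

	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
	"golang.org/x/sync/errgroup"
)

// queryAll issues several Prometheus queries concurrently and collects the
// results in order.
func queryAll(ctx context.Context, api promv1.API, queries []string) ([]model.Value, error) {
	results := make([]model.Value, len(queries))
	g, ctx := errgroup.WithContext(ctx)
	for i, q := range queries {
		i, q := i, q // capture loop variables
		g.Go(func() error {
			val, _, err := api.Query(ctx, q, time.Now())
			results[i] = val
			return err
		})
	}
	return results, g.Wait()
}
```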
Fixes #681
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Switch public API to use cached k8s resources
* Move shared informer code to separate goroutine
* Fix spelling issue
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
The new statsummary command accepted friendly k8s names, which worked
for k8s queries, but Prometheus requires a specific key.
Modify the statsummary query to map friendly k8s names to canonical k8s
names when constructing the query. Then during the query, map the
canonical k8s name to a specific Prometheus label.
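Conceptually the two-step mapping looks like this (both tables and the helper are placeholders, not the real ones in the stat summary code):

```go
// Friendly, user-facing names are first canonicalized...
var friendlyToCanonical = map[string]string{
	"deploy":      "deployment",
	"deployments": "deployment",
}

// ...and the canonical name is then translated to a Prometheus label key.
var canonicalToPromLabel = map[string]string{
	"deployment": "deployment",
	"pod":        "pod_name",
}

func promLabelFor(friendly string) (string, bool) {
	canonical, ok := friendlyToCanonical[friendly]
	if !ok {
		canonical = friendly
	}
	label, ok := canonicalToPromLabel[canonical]
	return label, ok
}
```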
Fixes #695
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Start implementing new conduit stat summary endpoint.
Changes the public-api to call prometheus directly instead of the
telemetry service. Wired through to `api/stat` on the web server,
as well as `conduit statsummary` on the CLI. Works for deployments only.
Current implementation just retrieves requests and mesh/total pod count
(so latency stats are always 0).
Uses API defined in #663
Example queries that the stat endpoint will eventually satisfy are in #627
This branch includes commits from @klingerf
* run ./bin/dep ensure
* run ./bin/update-go-deps-shas
* Add tests/utils/scripts for running integration tests
Add a suite of integration tests in the `test/` directory, as well as
utilities for testing in the `testutil/` directory.
You can use the `bin/test-run` script to run the full suite of tests,
and the `bin/test-cleanup` script to cleanup after the tests.
The test/README.md file has more information about running tests.
@pcalcado, @franziskagoltz, and @rmars also contributed to this change.
* Create TEST.md file at the root of the repo
* Update based on review feedback
* Relax external service IP timeout for GKE
* Update TEST.md with more info about different types of test runs
* More updates to TEST.md based on review feedback
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Shortly after Conduit is installed in a k8s environment, the control plane components that establish watches against the k8s API can run into networking issues during proxy initialization. During such a failure, each watcher fails to retry its connection to the k8s watch endpoint, which leads to timeouts and, eventually, multiple controller pod restarts.
This PR adds retry logic to each watch-enabled package, as sketched below.
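A sketch of the retry pattern (names are illustrative):

```go
import (
	"log"
	"time"
)

// watchWithRetry keeps re-establishing a watch with a short backoff instead
// of giving up after the first failure.
func watchWithRetry(stopCh <-chan struct{}, establish func() error) {
	for {
		select {
		case <-stopCh:
			return
		default:
		}
		if err := establish(); err != nil {
			log.Printf("watch failed, retrying: %v", err)
			time.Sleep(2 * time.Second) // backoff before re-connecting
		}
	}
}
```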
Fixes #478
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
* Use Go 1.10.0 to build Go components.
Take advantage of the new build cache in Go 1.10. Future work on improving
build performance will utilize the build cache further.
Signed-off-by: Brian Smith <brian@briansmith.org>
Prometheus queries from the Telemetry service were taking seconds or tens
of seconds.
Optimize these queries:
- Move all summary queries requiring a single data point off of Prometheus'
QueryRange() endpoint and onto Query()
- Set `defaultVectorRange` to 30s, and also use it regardless of time
window
Also add tests for grpc_server and telemetry server
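The gist of the QueryRange() to Query() move, as a hedged sketch (current client_golang signatures assumed; the function and query are placeholders):

```go
import (
	"context"
	"time"

	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

// singlePointStat fetches one data point. An instant query at "now" is
// enough here; a full QueryRange() over the time window is unnecessary.
func singlePointStat(ctx context.Context, api promv1.API, query string) (model.Value, error) {
	val, _, err := api.Query(ctx, query, time.Now())
	return val, err
}
```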
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #260
The `conduit dashboard` command asynchronously shells out and runs "kubectl
proxy".
This change replaces the shelling out with calls to Kubernetes proxy
APIs. It also allows us to enable race detection in our Go tests, as the
shell-out code's tests did not pass race detection.
Fixes #173
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add `bin/dep` which fetches a fixed version of `dep` to be used.
* Upgrade from dep 0.3.1 to 0.4.1
* Fix inconsistent Gopkg.lock by checking in the result of `bin/dep ensure`
Signed-off-by: Alex Leong <alex@buoyant.io>
The conduit.io/* k8s labels and annotations were redundant in some
cases, and not flexible enough in others.
This change modifies the labels in the following ways:
`conduit.io/plane: control` => `conduit.io/controller-component: web`
`conduit.io/controller: conduit` => `conduit.io/controller-ns: conduit`
`conduit.io/plane: data` => (remove, redundant with `conduit.io/controller-ns`)
It also centralizes all k8s labels and annotations into
pkg/k8s/labels.go, and adds tests for the install command.
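For example, pkg/k8s/labels.go centralizes constants along these lines (the constant identifiers are illustrative; the label values are the ones listed above):

```go
const (
	ControllerComponentLabel = "conduit.io/controller-component"
	ControllerNSLabel        = "conduit.io/controller-ns"
)
```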
Part of #201
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Sort imports
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Upgrade k8s.io/client-go to v6.0.0
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Make k8s store initialization blocking with timeout
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
We’ve built Conduit from the ground up to be the fastest, lightest,
simplest, and most secure service mesh in the world. It features an
incredibly fast and safe data plane written in Rust, a simple yet
powerful control plane written in Go, and a design that’s focused on
performance, security, and usability. Most importantly, Conduit
incorporates the many lessons we’ve learned from over 18 months of
production service mesh experience with Linkerd.
This repository contains a few tightly-related components:
- `proxy` -- an HTTP/2 proxy written in Rust;
- `controller` -- a control plane written in Go with gRPC;
- `web` -- a UI written in React, served by Go.