Fix broken docker build by moving Service Profile conversion and validation into `/pkg`.
Fix broken integration test by adding service profile validation output to `check`'s expected output.
Testing done:
* `gotest -v ./...`
* `bin/docker-build`
* `bin/test-run $(pwd)/bin/linkerd`
Signed-off-by: Alex Leong <alex@buoyant.io>
Add a check to `linkerd check` which validates all service profile resources. In particular it checks:
* does the service profile refer to an existing service
* is the service profile valid (see the sketch below)
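A rough sketch of the shape of these checks; the helper name and error messages are illustrative, not the actual healthcheck code:

```go
package main

import (
	"fmt"
	"strings"
)

// checkServiceProfiles validates each profile name and that it refers to a
// service that exists. `services` holds known "<svc>.<ns>" keys.
func checkServiceProfiles(services map[string]struct{}, profileNames []string) error {
	for _, name := range profileNames {
		// A profile must be named <svc>.<ns> to refer to a service at all.
		if parts := strings.SplitN(name, ".", 2); len(parts) != 2 {
			return fmt.Errorf("service profile %q is not named <svc>.<ns>", name)
		}
		if _, ok := services[name]; !ok {
			return fmt.Errorf("service profile %q refers to a nonexistent service", name)
		}
	}
	return nil
}
```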
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add Grafana dashboard for Authorities
Proposal for #1225
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Implement code review suggestions
Modified Inbound by Deployment and Inbound by Pod graphs according to klingerf's feedback.
Removed template variable values.
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
We implement the getProfiles method in the destination service. This method returns a stream of destination profiles for a given authority. It does this by looking up the ServiceProfile resource in the controller namespace named `<svc>.<ns>` where `<svc>` is the name of the service and `<ns>` is the namespace of the service.
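For illustration, a minimal sketch of that name derivation (the helper is hypothetical, not the destination service's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// profileResourceName derives the ServiceProfile name for an authority such
// as "web.emojivoto.svc.cluster.local:80": the profile is looked up in the
// controller namespace under the name "web.emojivoto".
func profileResourceName(authority string) (string, error) {
	host := authority
	if i := strings.IndexByte(host, ':'); i >= 0 {
		host = host[:i] // drop the port, if any
	}
	parts := strings.Split(host, ".")
	if len(parts) < 3 || parts[2] != "svc" {
		return "", fmt.Errorf("not a Kubernetes service authority: %s", authority)
	}
	return parts[0] + "." + parts[1], nil // <svc>.<ns>
}
```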
This PR includes:
* Adding a ServiceProfile Custom Resource Definition to linkerd install
* A watch-based implementation of the getProfiles method in the destination service, similar to the implementation of get.
* An update to the destination client script that allows querying the getProfiles method.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Ensure that the proxy injector mutating webhook preserves the original labels
and annotations
The deployment's selector must also match the pod template labels in
newer versions of Kubernetes (see the sketch below).
This resolves issue #1756.
* Add the Linkerd labels to the deployment metadata during auto proxy
injection
* Remove selector match labels JSON patch from proxy injector
This isn't needed to resolve the selector label mismatch errors.
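A minimal sketch of the merge the webhook needs to do; the helper name is hypothetical:

```go
package main

// mergeLabels illustrates the fix: the webhook merges its labels into the
// original map instead of replacing it, so spec.selector.matchLabels still
// matches spec.template.metadata.labels.
func mergeLabels(orig, added map[string]string) map[string]string {
	merged := make(map[string]string, len(orig)+len(added))
	for k, v := range orig {
		merged[k] = v // preserve the original labels
	}
	for k, v := range added {
		merged[k] = v
	}
	return merged
}
```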
Signed-off-by: ihcsim <ihcsim@gmail.com>
Use jest for assertions, removing the need for mocha and chai
- Clean up test dependencies
- Move dev dependencies to devDependencies
- `yarn remove chai sinon-chai mocha`
Jest is faster, has more flexibility to run a subset of the tests, and will allow
us to remove a bunch of our assertion libraries.
Many thanks to @grampelberg for prior work on this (#1000)
This PR:
- changes the test runner from karma to jest
- moves individual tests from `/test/` to `/js/components`, where jest expects them
Appending proxy-init to the end of the init container list ensures that it
won't prevent other init containers from accessing the network before the
proxy container is created.
This resolves bug #1760
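A minimal sketch of the ordering fix (helper name hypothetical):

```go
package main

import corev1 "k8s.io/api/core/v1"

// appendProxyInit puts proxy-init at the end of the list, so init
// containers declared before it still have network access before its
// iptables rules take effect.
func appendProxyInit(spec *corev1.PodSpec, proxyInit corev1.Container) {
	spec.InitContainers = append(spec.InitContainers, proxyInit)
}
```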
Signed-off-by: ihcsim <ihcsim@gmail.com>
Added support for JSON output in `linkerd stat` through a new `(-o|--output)=json` option.
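For example, the JSON path might look roughly like this; the row schema here is illustrative, not the CLI's actual output format:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// jsonStatRow mirrors the kind of row `linkerd stat` renders; the field set
// is a placeholder.
type jsonStatRow struct {
	Name        string  `json:"name"`
	SuccessRate float64 `json:"success_rate"`
	RPS         float64 `json:"rps"`
}

// With -o json, rows are marshaled instead of laid out as a table.
func printJSON(rows []jsonStatRow) error {
	out, err := json.MarshalIndent(rows, "", "  ")
	if err != nil {
		return err
	}
	fmt.Fprintln(os.Stdout, string(out))
	return nil
}
```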
Fixes #1417
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
Adding some quick links and help to the sidebar to guide users who are
stuck to the channels that are most relevant.
Fixes #1716
Signed-off-by: Ben Lambert <ben@blam.sh>
Updates to the Kubernetes utility code in `/controller/k8s` to support interacting with ServiceProfiles.
This makes use of the code-generated client added in #1752.
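A sketch of how such a generated informer is typically wired; the import paths and the `Linkerd()`/`V1alpha1()` accessor names are assumptions based on the codegen, not verified against the actual layout:

```go
package main

import (
	"time"

	"k8s.io/client-go/tools/cache"

	// Illustrative paths for the client generated in #1752.
	sp "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned"
	spinformers "github.com/linkerd/linkerd2/controller/gen/client/informers/externalversions"
)

// newServiceProfileInformer shows how the utility code can expose a shared
// informer over ServiceProfiles.
func newServiceProfileInformer(client sp.Interface) cache.SharedIndexInformer {
	factory := spinformers.NewSharedInformerFactory(client, 10*time.Minute)
	return factory.Linkerd().V1alpha1().ServiceProfiles().Informer()
}
```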
Signed-off-by: Alex Leong <alex@buoyant.io>
* Make room for columns in `linkerd top`
Make room for columns in `linkerd top`. Columns with data longer than some predetermined minimum length were overlapping one another (sketched below).
Proposal for #1728
* Removed unneeded truncations
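A minimal sketch of the sizing rule, with a hypothetical helper:

```go
package main

// columnWidth starts from a minimum width, then widens to fit the longest
// value so adjacent columns don't collide.
func columnWidth(values []string, min int) int {
	width := min
	for _, v := range values {
		if len(v) > width {
			width = len(v)
		}
	}
	return width
}
```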
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
To support reading and writing of the ServiceProfile custom resource, we add a codegen'd Kubernetes client for this resource.
* Adding the ServiceProfile type and related boilerplate to /controller/gen/apis/serviceprofile. This boilerplate also contains directives that control how codegen works.
* A script in /hack which invokes codegen that generates Kubernetes client machinery for interacting with ServiceProfile resources. The majority of the generated code lives in /controller/gen/client.
* The above-mentioned generated code.
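For reference, the boilerplate in /controller/gen/apis/serviceprofile has roughly this shape; the Spec fields below are placeholders, not the actual schema:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ServiceProfile sketches the resource type registered with codegen.
type ServiceProfile struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ServiceProfileSpec `json:"spec"`
}

// ServiceProfileSpec is illustrative only.
type ServiceProfileSpec struct {
	Routes []string `json:"routes,omitempty"`
}
```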
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
`go test` was failing with
`Fatalf call has arguments but no formatting directives`
Fix the test failure and make all logrus API calls consistent (see below).
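The offending pattern and its fix look like this (the message and error are illustrative):

```go
package main

import log "github.com/sirupsen/logrus"

func example(err error) {
	// Before: go vet fails with
	// "Fatalf call has arguments but no formatting directives".
	// log.Fatalf("failed to sync caches", err)

	// After: a formatting directive for every argument.
	log.Fatalf("failed to sync caches: %s", err)
}
```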
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This PR updates the proxy SHA the build is pinned. This is in order to
track dependency updates in the proxy for the upcoming edge release.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
`linkerd.io/auto-inject: disabled` or `linkerd.io/auto-inject: completed`
(sketched after this list)
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
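For illustration, a minimal sketch of the skip logic from item 5; the helper name is hypothetical:

```go
package main

import corev1 "k8s.io/api/core/v1"

const autoInjectLabel = "linkerd.io/auto-inject"

// skipInjection reports whether the webhook should leave the pod alone,
// per the label values described above.
func skipInjection(pod *corev1.Pod) bool {
	switch pod.Labels[autoInjectLabel] {
	case "disabled", "completed":
		return true
	default:
		return false
	}
}
```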
Proposed implementation for #561.
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the command
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using `healthcheck.HasExistingSidecars()` to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug which happens with the `kubectl run` command, where the deployment
is also assigned the auto-inject label. The mutation causes the pod's auto-inject
label to not match the deployment label, causing `kubectl run` to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The inbound and outbound ignore ports are changed to string type to avoid
broken YAML caused by Go's default string conversion of the uint slice
(see the sketch below).
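A minimal sketch of that conversion, with a hypothetical helper:

```go
package main

import (
	"strconv"
	"strings"
)

// portsToString renders the ignore ports as one comma-separated string, so
// the template emits valid YAML instead of Go's default formatting of a
// uint slice (e.g. "[4190 4191]").
func portsToString(ports []uint) string {
	parts := make([]string, len(ports))
	for i, p := range ports {
		parts[i] = strconv.FormatUint(uint64(p), 10)
	}
	return strings.Join(parts, ",")
}
```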
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
Pin the proxy version to a specific SHA instead of floating on latest. This prevents breaking changes in the proxy repo from breaking the main Linkerd 2 repo.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Added --context flag to specify the context to use to talk to the Kubernetes apiserver
* Fix tests that are failing
* Updated context flag description
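For illustration, the usual client-go way to honor such a `--context` flag (a sketch, not necessarily the exact code here):

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig applies the context as an override on the standard
// kubeconfig loading rules.
func buildConfig(kubeconfig, context string) (*rest.Config, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	rules.ExplicitPath = kubeconfig
	overrides := &clientcmd.ConfigOverrides{CurrentContext: context}
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
}
```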
Signed-off-by: Darko Radisic <ffd2subroutine@users.noreply.github.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests, so this adds the ability to set CPU and memory requests in the install and inject commands by providing the proxy options `--proxy-cpu` and `--proxy-memory` (sketched below).
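A minimal sketch of how the flag values turn into container resource requests (flag parsing omitted; helper name hypothetical):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// proxyResources builds the requests set on the proxy container from the
// --proxy-cpu and --proxy-memory values.
func proxyResources(cpu, memory string) corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse(cpu),
			corev1.ResourceMemory: resource.MustParse(memory),
		},
	}
}
```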
Fixes #1480
Signed-off-by: Ben Lambert <ben@blam.sh>
In linkerd/linkerd2-proxy#99, several proxy configuration variables were
deprecated.
This change updates the CLI to use the updated names to avoid
deprecation warnings during startup.
* Use ListPods always for data plane HC
* Missing changes in grpc_server.go
* Address review comments
* Read proxy version from spec
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Add a clear button to the tap and top query forms
Add clear functionality to Tap
Add clear functionality to Top
Fix a react key error when there were multiple unmeshed upstreams
Fix bug where tap results from other queries persisted when changing the query
Fix key error in autocomplete dropdown
Use the same tap data we use to display the unmeshed resources in the Octopus
graph to add the unmeshed rows to the Inbound stat table.
The unmeshed rows are filtered by resource type, so if we're on a Deployments
page, only upstreams which are deployments will show in the table. (Others, such
as IPs, will still show in the octopus graph).
* Use the list of unmeshed resources to display unmeshed sources in the table
* Keep track of number of pods in unmeshed sources
When I deleted a resource, I noticed that hard refreshing the page resulted in
js errors that would break the UI (e.g. if you were on a pod page, and the pod's
deployment was deleted, the pod would no longer be found, and the page would
error). This PR better handles not-present resources so that the UI still shows
up and shows you that there aren't metrics for that resource.
Also clean up the undefined/undefined octopus node that would show up in that
case.
Previously, we would display source and destination info in the Top/Tap table
popovers in a vertical format. This PR places them in a table so that each type
of source/dest (ip, pod, pod owner) can be read left to right.
* Display the popover Source/Destination info for the tap and top tables in a
tabular format
* Added an arrow column between Src and Dst
When pods or deployments are in an "Initialization" phase, we currently see a "warning" icon that represents pods undergoing some kind of change. This may sometimes seem alarming when initially injecting pods after installing Linkerd.
This PR adds a new icon that shows up when pods are in the "PodInitializing" phase and shows the former "warning" icon when there is an error in starting pods.
Fixes #1652
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Draw customizable SVG paths for octopus arms.
This also combines all the unmeshed resources into a list and displays them in
one resource box, instead of adding one box per unmeshed resource. This helps
keep the box heights constant, which I need in order to draw the arrows.
When a resource has no tap events being streamed to the Tap UI, and a user hits the "Stop" button in the Tap page, the tap stream is left open due to the WebSocket connection not being closed.
It looks like the web server's tap client that is created to stream events from the tap server blocks the main request thread in the web server. This causes the web server to stop receiving any subsequent close frames from the UI i.e. when the "Stop" button is clicked.
This PR moves the tapClient initialization code to a separate goroutine, specifically, the goroutine that reads tap events from the incoming grpc tap stream. This allows the main thread to continue reading messages from the WebSocket connection and allow it to receive close frames.
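A sketch of the fix with simplified types; the events channel and function shape are illustrative, not the web server's actual code:

```go
package main

import "github.com/gorilla/websocket"

// handleTap forwards tap events in a separate goroutine, so the loop below
// keeps reading from the WebSocket and sees the close frame sent when
// "Stop" is clicked.
func handleTap(ws *websocket.Conn, events <-chan []byte, done chan<- struct{}) {
	go func() {
		for ev := range events {
			// Gorilla allows one concurrent writer alongside one reader.
			if err := ws.WriteMessage(websocket.TextMessage, ev); err != nil {
				return
			}
		}
	}()

	for {
		// ReadMessage returns an error once the client sends a close frame.
		if _, _, err := ws.ReadMessage(); err != nil {
			close(done)
			return
		}
	}
}
```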
Fixes #1665
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>