Adding some quick links and help to the sidebar to guide users that
are stuck to the channels that are most relevant.
Fixes #1716
Signed-off-by: Ben Lambert <ben@blam.sh>
Updates to the Kubernetes utility code in `/controller/k8s` to support interacting with ServiceProfiles.
This makes use of the code-generated client added in #1752.
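For context, a minimal sketch of how the generated client might be consumed from the utility code, assuming the standard client-gen layout; the import path and the group/version accessor below are illustrative rather than the repo's exact names.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	// Hypothetical import path; the generated code lives under /controller/gen/client.
	spclient "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned"
)

func main() {
	// Build a REST config from a local kubeconfig (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// NewForConfig is the constructor client-gen emits for every clientset.
	client, err := spclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The group/version accessor name is an assumption; client-gen derives it
	// from the API group declared alongside the ServiceProfile type.
	profiles, err := client.LinkerdV1alpha1().ServiceProfiles("default").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sp := range profiles.Items {
		fmt.Println(sp.Name)
	}
}
```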
Signed-off-by: Alex Leong <alex@buoyant.io>
* Make room for columns in `linkerd top`
Make room for columns in `linkerd top`. Columns with data longer than some predetermined minimum length were stepping over each other (a sketch of the idea follows below).
Proposal for #1728
* Removed unneeded truncations
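A rough illustration of the width handling involved, not the actual `linkerd top` rendering code: size each column to its widest value (with a floor) instead of truncating, and pad with `%-*s`.

```go
package main

import "fmt"

// columnWidths returns, for each column, the width of its longest cell,
// with a minimum so short values still get some breathing room.
func columnWidths(rows [][]string, min int) []int {
	if len(rows) == 0 {
		return nil
	}
	widths := make([]int, len(rows[0]))
	for i := range widths {
		widths[i] = min
	}
	for _, row := range rows {
		for i, cell := range row {
			if len(cell) > widths[i] {
				widths[i] = len(cell)
			}
		}
	}
	return widths
}

func main() {
	rows := [][]string{
		{"SOURCE", "DESTINATION", "PATH"},
		{"web-5d8b9c7f6-abcde", "voting-6f8d9b-xyz12", "/api/vote"},
	}
	widths := columnWidths(rows, 8)
	for _, row := range rows {
		for i, cell := range row {
			// %-*s left-justifies each cell to its column width plus a gutter.
			fmt.Printf("%-*s", widths[i]+2, cell)
		}
		fmt.Println()
	}
}
```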
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
To support reading and writing of the ServiceProfile custom resource, we add a codegen'd Kubernetes client for this resource.
* Adding the ServiceProfile type and related boilerplate to /controller/gen/apis/serviceprofile. This boilerplate also contains directives that control how codegen works (a sketch of this boilerplate follows the list).
* A script in /hack which invokes codegen that generates Kubernetes client machinery for interacting with ServiceProfile resources. The majority of the generated code lives in /controller/gen/client.
* The above-mentioned generated code.
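The boilerplate follows the usual client-gen/deepcopy-gen pattern; a sketch along these lines, where the group version and the spec fields are placeholders rather than the actual ServiceProfile schema:

```go
package v1alpha1 // hypothetical group version; the real one is declared under /controller/gen/apis/serviceprofile

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ServiceProfile describes traffic expectations for a service.
type ServiceProfile struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ServiceProfileSpec `json:"spec"`
}

// ServiceProfileSpec is a placeholder; the real spec lives in
// /controller/gen/apis/serviceprofile.
type ServiceProfileSpec struct {
	Routes []RouteSpec `json:"routes,omitempty"`
}

// RouteSpec is likewise illustrative.
type RouteSpec struct {
	Name string `json:"name,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ServiceProfileList is the list type codegen expects alongside the resource.
type ServiceProfileList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`

	Items []ServiceProfile `json:"items"`
}
```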
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add --single-namespace install flag for restricted permissions (a flag-wiring sketch follows this list)
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
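A hedged sketch of how such a flag might be wired into the install command with cobra (the CLI's command framework); the option struct, help text, and render helper are illustrative.

```go
package cmd

import "github.com/spf13/cobra"

type installOptions struct {
	singleNamespace bool
}

func newCmdInstall() *cobra.Command {
	options := &installOptions{}

	cmd := &cobra.Command{
		Use:   "install",
		Short: "Output Kubernetes configs to install Linkerd",
		RunE: func(cmd *cobra.Command, args []string) error {
			// With singleNamespace set, the rendered template would emit
			// namespace-scoped Roles/RoleBindings instead of cluster-wide RBAC.
			return render(options)
		},
	}

	cmd.PersistentFlags().BoolVar(&options.singleNamespace, "single-namespace", false,
		"Experimental: install Linkerd into a single namespace, using namespace-scoped RBAC only")

	return cmd
}

// render stands in for the real template rendering.
func render(options *installOptions) error { return nil }
```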
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
`go test` was failing with
`Fatalf call has arguments but no formatting directives`.
Fix the test failure and make all logrus API calls consistent.
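The message comes from the printf checks that `go test` runs via vet when a format-style call gets extra arguments but no %-verbs; a hedged before/after illustration (the message strings are made up):

```go
package main

import log "github.com/sirupsen/logrus"

func main() {
	err := connect()

	// Before: Fatalf receives an argument but no formatting directive, which
	// triggers "Fatalf call has arguments but no formatting directives".
	//   log.Fatalf("failed to connect to the Kubernetes API", err)

	// After: either add a directive for the argument...
	if err != nil {
		log.Fatalf("failed to connect to the Kubernetes API: %s", err)
	}

	// ...or use the non-formatting variant when there is nothing to format:
	//   log.Fatal("failed to connect to the Kubernetes API")
}

// connect stands in for whatever call produced the error.
func connect() error { return nil }
```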
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This PR updates the proxy SHA that the build is pinned to. This is in order to
track dependency updates in the proxy for the upcoming edge release.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
linkerd.io/auto-inject: disabled or linkerd.io/auto-inject: completed
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
Proposed implementation for #561.
* Resolve missing package errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have the proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the CLI command (see the sketch after this list)
* PR review items on creating/updating the MWC at start-up
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug which happens with the kubectl run command where the deployment
is also assigned the auto-inject label. The mutation causes the pod auto-inject
label to not match the deployment label, causing kubectl run to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignore-inbound and ignore-outbound ports are now rendered as strings to
avoid broken YAML caused by converting the uint slice to a string.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
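For reference, a simplified sketch of the "already injected" guard mentioned above; the container names it checks are assumptions about what the webhook looks for.

```go
package inject

import corev1 "k8s.io/api/core/v1"

// hasExistingSidecar reports whether the pod spec already carries a Linkerd
// proxy or init container, in which case the webhook leaves the pod alone.
func hasExistingSidecar(spec *corev1.PodSpec) bool {
	for _, c := range spec.Containers {
		if c.Name == "linkerd-proxy" {
			return true
		}
	}
	for _, c := range spec.InitContainers {
		if c.Name == "linkerd-init" {
			return true
		}
	}
	return false
}
```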
Signed-off-by: ihcsim <ihcsim@gmail.com>
Pin the proxy version to a specific SHA instead of floating on latest. This prevents breaking changes in the proxy repo from breaking the main Linkerd 2 repo.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Added a --context flag to specify the context used to talk to the Kubernetes apiserver (see the sketch after this list)
* Fix tests that are failing
* Updated context flag description
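With client-go, honoring a --context flag typically means overriding the current context while building the REST config; a sketch under that assumption (function and package names are illustrative):

```go
package k8s

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig loads the usual kubeconfig rules and, when context is non-empty,
// overrides the current-context before producing a rest.Config.
func buildConfig(kubeconfigPath, context string) (*rest.Config, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	if kubeconfigPath != "" {
		rules.ExplicitPath = kubeconfigPath
	}

	overrides := &clientcmd.ConfigOverrides{CurrentContext: context}

	return clientcmd.
		NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).
		ClientConfig()
}
```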
Signed-off-by: Darko Radisic <ffd2subroutine@users.noreply.github.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests, so this adds the ability to set CPU and memory requests in the install and inject commands via the proxy options --proxy-cpu and --proxy-memory.
Fixes #1480
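On the template side, those flag values would end up as a Kubernetes ResourceRequirements block on the proxy container; a hedged sketch, with the flag plumbing omitted:

```go
package inject

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// proxyResources turns the --proxy-cpu / --proxy-memory values into resource
// requests for the sidecar container. Empty values leave the request unset.
func proxyResources(cpu, memory string) corev1.ResourceRequirements {
	requests := corev1.ResourceList{}
	if cpu != "" {
		requests[corev1.ResourceCPU] = resource.MustParse(cpu) // e.g. "100m"
	}
	if memory != "" {
		requests[corev1.ResourceMemory] = resource.MustParse(memory) // e.g. "128Mi"
	}
	return corev1.ResourceRequirements{Requests: requests}
}
```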
Signed-off-by: Ben Lambert <ben@blam.sh>
In linkerd/linkerd2-proxy#99, several proxy configuration variables were
deprecated.
This change updates the CLI to use the updated names to avoid
deprecation warnings during startup.
* Always use ListPods for data plane health checks
* Missing changes in grpc_server.go
* Address review comments
* Read proxy version from spec
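Reading the proxy version from the spec presumably means parsing the sidecar's image tag; a rough sketch, with the container name and image format assumed:

```go
package healthcheck

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// proxyVersion extracts the image tag of the linkerd-proxy container, e.g.
// "gcr.io/linkerd-io/proxy:v18.9.2" -> "v18.9.2". Returns "" when not found.
func proxyVersion(pod *corev1.Pod) string {
	for _, c := range pod.Spec.Containers {
		if c.Name != "linkerd-proxy" {
			continue
		}
		parts := strings.Split(c.Image, ":")
		if len(parts) == 2 {
			return parts[1]
		}
	}
	return ""
}
```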
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Add a clear button to the tap and top query forms
Add clear functionality to Tap
Add clear functionality to Top
Fix a react key error when there were multiple unmeshed upstreams
Fix bug where tap results from other queries persisted when changing the query
Fix key error in autocomplete dropdown
Use the same tap data we use to display the unmeshed resources in the Octopus
graph to add the unmeshed rows to the Inbound stat table.
The unmeshed rows are filtered by resource type, so if we're on a Deployments
page, only upstreams which are deployments will show in the table. (Others, such
as IPs, will still show in the octopus graph).
* Use the list of unmeshed resources to display unmeshed sources in the table
* Keep track of number of pods in unmeshed sources
When I deleted a resource, I noticed that hard refreshing the page resulted in
js errors that would break the UI (e.g. if you were on a pod page, and the pod's
deployment was deleted, the pod would no longer be found, and the page would
error). This PR better handles not-present resources so that the UI still shows
up and shows you that there aren't metrics for that resource.
Also clean up the undefined/undefined octopus node that would show up in that
case.
Previously, we would display source and destination info in the Top/Tap table
popovers in a vertical format. This PR places them in a table so that each type
of source/dest (ip, pod, pod owner) can be read left to right.
* Display the popover Source/Destination info for the tap and top tables in a
tabular format
* Added an arrow column between Src and Dst
When pods or deployments are in an "Initialization" phase we currently see a "warning" icon that represents pods undergoing some kind of change. This may sometimes seem alarming when initially injecting pods after installing Linkerd.
This PR adds a new icon that shows up when pods are in the "PodInitializing" phase and shows the former "warning" icon when there is an error in starting pods.
fixes #1652
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Draw customizable SVG paths for octopus arms.
This also combines all the unmeshed resources into a list and displays them in
one resource box, instead of adding one box per unmeshed resource. This helps
keep the box heights constant, which I need in order to draw the arrows.
When a resource has no tap events being streamed to the Tap UI, and a user hits the "Stop" button in the Tap page, the tap stream is left open due to the WebSocket connection not being closed.
It looks like the web server's tap client that is created to stream events from the tap server blocks the main request thread in the web server. This causes the web server to stop receiving any subsequent close frames from the UI i.e. when the "Stop" button is clicked.
This PR moves the tapClient initialization code to a separate goroutine, specifically, the goroutine that reads tap events from the incoming grpc tap stream. This allows the main thread to continue reading messages from the WebSocket connection and allow it to receive close frames.
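Schematically, the fix looks something like the following, with names and types that are illustrative rather than the web server's actual ones (and assuming a gorilla/websocket-style connection): the blocking gRPC read loop moves into its own goroutine so the handler can keep reading WebSocket frames.

```go
package web

import "github.com/gorilla/websocket"

// tapEventStream is a stand-in for the gRPC tap stream client.
type tapEventStream interface {
	Recv() (interface{}, error)
}

// handleTap forwards tap events to the browser in a goroutine while the main
// loop keeps reading from the WebSocket, so close frames sent by the "Stop"
// button are processed even when no tap events arrive.
func handleTap(ws *websocket.Conn, stream tapEventStream) {
	go func() {
		for {
			event, err := stream.Recv() // blocks until an event or an error
			if err != nil {
				return
			}
			if err := ws.WriteJSON(event); err != nil {
				return
			}
		}
	}()

	for {
		// ReadMessage returns an error once the peer closes the connection,
		// which is how the handler notices the "Stop" button.
		if _, _, err := ws.ReadMessage(); err != nil {
			return
		}
	}
}
```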
fixes #1665
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
The success rate mini chart shows a colour based on SR, and also shows the SR
via the proportion of the chart that's filled out. If the success rate is 0% (as
in the VotePoop endpoint in the emojivoto demo), the chart would be zero
percent filled out, causing it to be entirely gray. Really, it should be entirely
red, since zero SR is pretty bad.
Fix: fully fill the bar with red if there is a zero SR
Prevent an error when trying to tap without a namespace and resource selected
by disabling the tap button.
Fixes #1670
Signed-off-by: Mathis Wiehl <mathis.wiehl@sinnerschrader.com>
* Update version checks to support release channels (see the sketch after this list)
* Update based on review feedback
* Fix sidebar tests
* Update CI config for edge and stable tags
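A minimal sketch of what channel-aware version parsing could look like, assuming version strings of the form `channel-version` (for example `edge-18.9.1` or `stable-2.0.0`):

```go
package version

import (
	"fmt"
	"strings"
)

// parseChannelVersion splits a "channel-version" string into its two parts;
// the exact string format here is an assumption.
func parseChannelVersion(v string) (channel, version string, err error) {
	parts := strings.SplitN(v, "-", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unsupported version format: %s", v)
	}
	return parts[0], parts[1], nil
}
```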
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
If you go to /top and select a namespace and then a resource, and then clear the
resource, there would be a javascript error that would cause the whole app not
to render. Fix this.
Problem
Previously, we'd display one row in top per sourcePod -> dstPod. When
viewing resources at a higher level though (e.g. deployments with multiple pods)
the src/dst column displays the resource at that level, and displaying multiple
rows with deploy/foo is confusing.
Solution
Key the top table off of the resource currently being requested, so
that all the rows are rolled up appropriately. In the popover for that column,
display a list of pods/ips that are rolled up.
This branch also adds a generic list of resources to the tap/top dropdown (you
were always able to tap them, but when I switched from autocomplete to select
for this dropdown, you lost the ability to type in arbitrary resources).
If you select a "from" namespace and "from" resource in /tap and try to clear them
using the little x in the form field, there would be a huge js error causing the
app to not render. Fix this.
Also removes filterOptions which wasn't being used any more. This will probably
make parsing tap results ever so slightly faster as we're now not trying to also
aggregate potential filter options.
* Fix js errors on Tap form when Clear button is hit
* Remove filter options code since we're not using the filters anywhere
When a websocket connection is closed between Chrome and a server, we get a 1006 error code signifying abnormal closure of the websocket connection. It seems as if we only get this error on Chrome web clients. Firefox and Safari do not encounter this issue.
The solution is to suppress 1006 errors that occur in the web browser since the connection is closed anyway. There is no negative side effect that occurs when the connection is closed abnormally and so the error message is benign.
fixes #1630
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
* Don't display RPS unit in metrics table
* Fix Tap and Top icons not being minimized correctly
* remove metric tooltip on RPS column
* Fix extra spacing on Tap/Top in sidebar