Add a new CLI command: `linkerd profile --template` which outputs a sample service profile yaml. Users can edit this sample and then `kubectl apply` it to add a service profile. The sample serves as "documentation by example" of what service profiles may contain.
Example usage:
```bash
linkerd profile -n emojivoto --template web-svc > web-svc-profile.yaml
# edit web-svc-profile.yaml in your favorite editor
kubectl apply -f web-svc-profile.yaml
```
Signed-off-by: Alex Leong <alex@buoyant.io>
We implement the getProfiles method in the destination service. This method returns a stream of destination profiles for a given authority. It does this by looking up a ServiceProfile resource named `<svc>.<ns>` in the controller namespace, where `<svc>` is the name of the service and `<ns>` is the namespace of the service.
This PR includes:
* Adding a ServiceProfile Custom Resource Definition to linkerd install
* A watch-based implementation of the getProfiles method in the destination service, similar to the implementation of get.
* An update to the destination client script that allows querying the getProfiles method.
Signed-off-by: Alex Leong <alex@buoyant.io>
Added support for JSON output in `linkerd stat` through a new `-o json` (`--output json`) option.
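For example (the resource and namespace are illustrative):
```bash
# Emit deployment stats as JSON instead of the default table
linkerd stat deploy --namespace emojivoto -o json
```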
Fixes #1417
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Make room for columns in `linkerd top`
Make room for columns in `linkerd top`: columns with data longer than the predetermined minimum width were stepping over each other.
Proposal for #1728
* Removed unneeded truncations
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Add --single-namespace install flag for restricted permissions (usage example below)
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
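A minimal usage sketch of the new flag (the exact set of namespaced permissions it generates is not shown here):
```bash
# Generate a control-plane manifest that avoids cluster-wide permissions
linkerd install --single-namespace | kubectl apply -f -
```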
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
`linkerd.io/auto-inject: disabled` or `linkerd.io/auto-inject: completed`
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection
Proposed implementation for #561.
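A hedged sketch of how the flag from item 6 and the label from item 5 might be used together; the namespace name is illustrative:
```bash
# Install the control plane with the proxy auto-injection webhook enabled
linkerd install --proxy-auto-inject | kubectl apply -f -

# Opt a namespace out of auto-injection using the label described above
kubectl label namespace legacy-apps linkerd.io/auto-inject=disabled
```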
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were already injected using the inject command.
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug that occurs with the kubectl run command, where the deployment
is also assigned the auto-inject label. The mutation caused the pod's auto-inject
label to no longer match the deployment's label, causing kubectl run to fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignore-inbound and ignore-outbound ports are changed to string type to
avoid broken YAML caused by the string conversion of the uint slice.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
* Added --context flag to specify the kubeconfig context used when talking to the Kubernetes API server (example below)
* Fix tests that are failing
* Updated context flag description
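For example (the context name is illustrative, and this assumes the flag is available on subcommands such as check):
```bash
# Talk to the apiserver behind the "staging" kubeconfig context
linkerd check --context staging
```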
Signed-off-by: Darko Radisic <ffd2subroutine@users.noreply.github.com>
Horizontal Pod Autoscaling does not work when container definitions in pods do not all have resource requests, so this adds the ability to set CPU and memory requests in the install and inject commands by providing the proxy options --proxy-cpu and --proxy-memory.
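For example, with purely illustrative request values:
```bash
# Give the injected proxy explicit resource requests so HPA can evaluate the pod
linkerd inject --proxy-cpu 100m --proxy-memory 64Mi deployment.yml | kubectl apply -f -
```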
Fixes #1480
Signed-off-by: Ben Lambert <ben@blam.sh>
In linkerd/linkerd2-proxy#99, several proxy configuration variables were
deprecated.
This change updates the CLI to use the updated names to avoid
deprecation warnings during startup.
* Remove wait option and make it a default for check
* Switch the wait default to true
* Wait by default also for dashboard
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
`linkerd inject` was not checking its input for known sidecars and
initContainers.
Modify `linkerd inject` to check for existing sidecars and
initContainers: specifically Linkerd, Istio, and Contour.
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #1593
Add a `--hide-sources` flag to `linkerd top`. Setting this removes the source column from the output.
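For example (the target resource is illustrative):
```bash
# Watch live traffic to the web deployment without the source column
linkerd top deploy/web --hide-sources
```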
Signed-off-by: Alex Leong <alex@buoyant.io>
Linkerd only routes TCP data, but `linkerd inject` does not warn when it
injects into pods with ports set to `protocol: UDP`.
Modify `linkerd inject` to warn when injected into a pod with
`protocol: UDP`. The Linkerd sidecar will still be injected, but the
stderr output will include a warning.
Also add stderr checking on all inject unit tests.
Part of #1516.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
If an input file is un-injectable, existing inject behavior is to simply
output a copy of the input.
Introduce a report, printed to stderr, that communicates the end state
of the inject command. Currently this includes checking for hostNetwork
and unsupported resources.
Malformed YAML documents will continue to produce no YAML output and
return error code 1.
This change also modifies integration tests to handle stdout and stderr separately.
example outputs...
some pods injected, none with host networking:
```
hostNetwork: pods do not use host networking...............................[ok]
supported: at least one resource injected..................................[ok]
Summary: 4 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
deploy/vote-bot
```
some pods injected, one host networking:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/vote-bot uses "hostNetwork: true"
supported: at least one resource injected..................................[ok]
Summary: 3 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
```
no pods injected:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/emoji, deploy/voting, deploy/web, deploy/vote-bot use "hostNetwork: true"
supported: at least one resource injected..................................[warn] -- no supported objects found
Summary: 0 of 8 YAML document(s) injected
```
TODO: check for UDP and other init containers
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The default value for the max-rps argument to the tap and top commands is an overly conservative 1 rps. This causes data to come in very slowly and much of it to be discarded. Furthermore, because tap requests are windowed to 10 seconds, this causes long pauses between updates.
We fix this in two ways. Firstly we reduce the window size to 1s so that updates will come in at least once per second, even when the actual RPS of the data path is extremely high. Secondly, we increase the default max-rps parameter from 1 to 100. This allows tap to paint an accurate picture of the data much more quickly and sidesteps some sampling bias that happens when the max-rps is low.
In general, tap events tend to happen in bursts. For example, one request in may trigger one or more requests out. Likewise, a single upstream event may trigger several requests to the tapped pod in quick succession. Sampling bias will occur when the max-rps is less than the actual rps and when the tap event limit subdivides these event bursts (biasing towards the first few events in the burst). The greater the max-rps, the less the effects of this bias.
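The limit can still be raised further per invocation if needed; a sketch with an illustrative target and value:
```bash
# Allow tap to sample up to 500 requests per second from this deployment
linkerd tap deploy/web --max-rps 500
```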
Fixes #1525
Signed-off-by: Alex Leong <alex@buoyant.io>
Previously, we would tap any resource's pods, regardless of whether the pods
were meshed or not. We can't actually tap non-meshed pods, so I'm adding a check
that will filter out non-meshed pods from the pods that tap watches.
Previous behaviour:
When attempting to tap a non-meshed pod, it would establish
a watch on the pods but then never return any results. In the CLI you could
just cancel it with Ctrl-C. In the web, clicking Stop would send a
WebSocket.close(1000) but wouldn't actually close the connection...
Behaviour after the change:
If no pods under the specified resource are meshed, it returns
an error saying that no pods were found to tap.
Adds basic probes to the linkerd-proxy containers injected by linkerd inject.
- Currently the Readiness and Liveness probes are configured to be the same.
- I haven't supplied a periodSeconds, but the default is 10.
- I also set the initialDelaySeconds to 10, but that might be a bit high.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
Closes #1170.
This branch adds a `-o wide` (or `--output wide`) flag to the Tap CLI.
Passing this flag adds `src_res` and `dst_res` elements to the Tap
output, as described in #1170. These use the metadata labels in the tap
event to describe what Kubernetes resource the source and destination
peers belong to, based on what resource type is being tapped, and fall
back to pods if either peer is not a member of the specified resource
type.
In addition, when the resource type is not `namespace`, `src_ns` and
`dst_ns` elements are added, which show what namespaces the source
and destination peers are in. For peers which are not in the Kubernetes
cluster, none of these labels are displayed.
The source metadata added in #1434 is used to populate the `src_res` and
`src_ns` fields.
Also, this branch includes some refactoring to how tap output is
formatted.
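Example invocation (the tapped resource is illustrative):
```bash
# Include src_res/dst_res (and src_ns/dst_ns) columns in the tap output
linkerd tap deploy/web -o wide
```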
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This is an initial implementation of the `linkerd top` command. This command launches an ncurses-style tabular view of current requests (using data from tap). Most of the command line arguments are the same as tap and allow selecting the resource to inspect and filtering which requests to view.
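A minimal invocation sketch (the resource and namespace are illustrative):
```bash
# Show a live tabular view of requests to deployments in the emojivoto namespace
linkerd top deploy --namespace emojivoto
```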
Fixes #1283
Signed-off-by: Alex Leong <alex@buoyant.io>
Closes #776.
This branch adds the following validation to the `linkerd stat` command:
* The `--to` and `--from` flags are now mutually exclusive
* The `--to-namespace` and `--from-namespace` flags are also mutually
exclusive.
* The `namespace` resource type conflicts with the `--namespace`,
`--to-namespace`, and `--from-namespace` flags.
Examples:
```
$ bin/go-run cli/main.go stat deploy --to deploy/foo --from deploy/bar
Error: --to and --from flags are mutually exclusive
Usage:
linkerd stat [flags] (RESOURCE)
...
```
```
$ bin/go-run cli/main.go stat deploy --to-namespace foo --from-namespace bar
Error: --to-namespace and --from-namespace flags are mutually exclusive
Usage:
linkerd stat [flags] (RESOURCE)
...
```
```
$ bin/go-run cli/main.go stat namespace foo --namespace bar
Error: --namespace flag is incompatible with namespace resource type
Usage:
linkerd stat [flags] (RESOURCE)
...
```
```
$ bin/go-run cli/main.go stat ns --to-namespace bar
Error: --to-namespace flag is incompatible with namespace resource type
Usage:
linkerd stat [flags] (RESOURCE)
...
```
```
$ bin/go-run cli/main.go stat namespace --from-namespace bar
Error: --from-namespace flag is incompatible with namespace resource type
Usage:
linkerd stat [flags] (RESOURCE)
...
```
```
$ bin/go-run cli/main.go stat ns/foo --from-namespace bar
Error: --from-namespace flag is incompatible with namespace resource type
Usage:
linkerd stat [flags] (RESOURCE)
...
```
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Currently, when a cluster has over 100 pods injected with the Linkerd2
proxy, Prometheus metrics are not collected correctly. This is because
Prometheus appears to be making more concurrent requests than its
proxy's outbound router cache can handle. See issue #1322 for further
details.
This branch introduces a workaround for this issue, by increasing the
outbound router cache capacity to 10000 routes for the Prometheus pod's
proxy only. The router capacity limit of 100 active routes is primarily
due to the limitation of the number of active Destination service
lookups, so increasing the capacity for the Prometheus pod specifically
is probably okay, as the scrape requests are made to IP addresses
directly and therefore will not cause service discovery lookups.
This change was originally implemented and tested in @siggy's PR #1228.
I've rebased his branch onto the current `master`, and updated the code
to reflect the project name change.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Andrew Seigner <siggy@buoyant.io>
This change is a simplified implementation of the Builder.Path() and
Visitor().ExpandPathsToFileVisitors() functions used by kubectl to parse files
and directories. The filepath.Walk() function is used to recursively traverse
directories. Every .yaml or .json resource file in the directory is read
into its own io.Reader. All the readers are then passed to the YAMLDecoder in the
InjectYAML() function.
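Assuming `linkerd inject` now accepts a directory path on the command line (the path is illustrative), usage might look like:
```bash
# Recursively inject every .yaml/.json manifest found under ./manifests
linkerd inject ./manifests | kubectl apply -f -
```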
Fixes #1376
Signed-off-by: ihcsim <ihcsim@gmail.com>
Adds a tap endpoint in the web api that communicates with the dashboard
via websockets.
I've moved a bunch of code from the cli tap.go into utils so that the code
can be shared between web and CLI. I think we should consider making the
display more suited to web, but in the short term, reusing the CLI's
rendering of tap events works.
Adds a Tap page in the Web UI that you can use to make tap requests.
The form currently only allows you to enter a resource and namespace,
other filters coming in a follow-up branch.
`ca-bundle-distributor` described the original role of the program but
`ca` ("Certificate Authority") better describes its current role.
Signed-off-by: Brian Smith <brian@briansmith.org>
These files were created with the executable bit set accidentally due
to the way my network file system setup was configured.
Signed-off-by: Brian Smith <brian@briansmith.org>
* update Grafana dashboards to remove Conduit references and replace them with Linkerd
* update test install fixtures to reflect changes
Fixes: #1315
Signed-off-by: Franziska von der Goltz <franziska@vdgoltz.eu>
The control-plane's `ClusterRole` and `ClusterRoleBinding` objects are
global. Because their names did not vary across multiple control-plane
deployments, they prevented multiple control-planes from coexisting (when
RBAC is enabled).
Modify the `ClusterRole` and `ClusterRoleBinding` objects to include the
control-plane's namespace in their names. Also modify the integration
test to first install two control-planes, and then perform its full
suite of tests, to prevent regression.
Fixes #1292.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>