`bin/test-cleanup` takes 48 seconds in CI.
This change passes `--wait=false` to `kubectl`, so the command returns
immediately rather than waiting for resources to be fully deleted.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The CLI now specifies a default port, 50750, for the Linkerd dashboard.
If that port is unavailable, it falls back to the original behavior of
binding to a free ephemeral port.
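A minimal sketch of the fallback logic using the standard `net` package; names are illustrative, not the actual CLI code:
```go
package main

import (
	"fmt"
	"net"
)

// listenWithFallback tries the preferred dashboard port first, then
// falls back to a free ephemeral port chosen by the kernel.
func listenWithFallback(preferredPort int) (net.Listener, error) {
	ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", preferredPort))
	if err == nil {
		return ln, nil
	}
	// Port 0 asks the OS for any free ephemeral port.
	return net.Listen("tcp", "127.0.0.1:0")
}

func main() {
	ln, err := listenWithFallback(50750)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("dashboard bound to", ln.Addr())
}
```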
linkerd/linkerd2#1721 introduced a `--single-namespace` install flag,
enabling the control-plane to function within a single namespace. With
the introduction of ServiceProfiles and upcoming identity changes, this
single-namespace mode of operation is becoming less viable.
This change removes the `--single-namespace` install flag, and all
underlying support. The control-plane must have cluster-wide access to
operate.
A few related changes:
- Remove `--single-namespace` from `linkerd check`; this motivates
combining some check categories, since cluster-wide requirements can now
always be assumed.
- Simplify the `k8s.ResourceAuthz` API, as callers no longer need to
make a decision based on cluster-wide vs. namespace-wide access.
Components either have access, or they error out (see the sketch after
this list).
- Modify the web dashboard to always assume ServiceProfiles are enabled.
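A rough Go sketch of the simplified check, assuming a recent client-go and its `SelfSubjectAccessReview` API; the function name and signature here are illustrative, not the actual linkerd2 code:
```go
package k8s

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resourceAuthz errors out unless the caller has cluster-wide access to
// perform verb on resource; there is no namespace-scoped fallback.
func resourceAuthz(ctx context.Context, client kubernetes.Interface, verb, group, resource string) error {
	ssar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     verb,
				Group:    group,
				Resource: resource,
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, ssar, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	if !resp.Status.Allowed {
		return fmt.Errorf("not authorized to %s %s.%s cluster-wide", verb, resource, group)
	}
	return nil
}
```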
Reverts #1721
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The CI job pulls Go code from googlesource.com, among other places.
These requests were regularly failing due to rate-limiting.
Introduce a script, from go.googlesource.com, to generate a .gitcookies
file. That script is stored in a `$GITCOOKIE_SH` environment variable in
CI, which is base64-decoded and executed during the CI run.
More info:
https://github.com/golang/go/issues/12933#issuecomment-199429151
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
## Problem
When an object has no previous route metrics, we do not generate a table for
that object.
The reasoning behind this was to reduce the output of the following command:
```
$ linkerd routes deploy --to deploy/foo
```
For each deployment object, if it has no previous traffic to `deploy/foo`, then
a table would not be generated for it.
However, the behavior we observe indicates an error even when a
Service Profile is installed:
```
$ linkerd routes deploy deploy/foo
Error: No Service Profiles found for selected resources
```
## Solution
Always generate a stat table for the queried resource object.
## Validation
I deployed [booksapp](https://github.com/buoyantIO/booksapp) with the `traffic`
deployment removed and Service Profiles installed.
Without the fix, `linkerd routes deploy/webapp` displays an error because there
has been no traffic to `deploy/webapp` without the `traffic` deployment.
With the fix, the following output is generated:
```
ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
GET / webapp 0.00% 0.0rps 0ms 0ms 0ms
GET /authors/{id} webapp 0.00% 0.0rps 0ms 0ms 0ms
GET /books/{id} webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /authors webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /authors/{id}/delete webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /authors/{id}/edit webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /books webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /books/{id}/delete webapp 0.00% 0.0rps 0ms 0ms 0ms
POST /books/{id}/edit webapp 0.00% 0.0rps 0ms 0ms 0ms
[DEFAULT] webapp 0.00% 0.0rps 0ms 0ms 0ms
```
Closes #2328
Signed-off-by: Kevin Leimkuhler <kevinl@buoyant.io>
Manual and auto injection were logging the full patch JSON at the `Info`
level.
Modify injection to log the object type and name at the `Info` level,
and the full patch at the `Debug` level.
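A sketch of the intended split, assuming logrus-style leveled logging; names are illustrative:
```go
package inject

import (
	log "github.com/sirupsen/logrus"
)

// logPatch records the object coordinates at Info and the full JSON
// patch only at Debug, keeping Info-level logs small.
func logPatch(kind, name string, patchJSON []byte) {
	log.Infof("patching %s %q", kind, name)
	log.Debugf("patch for %s %q: %s", kind, name, patchJSON)
}
```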
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Some time ago, I fixed sorting on these tables so that the default route
(`[default]`) was sorted to the bottom. The name was later changed to
`[DEFAULT]`, causing that sort to no longer put the default route at the
bottom. Update the sort to use the correct case.
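A minimal Go illustration of the corrected comparison (the actual table code differs):
```go
package table

import "sort"

const defaultRoute = "[DEFAULT]" // was "[default]" before the rename

// sortRoutes orders routes alphabetically, always keeping the default
// route at the bottom.
func sortRoutes(routes []string) {
	sort.Slice(routes, func(i, j int) bool {
		if routes[i] == defaultRoute {
			return false
		}
		if routes[j] == defaultRoute {
			return true
		}
		return routes[i] < routes[j]
	})
}
```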
linkerd/linkerd2#2428 modified SelfSubjectAccessReview behavior to no
longer paper over failed ServiceProfile checks, assuming that
ServiceProfiles will be required going forward. There was a lingering
ServiceProfile check in the web's startup that started failing due to
this change, as the web component does not have (and should not need)
ServiceProfile access. The check was originally implemented to inform
the web component whether to expect "single namespace" mode or
ServiceProfile support.
Modify the web's initialization to always expect ServiceProfile support.
Also remove the single-namespace integration test.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
It's sometimes helpful to spotcheck proxy metrics from a specific pod,
but doing so with kubectl requires a few steps.
Introduce a new `linkerd metrics` command. Given a pod name and
namespace, it returns a dump of the proxy's `/metrics` endpoint.
Also modify the k8s.portforward module to accept initialized k8s config
and client objects, to enable testing.
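A sketch of what the reshaped constructor might look like, assuming client-go types; the actual `k8s.portforward` signature may differ:
```go
package portforward

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// PortForward drives a port-forward to a single pod's port.
type PortForward struct {
	config    *rest.Config
	clientset kubernetes.Interface
	namespace string
	pod       string
	port      int
}

// NewPortForward now accepts pre-initialized config and client objects,
// so tests can pass fakes instead of loading a kubeconfig from disk.
func NewPortForward(config *rest.Config, clientset kubernetes.Interface, namespace, pod string, port int) *PortForward {
	return &PortForward{
		config:    config,
		clientset: clientset,
		namespace: namespace,
		pod:       pod,
		port:      port,
	}
}
```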
Fixes #2350.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
linkerd/linkerd2#2349 removed the `--single-namespace` flag, in favor of
runtime detection of cluster vs. namespace access, and also
ServiceProfile availability. This maintained control-plane support for
running in these two states.
This change requires that control-plane components have cluster-wide
Kubernetes API access and ServiceProfile availability; they will error
out if either is missing. Once #2349 merges, stage 1 install will be a
requirement for
a successful stage 2 install.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Changed the protobuf definition to remove `destinationApiPort` entirely
* Stored `destinationAPIPort` as a constant in `pkg/inject.go`
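A minimal sketch of the relocated constant; the port value shown is an assumption for illustration, not taken from the change itself:
```go
package inject

// destinationAPIPort is now a fixed constant rather than a protobuf
// field; the value here is an assumption for illustration.
const destinationAPIPort = 8086
```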
Fixes #2351
Signed-off-by: Aditya Sharma <hello@adi.run>
Show TCP stats in the `linkerd stat` output. They are not shown by default,
but will be queried when using `-o wide` or `-o json`.
Also display read/write bytes as bytes per sec in the CLI and dashboard.
Fixes #2377
In inject's `ResourceConfig`, renamed `objMeta` to `podMeta`, since it
really points to the pod template metadata, and created a new field,
`workloadMeta`, that points to the main workload (e.g. Deployment)
metadata.
Refactored uninject to clean up the labels at both `podMeta` and
`workloadMeta`. It now also removes all labels and annotations that
start with "linkerd.io", except for the "linkerd.io/inject" annotation.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
linkerd/linkerd2#2414 introduced integration tests to ensure logs did
not contain unexpected errors. Additional errors, not yet covered by the
log regex, are appearing and causing CI to fail.
This change adds more known log errors to the log regex.
Also temporarily enable integration tests in CI for this PR.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The integration tests deploy complete Linkerd environments into
Kubernetes, but do not check if the components are logging errors or
restarting.
Introduce integration tests to validate that all expected
control-plane containers (including `linkerd-proxy` and `linkerd-init`)
are found, logging no errors, and not restarting.
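A rough sketch of the restart portion of such a test, assuming client-go; names are illustrative:
```go
package integration

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNoRestarts fails if any container in the namespace has restarted.
func checkNoRestarts(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			if st.RestartCount > 0 {
				return fmt.Errorf("container %s in pod %s restarted %d times", st.Name, pod.Name, st.RestartCount)
			}
		}
	}
	return nil
}
```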
Fixes #2348
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd-init` container requires the NET_ADMIN capability to modify
iptables. The `linkerd check` command was not verifying this.
Introduce a `has NET_ADMIN capability` check (sketched below), which does
the following:
1) List all available PodSecurityPolicies; if none are found, return
success.
2) Validate that at least one PodSecurityPolicy exists that:
- the user has `use` access to, AND
- provides the `*` or `NET_ADMIN` capability
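A rough sketch of the capability validation, assuming a client-go version that still ships the `policy/v1beta1` PodSecurityPolicy types; the `use` authorization step is omitted and names are illustrative:
```go
package healthcheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNetAdmin returns success when no PSPs exist, or when at least
// one PSP grants the NET_ADMIN (or wildcard) capability.
func checkNetAdmin(ctx context.Context, client kubernetes.Interface) error {
	psps, err := client.PolicyV1beta1().PodSecurityPolicies().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(psps.Items) == 0 {
		return nil // no PSPs, so nothing restricts NET_ADMIN
	}
	for _, psp := range psps.Items {
		// A real check would also confirm `use` access on this PSP.
		for _, c := range psp.Spec.AllowedCapabilities {
			if c == "*" || c == "NET_ADMIN" {
				return nil
			}
		}
	}
	return fmt.Errorf("no PodSecurityPolicy grants NET_ADMIN")
}
```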
A couple of limitations to this approach:
- It tests whether the user running `linkerd check` has NET_ADMIN,
but at installation time it will be the `linkerd-init` pod that
requires NET_ADMIN.
- It assumes the presence of PodSecurityPolicies in the cluster means
the PodSecurityPolicy admission controller is installed. If the
admission controller is not installed, but PSPs exist that restrict
NET_ADMIN, `linkerd check` will incorrectly report that the user does
not have that capability.
This PR also fixes the `can create CustomResourceDefinitions` check to
not specify a namespace when doing a `create` check, as CRDs are
cluster-wide.
Fixes #1732
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Refactor code to keep the sidebar in sync with the main view (#2134)
Signed-off-by: Gaurav Kumar <gaurav.kumar9825@gmail.com>
* Remove redundancy and clean up code
Signed-off-by: Gaurav Kumar <gaurav.kumar9825@gmail.com>
* Remove extra space and add new line
Signed-off-by: Gaurav Kumar <gaurav.kumar9825@gmail.com>
This ensures that the MWC always picks up the latest config template during a version upgrade.
The removed `update()` method and RBAC permissions are superseded by #2163.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
We were depending on an untagged version of prometheus/client_golang
from Feb 2018.
This bumps our dependency to v0.9.2, from Dec 2018.
Also, this is a prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd install` output relies on Helm templates in the `chart`
directory. In production cli builds, these templates are compiled into
the binary. In development, they are read from the file system. This
development code path relied on GOPATH to determine the location of the
`chart` directory. In anticipation of Go Modules support (#1488), we
cannot assume the repo is within the GOPATH.
This change removes the GOPATH dependency, and instead relies on
`runtime.Caller` to determine the root of the code repo. This change
only affects development (!prod) builds.
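A minimal sketch of the technique; the package name and directory depth are illustrative:
```go
package version

import (
	"path/filepath"
	"runtime"
)

// chartDir locates the repo-local chart directory relative to this
// source file, instead of assuming the repo lives under GOPATH.
func chartDir() string {
	// Caller(0) reports the path of this source file at compile time.
	_, filename, _, _ := runtime.Caller(0)
	// Walk up from e.g. <repo>/pkg/version/file.go to <repo>/chart.
	root := filepath.Dir(filepath.Dir(filepath.Dir(filename)))
	return filepath.Join(root, "chart")
}
```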
Prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
- Created the pkg/inject package to hold the new injection shared lib.
- Extracted from `/cli/cmd/inject.go` and `/cli/cmd/inject_util.go` the
core methods doing the workload parsing and injection, and moved them
into `/pkg/inject/inject.go`. The CLI files should now deal only with
strictly CLI concerns and with applying the JSON patch returned by the
new lib.
- Proceeded analogously with `/cli/cmd/uninject.go` and
`/pkg/inject/uninject.go`.
- The `InjectReport` struct and its helper methods were moved into
`/pkg/inject/report.go`.
- Refactored webhook to use the new injection lib
- Removed linkerd-proxy-injector-sidecar-config ConfigMap
- Added the ability to add pod labels and annotations without having to
specify the already existing ones
Fixes #1748, #2289
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
Fixes #1792.
This PR adds filter functionality to the web UI via an optional Material-UI `<Toolbar>` at the top of the table, which contains the table's title and a filter icon. The toolbar only shows if the `enableFilter={true}` prop is passed down from the parent component. The PR modifies the MetricsTable test and adds tests for BaseTable and TopRoutesTable.
Note: The previous Ant-based UI allowed certain tables to be filtered by individual table column; this capability is not part of this PR but can be added later if useful.
linkerd/linkerd2#2349 introduced a `SelfSubjectAccessReview` check at
startup, to determine whether each control-plane component should
establish Kubernetes watches cluster-wide or namespace-wide. If this
check occurs before the linkerd-proxy sidecar is ready, it fails, and
the control-plane component restarts.
This change configures each control-plane pod to skip outbound port 443
when injecting the proxy, allowing the control-plane to connect to
Kubernetes regardless of the `linkerd-proxy` state.
A longer-term fix should involve a more robust control-plane startup,
that is resilient to failed Kubernetes API requests. An even longer-term
fix could involve injecting `linkerd-proxy` as a Kubernetes "sidecar"
container, when that becomes available.
Workaround for #2407
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
proxy: bump pinned version to 7e55196
This picks up the following commit:
* 7e55196 Bump tower-grpc (linkerd/linkerd2-proxy#202)
The new `tower-grpc` version (tower-rs/tower-grpc#115) improves the
messages attached to internal gRPC issues. This will aid significantly
in debugging the proxy's gRPC communication with the control plane.
This picks up the following commits:
* 0fe8063 replace `Error::cause` with `Error::source` (#2370) (linkerd/linkerd2-proxy#201)
* 1ea7559 Minor cleanup in the config tests (linkerd/linkerd2-proxy#188)
* d0ef56b Update *ring* to 0.14.6 (linkerd/linkerd2-proxy#197)
* c54377f fs-watch: Use a properly sized buffer for inotify events (linkerd/linkerd2-proxy#195)
* 23e02a6 Update Router to wait for inner poll_ready before calling inner call
* 2de8e9b Update metrics quickcheck to 0.8, and hyper to 0.12.24
* d1bbd4b make: Optionally include debug symbols with builds (linkerd/linkerd2-proxy#193)
* 738a541 Fix compilation warnings in fs-watch (linkerd/linkerd2-proxy#192)
* 6cc7558 Apply rustfmt (linkerd/linkerd2-proxy#191)
Signed-off-by: Ivan Sim <ivan@buoyant.io>
As described in #2217, the controller returns TLS identities in its results
even when the destination pod may not be able to participate in identity with
the requester: specifically, the other pod may not share the same controller
namespace, or it may not be injected with identity.
This change introduces a new annotation, `linkerd.io/identity-mode`, that is
set when injecting pods (via both the CLI and the webhook). This annotation
is always added.
The destination service now only returns TLS identities when this annotation
is set to `optional` on a pod and the destination pod uses the same controller.
These semantics are expected to change before the 2.3 release.
Fixes #2217
linkerd/linkerd2#2360 modified the `linkerd check --wait` param from `0`
to `1m`. Waiting on a check command causes spinner control characters to
appear in the output, making output validation non-trivial.
Instead, revert the wait param back to `0`, and use
`TestHelper.RetryFor`.
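A sketch of the retry-based approach; `retryFor` and `runCmd` stand in for the repo's test helpers and are not the actual signatures:
```go
package integration

import (
	"fmt"
	"time"
)

// retryCheck re-runs `linkerd check --wait=0` until it succeeds or the
// timeout elapses; runCmd and retryFor stand in for the repo's helpers.
func retryCheck(retryFor func(time.Duration, func() error) error, runCmd func(...string) (string, error)) error {
	return retryFor(time.Minute, func() error {
		out, err := runCmd("check", "--wait=0")
		if err != nil {
			return fmt.Errorf("check command failed: %s\n%s", err, out)
		}
		return nil
	})
}
```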
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
linkerd/linkerd2#2349 introduced ServiceProfile CRD deletion to
`bin/test-cleanup`. Unfortunately that CRD is cluster-wide and shared
across all Linkerd instances currently installed.
Revert CRD deletion.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
We currently set klog to maximum verbosity when debug logging is
enabled. However, this causes control-plane components to log their
service account tokens, leaking secret information into the logs.
By setting the klog level to 6, we avoid this logging.
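A minimal sketch of capping the verbosity, assuming `k8s.io/klog`:
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	klogFlags := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(klogFlags)
	// Level 6 keeps useful API-machinery debug output while avoiding
	// the request dumps that can include serviceaccount tokens.
	klogFlags.Set("v", "6")
}
```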
Fixes #2383