Adds a URL to the `linkerd upgrade` output which contains full upgrade instructions. The message and the URL anchor differ between the success and failure cases.
Fixes #2575.
* Define proxy version override annotation
* Don't override global linkerd version during inject
This ensures consistent usage of the config.linkerd.io/linkerd-version and
linkerd.io/proxy-version annotations. The former will only be used to track
the overridden version, while the latter shows the cluster's current default
version.
* Rename proxy version config override annotation
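A minimal Go sketch of the resolution order the annotations above imply (function and variable names are illustrative, not the actual linkerd2 API):

```go
package main

import "fmt"

// proxyVersionFor sketches the annotation semantics described above: the
// config.linkerd.io/linkerd-version annotation, when present, records a
// user override; otherwise the cluster's default proxy version applies.
func proxyVersionFor(annotations map[string]string, clusterDefault string) string {
	if v, ok := annotations["config.linkerd.io/linkerd-version"]; ok && v != "" {
		return v
	}
	return clusterDefault
}

func main() {
	// "edge-x.y.z" and "stable-a.b.c" are placeholder version strings.
	pod := map[string]string{"config.linkerd.io/linkerd-version": "edge-x.y.z"}
	fmt.Println(proxyVersionFor(pod, "stable-a.b.c")) // prints the override
}
```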
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Previous control plane versions do not provide an 'install' config, so
this field cannot be required.
Now, missing or empty configs are handled more gracefully, and upgrade
repairs install configs with missing fields.
* Disable external profiles by default
* Rename the --disable-external-profiles flag to --enable-external-profiles
Signed-off-by: Ivan Sim <ivan@buoyant.io>
The `install` command errors when the deploy target contains an existing
Linkerd deployment. The `upgrade` command is introduced to reinstall or
reconfigure the Linkerd control plane.
Upgrade works as follows:
1. The controller config is fetched from the Kubernetes API. The Public
API is not used, because we need to be able to reinstall the control
plane when the Public API is not available; and we are not concerned
about RBAC restrictions preventing the installer from reading the
config (as we are for inject).
2. The install configuration is read, particularly the flags used during
the last install/upgrade. If these flags were not set again during the
upgrade, the previous values are used as if they were passed this time.
The configuration is updated from the combination of these values,
including the install configuration itself.
Note that some flags, including the linkerd-version, are omitted
since they are stored elsewhere in the configurations and don't make
sense to track as overrides. (A sketch of this flag merging follows
the list.)
3. The issuer secrets are read from the Kubernetes API so that they can
be re-used. There is currently no way to reconfigure issuer
certificates. We will need to create _another_ workflow for
updating these credentials.
4. The install rendering is invoked with values and config fetched from
the cluster, synthesized with the new configuration.
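A sketch of the flag-merging behavior in step 2 (function and parameter names are illustrative; the real implementation works against the protobuf install config):

```go
package upgrade

// mergeFlags applies the persisted install flags as defaults, then lets
// flags explicitly set on this upgrade invocation override them.
func mergeFlags(persisted, current map[string]string, setThisRun map[string]bool) map[string]string {
	merged := make(map[string]string, len(persisted))
	for k, v := range persisted {
		merged[k] = v // previous values behave as if passed again
	}
	for k, v := range current {
		if setThisRun[k] {
			merged[k] = v // explicitly set flags win
		}
	}
	return merged
}
```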
`storage.tsdb.retention` is deprecated in favor of
`storage.tsdb.retention.time`.
Replace all occurrences.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
When installing Linkerd, a user may override default settings, or may
explicitly configure defaults. Consider install options like `--ha
--controller-replicas=4` -- the `--ha` flag sets a new default value for
the controller-replicas, and then we override it.
When we later upgrade this cluster, how can we know how to configure the
cluster?
We could store EnableHA and ControllerReplicas configurations in the
config, but what if, in a later upgrade, the default value changes? How
can we know whether the user specified an override or just used the
default?
To solve this, we add an `Install` message into a new config.
This message includes (at least) the CLI flags used to invoke
install.
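Sketched as Go types (the real `Install` message is protobuf; field names here are illustrative):

```go
package config

// Install records exactly what the user passed to `linkerd install`, so a
// later upgrade can tell a deliberate override apart from a default value
// that may have since changed.
type Install struct {
	CLIVersion string  // version of the CLI that performed the install (assumed field)
	Flags      []*Flag // only the flags the user actually set
}

// Flag is one CLI flag as invoked, e.g. {"controller-replicas", "4"}.
type Flag struct {
	Name  string
	Value string
}
```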
upgrade does not specify defaults for install/proxy-options fields and,
instead, uses the persisted install flags to populate default values,
before applying overrides from the upgrade invocation.
This change breaks the protobuf compatibility by altering the
`installation_uuid` field introduced in 9c442f6885.
Because this change was not yet released (even in an edge release), we
feel that it is safe to break.
Fixes https://github.com/linkerd/linkerd2/issues/2574
This change moves resource-templating logic into a dedicated template,
creates new values types to model kubernetes resource constraints, and
changes the `--ha` flag's behavior to create these resource templates
instead of hardcoding the resource constraints in the various templates.
Performing this check earlier helps to separate the specialized logic into the CLI
and webhook.
Any subsequent modification of this check logic to support config overrides of
existing meshed workloads will be confined to the relevant component.
The shared lib can then focus only on config overrides.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Allow the TCP CONNECTIONS column to be shown on all stat queries in the CLI.
This column will now be called TCP_CONN for brevity.
Read/Write bytes will still only be shown on -o wide or -o json
Some of our templates have started to use 'with .Values' scoping to
limit boilerplate within the templates.
This change makes this uniform in all templates.
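For illustration, a tiny runnable Go program showing the scoping this refers to (the data values are made up):

```go
package main

import (
	"os"
	"text/template"
)

// Inside {{with .Values}}, the dot is rebound to .Values, so fields can be
// referenced without a "Values." prefix, trimming boilerplate.
const tpl = `{{with .Values}}namespace: {{.Namespace}}, registry: {{.Registry}}{{end}}
`

func main() {
	data := struct{ Values struct{ Namespace, Registry string } }{}
	data.Values.Namespace = "linkerd"
	data.Values.Registry = "gcr.io/linkerd-io"
	template.Must(template.New("t").Parse(tpl)).Execute(os.Stdout, data)
}
```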
When reading a Linkerd configuration, we cannot determine whether
auto-inject should be configured.
This change adds auto-inject configuration to the global config
structure. Currently, this configuration is effectively boolean,
determined by the presence of an empty value (versus a null).
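A sketch of that presence-as-boolean convention (types simplified from the real protobuf config):

```go
package config

// AutoInjectContext carries no fields today; its mere presence signals
// that auto-inject is configured.
type AutoInjectContext struct{}

// Global is a simplified stand-in for the global config structure.
type Global struct {
	AutoInjectContext *AutoInjectContext // nil means "not configured"
}

// AutoInjectEnabled reports whether auto-inject was configured, per the
// empty-versus-null convention described above.
func (g *Global) AutoInjectEnabled() bool {
	return g.AutoInjectContext != nil
}
```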
* Include the DisableExternalProfile option even if it's 'false'. The override logic depends on this option to assign a different profile suffix.
* Check for proxy and init image overrides even when registry option is empty
* Append the config annotations to the pod's meta before creating the patch. This ensures that any configs provided via the CLI options are persisted as annotations before the config overrides are applied.
* Persist linkerd version CLI option
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Have the Webhook react to pod creation/update only
This was already working almost out-of-the-box, just had to:
- Change the webhook config so it watches pods instead of deployments
- Grant some extra ClusterRole permissions
- Add the piece that figures out the OwnerReference and adds the label
for it (see the sketch below)
- Manually inject service account mount paths
- Re-add volumes tests
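A sketch of the controlling-owner lookup (against the upstream Kubernetes types, not the actual linkerd2 helper):

```go
package webhook

import corev1 "k8s.io/api/core/v1"

// controllingOwner returns the kind and name of the pod's controlling
// owner (e.g. a ReplicaSet), which can then be resolved up the chain and
// recorded as a label on the pod.
func controllingOwner(pod *corev1.Pod) (kind, name string, ok bool) {
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller {
			return ref.Kind, ref.Name, true
		}
	}
	return "", "", false
}
```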
Fixes #2342 and #1751
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Currently, the install UUID is regenerated each time `install` is run.
When implementing cluster upgrades, it seems most appropriate to reuse
the prior UUID, rather than generate a new one.
To this end, this change stores an "Installation UUID" in the global
linkerd config.
This change reintroduces identity hinting to the destination service.
The Get endpoint includes identities for pods that are injected with an
identity-mode of "default" and have the same linkerd control plane.
A `serviceaccount` label is now also added to destination response
metadata so that it's accessible in prometheus and tap.
This change adds a new `linkerd2-proxy-identity` binary to the `proxy`
container image as well as a `linkerd2-proxy-run` entrypoint script.
The inject process now sets environment variables on pods to support
identity, including identity names for the destination and identity
services.
Before the proxy starts, the identity helper creates a key and CSR in a
tmpfs. As the proxy starts, it reads these files, as well as a
serviceaccount token, and provisions a certificate from the controller.
The proxy's /ready endpoint will not succeed until a certificate has
been provisioned.
The proxy will not participate in identity with services other than the
controllers until the Destination controller is modified to provide
identities via discovery.
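Conceptually, the identity helper's startup step looks like this sketch (file paths and the CSR subject are illustrative, not the actual binary's behavior):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	// Generate a fresh key; it never leaves the pod's tmpfs.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	// Build a CSR for the pod's identity name; the proxy later exchanges
	// this (plus a serviceaccount token) for a certificate.
	csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: "web.emojivoto.serviceaccount.identity.linkerd.cluster.local"},
	}, key)
	if err != nil {
		log.Fatal(err)
	}

	writePEM("key.pem", "EC PRIVATE KEY", keyDER)
	writePEM("csr.pem", "CERTIFICATE REQUEST", csrDER)
}

func writePEM(path, typ string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: typ, Bytes: der})
}
```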
Because the linkerd-config resource is created after pods that require
it, they can be started before the files are mounted, causing the pods
to restart and integration tests to fail.
If we extract the config into its own template file, it can be inserted
before pods are created.
The introduction of identity in 0626fa37 created new state in the
control plane's configuration that must be considered when re-installing
the control plane or when injecting pods.
This change alters `install` to fail if it would seem to conflict with
an existing installation. This behavior may be disabled with the
`--ignore-cluster` flag.
Furthermore, `inject` now _requires_ that it can fetch a configuration
from the control plane in order to operate. Otherwise the
`--ignore-cluster` and `--disable-identity` flags must be specified.
This change does not actually instrument pods to use identity yet---it
lays the framework for proxy identity without changing the test fixture
output (besides a change to how identity HA is configured).
Fixes #2531
Currently, cli/cmd/root.go provides a couple of utilities for building
clients to Linkerd's Public API; however these utilities are infallible,
execute health checks, etc.
There is a class of API clients---for instance, when an inject command
wants to acquire configuration from the API---where these checks are
undesirable. The version CLI built such a client, for example.
This change consolidates the various utilities into a single file.
Furthermore, it renames these utilities to clarify how they differ.
https://github.com/linkerd/linkerd2/pull/2521 introduces an "Identity"
controller, but there is no way to include it in linkerd installation.
This change alters the `install` flow as follows:
- An Identity service is _always_ installed;
- Issuer credentials may be specified via the CLI;
- If no Issuer credentials are provided, they are generated each time `install` is called.
- Proxies are NOT configured to use the identity service.
- It's possible to override the credential generation logic---especially
for tests---via install options that can be configured via the CLI.
The new proxy has changed its configuration as follows:
- `LISTENER` urls are now `LISTEN_ADDR` addresses;
- `CONTROL_URL` is now `DESTINATION_SVC_ADDR`;
- `*_NAMESPACE` vars are no longer needed;
- The `PROXY_ID` is now the `DESTINATION_CONTEXT`;
- The "metrics" port is now the "admin" port, since it serves more than
just metrics;
- A readiness probe now checks a dedicated /ready endpoint eagerly.
Identity injection is **NOT** configured by this branch.
This change introduces a new Identity service implementation for the
`io.linkerd.proxy.identity.Identity` gRPC service.
The `pkg/identity` package contains a core, abstract implementation of the
service (generic over both the CA and (Kubernetes) Validator interfaces).
`controller/identity` includes a concrete implementation that uses the
Kubernetes TokenReview API to validate serviceaccount tokens when
issuing certificates.
This change does **NOT** alter installation or runtime to include the
identity service. This will be included in a follow-up.
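The TokenReview call at the heart of the concrete implementation is roughly the following sketch (written against current client-go signatures; the client-go of this era predates the context argument):

```go
package identity

import (
	"context"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// validateToken asks the Kubernetes TokenReview API whether a
// serviceaccount token is authentic before a certificate is issued.
func validateToken(ctx context.Context, cs kubernetes.Interface, token string) (bool, error) {
	rev, err := cs.AuthenticationV1().TokenReviews().Create(ctx, &authv1.TokenReview{
		Spec: authv1.TokenReviewSpec{Token: token},
	}, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return rev.Status.Authenticated, nil
}
```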
The proxy's TLS implementation has changed to use a new _Identity_ controller.
In preparation for this, the `--tls=optional` CLI flag has been removed
from install and inject; and the `ca` controller has been deleted. Metrics
and UI treatments for TLS have **not** been removed, as they will continue to
be valuable for the new Identity system.
With the removal of the old identity scheme, the Destination service's proxy
ID field is now set with an opaque string (e.g. `ns:emojivoto`) to enable
locality awareness.
* Defined the config annotations as new constants in labels.go
* Introduced the getOverride() functions to override configs
* Introduced new accessors to abstract with type casting
Signed-off-by: Ivan Sim <ivan@buoyant.io>
linkerd/linkerd2#1721 introduced a `--single-namespace` install flag,
enabling the control-plane to function within a single namespace. With
the introduction of ServiceProfiles, and upcoming identity changes, this
single namespace mode of operation is becoming less viable.
This change removes the `--single-namespace` install flag, and all
underlying support. The control-plane must have cluster-wide access to
operate.
A few related changes:
- Remove `--single-namespace` from `linkerd check`, this motivates
combining some check categories, as we can always assume cluster-wide
requirements.
- Simplify the `k8s.ResourceAuthz` API, as callers no longer need to
make a decision based on cluster-wide vs. namespace-wide access.
Components either have access, or they error out.
- Modify the web dashboard to always assume ServiceProfiles are enabled.
Reverts #1721
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Manual and auto injection was logging the full patch JSON at the `Info`
level.
Modify injection to log the object type and name at the `Info` level,
and the full patch at the `Debug` level.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
It's sometimes helpful to spotcheck proxy metrics from a specific pod,
but doing so with kubectl requires a few steps.
Introduce a new `linkerd metrics` command. When given a pod name and
namespace, it returns a dump of the proxy's /metrics endpoint.
Also modify the k8s.portforward module to accept initialized k8s config
and client objects, to enable testing.
Fixes #2350.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Changed the protobuf definition to take out destinationApiPort entirely
* Store destinationAPIPort as a constant in pkg/inject.go
Fixes #2351
Signed-off-by: Aditya Sharma <hello@adi.run>
Show TCP stats in the linkerd stat output. They are not shown by default, but
will be queried when using -o wide or -o json.
Also display read/write bytes as bytes per sec in the CLI and dashboard.
The `linkerd-init` container requires the NET_ADMIN capability to modify
iptables. The `linkerd check` command was not verifying this.
Introduce a `has NET_ADMIN capability` check, which does the following:
1) Lists all available PodSecurityPolicies; if none are found, returns
success
2) Validates that at least one PodSecurityPolicy exists that:
- the user has `use` access to, AND
- provides the `*` or `NET_ADMIN` capability (see the sketch below)
A couple limitations to this approach:
- It is testing whether the user running `linkerd check` has NET_ADMIN,
but during installation time it will be the `linkerd-init` pod that
requires NET_ADMIN.
- It assumes the presence of PodSecurityPolicies in the cluster means
the PodSecurityPolicy admission controller is installed. If the
admission controller is not installed, but PSPs exist that restrict
NET_ADMIN, `linkerd check` will incorrectly report the user does not
have that capability.
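A sketch of the per-policy capability test in step 2 (against the upstream PodSecurityPolicy type; the `use`-access check is elided):

```go
package healthcheck

import policyv1beta1 "k8s.io/api/policy/v1beta1"

// pspAllowsNetAdmin reports whether a PodSecurityPolicy grants the
// capability linkerd-init needs: either NET_ADMIN explicitly, or "*".
func pspAllowsNetAdmin(psp *policyv1beta1.PodSecurityPolicy) bool {
	for _, c := range psp.Spec.AllowedCapabilities {
		if c == "*" || c == "NET_ADMIN" {
			return true
		}
	}
	return false
}
```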
This PR also fixes the `can create CustomResourceDefinitions` check to
not specify a namespace when doing a `create` check, as CRDs are
cluster-wide.
Fixes #1732
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This ensures that the MWC always picks up the latest config template during version upgrade.
The removed `update()` method and RBAC permissions are superseded by #2163.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
We were depending on an untagged version of prometheus/client_golang
from Feb 2018.
This bumps our dependency to v0.9.2, from Dec 2018.
Also, this is a prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd install` output relies on Helm templates in the `chart`
directory. In production cli builds, these templates are compiled into
the binary. In development, they are read from the file system. This
development code path relied on GOPATH to determine the location of the
`chart` directory. In anticipation of Go Modules support (#1488), we
cannot assume the repo is within the GOPATH.
This change removes the GOPATH dependency, and instead relies on
`runtime.Caller` to determine the root of the code repo. This change
only affects development (!prod) builds.
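The technique is roughly the following (the path depth is illustrative and depends on where the helper lives in the repo):

```go
package filesonly

import (
	"path/filepath"
	"runtime"
)

// chartDir locates the chart directory relative to this source file,
// rather than relative to GOPATH. runtime.Caller(0) yields this file's
// absolute path as recorded at compile time.
func chartDir() string {
	_, thisFile, _, _ := runtime.Caller(0)
	repoRoot := filepath.Dir(filepath.Dir(thisFile)) // e.g. <root>/pkg/file.go -> <root>
	return filepath.Join(repoRoot, "chart")
}
```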
Prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
- Created the pkg/inject package to hold the new injection shared lib.
- Extracted from `/cli/cmd/inject.go` and `/cli/cmd/inject_util.go`
the core methods doing the workload parsing and injection, and moved them into
`/pkg/inject/inject.go`. The CLI files should now deal only with
strictly CLI concerns, and with applying the JSON patch returned by the
new lib.
- Proceeded analogously with `/cli/cmd/uninject.go` and
`/pkg/inject/uninject.go`.
- The `InjectReport` struct and helping methods were moved into
`/pkg/inject/report.go`
- Refactored webhook to use the new injection lib
- Removed linkerd-proxy-injector-sidecar-config ConfigMap
- Added the ability to add pod labels and annotations without having to
specify the already existing ones
Fixes #1748, #2289
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
linkerd/linkerd2#2349 introduced a `SelfSubjectAccessReview` check at
startup, to determine whether each control-plane component should
establish Kubernetes watches cluster-wide or namespace-wide. If this
check occurs before the linkerd-proxy sidecar is ready, it fails, and
the control-plane component restarts.
This change configures each control-plane pod to skip outbound port 443
when injecting the proxy, allowing the control-plane to connect to
Kubernetes regardless of the `linkerd-proxy` state.
A longer-term fix should involve a more robust control-plane startup,
that is resilient to failed Kubernetes API requests. An even longer-term
fix could involve injecting `linkerd-proxy` as a Kubernetes "sidecar"
container, when that becomes available.
Workaround for #2407
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
As described in #2217, the controller returns TLS identities for results even
when the destination pod may not be able to participate in identity with the
requester: specifically, the other pod may not have the same controller
namespace or it may not be injected with identity.
This change introduces a new annotation, linkerd.io/identity-mode, that is set
when injecting pods (via both CLI and webhook). This annotation is always
added.
The destination service now only returns TLS identities when this annotation
is set to optional on a pod and the destination pod uses the same controller.
These semantics are expected to change before the 2.3 release.
Fixes #2217
The control-plane components relied on a `--single-namespace` param,
passed from `linkerd install` into each individual component, to
determine which namespaces they were authorized to access, and whether
to support ServiceProfiles. This command-line flag was redundant given
the authorization rules encoded in the parent `linkerd install` output,
via [Cluster]Role[Binding]s.
Modify the control-plane components to query Kubernetes at startup to
determine which namespaces they are authorized to access, and whether
ServiceProfile support is available. This allows removal of the
`--single-namespace` flag on the components.
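The startup query is essentially one SelfSubjectAccessReview per verb/resource, e.g. this sketch (with current client-go signatures):

```go
package k8s

import (
	"context"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canClusterWide asks the API server whether this serviceaccount may
// perform verb on resource across all namespaces (Namespace left empty).
func canClusterWide(ctx context.Context, cs kubernetes.Interface, verb, resource string) (bool, error) {
	r, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, &authzv1.SelfSubjectAccessReview{
		Spec: authzv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authzv1.ResourceAttributes{Verb: verb, Resource: resource},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return r.Status.Allowed, nil
}
```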
Also update `bin/test-cleanup` to cleanup the ServiceProfile CRD.
TODO:
- Remove `--single-namespace` flag on `linkerd install`, part of #2164
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The inject logic combines the modification of a pod spec and the
creation of a "report" detailing problems with the pod spec.
This change extracts the report-creation-and-checking logic from the
injection logic to make the contracts of each of these functions
clearer.
No functional changes are intended.
goconst finds repeated strings that could be replaced by a constant:
https://github.com/jgautheron/goconst
Part of #217
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Adds a flag, tcp_stats, to the StatSummary request, which queries prometheus for TCP stats.
This branch returns TCP stats at /api/tps-reports when this flag is true.
TCP stats are now displayed on the Resource Detail pages.
The current queried TCP stats are:
tcp_open_connections
tcp_read_bytes_total
tcp_write_bytes_total
gosimple is a Go linter that specializes in simplifying code
Also fix one spelling error in `cred_test.go`
Part of #217
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Also, some protobuf updates:
* Rename `api_port` to match recent changes in CLI code.
* Remove the `cni` message because it won't be used.
* Remove `registry` field from proto types. This helps to avoid having to work around edge cases like fully-qualified image names in different formats, overriding the user-specified Linkerd version, etc.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Add options in CLI for setting proxy CPU and memory limits
- Deprecated `proxy-cpu` and `proxy-memory` in favor of `proxy-cpu-limit` and `proxy-memory-limit`
- Updated validations and tests to reflect new options
Signed-off-by: TwinProduction <twin@twinnation.org>
When changing templates, it can be pretty time-intensive to
repair all test fixtures.
This change instruments CLI tests with two flags, `-update` and
`-pretty-diff` that control how test fixtures are diffed. When the
`-update` flag is set, the tests fixtures are overwritten as tests
execute. The `-pretty-diff` flag causes the full text of the fixture
to be printed on mismatch.
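The pattern is the common golden-file test idiom, roughly:

```go
package cmd

import (
	"flag"
	"os"
	"testing"
)

var update = flag.Bool("update", false, "rewrite test fixtures with actual output")

// diffTestdata compares actual output against a fixture file, overwriting
// the fixture instead when the -update flag is set.
func diffTestdata(t *testing.T, path, actual string) {
	t.Helper()
	if *update {
		if err := os.WriteFile(path, []byte(actual), 0644); err != nil {
			t.Fatal(err)
		}
		return
	}
	expected, err := os.ReadFile(path)
	if err != nil {
		t.Fatal(err)
	}
	if string(expected) != actual {
		t.Errorf("output differs from fixture %s (re-run with -update to regenerate)", path)
	}
}
```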
chart/templates/base.yaml is nearly 800 lines and contains the
Kubernetes configurations for the majority of the control plane.
Furthermore, its contents are not particularly organized (for example,
the prometheus RBAC bindings are in the middle of the controller's
configuration).
The size and complexity of this file makes it especially daunting to
introduce new functionality.
In order to make the situation easier to understand and change, this
splits base.yaml into several new template files: namespace, controller,
serviceprofile, prometheus, and grafana. The `tls.yaml` template has
been renamed `ca.yaml`, since it installs the `linkerd-ca` resources.
This change also makes the comments uniform, adding a "header" to each
logical component.
Fixes #2154
Up until now, the proxy-api controller service has been the sole service
that the proxy communicates with, implementing the majority of the API
defined in the `linkerd2-proxy-api` repo. But this is about to change:
linkerd/linkerd2-proxy-api#25 introduces a new Identity service; and
this service must be served outside of the existing proxy-api service
in the linkerd-controller deployment (so that it may run under a
distinct service account).
With this change, the "proxy-api" name becomes less descriptive. It's no
longer "the service that serves the API for the proxy," it's "the
service that serves the Destination API to the proxy." Therefore, it
seems best to bite the bullet and rename this to be the "destination"
service (i.e. because it only serves the
`io.linkerd.proxy.destination.Destination` service).
Co-authored-by: Kevin Lingerfelt <kl@buoyant.io>
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
`golangci-lint` performs numerous checks on Go code, including golint,
ineffassign, govet, and gofmt.
This change modifies `bin/lint` to use `golangci-lint`, and replaces
usage of golint and govet.
Also perform a one-time gofmt cleanup:
- `gofmt -s -w controller/`
- `gofmt -s -w pkg/`
Part of #217
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The existing hint URLs printed by `linkerd check` pointed to locations
that would change if the linkerd.io website was reorganized.
linkerd/website#148 introduces an alias for hint URLs at
https://linkerd.io/checks/. This is the corresponding change to update
`linkerd check` output.
Depends on linkerd/website#148, relates to linkerd/website#146.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #2077
When looking up service profiles, Linkerd always looks for the service profile objects in the Linkerd control namespace. This is limiting because service owners who wish to create service profiles may not have write access to the Linkerd control namespace.
Instead, we have the control plane look for the service profile in both the client namespace (as read from the proxy's `proxy_id` field in the GetProfiles request) and the service's namespace. If a service profile exists in both namespaces, the client namespace takes priority. In this way, clients may override the behavior dictated by the service.
Signed-off-by: Alex Leong <alex@buoyant.io>
The `linkerd check` command was doing limited validation on
ServiceProfiles.
Make ServiceProfile validation more complete, specifically validate:
- types of all fields
- presence of required fields
- presence of unknown fields
- recursive fields
Also move all validation code into a new `Validate` function in the
profiles package.
Validation of field types and required fields is handled via
`yaml.UnmarshalStrict` in the `Validate` function. This motivated
migrating from github.com/ghodss/yaml to a fork, sigs.k8s.io/yaml.
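The strict-unmarshalling core of `Validate` is roughly this sketch (the ServiceProfile fields shown are a simplified stand-in for the real type):

```go
package profiles

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// serviceProfile is a simplified stand-in for the real type.
type serviceProfile struct {
	Spec struct {
		Routes []struct {
			Name      string `json:"name"`
			PathRegex string `json:"pathRegex"`
		} `json:"routes"`
	} `json:"spec"`
}

// Validate rejects profiles with wrong field types or unknown fields;
// UnmarshalStrict errors on any field not present in the Go type.
func Validate(data []byte) error {
	var sp serviceProfile
	if err := yaml.UnmarshalStrict(data, &sp); err != nil {
		return fmt.Errorf("invalid service profile: %v", err)
	}
	return nil
}
```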
Fixes #2190
The Proxy API service lacked introspection of its internal state.
Introduce a new gRPC Discovery API, implemented by two servers:
1) Proxy API Server: returns a snapshot of discovery state
2) Public API Server: pass-through to the Proxy API Server
Also wire up a new `linkerd endpoints` command.
Fixes #2165
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Adds the ability to generate a service profile by running a tap for a configurable
amount of time, and using the routes seen during the tap as the profile's routes.
e.g. `linkerd profile web --tap deploy/web -n emojivoto --tap-duration 2s`
Consolidate timeouts for `linkerd check`
- Moved the creation of contexts from inside the methods targeted by the
checks into a single place in the runCheck() and runCheckRPC() methods
where the context is built using a hard-coded timeout of 30 seconds.
- k8s' client-go doesn't allow passing along contexts, but it lets us
set the Timeout manually.
- Reworded the description for the --wait option.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
# Problem
In order to switch Linkerd template rendering to use `.yaml` files, static
assets must be bundled in the Go binary for use by `linkerd install`.
# Solution
The solution should not affect the local development process of building and
testing.
[vfsgen](https://github.com/shurcooL/vfsgen) generates Go code that statically
implements the provided `http.FileSystem`. Paired with `go generate` and Go
[build tags](https://golang.org/pkg/go/build/), we can continue to use the
template files on disk when developing with no change required.
In `!prod` Go builds, the `cli/static/templates.go` file provides a
`http.FileSystem` to the local templates. In `prod` Go builds, `go generate
./cli` generates `cli/static/generated_templates.gogen.go` that statically
provides the template files.
When built with `-tags prod`, the executable will be built with the statically
generated file instead of the local files.
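The dev-side half of the pattern looks roughly like this (file and variable names are illustrative):

```go
// +build !prod

package static

import "net/http"

//go:generate go run templates_generate.go

// Templates serves the chart templates from disk in development builds.
// `go generate` runs vfsgen to emit a generated file, guarded by the
// `prod` build tag, that provides the same variable statically.
var Templates http.FileSystem = http.Dir("chart")
```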
# Validation
The binaries were compiled locally with `bin/docker-build`. The binaries were
then tested with `bin/test-run (pwd)/target/cli/darwin/linkerd`. All tests
passed.
No change was required to successfully run `bin/go-run cli install`. No change
was required to run `bin/linkerd install`.
Fixes #2153
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
In linkerd/linkerd2-proxy#186, the proxy supports configuration of TCP
keepalive values.
This change sets `LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE` and
`LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE` to 10s when injecting the
proxy, so that remote connections are configured with a keepalive.
This configuration is NOT yet exposed through the CLI. This may be done
in a followup, if necessary.
Fixes #1949
Since 37ae423, deployments have been prefixed with linkerd-; however
the inject logic was not changed to take this into consideration when
constructing the controller's identity.
This means that the proxy's client to the control plane has been unable to
establish TLS'd communcation to the proxy-api. Previously, the proxy would
silently fall back to plaintext, but in master this behavior recently changed to
be stricter, so this bug will prevent the proxy from connecting to proxy-api
in any way.
* Add pod spec annotation to disable injection in CLI and auto-injector
* Remove support for linkerd.io/auto-inject label entirely
* Update based on review feedback
* Fix issue with finding the namespace of deployments applied to the default ns
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
Fixes #2042
Adds a new field to service profile routes called `timeout`. Any requests to that route which take longer than the given timeout will be aborted and a 504 response will be returned instead. If the timeout field is not specified, a default timeout of 10 seconds is used.
Signed-off-by: Alex Leong <alex@buoyant.io>
linkerd/website#105 introduced a FAQ page, providing resolutions for all
`linkerd check` failures.
Update each check to reference its corresponding section in the FAQ.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Use `ca.NewCA()` for generating certs and keys for the proxy injector
- Remove from CA controller everything that dealt with the
webhook/proxy-injector
- Remove no longer needed proxy-injector volumes for 'trust-anchors' and
'webhook-secrets'
- Remove from the proxy-injector the retrieval of the trust anchor and
secrets
- tls flag during install is no longer needed for auto-inject to work
Fixes #2095 and fixes #2166
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Export RootOptions and BuildFirewallConfiguration so that the cni-plugin can use them.
* Created the cni-plugin based on istio-cni implementation
* Create skeleton files that need to be filled out.
* Create the install scripts and finish up plugin to write iptables
* Added in an integration test around the install_cni.sh and updated the script to handle the case where it isn't the only plugin. Removed the istio kubernetes.go file in favor of pkg/k8s; initial usage of this package; found and fixed the typo in the ClusterRole and ClusterRoleBinding; found the docker-build-cni-plugin script
* Corrected an incorrect name in the docker build file for cni-plugin
* Rename linkerd2-cni to linkerd-cni
* Fixup Dockerfile and clean up code a bit as well as logging statements.
* Update Gopkg.lock after master merge.
* Update test file to remove temporary tag.
* Fixed the command to run during the test while building up the docker run.
* Added attributions to applicable files; in the test file, use a different container for each test scenario and also print the docker logs to stdout when there is an error;
* Add the --no-init-container flag to install and inject. This flag will not output the initContainer and will add an annotation assuming that the cni will be used in this case.
* Update .travis.yml to build the cni-plugin docker image before running the tests.
* Workaround golint warnings.
* Create a new command to install the linkerd-cni plugin.
* Add the --no-init-container option to linkerd inject
* Use the setup ip tables annotation during the proxy auto inject webhook to prevent/allow addition of an init container; move cni-plugin tests to the integration-test section of travis
* gate the cni-plugin tests with the -integration-tests flag; remove unnecessary deployment .yaml file.
* Incorporate PR Cleanup suggestions.
* Remove the SetupIPTablesLabel annotation and use config flags and the presence of the init container to determine whether the cni-plugin writes ip tables.
* Fix a logic bug in the cni-plugin code that prevented the iptables from being written; Address PR comments; make tests pass.
* Update go deps shas
* Changed the single file install-cni plugin filename to be .conf vs .conflist; Incorporated latest PR comments around spacing with the new renderer among others.
* Fix an issue with renaming .conf to .conflist when needed.
* Renamed some of the variables to try to make it more clear what is going on.
* Address final PR comments.
* Hide cni flags for the time being.
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Update client-go to 1.13.1
Fixes #2145
* Update Dockerfile-bin with new tag
* Update all the dockerfile tags
* Clean gopkg and do not apply cluster defaults
* Update for klog
* Match existing behavior with klog
* Add klog to gopkg.lock
* Update go-deps shas
* Update klog comment
* Update comment to be a non-sentence
# Problem
In order to refactor `install` to allow for a more flexible configuration, we
should start with the format of the YAML that it renders. Using the Helm
YAML format will make it easier to add flexible configuration options in the
future. Currently, the rendered template that `install` produces does not
follow this format.
# Solution
Use the internals that Helm itself uses to render an inject template that
follows the same formatting rules. Helm's `template` cmd provides a good
outline of what is needed to make Linkerd's `install` cmd work as if it was
a Chart.
# Validation
There are no new tests, but there may not be anything to test at this stage.
This is a WIP PR towards the ultimate goal of `install` allowing a more
flexible configuration.
However, `install` now uses all the Helm `template` internals and therefore
satisfies the needed properties for Helm Charts.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This branch removes the `--proxy-bind-timeout` flag from the
`linkerd inject` and `linkerd install` CLI commands, and the
`LINKERD2_PROXY_BIND_TIMEOUT` environment variable from their output.
This is in preparation for removing that timeout from the proxy (as
described in #2013).
I thought it was prudent to remove this from the CLIs before removing it
from the proxy, so we can't create a situation where the CLIs produce
output that results in broken proxy containers.
Fixes #2013
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
DaemonSet stats are not currently shown in the cli stat command, web ui
or grafana dashboard. This commit adds daemonset support for stat.
Update stat command's help message to reference daemonsets.
Update the public-api to support stats for daemonsets.
Add tests for stat summary and api.
Add daemonset get/list/watch permissions to the linkerd-controller
cluster role that's created using the install command.
Update golden expectation test files for install command
yaml manifest output.
Update web UI with daemonsets
Update navigation, overview and pages to list daemonsets and the pods
associated to them.
Add daemonset paths to server, and ui apps.
Add grafana dashboard for daemonsets; a clone of the deployment
dashboard.
Update dependencies and dockerfile hashes
Add DaemonSet support to tap and top commands
Fixes #2006
Signed-off-by: Zak Knill <zrjknill@gmail.com>
This introduces a `linkerd install-sp` command to install service
profiles into the linkerd control plane. It installs one service profile
for each of controller-api, proxy-api, prometheus, and grafana.
Fixes #1901
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Version checks were not validating that the cli version matched the
control plane or data plane versions.
Add checks via the `linkerd check` command to validate the cli is
running the same version as the control and data plane.
Also add types around `channel-version` string parsing and matching. A
consequence being that during development `version.Version` changes from
`undefined` to `dev-undefined`.
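The parsing sketched in Go:

```go
package version

import (
	"fmt"
	"strings"
)

// parseChannelVersion splits a "channel-version" string, e.g.
// "edge-19.2.3" -> ("edge", "19.2.3"); "dev-undefined" also fits the shape.
func parseChannelVersion(cv string) (channel, version string, err error) {
	parts := strings.SplitN(cv, "-", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unsupported version format: %s", cv)
	}
	return parts[0], parts[1], nil
}
```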
Fixes #2076
Depends on linkerd/website#101
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Introduce resource selector and deprecate namespace field for ListPods
* Changes from code review
* Properly deprecate the field
* Do not check for nil
* Fix the mockProm usage
* Protoc changes revert
* Changed from code review
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
The default font in Windows console did not support the Unicode
characters recently added to check and inject commands. Also the color
library the linkerd cli depends on was not being used in a
cross-platform way.
Replace the existing Unicode characters used in `check` and `inject`
with characters available in most fonts, including Windows console.
Similarly replace the spinner used in `check` with one that uses
characters available in most fonts.
Modify `check` and `inject` to use `color.Output` and `color.Error`,
which wrap `os.Stdout` and `os.Stderr`, and perform special
transformations when on Windows.
Add a `--no-color` option to `linkerd logs`. While stern uses the same
color library that `check`/`inject` use, it is not yet using the
`color.Output` API for Windows support. That issue is tracked at:
https://github.com/wercker/stern/issues/69
Relates to https://github.com/linkerd/linkerd2/pull/2087
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* When injecting, perform an uninject as a first step
Fixes #1970
The fixture `inject_emojivoto_already_injected.input.yml` is no longer
rejected, so I created the corresponding golden file.
Note that we'll still forbid injection over resources already injected
with third party meshes (Istio, Contour), so now we have
`HasExisting3rdPartySidecars()` to detect that.
* Generalize HasExistingSidecars() to cater for both the auto-injector and the CLI.
* Convert `linkerd uninject` result format to the one used in `linkerd inject`.
* More updates to the uninject reports. Revert changes to the HasExistingSidecars func.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
When debugging control plane issues or issues pertaining to a linkerd proxy, it can be cumbersome to get logs from affected containers quickly.
This PR adds a new `logs` command to the Linkerd CLI to surface log lines from any container within linkerd's control plane. This feature relies heavily on [stern](https://github.com/wercker/stern), which already provides this behavior. This PR integrates this package into the Linkerd CLI to allow users to quickly retrieve logs whenever they run into issues when using Linkerd.
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Fixes #1875
This change improves the `linkerd routes` command in a number of important ways:
* The restriction on the type of the `--to` argument is lifted and any resource type can now be used. Try `--to ns/books`, `--to po/webapp-ABCDEF`, `--to au/linkerd.io`, or even `--to svc`.
* All routes for the target will now be populated in the table, even if there are no Prometheus metrics for that route.
* [UNKNOWN] has been renamed to [DEFAULT]
* The `Service/Authority` column will now list `Service` in all cases except for when an authority target is explicitly requested.
```
$ linkerd routes deploy/traffic --to deploy/webapp
ROUTE                       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
GET /                       webapp    100.00%   0.5rps   50ms          180ms         196ms
GET /authors/{id}           webapp    100.00%   0.5rps   100ms         900ms         980ms
GET /books/{id}             webapp    100.00%   0.9rps   38ms          93ms          99ms
POST /authors               webapp    100.00%   0.5rps   35ms          48ms          50ms
POST /authors/{id}/delete   webapp    100.00%   0.5rps   83ms          180ms         196ms
POST /authors/{id}/edit     webapp    0.00%     0.0rps   0ms           0ms           0ms
POST /books                 webapp    45.16%    2.1rps   75ms          425ms         485ms
POST /books/{id}/delete     webapp    100.00%   0.5rps   30ms          90ms          98ms
POST /books/{id}/edit       webapp    56.00%    0.8rps   92ms          875ms         975ms
[DEFAULT]                   webapp    0.00%     0.0rps   0ms           0ms           0ms
```
This is all made possible by a shift in the way we handle the destination resource. When we get a request with a `ToResource`, we use the k8s API to find all Services which include at least one pod belonging to that resource. We then fetch all service profiles for those services and display the routes from those service profiles.
This shift in thinking also precipitates a change in the TopRoutes API where we no longer need special cases for `ToAll` (which can be specified by `--to au`) or `ToAuthority` (which can be specified by `--to au/<authority>`) and instead can use a `ToResource` to handle all cases.
Signed-off-by: Alex Leong <alex@buoyant.io>
The outputs of the `check` and `inject` commands did not vary much
between successful and failed executions, and were a bit verbose and
challenging to parse.
Reorganize output of `check` and `inject` commands, to provide more
output when errors occur, and less output when successful.
Specific changes:
`linkerd check`
- visually group checks by category
- introduce `hintURL`'s, to provide doc links when checks fail
- add spinners when retrying, remove additional retry lines
- colored unicode characters to indicate success/warning/failure
`linkerd inject`
- modify default output to mirror `kubectl apply`
- only output non-successful inject reports
- support `--verbose` flag to output all inject reports
Fixes #1471, #1653, #1656, #1739
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The linkerd check command organized the various checks via loosely
coupled category IDs, category names, and checkers themselves, all with
ordering defined by consumers of this code.
This change removes category IDs in favor of category names, groups all
checkers by category, and enforces ordering at the HealthChecker
level.
Part of #1471, depends on #2078.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The set of health checks to be executed were dependent on a combination
of check enums and boolean options.
This change modifies the health checks to be governed strictly by a set
of enums.
Next steps:
- tightly couple category IDs to names
- tightly couple checks to their parent categories
- programmatic control over check ordering
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Add `linkerd uninject` command
uninject.go iterates through the resources annotations, labels,
initContainers and Containers, removing what we know was injected by
linkerd.
The biggest part of this commit is the refactoring of inject.go, to make
it more generic and reusable by uninject.
The idea is that in a following PR this functionality will get reused by
`linkerd inject` to uninject as a preliminary step to injection, as a
solution to #1970.
This was tested successfully on emojivoto with:
```
1) inject:
kubectl get -n emojivoto deployment -o yaml | bin/linkerd inject - |
kubectl apply -f -
2) uninject:
kubectl get -n emojivoto deployment -o yaml | bin/linkerd uninject - |
kubectl apply -f -
```
Also created unit tests for uninject.go. The fixture files from the inject
tests could be reused. But since the input files now act as outputs, they
represent existing resources and required these changes (that didn't
affect inject):
- Rearranged fields in alphabetical order.
- Added fields that are only relevant for existing resources (e.g.
creationTimestamp and status.replicas in StatefulSets)
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Setup port-forwarding for linkerd dashboard command
* Output port-forward logs when --verbose flag is set
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Add validation to CRD for service profiles
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Use properties instead of oneof
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
* Proxy grafana requests through web service
* Fix -grafana-addr default, clarify -api-addr flag
* Fix version check in grafana dashboards
* Fix comment typo
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Fix reporting of injected resources
When reporting resources provided as a list, it was picking just one
item from the list and ignoring the rest.
Also improved test for injecting list of resources.
Fixes #1676
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
This is a followup branch from #2023:
- delete `proxy/client.go`, move code to `destination-client`
- move `RenderTapEvent` and stat functions from `util` to `cmd`
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #1910
Add a `--routes` flag to the `top` command which shows requests by route instead of by path.
This also includes an overhaul of the live table rendering code to be more flexible and better accommodate the various combinations of columns that may be visible or hidden.
Co-authored-by: Andrew Seigner <siggy@buoyant.io>
Signed-off-by: Alex Leong <alex@buoyant.io>
* Create new service accounts for linkerd-web and linkerd-grafana. Change 'serviceAccount:' to 'serviceAccountName:'
* Use dynamic namespace name
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Allow input of a volume name for prometheus and grafana
* Make Prometheus and Grafana volume names 'data' by default and disallow user editing via cli flags
* Remove volume name from options
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* add securityContext with runAsUser: {{.ProxyUID}} to the various containers in the install template
* Update golden to reflect new additions
* changed to a different user id than the proxy user id
* Added a controller-uid install option
* change the port that the proxy-injector runs
* The initContainers needs to be run as the root user.
* move security contexts to container level
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
* Add parameter to stats API to skip retrieving Prometheus stats
Used by the dashboard to populate list of resources.
Fixes #1022
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Prometheus queries check results were being ignored
* Refactor verifyPromQueries() to also test when no prometheus queries
should be generated
* Add test for SkipStats=true
Includes adding ability to public.GenStatSummaryResponse to not generate
basicStats
* Fix previous test
If the `linkerd routes` command gets two routes with the same name, it will only display one of them, even if the routes are from different services. This is particularly obvious with the default `[UNKNOWN]` route.
We now display all routes, even if they have the same name.
Signed-off-by: Alex Leong <alex@buoyant.io>
Add support for service profiles created on external (non-service) authorities. For example, this allows you to create a service profile named `linkerd.io` which will apply to calls made to `linkerd.io`.
This is done by changing the `LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES` to `.` so that the proxy will attempt to lookup a service profile for any authority. We provide the `--disable-external-profiles` proxy flag to revert this behavior in case it is a problem.
We also refactor the proxy-api implementation of GetProfiles so that it does the profile lookup, regardless of if the authority looks like a Kubernetes service name or not. To simplify this, support for multiple resolves (which was unused) was removed.
Signed-off-by: Alex Leong <alex@buoyant.io>
The proxy-api service _always_ suggests that two meshed pods communicate
via HTTP/2 (i.e. via transparent protocol upgrading, if necessary).
This can complicate debugging and diagnostics at times, so it's
important that we have a way to deploy linkerd without this auto-upgrade
behavior.
This change adds a `-disable-h2-upgrade` flag to the `linkerd install`
command that disables transparent upgrading for the whole cluster.
We rework the routes command so that it can accept any Kubernetes resource, making it act much more similarly to the stat command.
Signed-off-by: Alex Leong <alex@buoyant.io>
We rename path to path_regex in the ServiceProfile CRD to make it clear that this field accepts a regular expression. We also take this opportunity to remove unnecessary line anchors from regular expressions now that these anchors are added in the proxy.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds an endpoint at /profiles/new that allows you to input a service name and
namespace, and download a service profile yaml template.
This will enable future work, where we can add more of the yaml customization via
a form in the dashboard, and use that data to help the user configure routes.
Filtering by Kubernetes job was not supported. Also filtering by any unknown
type caused a panic.
Add filtering support by Kubernetes job, with special case mapping `job` to
`k8s_job`, to not conflict with Prometheus' job label.
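The mapping is essentially:

```go
package util

// promResourceLabel maps a CLI resource type to its Prometheus label;
// Kubernetes jobs are special-cased so they don't collide with
// Prometheus' own "job" label.
func promResourceLabel(resourceType string) string {
	if resourceType == "job" {
		return "k8s_job"
	}
	return resourceType // e.g. "deployment", "pod"
}
```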
Fix panic when unknown type specified as a `--from` or `--to` flag.
Fix `job` label from `linkerd-proxy` overwriting Prometheus `job` label at
collection time. This caused all metrics collected by proxy sidecars in
Kubernetes jobs to be collected into an incorrect Prometheus job, rather than
the expected `linkerd-proxy` Prometheus job.
Fix `unsupported resource type` tap error message incorrectly printing the
target resource rather than the destination.
Set `--controller-log-level debug` in `install_test.go` for easier debugging.
Expose `slow-cooker`'s metrics via a k8s service in the tap integration test, to
validate proxy requests with a job as destination.
Fixes #1872
Part of #627
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This change alters the controller's Tap service to include route labels
when translating tap events, modifies the public API to include route
metadata in responses, and modifies the tap CLI command to include
rt_ labels in tap output (when -o wide is used).
* Adjust proxy, Prometheus, and Grafana probes
High `readinessProbe.initialDelaySeconds` values delayed the controller's
readiness by up to 30s, preventing cli commands from succeeding shortly after
control plane deployment.
Decrease `readinessProbe.initialDelaySeconds` in the proxy, Prometheus, and
Grafana to the default 0s. Also change `linkerd check` controller pod ordering
to: controller, prometheus, web, grafana.
Detailed probe changes:
- proxy
- decrease `readinessProbe.initialDelaySeconds` from 10s to 0s
- prometheus
- decrease `readinessProbe.initialDelaySeconds` from 30s to 0s
- decrease `readinessProbe.timeoutSeconds` from 30s to 1s
- decrease `livenessProbe.timeoutSeconds` from 30s to 1s
- grafana
- decrease `readinessProbe.initialDelaySeconds` from 30s to 0s
- decrease `readinessProbe.timeoutSeconds` from 30s to 1s
- decrease `readinessProbe.failureThreshold` from 10 to 3
- increase `livenessProbe.initialDelaySeconds` from 0s to 30s
Fixes #1804
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `--open-api` flag is an alternative to the `--template` flag for the `linkerd profile` command. It reads an OpenAPI specification file (also called a swagger file) and uses it to generate a corresponding service profile.
Signed-off-by: Alex Leong <alex@buoyant.io>
This change allows some advised production config to be applied to the install of the control plane.
Currently this runs 3x replicas of the controller and adds some pretty sane requests to each of the components + containers of the control plane.
Fixes #1101
Signed-off-by: Ben Lambert <ben@blam.sh>
Add a routes command which displays per-route stats for services that have service profiles defined.
This change has three parts:
* A new public-api RPC called `TopRoutes` which serves per-route stat data about a service
* An implementation of TopRoutes in the public-api service. This implementation reads per-route data from Prometheus. This is very similar to the StatSummaries RPC, and much of the code was able to be refactored and shared.
* A new CLI command called `routes` which displays the per-route data in a tabular or json format. This is very similar to the `stat` command and much of the code was able to be refactored and shared.
Note that as of the currently targeted proxy version, only outbound route stats are supported so the `--from` flag must be included in order to see data. This restriction will be lifted in an upcoming change once we add support for inbound route stats as well.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Refactor util.BuildResource so it can deal with multiple resources
First step to address #1487: Allow stat summary to query for multiple
resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Update the stat cli help text to explain the new multi resource querying ability
Proposal for #1487: Allow stat summary to query for multiple resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Allow stat summary to query for multiple resources
Implement this ability by issuing parallel requests to requestStatsFromAPI()
Proposal for #1487
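A sketch of that fan-out (requestStatsFromAPI stands in for the real call, with its signature simplified):

```go
package cmd

import "sync"

// statAll issues one stats request per resource type in parallel and
// returns the outputs in the input order, so tables render consistently.
func statAll(resources []string, requestStatsFromAPI func(string) string) []string {
	out := make([]string, len(resources))
	var wg sync.WaitGroup
	for i, r := range resources {
		wg.Add(1)
		go func(i int, r string) {
			defer wg.Done()
			out[i] = requestStatsFromAPI(r) // error handling elided in this sketch
		}(i, r)
	}
	wg.Wait()
	return out
}
```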
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Update tests as part of multi-resource support in `linkerd stat` (#1487)
- Refactor stat_test.go to reuse the same logic in multiple tests, and
add cases and files for json output.
- Add a couple of cases to api_utils_test.go to test multiple resources
validation.
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* `linkerd stat` called with multiple resources should keep an ordering (#1487)
Add SortedRes holding the order of resources to be followed when
querying `linkerd stat` with multiple resources
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Extra validations for `linkerd stat` with multiple resources (#1487)
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* `linkerd stat` resource grouping, ordering and name prefixing (#1487)
- Group together stats per resource type.
- When more than one resource, prepend name with type.
- Make sure tables always appear in the same order.
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Allow `linkerd stat` to be called with multiple resources
A few final refactorings as per code review.
Fixes #1487
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
We make several changes to the service profile spec to make service profiles more ergonomic and to make them more consistent with the destination profile API.
* Allow multiple fields to be simultaneously set on a RequestMatch or ResponseMatch condition. Doing so is equivalent to combining the fields with an "all" condition.
* Rename "responses" to "response_classes"
* Change "IsSuccess" to "is_failure"
Signed-off-by: Alex Leong <alex@buoyant.io>
A container called `proxy-api` runs in the Linkerd2 controller pod. This container listens on port 8086 and serves the proxy-api but does nothing other than forward gRPC requests to the destination container which listens on port 8089.
We remove the proxy-api container altogether and change the destination container to listen on port 8086 instead of 8089. The result is that clients still use the proxy-api by connecting to `proxy-api.<ns>.svc.cluster.local:8086` but the controller has one fewer containers. This results in a simpler system that is easier to reason about.
Signed-off-by: Alex Leong <alex@buoyant.io>
Service profiles must be named in the form `"<service>.<namespace>"`. This is inconsistent with the fully normalized domain name that the proxy sends to the controller. It also does not permit creating service profiles for non-Kubernetes services.
We switch to requiring that service profiles must be named with the FQDN of their service. For Kubernetes services, this is `"<service>.<namespace>.svc.cluster.local"`.
This change alone is not sufficient for allowing service profiles for non-Kubernetes services because the k8s resolver will ignore any DNS names which are not Kubernetes services. Further refactoring of the resolver will be required to allow looking up non-Kubernetes service profiles in Kubernetes.
Signed-off-by: Alex Leong <alex@buoyant.io>
The existence of an invalid service profile causes `linkerd check` to fail. This means that it is not possible to open the Linkerd dashboard with the `linkerd dashboard` command. While service profile validation is useful, it should not lock users out.
Add the ability to designate health checks as warnings. A failed warning health check will display a warning output in `linkerd check` but will not affect the overall success of the command. Switch the service profile validation to be a warning.
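Sketched, the warning semantics are:

```go
package healthcheck

// checker pairs a check with a flag marking its failure as non-fatal.
type checker struct {
	description string
	warning     bool // a failed warning is reported but doesn't fail the run
	check       func() error
}

// runAll reports overall success; warning failures are surfaced to the
// caller's printer (elided here) without flipping the result.
func runAll(checks []checker) bool {
	success := true
	for _, c := range checks {
		if err := c.check(); err != nil && !c.warning {
			success = false
		}
	}
	return success
}
```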
Signed-off-by: Alex Leong <alex@buoyant.io>
Add a new CLI command: `linkerd profile --template` which outputs a sample service profile yaml. Users can edit this sample and then `kubectl apply` it to add a service profile. The sample serves as "documentation by example" of what service profiles may contain.
Example usage:
```bash
linkerd profile -n emojivoto --template web-svc > web-svc-profile.yaml
# edit web-svc-profile.yaml in your favorite editor
kubectl apply -f web-svc-profile.yaml
```
Signed-off-by: Alex Leong <alex@buoyant.io>
We implement the getProfiles method in the destination service. This method returns a stream of destination profiles for a given authority. It does this by looking up the ServiceProfile resource in the controller namespace named `<svc>.<ns>` where `<svc>` is the name of the service and `<ns>` is the namespace of the service.
This PR includes:
* Adding a ServiceProfile Custom Resource Definition to linkerd install
* A watch based implementation of the getProfiles method in the destination service, similar to the implementation of get.
* An update to the destination client script that allows querying the getProfiles method.
Signed-off-by: Alex Leong <alex@buoyant.io>
Added support for json output in `linkerd stat` through a new (-o|--output)=json option.
Fixes #1417
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
* Make room for columns in `linkerd top`
Make room for columns in `linkerd top`. Columns with data longer than some predetermined minimum length were stepping over each other.
Proposal for #1728
* Removed unneeded truncations
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
To support reading and writing of the ServiceProfile custom resource, we add a codegen'd Kubernetes client for this resource.
* Adding the ServiceProfile type and related boilerplate to /controller/gen/apis/serviceprofile. This boilerplate also contains directives that control how codegen works.
* A script in /hack which invokes codegen that generates Kubernetes client machinery for interacting with ServiceProfile resources. The majority of the generated code lives in /controller/gen/client.
* The above-mentioned generated code.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor
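A minimal usage sketch of the new `--single-namespace` flag (applying the rendered output with kubectl, as with a standard install; the pipeline itself is illustrative):
```bash
# Render a control plane intended for users with namespace-restricted
# permissions, then apply it
linkerd install --single-namespace | kubectl apply -f -
```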
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
* Support auto sidecar-injection
1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
`linkerd.io/auto-inject: disabled` or `linkerd.io/auto-inject: completed`
6. Add a new flag to `linkerd install` to enable/disable proxy
auto-injection (example below)
Proposed implementation for #561.
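For illustration, the flag from item 6 and the opt-out label from item 5 might be used like this (the namespace name is hypothetical):
```bash
# Enable proxy auto-injection at install time (flag from item 6)
linkerd install --proxy-auto-inject | kubectl apply -f -

# Opt a namespace out of auto-injection (label from item 5)
kubectl label namespace legacy-apps linkerd.io/auto-inject=disabled
```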
* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have proxy injected
This ensures the webhook doesn't error out due to proxies that were injected using the CLI command
* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label
Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.
This resolves a bug that happens with the `kubectl run` command, where the deployment
is also assigned the auto-inject label. Mutating the label caused the pod's auto-inject
label to no longer match the deployment's label, making `kubectl run` fail.
* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config
The ignore-inbound-ports and ignore-outbound-ports values are changed to string
type, to avoid broken YAML caused by converting the uint slice to a string.
Also, parameterized the proxy bind timeout option in template.go.
Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.
Signed-off-by: ihcsim <ihcsim@gmail.com>
* Added --context flag to specify the kubeconfig context used to talk to the Kubernetes apiserver (example below)
* Fix failing tests
* Updated context flag description
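For example (the context name is hypothetical, and this assumes the flag is accepted globally):
```bash
# Talk to a specific kubeconfig context instead of the current one
linkerd --context staging-cluster check
```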
Signed-off-by: Darko Radisic <ffd2subroutine@users.noreply.github.com>
Horizontal Pod Autoscaling does not work when container definitions in pods lack resource requests, so this adds the ability to set CPU and memory requests via the install and inject commands, by providing the proxy options --proxy-cpu and --proxy-memory
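For example (the request values are illustrative):
```bash
# Give the injected proxy explicit CPU and memory requests so that
# Horizontal Pod Autoscaling can evaluate the pod
linkerd inject --proxy-cpu 100m --proxy-memory 128Mi deployment.yml | kubectl apply -f -
```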
Fixes #1480
Signed-off-by: Ben Lambert <ben@blam.sh>
In linkerd/linkerd2-proxy#99, several proxy configuration variables were
deprecated.
This change updates the CLI to use the updated names to avoid
deprecation warnings during startup.
Add checks to `linkerd check --pre` to verify that the user has permission to create the following resources (a rough kubectl equivalent follows the list):
* namespaces
* serviceaccounts
* clusterroles
* clusterrolebindings
* services
* deployments
* configmaps
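These checks are roughly equivalent to the following kubectl queries (illustrative only; not necessarily how the checks are implemented):
```bash
# Ask the API server whether the current user may create each resource kind
for kind in namespaces serviceaccounts clusterroles clusterrolebindings \
    services deployments configmaps; do
  kubectl auth can-i create "$kind"
done
```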
Signed-off-by: Alex Leong <alex@buoyant.io>
* Remove wait option and make it a default for check
* Switch the wait default to true
* Also wait by default for dashboard
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
`linkerd inject` was not checking its input for known sidecars and
initContainers.
Modify `linkerd inject` to check for existing sidecars and
initContainers, specifically those from Linkerd, Istio, and Contour.
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #1593
Add a `--hide-sources` flag to `linkerd top`. Setting this removes the source column from the output.
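For example (the resource is illustrative):
```bash
# Show the live request table without the source column
linkerd top deploy/web --hide-sources
```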
Signed-off-by: Alex Leong <alex@buoyant.io>
linkerd only routes TCP data, but `linkerd inject` does not warn when it
injects into pods with ports set to `protocol: UDP`.
Modify `linkerd inject` to warn when injected into a pod with
`protocol: UDP`. The Linkerd sidecar will still be injected, but the
stderr output will include a warning.
Also add stderr checking on all inject unit tests.
Part of #1516.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
If an input file is un-injectable, existing inject behavior is to simply
output a copy of the input.
Introduce a report, printed to stderr, that communicates the end state
of the inject command. Currently this includes checking for hostNetwork
and unsupported resources.
Malformed YAML documents will continue to cause no YAML output, and return
error code 1.
This change also modifies integration tests to handle stdout and stderr separately.
Example outputs:
some pods injected, none with host networking:
```
hostNetwork: pods do not use host networking...............................[ok]
supported: at least one resource injected..................................[ok]
Summary: 4 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
deploy/vote-bot
```
some pods injected, one host networking:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/vote-bot uses "hostNetwork: true"
supported: at least one resource injected..................................[ok]
Summary: 3 of 8 YAML document(s) injected
deploy/emoji
deploy/voting
deploy/web
```
no pods injected:
```
hostNetwork: pods do not use host networking...............................[warn] -- deploy/emoji, deploy/voting, deploy/web, deploy/vote-bot use "hostNetwork: true"
supported: at least one resource injected..................................[warn] -- no supported objects found
Summary: 0 of 8 YAML document(s) injected
```
TODO: check for UDP and other init containers
Part of #1516
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The default value for the max-rps argument to the tap and top commands is an overly conservative 1 RPS. This causes the data to come in very slowly and much data to be discarded. Furthermore, because tap requests are windowed to 10 seconds, this causes long pauses between updates.
We fix this in two ways. First, we reduce the window size to 1s so that updates arrive at least once per second, even when the actual RPS of the data path is extremely high. Second, we increase the default max-rps parameter from 1 to 100. This allows tap to paint an accurate picture of the data much more quickly and sidesteps some of the sampling bias that occurs when max-rps is low.
In general, tap events tend to happen in bursts. For example, one request in may trigger one or more requests out. Likewise, a single upstream event may trigger several requests to the tapped pod in quick succession. Sampling bias will occur when the max-rps is less than the actual rps and when the tap event limit subdivides these event bursts (biasing towards the first few events in the burst). The greater the max-rps, the less the effects of this bias.
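With the new default, tap behaves as if invoked like this (the resource is illustrative):
```bash
# Equivalent to the new default; previously the default was --max-rps 1
linkerd tap deploy/web --max-rps 100
```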
Fixes #1525
Signed-off-by: Alex Leong <alex@buoyant.io>
Previously, we would tap any resource's pods, regardless of whether the pods
were meshed or not. We can't actually tap non-meshed pods, so I'm adding a check
that will filter out non-meshed pods from the pods that tap watches.
Previous behaviour:
When attempting to tap a non-meshed pod, it would establish
a watch on the pods, but then never return any results. In the CLI you could
just cancel it with Ctrl-C. In the web, clicking Stop would send a
WebSocket.close(1000) but wouldn't actually close the connection...
Behaviour after change:
If no pods under the specified resource are meshed, it returns
an error indicating that no pods were found to tap
Adds basic probes to the linkerd-proxy containers injected by linkerd inject.
- Currently the Readiness and Liveness probes are configured to be the same (see the sketch below).
- I haven't supplied a periodSeconds, but the default is 10.
- I also set the initialDelaySeconds to 10, but that might be a bit high.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
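A sketch of the probes this adds to the proxy container (the HTTP path and port are assumptions for illustration; the identical probes and 10s initial delay are per the description above):
```bash
# Rough shape of the injected probes, shown as YAML via a heredoc;
# the path and port are illustrative assumptions, not from this change.
cat <<'EOF'
livenessProbe:
  httpGet:
    path: /metrics
    port: 4191
  initialDelaySeconds: 10
readinessProbe:
  httpGet:
    path: /metrics
    port: 4191
  initialDelaySeconds: 10
EOF
```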
Closes #1170.
This branch adds a `-o wide` (or `--output wide`) flag to the Tap CLI.
Passing this flag adds `src_res` and `dst_res` elements to the Tap
output, as described in #1170. These use the metadata labels in the tap
event to describe what Kubernetes resource the source and destination
peers belong to, based on what resource type is being tapped, and fall
back to pods if either peer is not a member of the specified resource
type.
In addition, when the resource type is not `namespace`, `src_ns` and
`dst_ns` elements are added, which show what namespaces the source
and destination peers are in. For peers which are not in the Kubernetes
cluster, none of these labels are displayed.
The source metadata added in #1434 is used to populate the `src_res` and
`src_ns` fields.
Also, this branch includes some refactoring to how tap output is
formatted.
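Example usage (the namespace is illustrative):
```bash
# Tap with the extended columns: src_res/dst_res, plus src_ns/dst_ns
linkerd tap deploy -n emojivoto -o wide
```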
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Upgrade to dep 0.5.0, go 1.10.3
* Remove existing dep binary if it's the wrong version
* Add version in filename of dep binary to prevent version conflicts
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
This is an initial implementation of the `linkerd top` command. This command launches an ncurses-style tabular view of current requests (using data from tap). Most of the command line arguments are the same as tap and allow selecting the resource to inspect and filtering which requests to view.
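Example usage (the resource is illustrative):
```bash
# Open the live, tap-driven request view for a deployment
linkerd top deploy/voting
```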
Fixes #1283
Signed-off-by: Alex Leong <alex@buoyant.io>