* Added labels to webhook configurations in charts/
* Multiple replicas of proxy-injector and sp-validator in HA
* Use ControllerComponent template variable for webhookconfigurations
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When `linkerd edges` returns JSON, the data will now be sorted alphabetically by
SRC name, meaning edges will be returned in a consistent order. Logic in the CLI
`edges.go` has also been simplified. These changes should result in the Travis
CI builds passing consistently.
This commit refactors the changes introduced by #2842 where the debug
container spec is created in the 'cli' and 'pkg' packages. This change
follows the existing pattern of annotating the YAML in the CLI code,
and injecting the sidecar spec in the shared library.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
This new annotation is used by the proxy injector to determine if the
debug container needs to be injected.
When using 'linkerd install', the 'pkg/inject' library will only inject
annotations into the workload YAML. Even though 'conf.debugSidecar'
is set in the CLI, the 'injectPodSpec()' function is never invoked on
the proxy injector side. Once the workload YAML is picked up by the
proxy injector, 'conf.debugSidecar' is already nil, since it is a different,
new 'conf' object. The new annotation ensures that the proxy injector
injects the debug container.
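A minimal sketch of how the injector can key off the annotation; the annotation name and helper below are illustrative, not the exact identifiers in the codebase:

```go
package inject

// Assumed annotation name, for illustration only.
const enableDebugAnnotation = "config.linkerd.io/enable-debug-sidecar"

// shouldInjectDebug reports whether the debug container should be added,
// based solely on the annotations carried by the workload YAML.
func shouldInjectDebug(annotations map[string]string) bool {
	return annotations[enableDebugAnnotation] == "true"
}
```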
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Update helm charts to include webhooks config and TLS secret
* Update the webhooks to read the secret cert and key
* Update webhooks to not recreate config on restart
* Ensure upgrade preserve existing secrets
* Revert the change to rename the webhook configs
The renaming change breaks upgrade, where the new webhook configs conflict with
the existing ones. The older resources aren't deleted during upgrade because
they are dynamically created.
* Make the secret volume read-only
* Remove unnecessary exported getter functions
* Remove obsolete mwc and vwc templates
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Adds an edges command to the CLI. `linkerd edges` displays connections between resources, along with the Linkerd proxy identities involved. Currently this feature only displays edges where both the client and server identities are known. The next step will be to display edges for which identity is not known and/or one-sided traffic such as Prometheus and tap requests.
Support for resources opting out of tap
Implements the `linkerd inject --disable-tap` flag (although hidden pending #2811) and the config override annotation `config.linkerd.io/disable-tap`.
Fixes #2778
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Private k8s clusters, such as the private GKE clusters offered by Google
Cloud, cannot be reached through the current API proxy method.
This commit uses the port-forwarding feature that was already developed.
It also modifies the dashboard command so it no longer falls back to an ephemeral port.
Signed-off-by: Jack Price <jackprice@outlook.com>
The multi-stage args used by install, upgrade, and check were
implemented as positional arguments to their respective parent commands.
This made the help documentation unclear, and the code ambiguous as to
which flags corresponded to which stage.
Define `config` and `control-plane` stages as subcommands. The help
menus now explicitly state which flags are supported.
Fixes #2729
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Add support for `linkerd check config`. Validates the existence of the
Linkerd Namespace, ClusterRoles, ClusterRoleBindings, ServiceAccounts,
and CustomResourceDefinitions.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
CustomResourceDefinition parsing and retrieval is not available via
client-go's `kubernetes.Interface`, but rather via a separate
`k8s.io/apiextensions-apiserver` package.
Introduce support for CustomResourceDefinition object parsing and
retrieval. This change facilitates retrieval of CRDs from the k8s API
server, and also provides CRD resources as mock objects.
Also introduce a `NewFakeAPI` constructor, deprecating
`NewFakeClientSets`. Callers need no longer be concerned with discrete
clientsets (for k8s resources vs. CRDs vs. (eventually)
ServiceProfiles), and can instead use the unified `KubernetesAPI`.
Part of #2337, in service to multi-stage check.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
All ServiceAccounts are intended to be grouped together with other RBAC
resources, particularly for `linkerd install config` output. Grafana and
Web ServiceAccounts were still included with their respective
Deployments.
Group Grafana and Web ServiceAccounts with other RBAC resources.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
`linkerd install` supports a 2-stage install process, `linkerd upgrade`
did not.
Add 2-stage support for `linkerd upgrade`. Also exercise multi-stage
functionality during upgrade integration tests.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This reverts commit 3de16d47be.
#2740 modified the ServiceProfiles CRD which will cause issues for users upgrading from the old CRD version to the new version. #2748 was an attempt to fix this by bumping the service profile CRD version, however, our testing infrastructure is not well set up to accommodate changes to CRDs because they are resources which are global to the cluster.
We revert this change for now and will revisit it in the future when we can give more thought to CRD versioning, upgrade, and testing.
Signed-off-by: Alex Leong <alex@buoyant.io>
Numerous codepaths have emerged that create k8s configs, create k8s
clients, and make k8s API requests.
This branch consolidates k8s client creation and APIs. The primary
change migrates most codepaths to call `k8s.NewAPI` to instantiate a
`KubernetesAPI` struct from `pkg`. `KubernetesAPI` implements the
`kubernetes.Interface` (clientset) interface, and also persists a
`client-go` `rest.Config`.
Specific list of changes:
- removes manual GET requests from `k8s.KubernetesAPI`, in favor of
clientsets
- replaces most calls to `k8s.GetConfig`+`kubernetes.NewForConfig` with
a single `k8s.NewAPI`
- introduces a `timeout` param to `k8s.NewAPI`, currently only used by
healthchecks
- removes `NewClientSet` in `controller/k8s/clientset.go` in favor of
`k8s.NewAPI`
- removes `httpClient` and `clientset` from `HealthChecker`, use
`KubernetesAPI` instead
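As a rough sketch (assuming a kubeconfig-path constructor; the real `k8s.NewAPI` signature may differ), the consolidated client might look like:

```go
package k8s

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// KubernetesAPI embeds the clientset, so it satisfies kubernetes.Interface,
// and also keeps the rest.Config around for callers that need it.
type KubernetesAPI struct {
	*kubernetes.Clientset
	Config  *rest.Config
	Timeout time.Duration
}

// NewAPI builds a KubernetesAPI from a kubeconfig path.
func NewAPI(kubeconfigPath string, timeout time.Duration) (*KubernetesAPI, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return &KubernetesAPI{Clientset: clientset, Config: config, Timeout: timeout}, nil
}
```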
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Fixes #2720 and #2711
This changes the default behavior of `linkerd inject` to not inject the
proxy but only add the `linkerd.io/inject: enabled` annotation, for the
auto-injector to pick up (regardless of any namespace annotation).
A new `--manual` mode was added, which behaves as before, injecting
the proxy in the command output.
The unit tests are running with `--manual` to avoid any changes in the
fixtures.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Add config.linkerd.io/disable-identity annotation
First part of #2540
We'll tackle support for `--disable-identity` in `linkerd install` in a
separate commit.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* The 'linkerd-version' CLI flag is renamed to 'control-plane-version'
* Add version field to proxy config
* Add the control plane version to the global config
* Unit test for init image version
* Use more specific control plane and proxy versions in unit tests
Signed-off-by: Ivan Sim <ivan@buoyant.io>
In some non-tty environments, the `linkerd check` spinner can render
unexpected control characters.
Disable the spinner when run without a tty.
Fixes #2700
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This is an initial change to separate out config-specific k8s objects
from the control-plane components. The eventual goal will be rendering
these configs as the first stage of a multi-stage install.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd upgrade` command read the control-plane's config from
Kubernetes, which required the environment to be configured to connect
to the appropriate k8s cluster.
Introduce a `linkerd upgrade --from-manifests` flag, allowing the user
to feed the output of `linkerd install` into the upgrade command.
Fixes #2629
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This change introduces some unit tests on individual methods in the
upgrade code path, along with some minor cleanup.
Part of #2637
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
When upgrading from an older cluster that has a Linkerd config but no
identity, we need to generate an identity context so that the cluster is
configured properly.
Fixes #2650
The UUID implementation we use to generate install IDs is technically
not random enough for security-sensitive uses, which ours is not. To prevent
security scanners like SNYK from flagging this false positive, let's
just switch to the other UUID implementation (already in our
dependencies).
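For illustration only, generating a random (v4) install ID with the google/uuid package; the dependency actually chosen may differ:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// newInstallID returns a randomly generated (v4) UUID string.
func newInstallID() string {
	return uuid.New().String()
}

func main() {
	fmt.Println(newInstallID())
}
```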
92f15e78a9 incorrectly removed the config
version override when patching a config from options, which caused
upgrade to stop updating the config version.
Fixes #2660
The installOnlyFlagSet incorrectly extends the recordableFlagSet.
I'm not sure if this has any potential for unexpected user interactions,
but it's at least confusing when reading the code.
This change makes the flag sets distinct.
Add validation webhook for service profiles
Fixes #2075
To do in a follow-up PR: remove the SP check from the CLI check.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
When the --ha flag is set, we currently set a 10m CPU request, which
corresponds to 1% of a core, which isn't actually enough to keep the
proxy responding to health checks if you have 100 processes on the box.
Let's give ourselves a little more breathing room.
Fixes #2643
This change introduces a basic unit test for the `linkerd upgrade`
command. Given a mock k8s client with linkerd-config and
linkerd-identity-issuer objects, it validates the rendered yaml output
against an expected file.
To enable this testing, most of the logic in the top-level upgrade
command has been moved down into a `validateAndBuild` method.
TODO:
- test individual functions around mutating options, flags, configs, and
values
- enable reading the install information from a manifest rather than k8s
Part of #2637
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This change introduces integration tests for `linkerd inject`. The tests
perform CLI injection, with and without params, and validate the
output, including annotations.
Also add some known errors in logs to `install_test.go`.
TODO:
- deploy uninjected and injected resources to a default and
auto-injected cluster
- test creation and update
Part of #2459
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Adds a URL to the `linkerd upgrade` output which contains full upgrade instructions. The message and the URL anchors are different in the case of success or failure.
Fixes #2575.
* Define proxy version override annotation
* Don't override global linkerd version during inject
This ensures consistent usage of the config.linkerd.io/linkerd-version and
linkerd.io/proxy-version annotations. The former will only be used to track
the overridden version, while the latter shows the cluster's current default
version.
* Rename proxy version config override annotation
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Previous control plane versions do not provide an 'install' config, so
this field cannot be required.
Now, missing or empty install configs are handled more gracefully, and
upgrade repairs install configs with missing fields.
* Disable external profiles by default
* Rename the --disable-external-profiles flag to --enable-external-profiles
Signed-off-by: Ivan Sim <ivan@buoyant.io>
The `install` command errors when the deploy target contains an existing
Linkerd deployment. The `upgrade` command is introduced to reinstall or
reconfigure the Linkerd control plane.
Upgrade works as follows:
1. The controller config is fetched from the Kubernetes API. The Public
API is not used, because we need to be able to reinstall the control
plane when the Public API is not available; and we are not concerned
about RBAC restrictions preventing the installer from reading the
config (as we are for inject).
2. The install configuration is read, particularly the flags used during
the last install/upgrade. If these flags were not set again during the
upgrade, the previous values are used as if they were passed this time.
The configuration is updated from the combination of these values,
including the install configuration itself.
Note that some flags, including the linkerd-version, are omitted
since they are stored elsewhere in the configurations and don't make
sense to track as overrides.
3. The issuer secrets are read from the Kubernetes API so that they can
be re-used. There is currently no way to reconfigure issuer
certificates. We will need to create _another_ workflow for
updating these credentials.
4. The install rendering is invoked with values and config fetched from
the cluster, synthesized with the new configuration.
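A compressed sketch of that flow, with every helper passed in as an illustrative stand-in rather than the real function:

```go
package upgrade

// Illustrative stand-in types; the real code uses the protobuf config types.
type configs struct{ flags map[string]string }
type issuerCreds struct{ crtPEM, keyPEM []byte }

// runUpgrade walks the four steps described above.
func runUpgrade(
	fetchConfigs func() (configs, error), // 1. read config via the Kubernetes API
	fetchIssuer func() (issuerCreds, error), // 3. reuse existing issuer credentials
	overrides map[string]string, // flags passed to this upgrade invocation
	render func(configs, issuerCreds) ([]byte, error), // 4. render the manifests
) ([]byte, error) {
	cfg, err := fetchConfigs()
	if err != nil {
		return nil, err
	}
	// 2. Flags not set on this invocation keep the values recorded at install time.
	for k, v := range overrides {
		cfg.flags[k] = v
	}
	issuer, err := fetchIssuer()
	if err != nil {
		return nil, err
	}
	return render(cfg, issuer)
}
```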
`storage.tsdb.retention` is deprecated in favor of
`storage.tsdb.retention.time`.
Replace all occurrences.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
When installing Linkerd, a user may override default settings, or may
explicitly configure defaults. Consider install options like `--ha
--controller-replicas=4` -- the `--ha` flag sets a new default value for
the controller-replicas, and then we override it.
When we later upgrade this cluster, how can we know how to configure the
cluster?
We could store EnableHA and ControllerReplicas configurations in the
config, but what if, in a later upgrade, the default value changes? How
can we know whether the user specified an override or just used the
default?
To solve this, we add an `Install` message into a new config.
This message includes (at least) the CLI flags used to invoke
install.
upgrade does not specify defaults for install/proxy-options fields and,
instead, uses the persisted install flags to populate default values,
before applying overrides from the upgrade invocation.
This change breaks the protobuf compatibility by altering the
`installation_uuid` field introduced in 9c442f6885.
Because this change was not yet released (even in an edge release), we
feel that it is safe to break.
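A sketch of the shape of the recorded data, using illustrative Go structs rather than the actual protobuf schema:

```go
package config

// Install records how the control plane was installed, so a later upgrade can
// distinguish flags the user set explicitly from values that were defaults.
type Install struct {
	UUID  string // the install identifier (formerly the installation_uuid field)
	Flags []Flag // the CLI flags passed to `linkerd install`
}

// Flag is one recorded command-line flag.
type Flag struct {
	Name  string
	Value string
}
```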
Fixes https://github.com/linkerd/linkerd2/issues/2574
This change moves resource-templating logic into a dedicated template,
creates new values types to model kubernetes resource constraints, and
changes the `--ha` flag's behavior to create these resource templates
instead of hardcoding the resource constraints in the various templates.
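A sketch of what such values types might look like; the field names are illustrative:

```go
package values

// Constraints models a request/limit pair for a single resource, e.g. CPU.
type Constraints struct {
	Request string // e.g. "100m" or "50Mi"
	Limit   string // e.g. "1" or "250Mi"
}

// Resources is attached to a component's values when --ha is set, and is
// consumed by a shared resource template instead of hardcoded constraints.
type Resources struct {
	CPU    Constraints
	Memory Constraints
}
```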
Performing this check earlier helps confine the specialized logic to the CLI
and webhook.
Any subsequent modification of this check logic to support config overrides of
existing meshed workloads will be confined to the relevant component.
The shared lib can then focus only on config overrides.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Allow the TCP CONNECTIONS column to be shown on all stat queries in the CLI.
This column will now be called TCP_CONN for brevity.
Read/write bytes will still only be shown with -o wide or -o json.
Some of our templates have started to use 'with .Values' scoping to
limit boilerplate within the templates.
This change makes this uniform in all templates.
When reading a Linkerd configuration, we cannot determine whether
auto-inject should be configured.
This change adds auto-inject configuration to the global config
structure. Currently, this configuration is effectively boolean,
determined by the presence of an empty value (versus a null).
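Roughly, the presence check behaves like this sketch, where the types are simplified stand-ins for the generated config types:

```go
package config

// AutoInjectContext carries no fields yet; its presence alone means
// auto-inject is configured.
type AutoInjectContext struct{}

// Global is a trimmed-down stand-in for the global config message.
type Global struct {
	AutoInjectContext *AutoInjectContext
}

// autoInjectEnabled treats a non-nil (possibly empty) context as "enabled"
// and a nil context as "disabled".
func autoInjectEnabled(g *Global) bool {
	return g != nil && g.AutoInjectContext != nil
}
```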
* Include the DisableExternalProfile option even if it's 'false'. The override logic depends on this option to assign a different profile suffix.
* Check for proxy and init image overrides even when the registry option is empty
* Append the config annotations to the pod's meta before creating the patch. This ensures that any configs provided via the CLI options are persisted as annotations before the config overrides are applied.
* Persist linkerd version CLI option
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Have the Webhook react to pod creation/update only
This was already working almost out-of-the-box, just had to:
- Change the webhook config so it watches pods instead of deployments
- Grant some extra ClusterRole permissions
- Add the piece that figures out the OwnerReference and adds the label
for it
- Manually inject service account mount paths
- Re-add volumes tests
Fixes #2342 and #1751
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Currently, the install UUID is regenerated each time `install` is run.
When implementing cluster upgrades, it seems most appropriate to reuse
the prior UUID, rather than generate a new one.
To this end, this change stores an "Installation UUID" in the global
linkerd config.
This change reintroduces identity hinting to the destination service.
The Get endpoint includes identities for pods that are injected with an
identity-mode of "default" and have the same linkerd control plane.
A `serviceaccount` label is now also added to destination response
metadata so that it's accessible in prometheus and tap.
This change adds a new `linkerd2-proxy-identity` binary to the `proxy`
container image as well as a `linkerd2-proxy-run` entrypoint script.
The inject process now sets environment variables on pods to support
identity, including identity names for the destination and identity
services.
When the pod starts, the identity helper creates a key and CSR in a
tmpfs. As the proxy starts, it reads these files, as well as a
serviceaccount token, and provisions a certificate from the controller.
The proxy's /ready endpoint will not succeed until a certificate has
been provisioned.
The proxy will not participate in identity with services other than the
controllers until the Destination controller is modified to provide
identities via discovery.
Because the linkerd-config resource is created after pods that require
it, those pods can be started before the files are mounted, causing the pods
to restart and integration tests to fail.
If we extract the config into its own template file, it can be inserted
before pods are created.
The introduction of identity in 0626fa37 created new state in the
control plane's configuration that must be considered when re-installing
the control plane or when injecting pods.
This change alters `install` to fail if it would seem to conflict with
an existing installation. This behavior may be disabled with the
`--ignore-cluster` flag.
Furthermore, `inject` now _requires_ that it can fetch a configuration
from the control plane in order to operate. Otherwise the
`--ignore-cluster` and `--disable-identity` flags must be specified.
This change does not actually instrument pods to use identity yet---it
lays the framework for proxy identity without changing the test fixture
output (besides a change to how identity HA is configured).
Fixes #2531
Currently, cli/cmd/root.go provides a couple of utilities for building
clients to Linkerd's Public API; however these utilities are infallible,
execute health checks, etc.
There is a class of API clients---for instance, when an inject command
wants to acquire configuration from the API---where these checks are
undesirable. The version CLI built such a client, for example.
This change consolidates the various utilities into a single file.
Furthermore, it renames these utilities to clarify they differ.
https://github.com/linkerd/linkerd2/pull/2521 introduces an "Identity"
controller, but there is no way to include it in linkerd installation.
This change alters the `install` flow as follows:
- An Identity service is _always_ installed;
- Issuer credentials may be specified via the CLI;
- If no Issuer credentials are provided, they are generated each time `install` is called.
- Proxies are NOT configured to use the identity service.
- It's possible to override the credential generation logic---especially
for tests---via install options that can be configured via the CLI.
The new proxy has changed its configuration as follows:
- `LISTENER` urls are now `LISTEN_ADDR` addresses;
- `CONTROL_URL` is now `DESTINATION_SVC_ADDR`;
- `*_NAMESPACE` vars are no longer needed;
- The `PROXY_ID` is now the `DESTINATION_CONTEXT`;
- The "metrics" port is now the "admin" port, since it serves more than
just metrics;
- A readiness probe now checks a dedicated /ready endpoint eagerly.
Identity injection is **NOT** configured by this branch.
This change introduces a new Identity service implementation for the
`io.linkerd.proxy.identity.Identity` gRPC service.
The `pkg/identity` contains a core, abstract implementation of the service
(generic over both the CA and (Kubernetes) Validator interfaces).
`controller/identity` includes a concrete implementation that uses the
Kubernetes TokenReview API to validate serviceaccount tokens when
issuing certificates.
This change does **NOT** alter installation or runtime to include the
identity service. This will be included in a follow-up.
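The abstraction boundary looks roughly like this sketch; the interface shapes are assumptions, not the exact signatures in `pkg/identity`:

```go
package identity

import "context"

// Validator authenticates a serviceaccount token and returns the identity
// name it maps to. The Kubernetes implementation uses the TokenReview API.
type Validator interface {
	Validate(ctx context.Context, token []byte) (string, error)
}

// Issuer signs an end-entity certificate for a validated identity, given the
// DER-encoded CSR submitted by the proxy.
type Issuer interface {
	IssueEndEntityCrt(csrDER []byte, identity string) ([]byte, error)
}
```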
The proxy's TLS implementation has changed to use a new _Identity_ controller.
In preparation for this, the `--tls=optional` CLI flag has been removed
from install and inject; and the `ca` controller has been deleted. Metrics
and UI treatments for TLS have **not** been removed, as they will continue to
be valuable for the new Identity system.
With the removal of the old identity scheme, the Destination service's proxy
ID field is now set with an opaque string (e.g. `ns:emojivoto`) to enable
locality awareness.
* Defined the config annotations as new constants in labels.go
* Introduced the getOverride() functions to override configs
* Introduced new accessors to abstract away the type casting
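A small sketch of the override pattern; the annotation key, type, and field names below are illustrative:

```go
package inject

// Assumed annotation key, for illustration.
const proxyImageAnnotation = "config.linkerd.io/proxy-image"

type resourceConfig struct {
	annotations       map[string]string // pod annotations, including CLI-applied ones
	defaultProxyImage string            // value from the global/proxy config
}

// getOverride returns the annotation value, or "" when no override is set.
func (c *resourceConfig) getOverride(annotation string) string {
	return c.annotations[annotation]
}

// proxyImage prefers the per-workload override over the configured default.
func (c *resourceConfig) proxyImage() string {
	if override := c.getOverride(proxyImageAnnotation); override != "" {
		return override
	}
	return c.defaultProxyImage
}
```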
Signed-off-by: Ivan Sim <ivan@buoyant.io>
linkerd/linkerd2#1721 introduced a `--single-namespace` install flag,
enabling the control-plane to function within a single namespace. With
the introduction of ServiceProfiles, and upcoming identity changes, this
single namespace mode of operation is becoming less viable.
This change removes the `--single-namespace` install flag, and all
underlying support. The control-plane must have cluster-wide access to
operate.
A few related changes:
- Remove `--single-namespace` from `linkerd check`, this motivates
combining some check categories, as we can always assume cluster-wide
requirements.
- Simplify the `k8s.ResourceAuthz` API, as callers no longer need to
make a decision based on cluster-wide vs. namespace-wide access.
Components either have access, or they error out.
- Modify the web dashboard to always assume ServiceProfiles are enabled.
Reverts #1721
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Manual and auto injection were logging the full patch JSON at the `Info`
level.
Modify injection to log the object type and name at the `Info` level,
and the full patch at the `Debug` level.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
It's sometimes helpful to spotcheck proxy metrics from a specific pod,
but doing so with kubectl requires a few steps.
Introduce a new `linkerd metrics` command. When given a pod name and
namespace, it returns a dump of the proxy's /metrics endpoint.
Also modify the k8s.portforward module to accept initialized k8s config
and client objects, to enable testing.
Fixes #2350.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Changed the protobuf definition to take out destinationApiPort entirely
* Store destinationAPIPort as a constant in pkg/inject.go
Fixes #2351
Signed-off-by: Aditya Sharma <hello@adi.run>
Show TCP stats in the linkerd stat output. They are not shown by default, but
will be queried when using -o wide or -o json.
Also display read/write bytes as bytes per sec in the CLI and dashboard.
The `linkerd-init` container requires the NET_ADMIN capability to modify
iptables. The `linkerd check` command was not verifying this.
Introduce a `has NET_ADMIN capability` check, which does the following:
1) List all available PodSecurityPolicies; if none are found, return
success
2) Otherwise, validate that at least one PodSecurityPolicy exists for which:
- the user has `use` access AND
- it provides the `*` or `NET_ADMIN` capability
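A sketch of that logic, operating on data already fetched from the cluster; the types are simplified stand-ins:

```go
package healthcheck

// podSecurityPolicy is a simplified stand-in for the real PSP object plus the
// result of a SelfSubjectAccessReview on the "use" verb.
type podSecurityPolicy struct {
	allowedCapabilities []string
	userCanUse          bool
}

// hasNetAdmin returns success when no PSPs exist, or when at least one PSP
// the user may "use" grants NET_ADMIN (or the "*" wildcard).
func hasNetAdmin(psps []podSecurityPolicy) bool {
	if len(psps) == 0 {
		return true
	}
	for _, psp := range psps {
		if !psp.userCanUse {
			continue
		}
		for _, capability := range psp.allowedCapabilities {
			if capability == "*" || capability == "NET_ADMIN" {
				return true
			}
		}
	}
	return false
}
```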
A couple of limitations to this approach:
- It tests whether the user running `linkerd check` has NET_ADMIN,
but at installation time it is the `linkerd-init` pod that
requires NET_ADMIN.
- It assumes the presence of PodSecurityPolicies in the cluster means
the PodSecurityPolicy admission controller is installed. If the
admission controller is not installed, but PSPs exist that restrict
NET_ADMIN, `linkerd check` will incorrectly report the user does not
have that capability.
This PR also fixes the `can create CustomResourceDefinitions` check to
not specify a namespace when doing a `create` check, as CRDs are
cluster-wide.
Fixes #1732
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This ensures that the MWC always picks up the latest config template during version upgrade.
The removed `update()` method and RBAC permissions are superseded by #2163.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
We were depending on an untagged version of prometheus/client_golang
from Feb 2018.
This bumps our dependency to v0.9.2, from Dec 2018.
Also, this is a prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd install` output relies on Helm templates in the `chart`
directory. In production cli builds, these templates are compiled into
the binary. In development, they are read from the file system. This
development code path relied on GOPATH to determine the location of the
`chart` directory. In anticipation of Go Modules support (#1488), we
cannot assume the repo is within the GOPATH.
This change removes the GOPATH dependency, and instead relies on
`runtime.Caller` to determine the root of the code repo. This change
only affects development (!prod) builds.
Prerequisite to #1488.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
- Created the pkg/inject package to hold the new injection shared lib.
- Extracted from `/cli/cmd/inject.go` and `/cli/cmd/inject_util.go`
the core methods doing the workload parsing and injection, and moved them into
`/pkg/inject/inject.go`. The CLI files should now deal only with
strictly CLI concerns, and applying the json patch returned by the new
lib.
- Proceeded analogously with `/cli/cmd/uninject.go` and
`/pkg/inject/uninject.go`.
- The `InjectReport` struct and helper methods were moved into
`/pkg/inject/report.go`
- Refactored webhook to use the new injection lib
- Removed linkerd-proxy-injector-sidecar-config ConfigMap
- Added the ability to add pod labels and annotations without having to
specify the already existing ones
Fixes #1748, #2289
Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
linkerd/linkerd2#2349 introduced a `SelfSubjectAccessReview` check at
startup, to determine whether each control-plane component should
establish Kubernetes watches cluster-wide or namespace-wide. If this
check occurs before the linkerd-proxy sidecar is ready, it fails, and
the control-plane component restarts.
This change configures each control-plane pod to skip outbound port 443
when injecting the proxy, allowing the control-plane to connect to
Kubernetes regardless of the `linkerd-proxy` state.
A longer-term fix should involve a more robust control-plane startup,
that is resilient to failed Kubernetes API requests. An even longer-term
fix could involve injecting `linkerd-proxy` as a Kubernetes "sidecar"
container, when that becomes available.
Workaround for #2407
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
As described in #2217, the controller returns TLS identities for results even
when the destination pod may not be able to participate in identity with the
requester: specifically, the other pod may not have the same controller
namespace or it may not be injected with identity.
This change introduces a new annotation, linkerd.io/identity-mode, that is set
when injecting pods (via both CLI and webhook). This annotation is always
added.
The destination service now only returns TLS identities when this annotation
is set to optional on a pod and the destination pod uses the same controller.
These semantics are expected to change before the 2.3 release.
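A sketch of the destination-side gating; the types and field names are illustrative:

```go
package destination

// podInfo is a simplified stand-in for the metadata the destination service
// already has about a discovered pod.
type podInfo struct {
	annotations  map[string]string
	controllerNS string
	identity     string // e.g. the serviceaccount-based identity name
}

// tlsIdentityFor returns the pod's identity only when the pod opted in via
// the linkerd.io/identity-mode annotation and shares the requester's
// controller namespace.
func tlsIdentityFor(pod podInfo, requesterControllerNS string) (string, bool) {
	if pod.annotations["linkerd.io/identity-mode"] != "optional" {
		return "", false
	}
	if pod.controllerNS != requesterControllerNS {
		return "", false
	}
	return pod.identity, true
}
```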
Fixes #2217
The control-plane components relied on a `--single-namespace` param,
passed from `linkerd install` into each individual component, to
determine which namespaces they were authorized to access, and whether
to support ServiceProfiles. This command-line flag was redundant given
the authorization rules encoded in the parent `linkerd install` output,
via [Cluster]Role[Binding]s.
Modify the control-plane components to query Kubernetes at startup to
determine which namespaces they are authorized to access, and whether
ServiceProfile support is available. This allows removal of the
`--single-namespace` flag on the components.
Also update `bin/test-cleanup` to clean up the ServiceProfile CRD.
TODO:
- Remove `--single-namespace` flag on `linkerd install`, part of #2164
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The inject logic combines the modification of a pod spec and the
creation of a "report" detailing problems with the pod spec.
This change extracts the report-creation-and-checking logic from the
injection logic to make the contracts of each of these functions
clearer.
No functional changes are intended.
goconst finds repeated strings that could be replaced by a constant:
https://github.com/jgautheron/goconst
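For illustration, the kind of repetition goconst flags, hoisted into a constant (the names here are made up):

```go
package k8s

// Before this change, a literal like "Running" was repeated at several call
// sites; goconst flags such repetition so it can become a named constant.
const podRunning = "Running"

func isRunning(phase string) bool {
	return phase == podRunning
}
```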
Part of #217
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Adds a flag, tcp_stats, to the StatSummary request, which queries Prometheus for TCP stats.
This branch returns TCP stats at /api/tps-reports when this flag is true.
TCP stats are now displayed on the Resource Detail pages.
The current queried TCP stats are:
tcp_open_connections
tcp_read_bytes_total
tcp_write_bytes_total