* Bump proxy-init to v1.3.2
Bumped `proxy-init` version to v1.3.2, fixing an issue with `go.mod`
(linkerd/linkerd2-proxy-init#9).
This is a non-user-facing fix.
* Check extension API server authentication
* Added checks and tests for extension API server authentication
* Fixed failing static checks
* Updated the golden file
Signed-off-by: Christy Jacob <christyjacob4@gmail.com>
This fix ensures that we ignore whitespace and newlines when checking that roots match between the Linkerd config map and the issuer secret (in the case of using an external issuer + Helm).
Fixes: #3907
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
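A minimal sketch of the kind of comparison this implies (the helper name and exact normalization are assumptions, not the actual Linkerd code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePEM strips all whitespace and newlines so that two PEM bundles
// that differ only in formatting compare as equal.
func normalizePEM(pem string) string {
	return strings.Join(strings.Fields(pem), "")
}

// rootsMatch reports whether the trust roots in the linkerd config map and
// the issuer secret agree, ignoring whitespace differences.
func rootsMatch(configMapRoots, issuerSecretRoots string) bool {
	return normalizePEM(configMapRoots) == normalizePEM(issuerSecretRoots)
}

func main() {
	a := "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
	b := "-----BEGIN CERTIFICATE-----\r\nMIIB...\r\n-----END CERTIFICATE-----"
	fmt.Println(rootsMatch(a, b)) // true
}
```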
## edge-20.1.3
* CLI
* Introduced `linkerd check --pre --linkerd-cni-enabled`, used when the CNI
plugin is used, to check it has been properly installed before proceeding
with the control plane installation
* Added support for the `--as-group` flag so that users can impersonate
groups for Kubernetes operations (thanks @mayankshah160!)
* Controller
* Fixed an issue where an override of the Docker registry was not being
applied to debug containers (thanks @javaducky!)
  * Added check for the Subject Alternative Name attributes to the API server
  when access restrictions have been enabled (thanks @javaducky!)
* Added support for arbitrary pod labels so that users can leverage the
Linkerd provided Prometheus instance to scrape for their own labels
(thanks @daxmc99!)
* Fixed an issue with CNI config parsing
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
**Subject**
Fixes bug where override of Docker registry was not being applied to debug containers (#3851)
**Problem**
Overrides for the Docker registry were not being applied to debug containers, and there was no means to correct the image.
**Solution**
This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to maintain the overridden image name for debug containers at _install_ time, similar to the handling of the `proxy` and `proxyInit` images.
This change also enables a further override of the registry for debug containers at _inject_ time when the `--registry` CLI option is used.
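As an illustration of the inject-time override described above, the effect of `--registry` can be thought of as swapping the registry prefix of an image reference while keeping the repository and tag (this helper is a hypothetical sketch, not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// overrideRegistry replaces the registry portion of an image reference,
// e.g. gcr.io/linkerd-io/debug:git-abc -> javaducky.com/linkerd-io/debug:git-abc.
// It assumes the default registry is a known prefix of the image name.
func overrideRegistry(image, defaultRegistry, newRegistry string) string {
	if newRegistry == "" || !strings.HasPrefix(image, defaultRegistry) {
		return image
	}
	return newRegistry + strings.TrimPrefix(image, defaultRegistry)
}

func main() {
	img := "gcr.io/linkerd-io/debug:git-a4ebecb6"
	fmt.Println(overrideRegistry(img, "gcr.io/linkerd-io", "javaducky.com/linkerd-io"))
	// javaducky.com/linkerd-io/debug:git-a4ebecb6
}
```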
**Validation**
Several new unit tests have been created to confirm functionality. In addition, the following workflows were run through:
### Standard Workflow with Custom Registry
This workflow installs the Linkerd control plane using a custom registry, then injects the debug sidecar into a service.
* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with registry override `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section
* Request injection of the debug image into an available container. I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/ as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the pod now includes the `linkerd-debug` container running the override image seen previously in the ConfigMap
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should cause the pod to be recreated without the debug container.
### Overriding the Custom Registry Override at Injection
This builds upon the “Standard Workflow with Custom Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the `config.linkerd.io/debug-image` override annotation referencing the debug image from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Of note, the proxy and proxy-init containers are still running the "original" override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should cause the pod to be recreated without the debug container.
### Standard Workflow with Default Registry
This is the typical workflow, using the standard Linkerd image registry.
* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -` then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform standard Linkerd installation as `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should cause the pod to be recreated without the debug container.
### Overriding the Default Registry at Injection
This workflow builds upon the “Standard Workflow with Default Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the `config.linkerd.io/debug-image` override annotation referencing the debug image from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Of note, the proxy and proxy-init containers are still running the "original" override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to "false" should cause the pod to be recreated without the debug container.
Fixes issue #3851
Signed-off-by: Paul Balogh <javaducky@gmail.com>
As part of the effort to remove the "experimental" label from the CNI plugin, this PR introduces CNI checks to `linkerd check`
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Adds a check to ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks: disabled` label
Fixes #3721
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
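A rough sketch of what such a check looks like against a `corev1.Namespace` object (the function itself is illustrative, not the actual healthcheck code):

```go
package check

import (
	corev1 "k8s.io/api/core/v1"
)

// hasAdmissionWebhooksDisabled reports whether a namespace (e.g. kube-system)
// carries the label that tells Linkerd's admission webhooks to skip it.
func hasAdmissionWebhooksDisabled(ns *corev1.Namespace) bool {
	return ns.Labels["config.linkerd.io/admission-webhooks"] == "disabled"
}
```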
* Enable mixed configuration of skip-[inbound|outbound]-ports using port numbers and ranges (#3752)
* included tests for generated output given proxy-ignore configuration options
* renamed "validate" method to "parseAndValidate" given mutation
* updated documentation to denote inclusiveness of ranges
* Updated expansion of ignored inbound and outbound port ranges to be handled by proxy-init rather than the CLI (#3766)
This change maintains the configured ports and ranges as strings rather than unsigned integers, while still providing validation at the command layer (see the parsing sketch after this entry).
* Bump versions for proxy-init to v1.3.0
Signed-off-by: Paul Balogh <javaducky@gmail.com>
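A minimal sketch of parsing a mixed list of ports and inclusive ranges kept as strings, in the spirit of the change above (names and error messages are illustrative, not the actual Linkerd code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validatePortsAndRanges checks a comma-separated list such as
// "25,443,8000-8999" where each entry is a port or an inclusive range.
// Values stay as strings for proxy-init; we only validate them here.
func validatePortsAndRanges(s string) error {
	for _, entry := range strings.Split(s, ",") {
		bounds := strings.Split(strings.TrimSpace(entry), "-")
		if len(bounds) > 2 {
			return fmt.Errorf("%q is not a valid port or range", entry)
		}
		ports := make([]int, len(bounds))
		for i, b := range bounds {
			p, err := strconv.Atoi(b)
			if err != nil || p < 1 || p > 65535 {
				return fmt.Errorf("%q is not a valid port", b)
			}
			ports[i] = p
		}
		if len(ports) == 2 && ports[0] > ports[1] {
			return fmt.Errorf("range %q has lower bound greater than upper bound", entry)
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePortsAndRanges("25,443,8000-8999")) // <nil>
	fmt.Println(validatePortsAndRanges("9000-80"))          // error
}
```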
* Added checks for cert correctness
* Added warning checks for approaching expiration (see the expiry-check sketch after this entry)
* Add unit tests
* Improve unit tests
* Address comments
* Address more comments
* Prevent upgrade from breaking proxies when issuer cert is overwritten (#3821)
* Address more comments
* Add gate to upgrade cmd that checks that all proxies' roots work with the identity issuer that we are updating to
* Address comments
* Enable use of upgrade to modify both roots and issuer at the same time
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
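The expiration warning can be pictured roughly as follows (the threshold and function names are assumptions for illustration):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"time"
)

// checkIssuerCert parses a PEM-encoded certificate and returns an error if it
// has expired, or a warning string if it expires within the given window.
func checkIssuerCert(pemBytes []byte, window time.Duration) (string, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("not valid PEM data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	now := time.Now()
	if now.After(cert.NotAfter) {
		return "", fmt.Errorf("certificate expired on %s", cert.NotAfter)
	}
	if cert.NotAfter.Sub(now) < window {
		return fmt.Sprintf("certificate expires soon, on %s", cert.NotAfter), nil
	}
	return "", nil
}

func main() {
	// Usage sketch: warn if the issuer certificate expires within 60 days
	// (the PEM below is a placeholder, so this returns a parse error).
	warning, err := checkIssuerCert([]byte("not a real certificate"), 60*24*time.Hour)
	fmt.Println(warning, err)
}
```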
* Replaced `uuid` with `uid` from linkerd-config resource
Fixes #3621
Removed the old `uuid` for identifying linkerd installations, and
replaced it with the `uid` property from the `linkerd-config` ConfigMap.
I tested that this `uid` remains the same by updating the config and
also upgrading linkerd, using both the CLI and Helm.
Note that this required granting `linkerd-web` RBAC access to the
`linkerd-config` ConfigMap.
I also added an integration test to verify the stability of the uid.
* Health check: check if proxies trust anchors match configuration
If Linkerd is reinstalled or if the trust anchors are modified while
proxies are running on the cluster, they will contain an outdated
`LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS` certificate.
This changeset adds a check to `linkerd check` that looks for proxies
running on the cluster and compares their trust anchors against the
configured trust anchor. If there's a failure (considered a warning),
`linkerd check` notifies the user of the offending pods (and the
namespace each one is in), along with a hint to remediate the issue
(restarting the pods). A sketch of the comparison follows this entry.
* Add integration tests for proxy certificate check
Fixes #3344
Signed-off-by: Rafael Fernández López <ereslibre@ereslibre.es>
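A simplified sketch of the proxy trust-anchor comparison: walk meshed pods, read the proxy's `LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS` environment variable, and flag pods whose value differs from the configured anchors (the function and container names are assumptions):

```go
package check

import (
	corev1 "k8s.io/api/core/v1"
)

// podsWithStaleTrustAnchors returns the pods whose linkerd-proxy container
// carries trust anchors different from the currently configured ones.
func podsWithStaleTrustAnchors(configuredAnchors string, pods []corev1.Pod) []corev1.Pod {
	var offenders []corev1.Pod
	for _, pod := range pods {
		for _, c := range pod.Spec.Containers {
			if c.Name != "linkerd-proxy" {
				continue
			}
			for _, env := range c.Env {
				if env.Name == "LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS" && env.Value != configuredAnchors {
					offenders = append(offenders, pod)
				}
			}
		}
	}
	return offenders
}
```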
Fixes #3356
Kubernetes 1.16 removes some API groups that were already deprecated. From the
k8s blog post (https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/):
```
- PodSecurityPolicy: will no longer be served from extensions/v1beta1 in
v1.16.
Migrate to the policy/v1beta1 API, available since v1.10. Existing
persisted data can be retrieved/updated via the policy/v1beta1 API.
- DaemonSet, Deployment, StatefulSet, and ReplicaSet: will no longer be
served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16.
Migrate to the apps/v1 API, available since v1.9. Existing persisted
data can be retrieved/updated via the apps/v1 API.
```
Previous PRs had already made this change at the Helm templates level,
but we still needed to do it at the API calls and tests.
The integration tests ran fine for k8s 1.12 and 1.15. They fail on 1.16
because the upgrade integration test tries to install Linkerd 2.5, which is
not compatible with 1.16.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
`linkerd check`, the web dashboard, and Grafana all perform version
checks to validate Linkerd is up to date. It's common for users to
seldom execute these codepaths. This makes it difficult to identify which
versions of Linkerd are currently in use and in what environments they are
being run, information that helps prioritize testing and backports.
Introduce a `heartbeat` CronJob to the default Linkerd install. The
cronjob executes every 24 hours, starting from 5 minutes after
`linkerd install` is run.
Example check URL:
https://versioncheck.linkerd.io/version.json?
install-time=1562761177&
k8s-version=v1.15.0&
meshed-pods=8&
rps=3&
source=heartbeat&
uuid=cc4bb700-3314-426a-9f0f-ec588b9df020&
version=git-b97ee9f7
Fixes #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
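A sketch of how such a check URL could be assembled (the parameter names are taken from the example above; the code itself is illustrative, not the heartbeat implementation):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
	"time"
)

// heartbeatURL builds the version-check URL with the metrics reported by the
// heartbeat job, matching the query parameters shown in the example above.
func heartbeatURL(installTime time.Time, k8sVersion, uuid, version string, meshedPods, rps int) string {
	v := url.Values{}
	v.Set("install-time", strconv.FormatInt(installTime.Unix(), 10))
	v.Set("k8s-version", k8sVersion)
	v.Set("meshed-pods", strconv.Itoa(meshedPods))
	v.Set("rps", strconv.Itoa(rps))
	v.Set("source", "heartbeat")
	v.Set("uuid", uuid)
	v.Set("version", version)
	return "https://versioncheck.linkerd.io/version.json?" + v.Encode()
}

func main() {
	fmt.Println(heartbeatURL(time.Unix(1562761177, 0), "v1.15.0",
		"cc4bb700-3314-426a-9f0f-ec588b9df020", "git-b97ee9f7", 8, 3))
}
```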
When waiting for controller pods to be created or become ready, `linkerd check` doesn't offer any hints as to whether there has been an error (such as an ImagePullBackoff).
We add pod status to the output to make this more immediately obvious.
Fixes #2877
Signed-off-by: Alex Leong <alex@buoyant.io>
`linkerd check --pre` validates that PSPs provide `NET_ADMIN`, but was
not validating `NET_RAW`, despite `NET_RAW` being required by Linkerd's
proxy-init container since #2969.
Introduce a `has NET_RAW capability` check to `linkerd check --pre`.
Fixes #3054
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
PR #2603 modified the web process to read the UUID from the
`linkerd-config` ConfigMap rather than from a command line flag. The
`linkerd check` command relied on that command line flag to retrieve the
UUID as part of its version check.
Modify `linkerd check` to correctly retrieve the UUID from
`linkerd-config`. Also refactor `linkerd-config` retrieval and parsing
code to be shared between healthcheck, install, and upgrade.
Relates to #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Introduce new checks to determine existence of global resources and the
'linkerd-config' config map.
* Update pre-check to check for existence of global resources
This ensures that multiple control planes can't be installed into
different namespaces.
* Update integration test clean-up script to delete psp and crd
Signed-off-by: Ivan Sim <ivan@buoyant.io>
`linkerd check` validates whether PSPs exist, and if the caller has the
`NET_ADMIN` capability. This check was previously failing if `NET_ADMIN`
was not found, even in the case where the PSP admission controller was
not running. Related, `linkerd install` now includes a PSP, so
`linkerd check` should also validate that the caller can create PSPs.
Modify the `NET_ADMIN` check to warn, but not fail, if PSPs are found
but the caller does not have `NET_ADMIN`. Update the warning message to
mention that this is only a problem if the PSP admission controller is
running (and will only be a problem during injection, since #2920
handles control plane installation by adding its own PSP).
Also introduce a check to validate the caller can create PSPs.
Fixes #2884, #2849
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd check` and `linkerd dashboard` commands validate control
plane pods are up via the `LinkerdAPIChecks` category of checks. These
checks will fail if a single pod is not ready, even in HA mode.
Modify the underlying `validateControlPlanePods` check to return
successful if at least one pod per control plane component is ready.
Fixes #2554
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
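In essence, the check now passes as long as each control plane component has at least one ready pod. A simplified sketch (the component/pod model and signature here are hypothetical, not the actual healthcheck code):

```go
package check

// validateControlPlanePods reports whether every control plane component has
// at least one ready pod; a single not-ready replica (e.g. in HA mode) no
// longer fails the check.
func validateControlPlanePods(readyByComponent map[string][]bool) bool {
	for _, podsReady := range readyByComponent {
		anyReady := false
		for _, ready := range podsReady {
			if ready {
				anyReady = true
			}
		}
		if !anyReady {
			return false
		}
	}
	return true
}
```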
Add support for `linkerd check config`. Validates the existence of the
Linkerd Namespace, ClusterRoles, ClusterRoleBindings, ServiceAccounts,
and CustomResourceDefinitions.
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
CustomResourceDefinition parsing and retrieval is not available via
client-go's `kubernetes.Interface`, but rather via a separate
`k8s.io/apiextensions-apiserver` package.
Introduce support for CustomResourceDefinition object parsing and
retrieval. This change facilitates retrieval of CRDs from the k8s API
server, and also provides CRD resources as mock objects.
Also introduce a `NewFakeAPI` constructor, deprecating
`NewFakeClientSets`. Callers no longer need to be concerned with discrete
clientsets (for k8s resources vs. CRDs vs. (eventually)
ServiceProfiles), and can instead use the unified `KubernetesAPI`.
Part of #2337, in service to multi-stage check.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
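For reference, retrieving CRDs goes through the apiextensions clientset rather than the core `kubernetes.Interface`; a minimal sketch (this uses the v1 API group and context-taking signatures of recent client-go releases, which may differ from the versions in use at the time, and the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// CRDs live behind their own clientset, not kubernetes.Interface.
	crdClient, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
```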
The `linkerd check --proxy` command checks for proxies in all
namespaces, if the `--namespace` flag is not set. PR #2747 modified the
behavior of `KubernetesAPI.NamespaceExists`. Previously it would succeed
if given an empty string for a namespace. Now it fails with a
`resource name may not be empty` error (for k8s server `v1.10.11`), or a
not found error (for our fake test client).
Modify the data plane proxy namespace check to return success if the
namespace is not set.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Numerous codepaths have emerged that create k8s configs, k8s clients,
and make k8s api requests.
This branch consolidates k8s client creation and APIs. The primary
change migrates most codepaths to call `k8s.NewAPI` to instantiate a
`KubernetesAPI` struct from `pkg`. `KubernetesAPI` implements the
`kubernetes.Interface` (clientset) interface, and also persists a
`client-go` `rest.Config`.
Specific list of changes:
- removes manual GET requests from `k8s.KubernetesAPI`, in favor of
clientsets
- replaces most calls to `k8s.GetConfig`+`kubernetes.NewForConfig` with
a single `k8s.NewAPI`
- introduces a `timeout` param to `k8s.NewAPI`, currently only used by
healthchecks
- removes `NewClientSet` in `controller/k8s/clientset.go` in favor of
`k8s.NewAPI`
- removes `httpClient` and `clientset` from `HealthChecker`, use
`KubernetesAPI` instead
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
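Conceptually, the consolidated helper wraps a clientset together with its `rest.Config`; a rough sketch of the shape described above (not the exact Linkerd signature):

```go
package k8s

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// KubernetesAPI wraps a clientset and keeps the rest.Config around so callers
// can build additional clients (e.g. for CRDs) from the same configuration.
type KubernetesAPI struct {
	kubernetes.Interface
	Config *rest.Config
}

// NewAPI builds a KubernetesAPI from a kubeconfig path, applying a request
// timeout to every call made through the clientset.
func NewAPI(kubeconfigPath string, timeout time.Duration) (*KubernetesAPI, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	config.Timeout = timeout
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return &KubernetesAPI{Interface: clientset, Config: config}, nil
}
```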
Add validation webhook for service profiles
Fixes #2075
TODO in a follow-up PR: remove the SP check from the CLI check.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
linkerd/linkerd2#1721 introduced a `--single-namespace` install flag,
enabling the control-plane to function within a single namespace. With
the introduction of ServiceProfiles, and upcoming identity changes, this
single namespace mode of operation is becoming less viable.
This change removes the `--single-namespace` install flag, and all
underlying support. The control-plane must have cluster-wide access to
operate.
A few related changes:
- Remove `--single-namespace` from `linkerd check`, this motivates
combining some check categories, as we can always assume cluster-wide
requirements.
- Simplify the `k8s.ResourceAuthz` API, as callers no longer need to
make a decision based on cluster-wide vs. namespace-wide access.
Components either have access, or they error out.
- Modify the web dashboard to always assume ServiceProfiles are enabled.
Reverts #1721
Part of #2337
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd-init` container requires the NET_ADMIN capability to modify
iptables. The `linkerd check` command was not verifying this.
Introduce a `has NET_ADMIN capability` check, which does the following:
1) Lists all available PodSecurityPolicies; if none are found, returns
success
2) Otherwise, validates that at least one PodSecurityPolicy exists for which:
- the user has `use` access AND
- the policy provides the `*` or `NET_ADMIN` capability
(a simplified sketch of the capability part follows this entry)
A couple limitations to this approach:
- It is testing whether the user running `linkerd check` has NET_ADMIN,
but during installation time it will be the `linkerd-init` pod that
requires NET_ADMIN.
- It assumes the presence of PodSecurityPolicies in the cluster means
the PodSecurityPolicy admission controller is installed. If the
admission controller is not installed, but PSPs exist that restrict
NET_ADMIN, `linkerd check` will incorrectly report the user does not
have that capability.
This PR also fixes the `can create CustomResourceDefinitions` check to
not specify a namespace when doing a `create` check, as CRDs are
cluster-wide.
Fixes #1732
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
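The capability portion of this check can be sketched as follows, using the `policy/v1beta1` PodSecurityPolicy types (the `use` access check via the authorization API is omitted, and the function names are illustrative):

```go
package check

import (
	corev1 "k8s.io/api/core/v1"
	policyv1beta1 "k8s.io/api/policy/v1beta1"
)

// pspProvidesNetAdmin reports whether a PodSecurityPolicy grants either all
// capabilities ("*") or NET_ADMIN explicitly.
func pspProvidesNetAdmin(psp policyv1beta1.PodSecurityPolicy) bool {
	for _, c := range psp.Spec.AllowedCapabilities {
		if c == corev1.Capability("*") || c == corev1.Capability("NET_ADMIN") {
			return true
		}
	}
	return false
}

// hasNetAdmin returns true when no PSPs exist (the admission controller is
// assumed absent) or when at least one PSP provides NET_ADMIN; the `use`
// access check is left out of this sketch.
func hasNetAdmin(psps []policyv1beta1.PodSecurityPolicy) bool {
	if len(psps) == 0 {
		return true
	}
	for _, psp := range psps {
		if pspProvidesNetAdmin(psp) {
			return true
		}
	}
	return false
}
```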
`golangci-lint` performs numerous checks on Go code, including golint,
ineffassign, govet, and gofmt.
This change modifies `bin/lint` to use `golangci-lint`, and replaces
usage of golint and govet.
Also perform a one-time gofmt cleanup:
- `gofmt -s -w controller/`
- `gofmt -s -w pkg/`
Part of #217
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Consolidate timeouts for `linkerd check`
- Moved the creation of contexts from inside the methods targeted by the
checks into a single place in the runCheck() and runCheckRPC() methods
where the context is built using a hard-coded timeout of 30 seconds.
- k8s' client-go doesn't allow passing along contexts, but it lets us
set the Timeout manually (see the sketch after this entry).
- Reworded the description for the --wait option.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
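Since client-go calls of that era could not take a context, the timeout is applied on the `rest.Config` itself; a minimal sketch (the kubeconfig path is an assumption):

```go
package main

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	// client-go applies this timeout to every request made through clients
	// built from this config, mirroring the hard-coded 30s used by the checks.
	config.Timeout = 30 * time.Second

	if _, err := kubernetes.NewForConfig(config); err != nil {
		panic(err)
	}
}
```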
When running `linkerd check` against a control plane namespace that contains an additional pod whose name carries a ReplicationController ID instead of a ReplicaSet ID, the check command panics with an "index out of bounds" error.
This PR adds a guard so that, when parsing pod names during the `checkControllerRunning` healthcheck, we only consider Linkerd control plane pods whose names split on '-' into four or more substrings. This prevents the check from panicking when it encounters a ReplicationController-style pod name.
Fixes #2084
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
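The guard can be illustrated like this (a hypothetical simplification of the actual healthcheck code):

```go
package check

import "strings"

// isControlPlanePodName reports whether a pod name looks like a Linkerd
// control plane pod created by a ReplicaSet, i.e. splitting on "-" yields at
// least four parts (e.g. "linkerd-controller-5d7b9c6f7d-abcde"). Names from a
// ReplicationController have fewer parts and are skipped, avoiding the
// out-of-range access that caused the panic.
func isControlPlanePodName(name string) bool {
	parts := strings.Split(name, "-")
	return strings.HasPrefix(name, "linkerd-") && len(parts) >= 4
}
```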
When a failing check is being retried, we show the current error to the user. This
can sometimes be unnecessarily alarming, as in the case of the control plane
starting up.
If a failing check is in the process of being retried, wait to show the final
error message until the retries have completed.
The outputs of the `check` and `inject` commands did not vary much
between successful and failed executions, and were a bit verbose and
challenging to parse.
Reorganize output of `check` and `inject` commands, to provide more
output when errors occur, and less output when successful.
Specific changes:
`linkerd check`
- visually group checks by category
- introduce `hintURL`'s, to provide doc links when checks fail
- add spinners when retrying, remove additional retry lines
- colored unicode characters to indicate success/warning/failure
`linkerd inject`
- modify default output to mirror `kubectl apply`
- only output non-successful inject reports
- support `--verbose` flag to output all inject reports
Fixes #1471, #1653, #1656, #1739
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The linkerd check command organized the various checks via loosely
coupled category IDs, category names, and checkers themselves, all with
ordering defined by consumers of this code.
This change removes category IDs in favor of category names, groups all
checkers by category, and enforces ordering at the HealthChecker
level.
Part of #1471, depends on #2078.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Always use ListPods for the data plane health check
* Missing changes in grpc_server.go
* Address review comments
* Read proxy version from spec
Signed-off-by: Alena Varkockova <varkockova.a@gmail.com>
The `linkerd check` command was not validating whether data plane
proxies were successfully reporting metrics to Prometheus.
Introduce a new check that validates data plane proxies are found in
Prometheus. This is made possible via the existing `ListPods` endpoint
in the public API, which includes an `Added` field, indicating a pod's
metrics were found in Prometheus.
Fixes #1517
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
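Conceptually the check just looks at the `Added` field returned by `ListPods`; a simplified sketch over a hypothetical pod summary type (not the actual public API structs):

```go
package check

// PodSummary is a stand-in for the public API's pod entry, where Added means
// the pod's metrics have been scraped by Prometheus.
type PodSummary struct {
	Name  string
	Added bool
}

// podsMissingFromPrometheus returns the names of data plane pods whose
// metrics have not yet shown up in Prometheus.
func podsMissingFromPrometheus(pods []PodSummary) []string {
	var missing []string
	for _, p := range pods {
		if !p.Added {
			missing = append(missing, p.Name)
		}
	}
	return missing
}
```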