**Subject**
Fixes a bug where a Docker registry override was not being applied to debug containers (#3851)
**Problem**
Docker registry overrides are not applied to debug containers, leaving no way to correct the image.
**Solution**
This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to store the overridden image name for debug containers at _install_-time, mirroring the handling of the `proxy` and `proxyInit` images.
It also allows the registry for debug containers to be overridden again at _inject_-time via the `--registry` CLI option.
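For illustration, after an install with a custom registry the expanded section might look roughly like this (a sketch; the exact field names inside the JSON-encoded `data.proxy` entry are approximated):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-config
  namespace: linkerd
data:
  proxy: |
    {
      "proxyImage":     {"imageName": "javaducky.com/linkerd-io/proxy",      "pullPolicy": "IfNotPresent"},
      "proxyInitImage": {"imageName": "javaducky.com/linkerd-io/proxy-init", "pullPolicy": "IfNotPresent"},
      "debugImage":     {"imageName": "javaducky.com/linkerd-io/debug",      "pullPolicy": "IfNotPresent"},
      "debugImageVersion": "git-a4ebecb6"
    }
```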
**Validation**
Several new unit tests were created to confirm functionality. In addition, the following workflows were exercised:
### Standard Workflow with Custom Registry
This workflow installs the Linkerd control plane from a custom registry, then injects the debug sidecar into a service.
* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with registry override `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has fully initialized, confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section (sample commands for this and the pod inspection follow this list)
* Request injection of the debug image into an available container. I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/ as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the pod now includes a `linkerd-debug` container running the override image seen previously in the ConfigMap
* Debugging can also be verified by viewing the debug container logs with `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to `"false"` should cause the pod to be recreated without the debug container.
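Sample commands for the inspections above (plain `kubectl`; adjust names and namespaces as needed):
```bash
# Show the proxy configuration, which should now include the debug image
# name, pull policy, and version
kubectl -n linkerd get configmap/linkerd-config -o jsonpath='{.data.proxy}'

# List container names in the emojivoto pods; expect to see "linkerd-debug"
# alongside "linkerd-proxy" for the voting pod
kubectl -n emojivoto get pods -o jsonpath='{.items[*].spec.containers[*].name}'
```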
### Overriding the Custom Registry Override at Injection
This builds upon the "Standard Workflow with Custom Registry" by overriding the Docker registry used for the debug container at injection time.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the `config.linkerd.io/debug-image` override annotation referencing the debug image from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Note that the proxy and proxy-init containers still run the "original" override images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to `"false"` should cause the pod to be recreated without the debug container.
### Standard Workflow with Default Registry
This is the typical workflow, using the standard Linkerd image registry.
* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -` then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform standard Linkerd installation as `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing the debug container logs with `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Setting the `config.linkerd.io/enable-debug-sidecar` annotation to `"false"` should cause the pod to be recreated without the debug container.
### Overriding the Default Registry at Injection
This workflow builds upon the "Standard Workflow with Default Registry" by overriding the Docker registry used for the debug container at injection time.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the `config.linkerd.io/debug-image` override annotation referencing the debug image from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and is running the correct image. Note that the proxy and proxy-init containers still run the "original" images.
* As before, setting the `config.linkerd.io/enable-debug-sidecar` annotation to `"false"` should cause the pod to be recreated without the debug container.
Fixes issue #3851
Signed-off-by: Paul Balogh <javaducky@gmail.com>
As part of the effort to remove the "experimental" label from the CNI plugin, this PR introduces CNI checks to `linkerd check`.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Fixes
- https://github.com/linkerd/linkerd2/issues/2962
- https://github.com/linkerd/linkerd2/issues/2545
### Problem
Field omissions for workload objects are not respected while marshaling to JSON.
### Solution
After digging a bit into the code, I realized that while marshaling, workload objects have empty structs as values for various fields that should instead be omitted. As of now, the standard library's `encoding/json` does not support omitting zero values of structs via the `omitempty` tag. The relevant issue can be found [here](https://github.com/golang/go/issues/11939). To tackle this, the object declarations would need _pointer-to-struct_ field types instead of the _structs_ themselves. However, that approach is out of scope, as the workload object declarations are owned by the k8s library.
I was able to find a drop-in replacement for `encoding/json` that supports zero values of structs with the `omitempty` tag: it can be found [here](https://github.com/clarketm/json). I used this library to implement a simple filter-like pass that removes empty tags once a YAML containing them has been generated, leaving the previously existing methods unaffected.
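A minimal sketch of the underlying behavior (types invented for the example):
```go
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Phase string `json:"phase,omitempty"`
}

type Workload struct {
	Name      string  `json:"name,omitempty"`
	Status    Status  `json:"status,omitempty"`    // zero-value struct is NOT omitted
	StatusPtr *Status `json:"statusPtr,omitempty"` // nil pointer IS omitted
}

func main() {
	b, _ := json.Marshal(Workload{Name: "voting"})
	// Prints {"name":"voting","status":{}}: the empty struct leaks through,
	// while the nil pointer field is dropped as expected.
	fmt.Println(string(b))
}
```
Since the replacement library is a drop-in for `encoding/json`, swapping the import is enough to pick up the zero-value-struct handling.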
Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
Adds a check to ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks: disabled` label
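A sketch of applying that setting by hand, assuming it is the namespace label consumed by the webhooks' `namespaceSelector`:
```bash
kubectl label namespace kube-system config.linkerd.io/admission-webhooks=disabled
```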
Fixes #3721
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Fixes a problem where the identity service can issue a certificate with a lifetime longer than that of the issuer certificate. This was causing proxies to end up using an invalid TLS certificate. This fix ensures that the lifetime of an issued certificate is not greater than the smallest lifetime among the certs in the issuer's trust chain.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* The `linkerd-cni` chart should set proper annotations/labels for the namespace
When installing through Helm, the `linkerd-cni` chart will (by default)
install itself into the same namespace ("linkerd") where the `linkerd`
chart will be installed afterwards, so it needs to set up the proper
annotations and labels.
* Fix Helm install when disabling init containers
To install linkerd using Helm after having installed linkerd's CNI plugin, one needs to `--set noInitContainer=true`.
But to determine whether to use init containers we weren't evaluating
that value; we were evaluating `Values.proxyInit` instead, which is indeed
null when installing through the CLI but not when installing with Helm. So
init containers were being added despite having passed `--set
noInitContainers=true`.
* Enable mixed configuration of skip-[inbound|outbound]-ports using port numbers and ranges (#3752)
* included tests for generated output given proxy-ignore configuration options
* renamed "validate" method to "parseAndValidate" given mutation
* updated documentation to denote inclusiveness of ranges
* Updates for expansion of ignored inbound and outbound port ranges to be handled by the proxy-init rather than CLI (#3766)
This change maintains the configured ports and ranges as strings rather than unsigned integers, while still providing validation at the command layer.
* Bump versions for proxy-init to v1.3.0
Signed-off-by: Paul Balogh <javaducky@gmail.com>
* Inject preStop hook into the proxy sidecar container to stop it last
This commit adds support for a graceful-shutdown technique that is used
by some Kubernetes administrators while a more prospective configuration
is being discussed in
https://github.com/kubernetes/kubernetes/issues/65502
The problem is that the RollingUpdate strategy does not guarantee that all
traffic will be sent to a new pod _before_ the previous pod is removed.
Kubernetes is internally an event-driven system, and when a pod is being
terminated, several processes can receive the event simultaneously.
If an Ingress Controller gets the event too late, or processes it more
slowly than Kubernetes removes the pod from its Service, user requests
will continue flowing into a black hole.
According [to the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)
> 1. If one of the Pod’s containers has defined a `preStop` hook,
> it is invoked inside of the container. If the `preStop` hook is still
> running after the grace period expires, step 2 is then invoked with
> a small (2 second) extended grace period.
>
> 2. The container is sent the `TERM` signal. Note that not all
> containers in the Pod will receive the `TERM` signal at the same time
> and may each require a preStop hook if the order in which
> they shut down matters.
This commit adds support for the `preStop` hook that can be configured
in three forms:
1. As the command-line argument `--wait-before-exit-seconds` for the
`linkerd inject` command.
2. As the `linkerd2` Helm chart value `Proxy.WaitBeforeExitSeconds`.
3. As the `config.alpha.linkerd.io/wait-before-exit-seconds` annotation.
If configured, it will add the following `preStop` hook to the proxy container
definition:
```yaml
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep {{.Values.Proxy.WaitBeforeExitSeconds}}
```
To get the maximum benefit from this option, the main container should have
its own `preStop` hook with a `sleep` command whose period is smaller than
the one set for the proxy sidecar. And neither of them should exceed the
`terminationGracePeriodSeconds` configured for the entire pod.
An example of a rendered Kubernetes resource where
`.Values.Proxy.WaitBeforeExitSeconds` is equal to `40`:
```yaml
# application container
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 20
# linkerd-proxy container
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 40
terminationGracePeriodSeconds: 160 # for entire pod
```
Fixes #3747
Signed-off-by: Eugene Glotov <kivagant@gmail.com>
* Added checks for cert correctness
* Add warning checks for approaching expiration
* Add unit tests
* Improve unit tests
* Address comments
* Address more comments
* Prevent upgrade from breaking proxies when issuer cert is overwritten (#3821)
* Address more comments
* Add gate to upgrade cmd that checks that all proxies' roots work with the identity issuer that we are updating to
* Address comments
* Enable use of upgrade to modify both roots and issuer at the same time
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
This PR adds support for CronJobs and ReplicaSets to `linkerd inject`, the web
dashboard and CLI. It adds a new Grafana dashboard for each kind of resource.
Closes #3614. Closes #3630. Closes #3584. Closes #3585.
Signed-off-by: Sergio Castaño Arteaga <tegioz@icloud.com>
Signed-off-by: Cintia Sanchez Garcia <cynthiasg@icloud.com>
This PR allows the dashboard to query for a resource's definition in YAML
format, if the boolean `queryForDefinition` in the `ResourceDetail` component is
set to true.
This change to the web API and the dashboard component was made for a future
redesigned dashboard detail page. At present, `queryForDefinition` is set to
false and there is no visible change to the user with this PR.
Signed-off-by: Sergio Castaño Arteaga <tegioz@icloud.com>
Signed-off-by: Cintia Sanchez Garcia <cynthiasg@icloud.com>
* Replaced `uuid` with `uid` from linkerd-config resource
Fixes #3621
Removed the old `uuid` for identifying linkerd installations, and
replaced it with the `uid` property from the `linkerd-config` ConfigMap.
I tested that this `uid` remains the same by updating the config and
also upgrading linkerd, using both the CLI and Helm.
Note that this required granting `linkerd-web` RBAC access to the
`linkerd-config` ConfigMap.
I also added an integration test to verify the stability of the uid.
Fixes #3566
As explained in #3566, as of Go 1.13 there's a strict check that ensures a dependency's timestamp matches its sha (as declared in go.mod). Our smi-sdk dependency had a problem with that which was resolved upstream later on, but more work would be required to upgrade that dependency. In the meantime, a quick pair of replace statements at the bottom of go.mod fixes the issue.
`linkerd check` can now be run from the dashboard in the `/controlplane` view.
Once the check results are received, they are displayed in a modal in a similar
style to the CLI output.
Closes #3613
* Add support for uninject command to uninject namespace configs
* Add relevant unit tests in cli/cmd/uninject_test.go
Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
* rework annotations doc generation from godoc parsing to map[string]string and get rid of unused yaml tags
* move annotations doc function from pkg/k8s to cli/cmd
Signed-off-by: StupidScience <tonysignal@gmail.com>
* Add cmd to inject debug sidecar for l5d components only
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Revert "Add cmd to inject debug sidecar for l5d components only"
This reverts commit 50b8b3577e.
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Stop uninjecting metadata from control plane components
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Ensure inject can be run on control plane components only if --manual is present
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
* Add inject support for namespaces (fixes #3255)
* Add relevant unit tests (including overridden annotations)
Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
* If tap source IP matches many running pods then only show the IP
When an unmeshed source IP matched more than one running pod, tap was
showing the names of all those pods, even though they didn't necessarily
originate the connection. This could be reproduced when using a pod
network add-on such as Calico.
With this change, if a node matches the IP, we return it; otherwise we look for a matching pod. If exactly one running pod matches, we return it; otherwise we return just the IP.
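A sketch of that decision logic (illustrative types and names, not the actual tap code):
```go
package main

import "fmt"

type Node struct{ Name, IP string }
type Pod struct{ Name, IP string }

// resolveSource picks a display label for a tap source IP.
func resolveSource(ip string, nodes []Node, runningPods []Pod) string {
	for _, n := range nodes {
		if n.IP == ip {
			return n.Name // a node owns this IP: unambiguous
		}
	}
	var matches []Pod
	for _, p := range runningPods {
		if p.IP == ip {
			matches = append(matches, p)
		}
	}
	if len(matches) == 1 {
		return matches[0].Name // exactly one running pod matched
	}
	return ip // zero or several candidates: show only the IP
}

func main() {
	nodes := []Node{{Name: "node-a", IP: "10.0.0.1"}}
	pods := []Pod{{Name: "voting-1", IP: "192.168.1.4"}, {Name: "web-1", IP: "192.168.1.4"}}
	fmt.Println(resolveSource("192.168.1.4", nodes, pods)) // ambiguous: prints the IP
}
```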
Fixes #3103
* Add support for --identity-issuer-mode flag to install cmd
* Change flag to be a bool
* Read correct data from identity when external issuer is used
* Add ability for identity service to dynamically reload certs
* Fix failing tests
* Minor refactor
* Load trust anchors from identity issuer secret
* Make identity service actually watch for issuer certs updates
* Add some testing around cmd line identity options validation
* Add tests ensuring that identity service loads issuer
* Take into account external-issuer flag during upgrade + tests
* Fix failing upgrade test
* Address initial review feedback
* Address further review feedback on cli and helm
* Do not persist --identity-external-issuer
* Some improvements to identity service
* Bring back persistence of external issuer flag
* Address more feedback
* Update dockerfiles shas
* Publishing k8s events on issuer certs rotation
* Ensure --ignore-cluster+external issuer is not supported
* Update go-deps shas
* Transition to identity issuer scheme based configuration
* Use k8s consts for secret file names
Signed-off-by: zaharidichev <zaharidichev@gmail.com>
The `linkerd upgrade --from-manifests` command supports reading the
manifest output via `linkerd install`. PR #3167 introduced a tap
APIService object into `linkerd install`, but the manifest-reading code
in fake.go was never updated to support this new object kind.
Update the fake clientset code to support APIService objects.
Fixes #3559
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Add missing package to proxy Dockerfile
* Fix failing 'check' integration test
* Trim whitespaces in certs comparison.
Without this change, the integration test would fail because the trust anchor
stored in the linkerd-config config map generated by the Helm renderer is
stripped of the line breaks. See charts/linkerd2/templates/_config.tpl
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Health check: check if proxies' trust anchors match configuration
If Linkerd is reinstalled or if the trust anchors are modified while
proxies are running on the cluster, they will contain an outdated
`LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS` certificate.
This changeset adds a check to `linkerd check` that looks for any proxies
running on the cluster and compares their trust anchors against the
configured one. On a failure (treated as a warning), `linkerd check`
notifies the user which pods are the offenders (and in which namespace each
one lives), along with a hint to remediate the issue (restarting the pods).
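One way to inspect a running proxy's trust anchors by hand (pod name hypothetical):
```bash
# Pod name is hypothetical; compare the output against the trust anchor
# stored in the linkerd-config ConfigMap
kubectl -n emojivoto get pod voting-6b56f67c85-xyz12 -o jsonpath=\
'{.spec.containers[?(@.name=="linkerd-proxy")].env[?(@.name=="LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS")].value}'
```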
* Add integration tests for proxy certificate check
Fixes #3344
Signed-off-by: Rafael Fernández López <ereslibre@ereslibre.es>
* Add the tracing environment variables to the proxy spec
* Add tracing event
* Remove unnecessary CLI change
* Update log message
* Handle single segment service name
* Use default service account if not provided
The injector doesn't read the defaults from the values.yaml
* Remove references to conf.workload.ownerRef in log messages
This nested field isn't always set.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
This reverts commit edd3b1f6d4.
This is a temporary revert of #3461 while we sort out some details of how this should be configured and how it should interact with configuring a trace collector on the Linkerd proxy. We will reintroduce this change once the configuration plan is straightened out.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #278
Add `linkerd install|upgrade --disable-heartbeat` flag, and have
`linkerd check` check for the heartbeat's SA only if it's enabled.
Also added those flags into the `linkerd upgrade -h` examples.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
If the namespace is controlled by an external tool or can't be installed
with Helm, disable its installation
Fixes #3412
Signed-off-by: Eugene Glotov <kivagant@gmail.com>
The repo depended on an old version of client-go. It also depended on
stern, which itself depended on an old version of client-go, making
client-go upgrade non-trivial.
Update the repo to client-go v12.0.0, and also replace stern with a
fork.
This fork of stern includes the following changes:
- updated to use Go Modules
- updated to use client-go v12.0.0
- fixed log line interleaving:
- https://github.com/wercker/stern/issues/96
- based on:
- 8723308e46
Fixes #3382
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The controller Docker image included 7 Go binaries (destination,
heartbeat, identity, proxy-injector, public-api, sp-validator, tap),
each roughly 35MB, with similar dependencies.
Change each controller binary into subcommands of a single `controller`
binary, decreasing the controller Docker image size from 315MB to 38MB.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Disable heartbeat by default
Signed-off-by: Kevin Taylor <kevtaylor@expedia.com>
* Address review
Signed-off-by: Kevin Taylor <kevtaylor@expedia.com>
* Remove tabs in values
Signed-off-by: Kevin Taylor <kevtaylor@expedia.com>
Fixes #3356
1.16 removes some API groups that were already deprecated. From the k8s blog
post (https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/):
```
- PodSecurityPolicy: will no longer be served from extensions/v1beta1 in
v1.16.
Migrate to the policy/v1beta1 API, available since v1.10. Existing
persisted data can be retrieved/updated via the policy/v1beta1 API.
- DaemonSet, Deployment, StatefulSet, and ReplicaSet: will no longer be
served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16.
Migrate to the apps/v1 API, available since v1.9. Existing persisted
data can be retrieved/updated via the apps/v1 API.
```
Previous PRs had already made this change at the Helm templates level,
but we still needed to do it at the API calls and tests.
The integration tests ran fine on k8s 1.12 and 1.15. They fail on 1.16
because the upgrade integration test tries to install Linkerd 2.5, which is
not compatible with 1.16.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Stop ignoring client-go log entries
Pipe klog output into logrus. Without this, client-go log entries don't
show up, for reasons I don't fully understand.
To enable, `--controller-log-level` must be `debug`.
This was discovered while trying to debug sending events for #3253.
I added an integration test that fails when this piping is not in place.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Have the proxy-injector emit events upon injection/skipping injection
Fixes #3253
Have the proxy-injector emit an event whenever an injection happens, or
when injection is skipped for some reason (that reason is also added to
the proxy-injector logs). The event is associated with the parent workload
(it can't be associated with the pod because, at this point, the pod hasn't
been persisted).
The event recorder was setup at the `webhook/server.go` level and passed
to the proxy-injector's `Inject` function. The sp-validator thus also
has access to the event recorder, but for now it's not using it.
Related changes:
- Refactored `api.GetOwnerKindAndName()` to have it return a more
generic object.
- Refactored `report.Injectable()` to also have it return the reason why
a workload is not injectable.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Rename template-values.go
* Define new constructor of charts.Values type
* Move all Helm values related code to the pkg/charts package
* Bump dependency
* Use '/' in filepath to remain compatible with VFS requirement
* Add unit test to verify Helm YAML output
* Alejandro's feedback
* Add unit test for Helm YAML validation (HA)
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Always use forward-slash when interacting with the VFS
Fixes #3283
Our VFS implementation relies on `net.http.FileSystem` which always
expects `/` regardless of the OS.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
* Check for Namespace level config override annotations
* Add unit tests for namespace level config overrides
* add integration test for namespace level config override
* use different namespace for override tests
* check resource requests for integration tests
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This PR adds `trafficsplit` as a supported resource for the `linkerd stat` command. Users can type `linkerd stat ts` to see the apex and leaf services of their trafficsplits, as well as metrics for those leaf services.
* Delete symlink to old Helm chart
* Update 'install' code to use common Helm template structs
* Remove obsolete TLS assets functions.
These are now handled by Helm functions inside the templates
* Read defaults from values.yaml and values-ha.yaml
* Ensure that webhooks TLS assets are retained during upgrade
* Fix a few bugs in the Helm templates (see bullet points):
* Merge the way the 'install' ha and non-ha options are handled into one function
* Honor the 'NoInitContainer' option in the components templates
* Control plane mTLS will not be disabled if identity context in the
config map is empty. The data plane mTLS will still be automatically disabled
if the context is nil.
* Resolve test failures from rebase with master
* Fix linter issues
* Set service account mount path read-only field
* Add TLS variables of the webhooks and tap to values.yaml
During upgrade, these secrets are preserved to ensure they remain synced
with the CA bundle in the webhook configurations. These Helm variables are used
to override the defaults in the templates.
* Remove obsolete 'chart' folder
* Fix bugs in templates
* Handle missing webhooks and tap TLS assets during upgrade
When upgrading from an older version that doesn't have these secrets, fall back to
letting Helm create them by creating an empty charts.TLS struct.
* Revert the selector labels of webhooks to be compatible with that in 2.4
In 2.4, the proxy injector and profile validator webhooks already have their selector labels defined.
Since these attributes are immutable, the recent change to these selectors introduced by the Helm chart work will cause upgrade to fail.
* Alejandro's feedback
* Siggy's feedback
* Removed redundant unexported custom types
Signed-off-by: Ivan Sim <ivan@buoyant.io>
Now that we inject at the pod level by default, `linkerd uninject` should remove the `linkerd.io/inject: enabled`
annotation. Also added a test for that.
Fixes #3156
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
### Motivation
PR #3167 introduced the tap APIService and migrated `linkerd tap` to use it.
Subsequent PRs (#3186 and #3187) updated `linkerd top` and `linkerd profile
--tap` to use the tap APIService. This PR moves the web's Go server to now also
use the tap APIService instead of the public API. It also ensures an error
banner is shown to the user when unauthorized taps fail via `linkerd top`
command in *Overview* and *Top*, and `linkerd tap` command in *Tap*.
### Details
The majority of these changes are focused around piping through the HTTP error
that occurs and making sure the error banner generated displays the error
message explaining to view the tap RBAC docs.
`httpError` is now public (`HTTPError`) and the error message generated is short
enough to fit in a control frame (explained [here](https://github.com/linkerd/linkerd2/blob/kleimkuhler%2Fweb-tap-apiserver/web/srv/api_handlers.go#L173-L175)).
### Testing
The error we are testing for only occurs when the linkerd-web service account is
not authorized to tap resources. Unfortunately that is not the case on Docker
for Mac (assuming that's what you use locally), so you'll need to test on a
different cluster. I chose a GKE cluster created through the GKE console, not
through cluster-utils, because the latter adds cluster-admin.
Check out the branch locally and run `bin/docker-build` (or `ares-build` if you
have it set up). It should produce a Linkerd build with the version
`git-04e61786`. I have already pushed the dependent components, so you won't
need to run `bin/docker-push git-04e61786`.
Install linkerd on this GKE cluster and try to run `tap` or `top` commands via
the web. You should see the following errors:
### Tap
*(screenshot: error banner shown for an unauthorized tap via the web)*
### Top
*(screenshot: error banner shown for an unauthorized top via the web)*
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Refactor proxy injection to use Helm charts
Fixes #3128
A new chart, `/charts/patch`, was created; it generates the JSON patch
payload that is returned to the k8s API when doing the injection through
the proxy injector, and it's also leveraged by the `linkerd inject
--manual` CLI.
The VFS was used by `linkerd install` to access the old chart under
`/chart`. Now the proxy injection also uses the Helm charts to generate
the JSON patch (see above) so we've moved the VFS from `cli/static` to a
new common place under `/pkg/charts/static`, and the new root for the VFS is
now `/charts`.
`linkerd install` hasn't yet migrated to use the new charts (that'll
happen in #3127), so the only change in that regard was the creation of
`/charts/chart` which is a symlink pointing to `/chart` that
`install.go` now uses, so that the VFS contains both the old and new
charts, as a temporary measure.
You can see that `/bin/Dockerfile-bin`, `/controller/Dockerfile` and
`/bin/build-cli-bin` do now `go generate` pointing to the new location
(and the `go generate` annotation was moved from `/cli/main.go` to
`pkg/charts/static/templates.go`).
The symlink trick doesn't work when building the binaries through
Docker, so `/bin/Dockerfile-bin` replaces the symlink with an actual
copy of `/chart`.
Also note that in `/controller/Dockerfile` we now need to include the
`prod` tag in `go install` like we do in `/bin/Dockerfile-bin` so that
the proxy injector does use the VFS instead of the local file system.
- The common logic to parse a chart has been moved from `install.go` to
`/pkg/charts/util.go`.
- The special ENV var in the proxy for "outbound router capacity" that
only applies to the Prometheus pod is now handled directly in the proxy
partial and all the associated go code could be removed.
- The `patch.go` lib for generating the JSON patch in go along
with its tests `patch_test.go` are no longer needed.
- Lots of functions in `/pkg/inject/inject.go` got removed/simplified
with their logic being moved into the charts themselves. As a
consequence lots of things in `inject_test.go` became irrelevant.
- Moved `template-values.go` from `/pkg/inject` to `pkg/charts` as that
contains the go structs representation of the chart variables that
will be leveraged in #3127.
Don't forget to run `/bin/helm.sh` whenever you make changes to charts
;-)
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Continuation of #2950.
I decided to check for the `clusterDomain` in the config map in the web server main for the same reasons as pointed out here https://github.com/linkerd/linkerd2/pull/3113#discussion_r306935817
It decouples the server implementations from the config.
Signed-off-by: Armin Buerkle <armin.buerkle@alfatraining.de>
PR #3167 introduced a Tap APIService, and migrated linkerd tap to it.
This change migrates `linkerd profile --tap` to the new Tap APIService.
Depends on #3186. Fixes #3169.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The Tap Service enabled tapping of any meshed pod, regardless of user
privilege.
This change introduces a new Tap APIService. Kubernetes provides
authentication and authorization of Tap requests, and then forwards
requests to a new Tap APIServer, which implements a Kubernetes
aggregated APIServer. The Tap APIServer authenticates the client TLS
from Kubernetes, and authorizes the user via a SubjectAccessReview.
This change also modifies the `linkerd tap` command to make requests
against the new APIService.
The Tap APIService implements these Kubernetes-style endpoints:
```
POST /apis/tap.linkerd.io/v1alpha1/watch/namespaces/:ns/tap
POST /apis/tap.linkerd.io/v1alpha1/watch/namespaces/:ns/:res/:name/tap
GET  /apis
GET  /apis/tap.linkerd.io
GET  /apis/tap.linkerd.io/v1alpha1
GET  /healthz
GET  /healthz/log
GET  /healthz/ping
GET  /metrics
GET  /openapi/v2
GET  /version
```
Users authorize to the new `tap.linkerd.io/v1alpha1` via RBAC. Only the
`watch` verb is supported. Access is also available via subresources
such as `deployments/tap` and `pods/tap`.
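An illustrative `ClusterRole` granting such access (a sketch assembled from the group, verb, and subresources described above):
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tap-deployments-and-pods
rules:
- apiGroups: ["tap.linkerd.io"]
  resources: ["deployments/tap", "pods/tap"]
  verbs: ["watch"]
```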
This change introduces the following resources into the default Linkerd
install:
- Global
- APIService/v1alpha1.tap.linkerd.io
- ClusterRoleBinding/linkerd-linkerd-tap-auth-delegator
- `linkerd` namespace:
- Secret/linkerd-tap-tls
- `kube-system` namespace:
- RoleBinding/linkerd-linkerd-tap-auth-reader
Tasks not covered by this PR:
- `linkerd top`
- `linkerd dashboard`
- `linkerd profile --tap`
- removal of the unauthenticated tap controller
Fixes #2725, #3162, #3172
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Similar to `kubectl --as`, a global flag across all linkerd subcommands
which sets an `ImpersonationConfig` in the Kubernetes API config.
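Usage sketch (user and service-account names hypothetical):
```bash
# Impersonate a user to verify their tap grants
linkerd tap deploy/voting --namespace emojivoto --as alice

# Impersonating a service account uses the usual form
linkerd check --as system:serviceaccount:default:my-sa
```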
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
### Summary
In order for Pods' tap servers to start authorizing tap clients, the tap server
must be able to check client names against the expected tap service name.
This change injects the `LINKERD2_PROXY_TAP_SVC_NAME` into proxy PodSpecs.
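Illustratively, the injected entry in a proxy container spec would look something like this (the exact identity value shown is an assumption):
```yaml
env:
- name: LINKERD2_PROXY_TAP_SVC_NAME
  value: linkerd-tap.linkerd.serviceaccount.identity.linkerd.cluster.local
```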
### Details
The tap servers on the individual resources being tapped should be able to
verify that the client is the tap service. The `LINKERD2_PROXY_TAP_SVC_NAME` is
now injected as an environment variable in the proxies so that it can check this
value against the client name of the TLS connection. Currently, this environment
variable goes unused. There is an open PR (linkerd2-proxy#290) to use this variable in
the proxy, but this is *not* dependent on that merging first.
Note: The variable is not injected if tap is disabled.
### Testing
Test output has been updated with the newly injected environment variable.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
PR #3056 introduced a cluster heartbeat cronjob to the Linkerd
installation. This implies the user installing Linkerd requires the
privileges to create CronJobs.
Update `linkerd check` to validate the user has privileges necessary to
create CronJobs.
Fixes #3057
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
`linkerd check`, the web dashboard, and Grafana all perform version
checks to validate Linkerd is up to date. Users commonly execute these
codepaths only rarely, which makes it difficult to identify what versions
of Linkerd are currently in use and what environments it is being run in,
information that helps prioritize testing and backports.
Introduce a `heartbeat` CronJob to the default Linkerd install. The
cronjob executes every 24 hours, starting from 5 minutes after
`linkerd install` is run.
Example check URL:
```
https://versioncheck.linkerd.io/version.json?
install-time=1562761177&
k8s-version=v1.15.0&
meshed-pods=8&
rps=3&
source=heartbeat&
uuid=cc4bb700-3314-426a-9f0f-ec588b9df020&
version=git-b97ee9f7
```
Fixes #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The openAPIV3Schema validation in the ServiceProfiles CRD is very limited in what it can validate and is obviated by more sophisticated validation done by the validating admission controller. Therefore, we would like to remove the openAPIV3Schema validation to reduce the size and complexity of the CRD object.
To do so, we must also bump the version of the ServiceProfile custom resource from v1alpha1 to v1alpha2. This ensures that when the controller is upgraded, it will attempt to watch the v1alpha2 resource. If it cannot (because, for example, the controller pod started before the ServiceProfile CRD was updated and therefore the v1alpha2 version does not exist) then it will go into a crash loop backoff until it can. This essentially means that the controller will wait for the CRD to be upgraded to include v1alpha2 before it will start.
Bumping the version is necessary because if we did not, it would be possible for the controller to start before the CRD is updated (removing the validation). In this case, when the CRD is edited, the controller will lose its list watch on ServiceProfiles and will stop getting updates.
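A sketch of the version bump under the `apiextensions.k8s.io/v1beta1` CRD schema in use at the time (abridged; whether v1alpha1 stays served is an assumption here):
```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: serviceprofiles.linkerd.io
spec:
  group: linkerd.io
  scope: Namespaced
  names:
    kind: ServiceProfile
    plural: serviceprofiles
  versions:
  - name: v1alpha2   # new storage version; no openAPIV3Schema validation
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false
```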
Signed-off-by: Alex Leong <alex@buoyant.io>
When waiting for controller pods to be created or become ready, `linkerd check` doesn't offer any hints as to whether there has been an error (such as an ImagePullBackoff).
We add pod status to the output to make this more immediately obvious.
Fixes #2877
Signed-off-by: Alex Leong <alex@buoyant.io>
During operations with `linkerd stat`, the actual pod status is sometimes
unclear.
This commit introduces a method in the `k8s` package that gets the pod status,
based on [`kubectl` logic](33a3e325f7/pkg/printers/internalversion/printers.go (L558-L640)),
to expose a `STATUS` column for pods. It also changes the stat command
in the `cli` package, adding the column when the resource type is a Pod.
Fixes #1967
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
`linkerd check --pre` validates that PSPs provide `NET_ADMIN`, but was
not validating `NET_RAW`, despite `NET_RAW` being required by Linkerd's
proxy-init container since #2969.
Introduce a `has NET_RAW capability` check to `linkerd check --pre`.
Fixes #3054
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The existing `linkerd install` error message for existing resources was
shared with `linkerd check`. Given the different contexts, the messaging
made more sense for `linkerd check` than for `linkerd install`.
Modify the error messaging for `linkerd install` to print a bare list
of existing resources, and provide instructions for proceeding.
For example:
```bash
$ linkerd install
Unable to install the Linkerd control plane. It appears that there is an existing installation:
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity
If you are sure you'd like to have a fresh install, remove these resources with:
linkerd install --ignore-cluster | kubectl delete -f -
Otherwise, you can use the --ignore-cluster flag to overwrite the existing global resources.
```
Fixes #3045
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
PR #2603 modified the web process to read the UUID from the
`linkerd-config` ConfigMap rather than from a command line flag. The
`linkerd check` command relied on that command line flag to retrieve the
UUID as part of its version check.
Modify `linkerd check` to correctly retrieve the UUID from
`linkerd-config`. Also refactor `linkerd-config` retrieval and parsing
code to be shared between healthcheck, install, and upgrade.
Relates to #2961
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The `linkerd check` for healthy ReplicaSets had a generic
`control plane components ready` description, and a hint anchor to
`l5d-existence-psp`. While a ReplicaSet failure could definitely occur
due to psp, that hintAnchor was already in use by the "control plane
PodSecurityPolicies exist" check.
Rename the `control plane components ready` check to
`control plane replica sets are ready`, and the hintAnchor from
`l5d-existence-psp` to `l5d-existence-replicasets`.
Relates to https://github.com/linkerd/website/issues/372.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
Linkerd's CLI flags all match 1:1 with their `config.linkerd.io/*`
annotation counterparts, except `--enable-debug-sidecar`, which
corresponded to `config.linkerd.io/debug`. Additionally, the Linkerd
docs assume this 1:1 mapping.
Rename the `config.linkerd.io/debug` annotation to
`config.linkerd.io/enable-debug-sidecar`.
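After the rename, the annotation matches the flag 1:1:
```yaml
metadata:
  annotations:
    config.linkerd.io/enable-debug-sidecar: "true"
```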
Relates to https://github.com/linkerd/website/issues/381
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This change implements the DstOverrides feature of the destination profile API (aka traffic splitting).
We add a TrafficSplitWatcher to the destination service which watches for TrafficSplit resources and notifies subscribers about TrafficSplits for services that they are subscribed to. A new TrafficSplitAdaptor then merges the TrafficSplit logic into the DstOverrides field of the destination profile.
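For context, a TrafficSplit that the watcher would surface as `DstOverrides` might look like this (a sketch following the SMI `split.smi-spec.io/v1alpha1` shape; service names and weights invented):
```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: voting-split
  namespace: emojivoto
spec:
  service: voting    # apex service
  backends:          # leaf services and their weights
  - service: voting-v1
    weight: 900m
  - service: voting-v2
    weight: 100m
```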
Signed-off-by: Alex Leong <alex@buoyant.io>
* Introduce new checks to determine existence of global resources and the
'linkerd-config' config map.
* Update pre-check to check for existence of global resources
This ensures that multiple control planes can't be installed into
different namespaces.
* Update integration test clean-up script to delete psp and crd
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Simplify port-forwarding code
Simplifies the establishment of a port-forwarding by moving the common
logic into `PortForward.Init()`
Stemmed from this
[comment](https://github.com/linkerd/linkerd2/pull/2937#discussion_r295078800)
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
Problem:
For logrus logging, when a TTY is detected the default format shows elapsed
timestamps. This produced unreadable timestamps when running the Go
processes locally.
Solution:
Have logrus print full timestamps, instead of the time elapsed since start,
in the controller Go processes.
* Set logrus timestamps on public-api and web startup
* Move log level setting to flags.go
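The gist of the change, as a sketch using `github.com/sirupsen/logrus`:
```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// Force full timestamps even when a TTY is attached, instead of
	// logrus' default elapsed-seconds output.
	log.SetFormatter(&log.TextFormatter{FullTimestamp: true})
	log.Info("public-api starting")
}
```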
`linkerd check` validates whether PSPs exist, and whether the caller has the
`NET_ADMIN` capability. This check previously failed if `NET_ADMIN` was not
found, even when the PSP admission controller was not running. Relatedly,
`linkerd install` now includes a PSP, so `linkerd check` should also
validate that the caller can create PSPs.
Modify the `NET_ADMIN` check to warn, but not fail, if PSPs are found but
the caller does not have `NET_ADMIN`. Update the warning message to mention
that this is only a problem if the PSP admission controller is running (and
will only be a problem during injection, since #2920 handles control plane
installation by adding its own PSP).
Also introduce a check to validate that the caller can create PSPs.
Fixes #2884, #2849
Signed-off-by: Andrew Seigner <siggy@buoyant.io>