* cli: add helm customization flags to core install
Fixes #5506
This branch adds Helm-style customization through the
`set`, `set-string`, `values`, and `set-files` flags for
the `linkerd install` cmd, along with unit tests.
For this to work, the Helm v3 engine's rendering helpers
had to be used instead of our own wrapper type.
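A minimal sketch of the intended usage, assuming the flags mirror Helm's semantics (the value path below is illustrative, not taken from this branch):
```bash
# Override a single chart value, Helm-style (value path illustrative)
linkerd install --set global.proxy.logLevel=debug | kubectl apply -f -

# Layer a full values file on top of the chart defaults
linkerd install --values my-overrides.yaml | kubectl apply -f -
```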
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* viz: move some components into linkerd-viz
This branch moves the grafana, prometheus, web, and tap components
into a new viz chart, following the same extension model that
multi-cluster and jaeger follow.
The components in viz are not injected at install time; instead they
go through the injector. `viz install` does not expose any
CLI flags to customize the install directly, but instead follows the Helm
way of customization via flags such as
`set`, `set-string`, `values`, `set-files`.
**Changes Include**
- Move `grafana`, `prometheus`, `web`, `tap` templates into viz extension.
- Remove all add-on related charts, logic, and tests w.r.t. the CLI & Helm.
- Clean up `linkerd2/values.go` & `linkerd2/values.yaml` to not contain
fields related to viz components.
- Update `linkerd check` Healthchecks to not check for viz components.
- Create a new top level `viz` directory with CLI logic and Helm charts.
- Clean up fields in `viz/Values.yaml` to follow the `<component>.<property>`
model, e.g. `prometheus.resources`, `dashboard.image.tag`, etc., so that it is
consistent everywhere.
**Testing**
```bash
# Install the core Linkerd control plane
./bin/linkerd install | kubectl apply -f -
# Wait for the proxy-injector to be ready
# Install the Viz Extension
./bin/linkerd viz install | kubectl apply -f -
# Customized Install
./bin/linkerd viz install --set prometheus.enabled=false | kubectl apply -f -
```
What is not included in this PR:
- Moving the Controller from the core install into the viz extension.
- Simplifying and refactoring the core chart, i.e. removing `.global`, etc.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #5385 (second bug in there)
Added the missing label `linkerd.io/control-plane-ns=linkerd` that all
control plane resources must have; it is the label passed to `kubectl apply
--prune`.
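For context, this is roughly how the label feeds into pruning during an upgrade (a sketch; the selector must match the label above):
```bash
# Prune only resources carrying the control plane label; anything the new
# manifest no longer renders gets deleted, everything else is left alone.
linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -
```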
Fixes #5385
## The problems
- `linkerd install --ha` isn't honoring flags
- `linkerd upgrade --ha` is overriding existing configs silently or failing with an error
- *Upgrading HA instances from before 2.9 to version 2.9.1 results in configs being overridden silently, or the upgrade fails with an error*
## The cause
The change in #5358 attempted to fix `linkerd install --ha` only applying some of the `values-ha.yaml` defaults, by calling `charts.NewValues(true)` and merging that with the values built from `values.yaml` overridden by the flags. It turns out the `charts.NewValues()` implementation was itself merging against `values.yaml`, and as a result any flag was getting overridden by its default.
This also happened when doing `linkerd upgrade --ha` on an existing instance, which could silently override settings, or fail loudly, for example when upgrading a setup that has an external issuer (in that case the issuer cert can't be read during the upgrade and an error occurs, as described in #5385).
Finally, doing `linkerd upgrade` (no `--ha` flag) on an HA install from before 2.9 results in configs getting overridden as well (silently or with an error), because in order to generate the `linkerd-config-overrides` secret the original install flags are retrieved from `linkerd-config` via the `loadStoredValuesLegacy()` function, which then effectively ends up performing a `linkerd upgrade` with all the flags used for `linkerd install` and falls into the same trap as above.
## The fix
In `values.go` the faulty merging logic is no longer used, so now `NewValues()` only returns the default values from `values.yaml` and no longer requires an argument. It calls `readDefaults()`, which now only returns the appropriate values depending on whether we're on HA or not.
There's a new function `MergeHAValues()` that merges `values-ha.yaml` into the current values (it doesn't look into `values.yaml` anymore), which is only used when processing the `--ha` flag in `options.go`.
## How to test
To replicate the issue, try setting a custom value and check that it's not applied:
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=info
```
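With the fix in place the override should stick; expected output (inferred, not captured from the PR):
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=debug
```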
## Followup
This wasn't caught because we don't have HA integration tests. Now that our test infra is based on k3d, it should be easy to add such a test using a cluster with multiple nodes. Either that, or issuing `linkerd install --ha` with additional configs and comparing against a golden file.
* upgrades: make webhooks restart if TLS creds are updated
Fixes #5231
Currently, we do not re-use the TLS certs during upgrades, which
means that the secrets are updated while the webhooks are still
paired with the older ones, causing webhook requests to fail.
This can be solved by restarting the webhooks whenever the certs
change. We do this by storing a hash of the `*-rbac` file, which
contains the secrets, in the pod templates, so they change whenever
the certs are updated, forcing a restart.
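One way to observe the effect (the annotation key itself is an assumption, not taken from this change): the hash lives on the webhook's pod template, so rotating the creds changes the template and triggers a rollout.
```bash
# Inspect the pod-template annotations on the proxy injector; the stored
# cert hash (annotation name assumed) should change after the TLS creds
# are updated, which is what forces the restart.
kubectl -n linkerd get deploy linkerd-proxy-injector \
  -o jsonpath='{.spec.template.metadata.annotations}'
```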
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* extension: Add new jaeger binary
This branch adds a new jaeger binary project in the jaeger directory.
This follows the same logic as `linkerd install`, but as the
`linkerd install` VFS logic expects charts to be present in the `/charts`
directory, this command gets its own static pkg to generate its own
VFS for its chart.
This covers only the install part of the command.
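Expected usage mirrors the core install (a sketch):
```bash
# Render the jaeger extension manifests and apply them to the cluster
linkerd jaeger install | kubectl apply -f -
```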
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #4874
This branch upgrades the Helm SDK from v2 to v3 *without any functional
changes*, just replacing types with the newer APIs.
This should not affect our current support for Helm v2, as we did not
change any of the underlying templates (which work with Helm v2). This
works because we did not use any of the APIs that read the chart
metadata (the only ones that changed from v2 to v3) and currently
load files manually and pass them into the SDK.
This PR should provide a great starting point for adopting more of the
newer Helm v3 APIs, including for the upgrade workflow, thus allowing
us to make the Linkerd CLI simpler.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
CLI crashes if linkerd-config contains unexpected values.
Add a safe accessor that initializes an empty Global on the first
access. Refactor all accesses to use the newly introduced accessor using
gopls.
Add a test for linkerd-config data without Global.
Fixes #5215
Co-authored-by: Itai Schwartz <yitai27@gmail.com>
Signed-off-by: Hod Bin Noon <bin.noon.hod@gmail.com>
This is a major refactor of the install/upgrade code which removes the config protobuf and replaces it with a config overrides secret which stores overrides to the values struct. Further background on this change can be found here: https://github.com/linkerd/linkerd2/discussions/4966
Note: as-is this PR breaks injection. There is work to move injection onto a Values-based config which must land before this can be merged.
A summary of the high level changes:
* the install, global, and proxy fields of linkerd-config ConfigMap are no longer populated
* the CLI install flow now follows these simple steps:
* load default Values from the chart
* update the Values based on the provided CLI flags
* render the chart with these values
* also render a Secret/linkerd-config-overrides which describes the values which have been changed from their defaults
* the CLI upgrade flow now follows these simple steps:
* load the default Values from the chart
* if Secret/linkerd-config-overrides exists, apply the overrides onto the values
* otherwise load the legacy ConfigMap/linkerd-config and use it to update the values
* further update the values based on the provided CLI flags
* render the chart and the Secret/linkerd-config-overrides as above
* Helm install and upgrade is unchanged
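To inspect the overrides a CLI install produced, something like the following should work (the secret name is from this PR; the data key inside it is an assumption):
```bash
# Dump the persisted overrides; the key name inside .data is assumed.
kubectl -n linkerd get secret linkerd-config-overrides \
  -o jsonpath='{.data.linkerd-config-overrides}' | base64 -d
```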
Signed-off-by: Alex Leong <alex@buoyant.io>
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, part of our move
to remove the protobuf.
This does not touch any of the flags or install-related code.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
This branch updates the check functionality to read
the new `linkerd-config.values`, which contains the full
Values struct reflecting the current state of the Linkerd
installation (being added in #5020).
This is done by adding a new `FetchCurrentConfiguration`
which first tries to get the latest; if unavailable, it falls back
to the older `linkerd-config` protobuf format.
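A rough way to eyeball what the healthchecks now read first (key name inferred from `linkerd-config.values` above):
```bash
# Print the full Values struct persisted by the install (key name inferred)
kubectl -n linkerd get configmap linkerd-config -o jsonpath='{.data.values}'
```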
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This PR adds a new secret to the output of `linkerd install` called `linkerd-config-overrides`. This is the first step towards simplifying the configuration of the linkerd install and upgrade flow through the CLI. This secret contains the subset of the values.yaml which have been overridden. In other words, the subset of values which differ from their default values. The idea is that this will give us a simpler way to produce the `linkerd upgrade` output while still persisting options set during install. This will eventually replace the `linkerd-config` configmap entirely.
This PR only adds and populates the new secret. The secret is not yet read or used anywhere. Subsequent PRs will update individual control plane components to accept their configuration through flags and will update the `linkerd upgrade` flow to use this secret instead of the `linkerd-config` configmap.
This secret is only generated by the CLI and is not present or required when installing or upgrading with Helm.
Here are sample contents of the secret, base64 decoded. Note that the identity TLS context is saved as an override so that it can be persisted across updates. Since these fields contain private key material, this object must be a secret. This secret is only used for upgrades and thus only the CLI needs to be able to read it. We will not create any RBAC bindings granting service accounts access to this secret.
```yaml
global:
  identityTrustAnchorsPEM: |
    -----BEGIN CERTIFICATE-----
    MIIBhDCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0
    eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjAwODI1MjMzMTU3WhcNMjEwODI1
    MjMzMjE3WjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j
    YWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQ0e7IPBlVZ03TL8UVlODllbh8b
    2pcM5mbtSGgpX9z0l3n5M70oHn715xu2szh63oBjPl2ZfOA5Bd43cJIksONQo0Iw
    QDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
    MA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAI7Sy8P+3TYCJBlK
    pIJSZD4lGTUyXPD4Chl/FwWdFfvyAiEA6AgCPbNCx1dOZ8RpjsN2icMRA8vwPtTx
    oSfEG/rBb68=
    -----END CERTIFICATE-----
heartbeatSchedule: '42 23 * * * '
identity:
  issuer:
    crtExpiry: "2021-08-25T23:32:17Z"
    tls:
      crtPEM: |
        -----BEGIN CERTIFICATE-----
        MIIBhDCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0
        eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjAwODI1MjMzMTU3WhcNMjEwODI1
        MjMzMjE3WjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j
        YWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQ0e7IPBlVZ03TL8UVlODllbh8b
        2pcM5mbtSGgpX9z0l3n5M70oHn715xu2szh63oBjPl2ZfOA5Bd43cJIksONQo0Iw
        QDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
        MA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAI7Sy8P+3TYCJBlK
        pIJSZD4lGTUyXPD4Chl/FwWdFfvyAiEA6AgCPbNCx1dOZ8RpjsN2icMRA8vwPtTx
        oSfEG/rBb68=
        -----END CERTIFICATE-----
      keyPEM: |
        -----BEGIN EC PRIVATE KEY-----
        MHcCAQEEIJaqjoDnqkKSsTqJMGeo3/1VMfJTBsMEuMWYzdJVxIhToAoGCCqGSM49
        AwEHoUQDQgAENHuyDwZVWdN0y/FFZTg5ZW4fG9qXDOZm7UhoKV/c9Jd5+TO9KB5+
        9ecbtrM4et6AYz5dmXzgOQXeN3CSJLDjUA==
        -----END EC PRIVATE KEY-----
```
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #4191 #4993
This bumps Kubernetes client-go to the latest v0.19.2 (we had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5.
This consists of the following changes:
- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use context.Context as the first argument, in all code paths that touch the k8s client-go interface
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to `cloud_integration.yml` and `release.yml`, there was a change to the upgrade-stable integration test so that we do `linkerd upgrade --addon-overwrite` to reset the add-on settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in `linkerd-config-addons`. This will need to be mentioned in the 2.9 upgrade notes.
Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
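The replacement itself is mechanical; a sketch of the kind of one-liner used:
```bash
# Rewrite every occurrence of the old registry in-place (GNU sed shown).
grep -rl 'gcr.io/linkerd-io' . | xargs sed -i 's|gcr\.io/linkerd-io|ghcr.io/linkerd|g'
```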
Fixes #4790
This PR removes both the SMI-Metrics templates and the
experimental sub-commands. It also removes the `smi-metrics` pkg,
as there is no direct use of it without the commands.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
## What/How
@adleong pointed out in #4780 that when enabling slices during an upgrade, the new value does not persist in the `linkerd-config` ConfigMap. I took a closer look and it seems that we were never overwriting the values in case they were different.
* To fix this, I added an if block when validating and building the upgrade options -- if the current flag value differs from what we have in the ConfigMap, then change the ConfigMap value.
* When doing so, I made sure to check that if the cluster does not support `EndpointSlices` yet the flag is set to true, we will error out. This is done similarly (mostly copy & paste) to what's in the install part.
* Additionally, I have noticed that the helm ConfigMap template stored the flag value under `enableEndpointSlices` field name. I assume this was not changed in the initial PR to reflect the changes made in the protocol buffer. The API (and thus the CLI) uses the field name `endpointSliceEnabled` instead. I have changed the config template so that helm installations will use the same field, which can then be used in the destination service or other components that may implement slice support in the future.
Signed-off-by: Matei David <matei.david.35@gmail.com>
The upgrade tests were failing due to hardcoded certificates which had expired. Additionally, these tests contained large swaths of yaml that made it very difficult to understand the semantics of each test case and even more difficult to maintain.
We greatly improve the readability and maintainability of these tests by using a slightly different approach. Each test follows this basic structure:
* Render an install manifest
* Initialize a fake k8s client with the install manifest (and sometimes additional manifests)
* Render an upgrade manifest
* Parse the manifests as yaml tree structures
* Perform a structured diff on the yaml tree structures and look for expected and unexpected differences
The install manifests are generated dynamically using the regular install flow. This means that we no longer need large sections of hardcoded yaml in the tests themselves. Additionally, we now assess the output by doing a structured diff against the install manifest. This means that we no longer need golden files with explicit expected output.
All test cases were preserved except for the following:
* Any test cases related to multiphase install (config/control plane) were not replicated. This flow doesn't follow the same pattern as the tests above because the install and upgrade manifests are not expected to be the same or similar. I also felt that these tests were lower priority because the multiphase install/upgrade feature does not seem to be very popular and is a potential candidate for deprecation.
* Any tests involving upgrading from a very old config were not replicated. The code to generate these old style configs is no longer present in the codebase so in order to test this case, we would need to resort to hardcoded install manifests. These tests also seemed low priority to me because Linkerd versions that used the old config are now over 1 year old so it may no longer be critical that we support upgrading from them. We generally recommend that users upgrading from an old version of Linkerd do so by upgrading through each major version rather than directly to the latest.
Signed-off-by: Alex Leong <alex@buoyant.io>
This PR moves default values into add-on-specific values.yaml files, thus
allowing us to update default values, as they would no longer be present in
the linkerd-config-addons cm.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* support overriding inbound and outbound connect timeouts.
* add validation on user provided TCP connect timeouts
* convert valid time values into ms
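Assuming the overrides are exposed through Linkerd's usual `config.linkerd.io/*` annotations, usage would look like:
```bash
# Override the proxy's TCP connect timeouts on a workload; values must be
# valid durations, which get converted to ms (annotation names assumed).
kubectl annotate deploy/myapp \
  config.linkerd.io/proxy-outbound-connect-timeout=1000ms \
  config.linkerd.io/proxy-inbound-connect-timeout=100ms
```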
Signed-off-by: Matt Miller <mamiller@rosettastone.com>
EndpointSlices have been made opt-in due to their experimental nature. This PR
introduces a new install flag 'enableEndpointSlices' that allows adopters to
specify, in their CLI install or Helm install step, whether they would like to
use EndpointSlices as a resource in the destination service instead of the
Endpoints k8s resource.
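Usage sketch (the CLI flag spelling below is assumed from the value name; the Helm value is as introduced here):
```bash
# CLI install (flag spelling assumed)
linkerd install --enable-endpoint-slices | kubectl apply -f -

# Helm install
helm install linkerd2 linkerd/linkerd2 --set enableEndpointSlices=true
```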
Signed-off-by: Matei David <matei.david.35@gmail.com>
This moves Prometheus into an add-on, thus making it optional but enabled by default. This also makes `linkerd-prometheus` more configurable and allows it to have its own life-cycle for upgrades, configuration, etc.
This work will be followed by documentation that helps users configure an existing Prometheus to work with Linkerd.
**Changes Include:**
- moving the Prometheus manifests into a separate chart at `charts/add-ons/prometheus`, and adding it as a dependency of `linkerd2`
- implementing the `addOn` interface to support the same via the CLI
- including the configuration in `linkerd-config-addons`
**User Facing Changes:**
The default install experience does not change much, but users who have already configured Prometheus differently will need to apply the same settings using the new configuration fields present in the chart README.
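For example, a user wanting to run their own Prometheus might disable the add-on through an add-on config file (field names per the chart README; this sketch assumes the `--addon-config` flag described later in this log):
```bash
# addons.yaml — disable the bundled Prometheus add-on
cat > addons.yaml <<'EOF'
prometheus:
  enabled: false
EOF
linkerd install --addon-config addons.yaml | kubectl apply -f -
```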
* feat: add log format annotation and helm value
JSON log formatting was added via https://github.com/linkerd/linkerd2-proxy/pull/500,
but wiring the option through as an annotation/Helm value is still
necessary.
This PR adds the annotation and helm value to configure log format.
Closes #2491
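Usage sketch (the annotation is assumed to follow the existing proxy-config convention):
```bash
# Switch a workload's proxy to JSON-formatted logs via annotation
kubectl annotate deploy/myapp config.linkerd.io/proxy-log-format=json
```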
Signed-off-by: Naseem <naseem@transit.app>
In #4585 we are observing an issue where a loop is encountered when using nginx ingress. The problem is that the outbound proxy does a dst lookup on the IP address which happens to be the very same address the ingress is listening on.
In order to avoid situations like that, this PR introduces a way to modify the set of networks for which the proxy does IP-based discovery. The change introduces a Helm value `.Values.global.proxy.destinationGetNetworks` that can be used to modify this behavior. There are two ways a user can affect this setting:
- setting the `destinationGetNetworks` field in values during a Helm install, which changes the default on all injected pods
- using the annotation `config.linkerd.io/proxy-destination-get-networks` on injected workloads to override this value
Note that this setting cannot be tweaked through the `install` or `inject` commands.
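Both override paths, roughly (the CIDR lists are illustrative):
```bash
# Cluster-wide default at install time, via Helm (commas escaped for --set):
helm install linkerd2 linkerd/linkerd2 \
  --set global.proxy.destinationGetNetworks="10.0.0.0/8\,172.16.0.0/12"

# Per-workload override via annotation:
kubectl annotate deploy/myapp \
  config.linkerd.io/proxy-destination-get-networks="10.0.0.0/8"
```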
Fix: #4585
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Certain install flags are intended to help with Linkerd development and generally are not useful (and are potentially confusing) to users.
We hide these flags in release (edge or stable) builds of the CLI but show them in all other builds. The list of affected flags is:
* control-plane-version
* proxy-image
* proxy-version
* image-pull-policy
* init-image
* init-image-version
Signed-off-by: Alex Leong <alex@buoyant.io>
* Support Multi-stage install with Add-Ons
* add upgrade tests for add-ons
* add multi stage upgrade unit tests
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* use downward API to mount labels to the proxy container as a volume
* add namespace as a label to the pod
* add a trace inject test
* add downwardAPI for controlPlaneTracing
* add controlPlaneTracing condition to volumeMounts
* update add-ons to have workload-ns
* add workload-ns label to control-plane components
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This change adds a `--smi-metrics` install flag which controls if the SMI-metrics controller and associated RBAC and APIService resources are installed. The flag defaults to false and is hidden.
We plan to remove this flag or default it to true if and when the SMI-Metrics integration graduates from experimental.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds the SMI metrics API to the Linkerd install flow. This installs the SMI metrics controller deployment, the SMI metrics APIService object, and supporting RBAC and config resources.
This is the first step toward having Linkerd consume the SMI metrics API in the CLI and web dashboard.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Moves common templates needed by add-ons into partials
As add-ons re-use the partials helm chart, all the templates needed by multiple charts should be present in partials
This commit also updates the helm tests
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* add tracing add-on helm chart
The tracing sub-chart includes the OpenCensus and Jaeger components as a sub-chart which can be enabled as needed
* Updated Install path to also install add-ons
This includes a new interface for add-ons to implement, with an example tracing implementation
* Updates Linkerd install path to also install add-ons
Changes include:
- Adds an optional Linkerd Values configmap which stores add-on configuration when add-ons are present.
- Updates Linkerd install path to check for add-ons and render their sub-charts.
- Adds an install option called `config`, which is used to pass configuration for add-ons.
- Uses a fork of mergo to overwrite default Values with the Values struct generated from the config.
* Updates the upgrade path for add-ons.
The upgrade path now checks for the linkerd-values cm and, if present, overwrites the default values with it.
It then checks the config option for any further overwrites.
* Refactor linkerd-values and re-update tests
also adds relevant nil checks
* Refactor code to fix linting issues
* Fixes an error with linkerd-config global values
Also refactors the linkerd-values cm to work the same with helm
* Fix a nil pointer issue for tests
* Updated Tracing add-on chart meta-data
Also introduced a defaultGetFiles method for add-ons
* Add add-on/charts to gitignore
* refactor gitignore for chart deps
* Moves sub-charts to /charts directly
* Refactor linkerd values cm
* Add comment in linkerd-values
* remove extra controlplanetracing flag
* Support staged deployment for add-ons, along with tests
* linting fix
* update tracing rbac
* Removes the need for add-on Interface
- Uses helm loading capabilities to get info about add-ons
- Uses reflection to not have to unnecessarily add checks for each add-on type
* disable tracing flag
* Remove dep on forked mergo
- Re-use merge from helm
* Re-use helm's merge
* Override the chartDir path during tests
* add error check
* Updated the dependency iteration code
Currently, the charts directory will not have the deps in the repo, so the code is updated to read the dependencies from requirements.yaml
and use that info to read templates from the relevant add-ons directory.
* Hard Code add-ons name
* Remove struct details for add-ons
- As we don't use the fields of an add-on struct, they don't need to be typed. Instead we can just read the `enabled` flag using reflection
- Users can just use map[string]interface{} as the add-on type.
* update unit tests
* linting fix
* Rename flag to addon-config
* Use Chart loading logic
- This code uses chart loading to read the files and keep them in a VFS.
- Once those files are read, we use them to generate the sub-charts.
* Go fmt fix
* Update the linkerd-values cm to use second level field
* Add relevant unit tests for mergeRaw
* linting fix
* Move addon tests to a new file
* Fix golden files
* remove addon install unit test
* Refactor sub-chart load logic
* Add install tracing unit test
* golden file update for tracing install
* Update golden files to reflect another pr changes
* Move addon-config flag to recordFlagSet
* add relevant tracing enabled checks
* linting fix
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* feat: added prometheus Registry Option for install command
* chore: draft commit
* Draft for custom prometheus image
* Support for custom prometheus image
This PR adds support to override the default prometheus image name and use custom image names in private repositories
* Added default Prometheus Image from values.yaml
The default can be overridden by the argument given in installOptions
* chore: fixed failing check
* Fixed failing check
* Updated the tests as per the new flag
* Air-gapped installation for prometheus-image
* Air Gapped installation for Prometheus Image
* Added regex for prometheus repository/image cli option
Signed-off-by: Christy Jacob <christyjacob4@gmail.com>
**Subject**
Fixes a bug where the Docker registry override was not being applied to debug containers (#3851)
**Problem**
Overrides for the Docker registry are not being applied to debug containers, and there is no means to correct the image.
**Solution**
This update expands the `data.proxy` configuration section within the Linkerd `ConfigMap` to maintain the overridden image name for debug containers at _install_-time, similar to the handling of the `proxy` and `proxyInit` images.
This change also enables further overriding of the registry for debug containers at _inject_-time via the `--registry` CLI option.
**Validation**
Several new unit tests have been created to confirm functionality. In addition, the following workflows were run through:
### Standard Workflow with Custom Registry
This workflow installs the Linkerd control plane using a custom registry, then injects the debug sidecar into a service.
* Start with a k8s instance having no Linkerd installation
* Build all images locally using `bin/docker-build`
* Create custom tags (using same version) for generated images, e.g. `docker tag gcr.io/linkerd-io/debug:git-a4ebecb6 javaducky.com/linkerd-io/debug:git-a4ebecb6`
* Install Linkerd with registry override `bin/linkerd install --registry=javaducky.com/linkerd-io | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap now contains the debug image name, pull policy, and version within the `data.proxy` section
* Request injection of the debug image into an available container. I used the Emojivoto voting service as described in https://linkerd.io/2/tasks/using-the-debug-container/ as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Once the deployment creates a new pod for the service, inspection should show that the pod now includes the "linkerd-debug" container based on the applicable override image seen previously within the ConfigMap
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Modifying the `config.linkerd.io/enable-debug-sidecar` annotation, setting it to “false”, should show that the pod is recreated and no longer runs the debug container.
### Overriding the Custom Registry Override at Injection
This builds upon the “Standard Workflow with Custom Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=gcr.io/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image. Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, modifying the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.
### Standard Workflow with Default Registry
This workflow is the typical workflow which utilizes the standard Linkerd image registry.
* Uninstall the Linkerd control plane using `bin/linkerd install --ignore-cluster | kubectl delete -f -` as described at https://linkerd.io/2/tasks/uninstall/
* Clean the Emojivoto environment using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -` then reinstall using `curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -`
* Perform standard Linkerd installation as `bin/linkerd install | kubectl apply -f -`
* Once Linkerd has been fully initialized, you should be able to confirm that the `linkerd-config` ConfigMap references the default debug image of `gcr.io/linkerd-io/debug` within the `data.proxy` section
* Request injection of the debug image into an available container as `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar - | kubectl apply -f -`
* Debugging can also be verified by viewing debug container logs as `kubectl -n emojivoto logs deploy/voting linkerd-debug -f`
* Modifying the `config.linkerd.io/enable-debug-sidecar` annotation, setting it to “false”, should show that the pod is recreated and no longer runs the debug container.
### Overriding the Default Registry at Injection
This workflow builds upon the “Standard Workflow with Default Registry” by overriding the Docker registry utilized for the debug container at the time of injection.
* “Clean” the Emojivoto voting service by removing any Linkerd annotations from the deployment
* Request injection similar to before, except provide the `--registry` option as in `kubectl -n emojivoto get deploy/voting -o yaml | bin/linkerd inject --enable-debug-sidecar --registry=javaducky.com/linkerd-io - | kubectl apply -f -`
* Inspection of the deployment config should now show the override annotation for `config.linkerd.io/debug-image` having the debug container from the new registry. Viewing the running pod should show that the `linkerd-debug` container was injected and running the correct image. Of note, the proxy and proxy-init images are still running the “original” override images.
* As before, modifying the `config.linkerd.io/enable-debug-sidecar` annotation to “false” should show that the pod is recreated and no longer runs the debug container.
Fixes issue #3851
Signed-off-by: Paul Balogh <javaducky@gmail.com>