This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
The misspellings have been reported at aaf440489e (commitcomment-41423663)
The action reports that the changes in this PR would make it happy: 5b82c6c5ca
Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Fixes #4708
Adds a `linkerd multicluster uninstall` command which outputs the manifests required to uninstall the multicluster components. This command first checks that no links exist and advises that any links must be removed with `linkerd multicluster unlink` before proceeding. Typical usage is:
```
linkerd multicluster uninstall | kubectl delete -f -
```
Signed-off-by: Alex Leong <alex@buoyant.io>
When the Link CRD does not exist, multicluster checks in `linkerd check` will be skipped. The `--multicluster` flag is intended to force these checks on, but was being ignored.
We update the options to force the multicluster checks on when the `--multicluster` flag is used, as intended.
Now when `linkerd check --multicluster` is run on a cluster without the multicluster support installed, it gives the following output:
```
linkerd-multicluster
--------------------
× Link CRD exists
multicluster.linkerd.io/Link CRD is missing: the server could not find the requested resource
see https://linkerd.io/checks/#l5d-multicluster-link-crd-exists for hints
Status check results are ×
```
Signed-off-by: Alex Leong <alex@buoyant.io>
Supersedes #4846
Bump proxy-init to v1.3.6, containing CNI fixes and support for
multi-arch builds.
#4846 included this in v1.3.5, but proxy.golang.org refused to update the modified SHA.
The upgrade tests were failing due to hardcoded certificates which had expired. Additionally, these tests contained large swaths of yaml that made it very difficult to understand the semantics of each test case and even more difficult to maintain.
We greatly improve the readability and maintainability of these tests by using a slightly different approach. Each test follows this basic structure:
* Render an install manifest
* Initialize a fake k8s client with the install manifest (and sometimes additional manifests)
* Render an upgrade manifest
* Parse the manifests as yaml tree structures
* Perform a structured diff on the yaml tree structures and look for expected and unexpected differences
The install manifests are generated dynamically using the regular install flow. This means that we no longer need large sections of hardcoded yaml in the tests themselves. Additionally, we now assess the output by doing a structured diff against the install manifest. This means that we no longer need golden files with explicit expected output.
All test cases were preserved except for the following:
* Any test cases related to multiphase install (config/control plane) were not replicated. This flow doesn't follow the same pattern as the tests above because the install and upgrade manifests are not expected to be the same or similar. I also felt that these tests were lower priority because the multiphase install/upgrade feature does not seem to be very popular and is a potential candidate for deprecation.
* Any tests involving upgrading from a very old config were not replicated. The code to generate these old-style configs is no longer present in the codebase, so in order to test this case we would need to resort to hardcoded install manifests. These tests also seemed low priority to me because Linkerd versions that used the old config are now over a year old, so it may no longer be critical that we support upgrading from them. We generally recommend that users upgrading from an old version of Linkerd do so by upgrading through each major version rather than directly to the latest.
Signed-off-by: Alex Leong <alex@buoyant.io>
* When releasing, build and upload the amd64, arm64, and arm builds of the CLI
* Refactored `Dockerfile-bin` so it has separate stages for single- and multi-arch builds. The latter stage is only used for releases.
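As a rough sketch, the multi-arch release path presumably amounts to a buildx invocation of the multi-arch stage; the `--target` stage name and the platform list here are assumptions for illustration, not taken from the actual Dockerfile:
```bash
# illustrative only; the --target stage name and platforms are assumptions
docker buildx build -f Dockerfile-bin \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --target multi-arch-bin .
```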
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
This PR moves default values into the add-on-specific values.yaml, thus
allowing us to update default values, as they would not be present in
the linkerd-config-addons ConfigMap.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Some installations upgrading from versions prior to 2.7.x may be missing the debug image name and version. This fix ensures that default values are in place for this scenario and additionally upgrades the debug image version along with the control plane version.
Signed-off-by: Paul Balogh <javaducky@gmail.com>
Build ARM docker images in the release workflow.
# Changes:
- Add new env keys `DOCKER_MULTIARCH` and `DOCKER_PUSH`. When set, they will build multi-arch images and push them to the registry (see the example after this list). See https://github.com/docker/buildx/issues/59 for why the images must be pushed to the registry.
- Using `crazy-max/ghaction-docker-buildx` is necessary as it is already configured with the ability to perform cross-compilation (using QEMU), so we can just use it instead of setting it up manually.
- `buildx` now provides the default global build arguments. (See: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)
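For example, a release build might be invoked roughly like this (a sketch; any registry configuration is assumed to come from the existing build scripts):
```bash
# illustrative: build multi-arch images and push them to the configured registry
DOCKER_MULTIARCH=1 DOCKER_PUSH=1 bin/docker-build
```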
# Follow-up:
- Release the CLI binary for the ARM architectures. The docker images resulting from these changes are already built for ARM, but we still need further adjustments, such as retrieving those binaries and naming them correctly as part of the GitHub Release artifacts.
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
Fixes #4707
In order to remove a multicluster link, we add a `linkerd multicluster unlink` command which produces the yaml necessary to delete all of the resources associated with a `linkerd multicluster link`. These are:
* the link resource
* the service mirror controller deployment
* the service mirror controller's RBAC
* the probe gateway mirror for this link
* all mirror services for this link
This command follows the same pattern as the `linkerd uninstall` command in that its output is expected to be piped to `kubectl delete`. The typical usage of this command is:
```
linkerd --context=source multicluster unlink --cluster-name=foo | kubectl --context=source delete -f -
```
This change also fixes the shutdown lifecycle of the service mirror controller by properly having it listen for the shutdown signal and exit its main loop.
A few alternative designs were considered:
I investigated using owner references as suggested [here](https://github.com/linkerd/linkerd2/issues/4707#issuecomment-653494591) but it turns out that owner references must refer to resources in the same namespace (or to cluster scoped resources). This was not feasible here because a service mirror controller can create mirror services in many different namespaces.
I also considered having the service mirror controller delete the mirror services that it created during its own shutdown. However, this could lead to scenarios where the controller is killed before it finishes deleting the services that it created. It seemed more reliable to have all the deletions happen from `kubectl delete`. Since this is the case, we avoid having the service mirror controller delete mirror services, even when the link is deleted, to avoid the race condition where the controller and CLI both attempt to delete the same mirror services and one of them fails with a potentially alarming error message.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Bump Prometheus to the latest version, v2.19.3
The latest Prometheus version shows a significant decrease in memory usage,
among other benefits.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* CNI add support for priorityClassName
As requested in #2981, one should be able to optionally define a priorityClassName for the linkerd2 pods.
With this commit, support for priorityClassName is added to the CNI plugin Helm chart as well as to the
CLI command for installing the CNI plugin.
Also added an `installNamespace` Helm option for the CNI installation.
Implements part of #2981.
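For illustration, a Helm install of the CNI plugin might set these new values roughly as follows; the chart reference, release name, and values here are assumptions, not taken from this change:
```bash
# illustrative sketch; chart reference and values are assumptions
helm install linkerd2-cni linkerd/linkerd2-cni \
  --set priorityClassName=system-cluster-critical \
  --set installNamespace=true
```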
Signed-off-by: alex.berger@nexiot.ch <alex.berger@nexiot.ch>
This PR adds the `global.prometheusUrl` field, which is used to configure public-api, heartbeat, Grafana, etc. (i.e. the query path) to use an external Prometheus.
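A minimal sketch of setting this at install time via Helm; the chart reference and URL are examples, not taken from this change:
```bash
# illustrative; the field name comes from this change, the URL is an example
helm install linkerd2 linkerd/linkerd2 \
  --set global.prometheusUrl=http://prometheus.monitoring.svc.cluster.local:9090
```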
* Support overriding inbound and outbound connect timeouts (see the sketch after this list)
* Add validation on user-provided TCP connect timeouts
* Convert valid time values into ms
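A hedged sketch of how a workload might override these timeouts; the annotation names are assumed to follow the usual `config.linkerd.io/` pattern and are not taken verbatim from this change:
```bash
# illustrative; annotation names are assumptions
kubectl annotate deployment/web \
  config.linkerd.io/proxy-outbound-connect-timeout=1000ms \
  config.linkerd.io/proxy-inbound-connect-timeout=100ms
```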
Signed-off-by: Matt Miller <mamiller@rosettastone.com>
* Add sidecar container support for linkerd-prometheus
Adds a new setting to the Prometheus Helm config, allowing arbitrary sidecar containers to be added alongside the main Prometheus container.
The specific use case that inspired this was for exporting data from Prometheus to external systems (e.g. cloudwatch, stackdriver, datadog) using a process that watches the prometheus write-ahead log (WAL).
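A hypothetical values snippet illustrating the idea; the exact key name under the Prometheus add-on and the sidecar image are assumptions:
```bash
# hypothetical values file; key name and image are assumptions
cat <<'EOF' > prometheus-sidecars.yaml
prometheus:
  sidecarContainers:
  - name: wal-forwarder
    image: example.io/prometheus-wal-forwarder:latest
EOF
helm install linkerd2 linkerd/linkerd2 -f prometheus-sidecars.yaml
```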
Signed-off-by: Nathan J. Mehl <n@oden.io>
Add a new structure on the destination controller side to keep track of contextual information.
The token format has been changed from ns:<namespace> to a JSON format so that more variables can be
encoded in the token. As part of this PR, a new field 'nodeName' has been added to help with service
topologies.
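Roughly, the token goes from the old opaque form to a JSON object (the field names here are illustrative):
```
# old token format
ns:emojivoto
# new JSON token; 'nodeName' is the field added by this change
{"ns":"emojivoto","nodeName":"node-a"}
```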
Fixes #4498
Signed-off-by: Matei David <matei.david.35@gmail.com>
This PR moves the service mirror controller from `linkerd mc install` to `linkerd mc link`, as described in https://github.com/linkerd/rfc/pull/31. For fuller context, please see that RFC.
Basic multicluster functionality works here including:
* `linkerd mc install` installs the Link CRD but not any service mirror controllers
* `linkerd mc link` creates a Link resource and installs a service mirror controller which uses that Link (see the sketch after this list)
* The service mirror controller creates and manages mirror services, a gateway mirror, and their endpoints.
* The `linkerd mc gateways` command lists all linked target clusters, their liveness, and probe latencies.
* The `linkerd check` multicluster checks have been updated for the new architecture. Several checks have been rendered obsolete by the new architecture and have been removed.
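Under the new architecture, a typical flow looks roughly like this (contexts and cluster name are illustrative):
```bash
# install the multicluster components (including the Link CRD) on both clusters
linkerd --context=source multicluster install | kubectl --context=source apply -f -
linkerd --context=target multicluster install | kubectl --context=target apply -f -
# create a Link and a service mirror controller in the source cluster, pointing at the target
linkerd --context=target multicluster link --cluster-name=target | kubectl --context=source apply -f -
```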
The following are known issues requiring further work:
* The service mirror controller uses the existing `mirror.linkerd.io/gateway-name` and `mirror.linkerd.io/gateway-ns` annotations to select which services to mirror. It does not yet support configuring a label selector.
* An unlink command is needed for removing multicluster links: see https://github.com/linkerd/linkerd2/issues/4707
* An mc uninstall command is needed for uninstalling the multicluster addon: see https://github.com/linkerd/linkerd2/issues/4708
Signed-off-by: Alex Leong <alex@buoyant.io>
* Migrate CI to docker buildx and other improvements
## Motivation
- Improve build times in forks, especially when rerunning builds because of a flaky test.
- Start using `docker buildx` to pave the way for multiplatform builds.
## Performance improvements
These timings were taken for the `kind_integration.yml` workflow when we merged and reran the lodash bump PR (#4762).
Before these improvements:
- when merging: `24:18`
- when rerunning after merge (docker cache warm): `19:00`
- when running the same changes in a fork (no docker cache): `32:15`
After these improvements:
- when merging: `25:38`
- when rerunning after merge (docker cache warm): `19:25`
- when running the same changes in a fork (docker cache warm): `19:25`
As explained below, non-forks and forks now use the same cache, so the important takeaway is that forks will always start with a warm cache and we'll no longer see long build times like the `32:15` above.
The downside is a slight increase in the build times for non-forks (up to a little more than a minute, depending on the case).
## Build containers in parallel
The `docker_build` job in the `kind_integration.yml`, `cloud_integration.yml` and `release.yml` workflows relied on running `bin/docker-build` which builds all the containers in sequence. Now each container is built in parallel using a matrix strategy.
## New caching strategy
CI now uses `docker buildx` for building the container images, which allows using an external cache source for builds, a location in the filesystem in this case. That location gets cached using actions/cache, using the key `{{ runner.os }}-buildx-${{ matrix.target }}-${{ env.TAG }}` and the restore key `${{ runner.os }}-buildx-${{ matrix.target }}-`.
For example when building the `web` container, its image and all the intermediary layers get cached under the key `Linux-buildx-web-git-abc0123`. When that has been cached in the `main` branch, that cache will be available to all the child branches, including forks. If a new branch in a fork asks for a key like `Linux-buildx-web-git-def456`, the key won't be found during the first CI run, but the system falls back to the key `Linux-buildx-web-git-abc0123` from `main` and so the build will start with a warm cache (more info about how keys are matched in the [actions/cache docs](https://docs.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key)).
## Packet host no longer needed
To benefit from the warm caches both in non-forks and forks as just explained, we had to stop doing the builds in Packet; now everything runs in the GitHub runner VMs.
As a result there's no longer separate logic for non-forks and forks in the workflow files; `kind_integration.yml` was greatly simplified but `cloud_integration.yml` and `release.yml` got a little bigger in order to use the actions artifacts as a repository for the built images. This bloat will be fixed when support for [composite actions](https://github.com/actions/runner/blob/users/ethanchewy/compositeADR/docs/adrs/0549-composite-run-steps.md) lands in GitHub.
## Local builds
You are still able to run `bin/docker-build` or any of the `docker-build.*` scripts. To make use of buildx, run those same scripts after setting the env var `DOCKER_BUILDKIT=1`. Using buildx assumes you have it installed, as instructed [here](https://github.com/docker/buildx).
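For example:
```bash
# classic local build
bin/docker-build
# same build going through docker buildx (requires buildx to be installed)
DOCKER_BUILDKIT=1 bin/docker-build
```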
## Other
- A new script `bin/docker-cache-prune` is used to remove unused images from the cache. Without that the cache grows constantly and we can rapidly hit the 5GB limit (when the limit is attained the oldest entries get evicted).
- The `go-deps` dockerfile base image was changed from `golang:1.14.2` (Ubuntu-based) to `golang:1.14.2-alpine`, also to conserve cache space.
# Addressed separately in #4875:
Got rid of the `go-deps` image and instead added something similar on top of all the Dockerfiles dealing with `go`, as a first stage for those Dockerfiles. That continues to serve as a way to pre-populate go's build cache, which speeds up the builds in the subsequent stages. That build should in theory be rebuilt automatically only when `go.mod` or `go.sum` change, and now we don't require running `bin/update-go-deps-shas`. That script was removed along with all the logic elsewhere that used it, including the `go_dependencies` job in the `static_checks.yml` github workflow.
The list of modules preinstalled was moved from `Dockerfile-go-deps` to a new script `bin/install-deps`. I couldn't find a way to generate that list dynamically, so whenever a slow-to-compile dependency is found, we have to make sure it's included in that list.
Although this simplifies the dev workflow, note that the real motivation behind this was a limitation in buildx's `docker-container` driver that forbids us from depending on images that haven't been pushed to a registry, so we have to resort to building the dependencies as a first stage in the Dockerfiles.
EndpointSlices have been made opt-in due to their experimental nature. This PR
introduces a new install flag 'enableEndpointSlices' that will allow adopters to
specify in their CLI install or Helm install step whether they would like to
use EndpointSlices as a resource in the destination service, instead of the
Endpoints k8s resource.
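A hedged sketch of flipping the flag on via Helm; the chart reference is an example and the exact CLI wiring may differ:
```bash
# illustrative; the value name comes from this change
helm install linkerd2 linkerd/linkerd2 --set enableEndpointSlices=true
```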
Signed-off-by: Matei David <matei.david.35@gmail.com>
This moves Prometheus into an add-on, thus making it optional but enabled by default. This also makes `linkerd-prometheus` more configurable and allows it to have its own life-cycle for upgrades, configuration, etc.
This work will be followed by documentation that helps users configure an existing Prometheus to work with Linkerd.
**Changes Include:**
- Move the Prometheus manifests into a separate chart at `charts/add-ons/prometheus`, and add it as a dependency to `linkerd2`
- Implement the `addOn` interface to support the same with the CLI
- Include the configuration in `linkerd-config-addons`
**User Facing Changes:**
The default install experience does not change much, but users who have already configured Prometheus differently will need to re-apply that configuration using the new fields documented in the chart README.
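As a sketch of the add-on model, the bundled Prometheus could presumably be toggled like any other add-on; the key name and chart reference here are assumptions:
```bash
# illustrative; key name assumed from the add-on pattern
helm install linkerd2 linkerd/linkerd2 --set prometheus.enabled=false
```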
The splitStringListToPorts Helm function currently formats a list of ports incorrectly, as an array of Port objects that look like {"port" : 555}. The config map protobuf representation, however, expects the ignoreOutboundPorts and ignoreInboundPorts fields to be a list of PortRange objects ({"portRange" : 555}).
This was causing the injector to return an empty string when trying to parse a PortRange object, resulting in the ports not getting set correctly when injecting workloads. Note that this happens only with Helm installations, as that is when we actually use a Helm template to output the config map.
To fix this, the splitStringListToPorts Helm function is changed to format the objects as the JSON representation of PortRange and is renamed to splitStringListToPortRanges.
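The difference in the rendered objects, using the example value from above:
```
# before (splitStringListToPorts): Port objects
[{"port": 555}]
# after (splitStringListToPortRanges): PortRange objects, as the protobuf expects
[{"portRange": 555}]
```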
Fix: #4679
Signed-off-by: Zahari Dichev zaharidichev@gmail.com
* Update Helm render tests to read child charts' values.yaml
By default, Helm installation considers the values.yaml of dependent charts
and uses them in rendering. This mechanism is used for add-ons to
keep the default template values, allowing further overrides from the
parent chart's (i.e. linkerd2) values.yaml or --addon-config through the CLI.
This PR updates the Helm tests to do the same, i.e. consider the
values.yaml of chart dependencies if present.
This does not have any UX changes but helps with the follow-up
add-on related work.
Using the following command, the misspellings were found and subsequently
fixed:
```
codespell --skip CHANGES.md,.git,go.sum,\
controller/cmd/service-mirror/events_formatting.go,\
controller/cmd/service-mirror/cluster_watcher_test_util.go,\
SECURITY_AUDIT.pdf,.gcp.json.enc,web/app/img/favicon.png \
--ignore-words-list=aks,uint,ans,files\' --check-filenames \
--check-hidden
```
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
Based on the [EndpointSlice PR](https://github.com/linkerd/linkerd2/pull/4663), this is just the k8s/api support for endpointslices to shorten the first PR.
* Adds CRD
* Adds functions that check whether the cluster has EndpointSlice access
* Adds discovery & endpointslice informers to api.
Signed-off-by: Matei David <matei.david.35@gmail.com>
* feat: add log format annotation and helm value
JSON log formatting has been added via https://github.com/linkerd/linkerd2-proxy/pull/500
but wiring the option through as an annotation/helm value is still
necessary.
This PR adds the annotation and helm value to configure log format.
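A hedged example of what the per-workload wiring might look like once the annotation is in place; the annotation name is assumed to follow the usual `config.linkerd.io/` pattern:
```bash
# illustrative; annotation name is an assumption
kubectl annotate deployment/web config.linkerd.io/proxy-log-format=json
```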
Closes #2491
Signed-off-by: Naseem <naseem@transit.app>
Data disappears upon Prometheus restarts because it is all in-memory.
Adding an option to enable persistence by means of a PVC is the right approach; it is commonly seen in a wide array of Helm charts.
Fixes #4576
Signed-off-by: Naseem <naseem@transit.app>
Regenerated protobuf files, using version 1.4.2 that was upgraded from
1.3.2 with the proxy-api update in #4614.
As of v1.4, protobuf messages must not be copied (because they
hold a mutex), so whenever a message is passed to or returned from a
function we need to use a pointer.
This affects _mostly_ test files.
This is required to unblock #4620 which is adding a field to the config
protobuf.
* Update inject to error out on failure
Update the injection process to throw an error when the reason for failure is sidecar, UDP, automountServiceAccountToken, or hostNetwork
Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
In #4585 we are observing an issue where a loop is encountered when using nginx ingress. The problem is that the outbound proxy does a dst lookup on the IP address which happens to be the very same address the ingress is listening on.
In order to avoid situations like that, this PR introduces a way to modify the set of networks for which the proxy will do IP-based discovery. The change introduces a Helm flag `.Values.global.proxy.destinationGetNetworks` that can be used to modify this value. There are two ways a user can affect this setting:
- setting the `destinationGetNetworks` field in values during a Helm install, which changes the default on all injected pods
- using the annotation `config.linkerd.io/proxy-destination-get-networks` on injected workloads to override this value
Note that this setting cannot be tweaked through the `install` or `inject` commands.
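For example (the network value is illustrative):
```bash
# install-time default via Helm (field name from this change)
helm install linkerd2 linkerd/linkerd2 \
  --set global.proxy.destinationGetNetworks="10.0.0.0/8"
# per-workload override via the annotation
kubectl annotate deployment/web \
  config.linkerd.io/proxy-destination-get-networks="10.0.0.0/8"
```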
Fix: #4585
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Fixes #4606
This has not worked as far back as stable-2.6.0.
## Solution
The recommended upgrade process is to include `--prune` as part of `kubectl apply ..`:
```bash
$ linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -
```
This is an issue for multi-stage upgrade because `linkerd upgrade config` does
not include the `linkerd-config` ConfigMap in its output.
`kubectl apply --prune ..` will then prune this resource because it matches the
label selector *and* is not in the above output.
The issue occurs when `linkerd upgrade control-plane` is run and expects to find
the ConfigMap that was just pruned.
This can be fixed by not suggesting to prune resources as part of the
multi-stage upgrade.
## Considered
Including `templates/config.yaml` in the install output regardless of the stage.
Instead of it being a template only used in the `control-plane` stage in
[render](4aa3ca7f87/cli/cmd/install.go (L873-L886)), it could always be rendered.
This just exposes other things that are pruned in the process:
```bash
❯ bin/linkerd upgrade control-plane |kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -
× Failed to build upgrade configuration: secrets "linkerd-identity-issuer" not found
For troubleshooting help, visit: https://linkerd.io/upgrade/#troubleshooting
error: no objects passed to apply
```
Ultimately, resources that are part of the `control-plane` stage need to remain, and that
will not happen if we prune all resources not in the `config` stage output.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
As reported in #4259, `linkerd check` run from linkerd's web console is
broken because the underlying RBAC Role cannot access the apiregistration.k8s.io API Group.
With this commit the RBAC Role is fixed, allowing read-only access to the API Group
apiregistration.k8s.io.
Fixes #4259
Signed-off-by: alex.berger@nexiot.ch <alex.berger@nexiot.ch>
Put back space after `grafanaUrl` label in `linkerd-config-addons.yaml`
to avoid breaking the yaml parsing.
```
$ linkerd check
...
linkerd-addons
--------------
‼ 'linkerd-config-addons' config map exists
could not unmarshal linkerd-config-addons config-map: error
unmarshaling JSON: while decoding JSON: json: cannot unmarshal
string into Go struct field Values.global of type linkerd2.Global
```
This was added in #4544 to avoid having the configmap badly formatted.
So this PR fixes the YAML, but then if we don't set `grafanaUrl` the
configmap format gets messed up; apparently that's just a cosmetic
problem:
```
apiVersion: v1
data:
values: "global:\n grafanaUrl: \ngrafana:\n enabled: true\n
image:\n name:
gcr.io/linkerd-io/grafana\n name: linkerd-grafana\n resources:\n
cpu:\n limit:
240m\n memory:\n limit: null\ntracing:\n enabled:
false"
kind: ConfigMap
```
Fixes #4541
This PR adds the following checks:
- whether a mirrored service has endpoints (this includes gateway mirrors too)
- whether an exported service references a gateway that does not exist
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Alex Leong <alex@buoyant.io>