* Add automatic readme generation for charts
The current readme for each chart is generated manually and doesn't
contain all the information available.
Utilize helm-docs to automatically fill out the README.md files for the
helm charts by pulling metadata from values.yaml.
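For reference, a minimal helm-docs invocation might look like this (the `charts` search root is an assumption about the repo layout):
```bash
# Render README.md for every chart found under ./charts, pulling metadata
# from each chart's Chart.yaml and the comments in its values file.
helm-docs --chart-search-root=charts
```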
Fixes #4156
Co-authored-by: GMarkfjard <gabma047@student.liu.se>
* Jaeger injector mutating webhook
Closes #5231. This is based off of the `alex/sep-tracing` branch.
This webhook injects the `LINKERD2_PROXY_TRACE_COLLECTOR_SVC_ADDR`,
`LINKERD2_PROXY_TRACE_COLLECTOR_SVC_NAME` and
`LINKERD2_PROXY_TRACE_ATTRIBUTES_PATH` environment vars into the proxy
spec when a pod is created, as well as the podinfo volume and its mount.
If any of these are found to be present already in the pod spec, it
exits without applying a patch.
The `values.yaml` file has been expanded to include config for this
webhook. In particular, one can define a `namespaceSelector` and/or an
`objectSelector` to filter which pods this webhook will act on.
The config entries in `values.yaml` for `collectorSvcAddr` and
`collectorSvcAccount` can be overridden with the
`config.linkerd.io/trace-collector` and
`config.alpha.linkerd.io/trace-collector-service-account` annotations at
the namespace or pod spec level.
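For example, overriding both values at the namespace level might look like this (the namespace, collector address, and service account are hypothetical; the annotation keys come from the description above):
```bash
kubectl annotate namespace emojivoto \
  config.linkerd.io/trace-collector=my-collector.tracing:55678 \
  config.alpha.linkerd.io/trace-collector-service-account=my-collector-sa
```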
## How to test:
```bash
docker build . -t ghcr.io/linkerd/jaeger-webhook:0.0.1 \
  -f jaeger/proxy-mutator/Dockerfile
k3d image import ghcr.io/linkerd/jaeger-webhook:0.0.1
bin/helm-build
linkerd install | kubectl apply -f -
helm install jaeger jaeger/charts/jaeger
linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
kubectl -n emojivoto get po -l app=emoji-svc -oyaml | grep -A1 TRACE
```
## Reinvocation policy
The webhook config resource is configured with `reinvocationPolicy:
IfNeeded` so that if the tracing injector gets triggered before the
proxy injector, it will get triggered a second time after the proxy
injector runs and can act on the injected proxy. By default this won't
be necessary because the webhooks run in alphabetical order (though this
isn't documented in the k8s docs), so
`linkerd-proxy-injector-webhook-config` will run before
`linkerd-proxy-mutator-webhook-config`. To exercise the reinvocation
mechanism, you can rename the former so that the mutator webhook gets
triggered first.
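As a quick sanity check, the policy can be read straight off the webhook config (a sketch; the resource name is taken from the paragraph above):
```bash
kubectl get mutatingwebhookconfiguration linkerd-proxy-mutator-webhook-config \
  -o jsonpath='{.webhooks[*].reinvocationPolicy}'
# expected output: IfNeeded
```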
I versioned the webhook image as `0.0.1`, but we can decide to align
that with linkerd's main version tag.
* extension: Add new jaeger binary
This branch adds a new jaeger binary project in the jaeger directory.
This follows the same logic as that of `linkerd install`, but as
`linkerd install`'s VFS logic expects charts to be present in the
`/charts` directory, this command gets its own static pkg to generate
its own VFS for its chart.
This covers only the install part of the command.
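A rough usage sketch, assuming the new binary exposes an `install` subcommand analogous to `linkerd install` (the `bin/go-run jaeger` invocation is an assumption, not the final CLI wiring):
```bash
# Render the jaeger chart and apply it to the cluster.
bin/go-run jaeger install | kubectl apply -f -
```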
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #5230
This PR moves tracing into a jaeger chart with no proxy injection
templates. We still keep the dependency on partials, as we could use
common templates like resources, etc from there.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Consolidate integration tests under k3d
Fixes #5007
Simplified the integration tests by moving them all to k3d. Previously things were running in KinD, except for the multicluster tests, which implied some extra complexity in the supporting scripts.
Removed the KinD config files under `test/integration/configs`, as config is now passed as flags into the `k3d` command.
Also renamed `kind_integration.yml` to `integration_tests.yml`.
Test skipping logic under ARM was also simplified.
* Update BUILD.md with multiarch stuff and some extras
Adds to `BUILD.md` a new section `Publishing images` explaining the
workflow for testing custom builds.
Also updates and gives more precision to the section `Building CLI for
development`.
Finally, a new `Multi-architecture builds` section is added.
This PR also removes `SUPPORTED_ARCHS` from `bin/docker-build-cli` that
is no longer used.
Note I'm leaving some references to Minikube. I might change that in a
separate PR to point to k3d if we manage to migrate the KinD stuff to
k3d.
gcr.io has an issue where it's not possible to update multi-arch images
(see eclipse/che#16983 and open-policy-agent/gatekeeper#665).
We're now relying on ghcr.io instead, which I verified doesn't have this
bug, so we can stop skipping these pushes.
In order for the integration tests to run successfully on a dedicated ARM cluster, two small changes are necessary:
* We need to skip the multicluster test since this test uses two separate clusters (source and target)
* We need to properly uninstall the multicluster helm chart during cleanup.
With these changes, I was able to successfully run the integration tests on a dedicated ARM cluster.
Signed-off-by: Alex Leong <alex@buoyant.io>
Currently the --wait flag times out when creating a calico cluster. The result is that we end up waiting for 5 minutes to simply emit a warning and continue. Instead we can check the readiness of some k8s components to ensure our cluster is up and running and avoid the delay.
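A sketch of that readiness check, assuming we gate on CoreDNS coming up (the label selector and timeout are illustrative):
```bash
# Wait for the DNS pods to be Ready instead of relying on `--wait`.
kubectl -n kube-system wait pod -l k8s-app=kube-dns \
  --for=condition=Ready --timeout=300s
```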
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
The `bin/tests` script takes command-line arguments, but it requires
that all arguments are specified before the linkerd binary path; and it
silently ignores flags that follow the linkerd binary. Furthermore,
unexpected flags may be incorrectly parsed as the linkerd binary path.
This changes argument parsing to be more flexible about ordering; and it
prints the full usage error when unexpected flags are encountered.
Previously, `release.yml` was trying to load images into the kind
clusters, but that failed because those images were already in `ghcr.io`
and not in the local docker cache; that failure was masked.
Unmasking that failure revealed some flaws that this change addresses:
- In `bin/_test-helpers.sh` (used by `bin/tests`), modified the `images`
arg to accept `docker(default)|archive|skip`, for determining how to
load the images into the cluster (if loading them at all).
- In `bin/image-load`, changed arg `images` to `archive` which is more
descriptive.
- Have `kind_integration.yml` call `bin/tests --images archive`.
- Have `release.yml` call `bin/tests --images skip`.
Fixes #4191 #4993
This bumps Kubernetes client-go to the latest v0.19.2 (We had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5
This consists of the following changes:
- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use context.Context as the first argument, in all code paths that touch the k8s client-go interface
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This implements the run_multicluster_test() function in bin/_test-helpers.sh.
The idea is to create two clusters (source and target) using k3d, with linkerd and multicluster support in both, plus emojivoto (without vote-bot) in target, and vote-bot in source.
We then link the clusters and make sure traffic is flowing.
Detailed sequence (a condensed bash sketch follows the list):
1. Create certificates.
2. Install linkerd along with multicluster support in the target cluster.
3. Run the target1 test: install emojivoto in the target cluster (without vote-bot).
4. Run `linkerd mc link` on the target cluster.
5. Install linkerd along with multicluster support in the source cluster.
6. Apply the link resource in the source cluster.
7. Run the source test: check that `linkerd mc gateways` returns the target cluster link, and install only emojivoto's vote-bot in the source cluster. Note vote-bot's yaml defines the web-svc service as web-svc-target.emojivoto:80.
8. Run the target2 test: make sure web-svc in the target cluster is receiving requests.
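Condensed into commands, the sequence looks roughly like this (cluster names and contexts are illustrative, not the exact contents of `run_multicluster_test()`):
```bash
bin/certs-openssl                   # self-signed root + issuer certs
bin/k3d cluster create source
bin/k3d cluster create target
# ...install linkerd + multicluster in both clusters using those certs...
linkerd --context=k3d-target multicluster link --cluster-name target \
  | kubectl --context=k3d-source apply -f -
linkerd --context=k3d-source multicluster gateways   # verify the link is up
```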
* Add support for k3d in integration tests
KinD doesn't support setting LoadBalancer services out of the box. It can be added with some additional work, but it seems the solutions are not cross-platform.
K3d on the other hand facilitates this, so we'll be using k3d clusters for the multicluster integration test.
The current change sets the ground by generalizing some of the integration tests operations that were hard-coded to KinD.
- Added `bin/k3d` to wrap the setup and running of a pinned version of `k3d`.
- Refactored `bin/_test-helpers.sh` to account for tests to be run in either KinD or k3d.
- Renamed `bin/kind-load` to `bin/image-load` and made it more generic so it can load images for both KinD (default) and k3d. Also got rid of the no-longer-used `--images-host` option.
- Added a placeholder for the new `multicluster` test in the lists in `bin/_test-helpers.sh`. It starts by setting up two k3d clusters.
* Refactor handling of the `--multicluster` flag in integration tests (#4995)
Followup to #4994, based off of that branch (`alpeb/k3d-tests`).
This is more preliminary work previous to the more complete multicluster integration test.
- Removed the `--multicluster` flag from all the tests we had in `bin/_test-helpers.sh`, so only the new "multicluster" integration test will make use of that. Also got rid of the `TestUninstallMulticluster()` test in `install_test.go` to keep the multicluster stuff around, needed for the more complete multicluster test that will be implemented in a followup PR.
- Added "multicluster" to the list of tests in the `kind_integration.yml` workflow.
- For now, this new "multicluster" test in `run_multicluster_test()` is just running the install tests (`test/integration/install_test.go`) with the `--multicluster` flag.
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Adds `bin/certs-openssl`, which creates a self-signed root cert/key and issuer cert/key using openssl. These will be used in the two clusters set up in the multicluster integration test (followup PR), given CI already has openssl, avoiding having to install `step`.
Adds a new flag `--certs-path` to the integration tests, pointing to the path where those certs (ca.crt, ca.key, issuer.key and issuer.crt) will be located to be fed into linkerd install's `--identity-*` flags.
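Roughly what `bin/certs-openssl` amounts to (a sketch; the exact key types and certificate extensions may differ):
```bash
# Root CA
openssl ecparam -name prime256v1 -genkey -noout -out ca.key
openssl req -x509 -new -key ca.key -days 365 -out ca.crt \
  -subj '/CN=root.linkerd.cluster.local'
# Issuer cert signed by the root
openssl ecparam -name prime256v1 -genkey -noout -out issuer.key
openssl req -new -key issuer.key -out issuer.csr \
  -subj '/CN=identity.linkerd.cluster.local'
openssl x509 -req -in issuer.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out issuer.crt
```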
* Integration test for smi-metrics
This PR adds an integration test which installs SMI-Metrics and performs
queries and matches the reply with a regex query.
Currently, we store the SMI Helm package locally and run the test on top
of it, so that our CI does not break; we will periodically update the
package based on newer releases of SMI-Metrics.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* tests: Add new CNI deep integration tests
Fixes #3944
This PR adds a new test, called cni-calico-deep, which installs the Linkerd CNI
plugin on top of a cluster with Calico and performs the current integration tests
on top, thus validating various Linkerd features when CNI is enabled. For Calico
to work, special config is required for kind, which lives in `cni-calico.yaml`.
This is different from the CNI integration tests that we run in cloud
integration, which perform the CNI-level integration tests.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When a test fails in the middle of the
`./tests/integration/install_test.go` suite, multicluster resources can
be left over, which `./bin/test-cleanup` wasn't removing.
This was affecting the ARM integration tests, which require thorough
cleanup since they use a non-transient cluster.
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to `cloud_integration.yml` and `release.yml`, there was a change to the upgrade-stable integration test so that we do `linkerd upgrade --addon-overwrite` to reset the addons settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in `linkerd-config-addons`. This will need to be mentioned in the 2.9 upgrade notes.
Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
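That search and replace is roughly the following (illustrative; run from the repo root):
```bash
grep -rl 'gcr.io/linkerd-io' . \
  | xargs sed -i 's|gcr\.io/linkerd-io|ghcr.io/linkerd|g'
```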
`./bin/test-cleanup` was trying to remove the
resources with the label `linkerd.io/is-test-helm` which we're not
using. Instead, we simply call `helm delete` on the appropriate helm
releases.
This is required for a clean cleanup after the ARM integration test, whose
cluster is just cleaned by this script at the end and is not torn down.
Add a static check that ensures the generated files from the proto definitions have not changed.
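One way such a check can be implemented (the codegen script name is an assumption):
```bash
bin/protoc-go.sh   # regenerate Go code from the proto definitions
git diff --exit-code || {
  echo "generated protobuf code is out of date; commit the regenerated files" >&2
  exit 1
}
```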
Fixes #4669
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* When releasing, build and upload the amd64, arm64 and arm architectures builds for the CLI
* Refactored `Dockerfile-bin` so it has separate stages for single and multi arch builds. The latter stage is only used for releases.
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
Build ARM docker images in the release workflow.
# Changes:
- Add new env keys `DOCKER_MULTIARCH` and `DOCKER_PUSH`. When set, the build produces multi-arch images and pushes them to the registry (a sketch follows this list). See https://github.com/docker/buildx/issues/59 for why they must be pushed to the registry.
- Usage of `crazy-max/ghaction-docker-buildx` is necessary as it is already configured with the ability to perform cross-compilation (using QEMU), so we can just use it instead of setting it up manually.
- Usage of `buildx` now makes the automatic platform args available in the global scope. (See: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)
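A standalone equivalent of what the workflow does (the tag is illustrative; `Dockerfile-bin` is the refactored multi-arch Dockerfile mentioned above):
```bash
docker buildx build . \
  --file Dockerfile-bin \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag ghcr.io/linkerd/cli-bin:dev \
  --push
```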
# Follow-up:
- Releasing the CLI binary file for the ARM architectures. The docker images resulting from these changes are already built for ARM. Still, we need to make another adjustment, such as how to retrieve those binaries and name them correctly as part of the GitHub Release artifacts.
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
This PR adds the LinguiJS project to the Linkerd dashboard for i18n and
translation. It is a precursor to adding translations to the dashboard. Only
two components have been translated in this PR, to allow reviewers to evaluate
the ease of use; a second PR will add translations for the remaining components.
* Fixed `linkerd check` not finding Prometheus
## The Problem
`linkerd check` run right after install is failing because it can't find the Prometheus Pod.
## The Cause
The "control plane pods are ready" check used to verify the existence of all the control plane pods, blocking until all the pods were ready.
Since #4724, Prometheus is no longer included in that check because it's checked separately as an add-on. An unintended consequence is that when the ensuing "control plane self-check" is triggered, Prometheus might not be ready yet and the check fails because it doesn't do retries.
## The Fix
The "control plane self-check" uses a gRPC call (it's the only check that does that) and those weren't designed with retries in mind.
This PR adds retry functionality to the `runCheckRPC()` function, making sure the final output remains the same.
It also temporarily disables the `upgrade-edge` integration test, because after installing edge-20.7.4 `linkerd check` will fail due to this issue.
Removed the dependency on the base image, and instead install the needed packages in the Dockerfiles for debug and CNI.
Also removed some obsolete info from BUILD.md
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
This PR removes the service mirror controller from `linkerd mc install` to `linkerd mc link`, as described in https://github.com/linkerd/rfc/pull/31. For fuller context, please see that RFC.
Basic multicluster functionality works here including:
* `linkerd mc install` installs the Link CRD but not any service mirror controllers
* `linkerd mc link` creates a Link resource and installs a service mirror controller which uses that Link (a sketch of the workflow follows this list)
* The service mirror controller creates and manages mirror services, a gateway mirror, and their endpoints.
* The `linkerd mc gateways` command lists all linked target clusters, their liveness, and probe latencies.
* The `linkerd check` multicluster checks have been updated for the new architecture. Several checks have been rendered obsolete by the new architecture and have been removed.
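Sketch of the new workflow (context names are illustrative):
```bash
# Install the multicluster components in both clusters.
linkerd --context=target multicluster install | kubectl --context=target apply -f -
linkerd --context=source multicluster install | kubectl --context=source apply -f -
# Generate the Link plus its service mirror controller from the target,
# and apply it in the source cluster.
linkerd --context=target multicluster link --cluster-name target \
  | kubectl --context=source apply -f -
# List linked clusters, their liveness, and probe latencies.
linkerd --context=source multicluster gateways
```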
The following are known issues requiring further work:
* The service mirror controller uses the existing `mirror.linkerd.io/gateway-name` and `mirror.linkerd.io/gateway-ns` annotations to select which services to mirror. It does not yet support configuring a label selector.
* an unlink command is needed for removing multicluster links: see https://github.com/linkerd/linkerd2/issues/4707
* an mc uninstall command is needed for uninstalling the multicluster addon: see https://github.com/linkerd/linkerd2/issues/4708
Signed-off-by: Alex Leong <alex@buoyant.io>
* Migrate CI to docker buildx and other improvements
## Motivation
- Improve build times in forks, especially when rerunning builds because of some flaky test.
- Start using `docker buildx` to pave the way for multiplatform builds.
## Performance improvements
These timings were taken for the `kind_integration.yml` workflow when we merged and reran the lodash bump PR (#4762).
Before these improvements:
- when merging: `24:18`
- when rerunning after merge (docker cache warm): `19:00`
- when running the same changes in a fork (no docker cache): `32:15`
After these improvements:
- when merging: `25:38`
- when rerunning after merge (docker cache warm): `19:25`
- when running the same changes in a fork (docker cache warm): `19:25`
As explained below, non-forks and forks now use the same cache, so the important take is that forks will always start with a warm cache and we'll no longer see long build times like the `32:15` above.
The downside is a slight increase in the build times for non-forks (up to a little more than a minute, depending on the case).
## Build containers in parallel
The `docker_build` job in the `kind_integration.yml`, `cloud_integration.yml` and `release.yml` workflows relied on running `bin/docker-build` which builds all the containers in sequence. Now each container is built in parallel using a matrix strategy.
## New caching strategy
CI now uses `docker buildx` for building the container images, which allows using an external cache source for builds, a location in the filesystem in this case. That location gets cached using actions/cache, using the key `{{ runner.os }}-buildx-${{ matrix.target }}-${{ env.TAG }}` and the restore key `${{ runner.os }}-buildx-${{ matrix.target }}-`.
For example when building the `web` container, its image and all the intermediary layers get cached under the key `Linux-buildx-web-git-abc0123`. When that has been cached in the `main` branch, that cache will be available to all the child branches, including forks. If a new branch in a fork asks for a key like `Linux-buildx-web-git-def456`, the key won't be found during the first CI run, but the system falls back to the key `Linux-buildx-web-git-abc0123` from `main` and so the build will start with a warm cache (more info about how keys are matched in the [actions/cache docs](https://docs.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key)).
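Per image, the build step amounts to roughly the following (cache path, tag, and Dockerfile are illustrative):
```bash
docker buildx build . \
  --file web/Dockerfile \
  --tag "ghcr.io/linkerd/web:${TAG}" \
  --cache-from "type=local,src=/tmp/.buildx-cache" \
  --cache-to "type=local,dest=/tmp/.buildx-cache,mode=max" \
  --load
```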
## Packet host no longer needed
To benefit from the warm caches both in non-forks and forks like just explained, we're required to ditch doing the builds in Packet and now everything runs in the github runners VMs.
As a result there's no longer separate logic for non-forks and forks in the workflow files; `kind_integration.yml` was greatly simplified but `cloud_integration.yml` and `release.yml` got a little bigger in order to use the actions artifacts as a repository for the images built. This bloat will be fixed when support for [composite actions](https://github.com/actions/runner/blob/users/ethanchewy/compositeADR/docs/adrs/0549-composite-run-steps.md) lands in github.
## Local builds
You can still run `bin/docker-build` or any of the `docker-build.*` scripts. To make use of buildx, run those same scripts after having set the env var `DOCKER_BUILDKIT=1`. Using buildx requires having it installed, as instructed [here](https://github.com/docker/buildx).
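Concretely:
```bash
bin/docker-build                     # classic docker builds
DOCKER_BUILDKIT=1 bin/docker-build   # same builds routed through buildx
```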
## Other
- A new script `bin/docker-cache-prune` is used to remove unused images from the cache. Without that the cache grows constantly and we can rapidly hit the 5GB limit (when the limit is attained the oldest entries get evicted).
- The `go-deps` dockerfile base image was changed from `golang:1.14.2` (Debian based) to `golang:1.14.2-alpine`, also to conserve cache space.
# Addressed separately in #4875:
Got rid of the `go-deps` image and instead added something similar on top of all the Dockerfiles dealing with `go`, as a first stage for those Dockerfiles. That continues to serve as a way to pre-populate go's build cache, which speeds up the builds in the subsequent stages. That build should in theory be rebuilt automatically only when `go.mod` or `go.sum` change, and now we don't require running `bin/update-go-deps-shas`. That script was removed along with all the logic elsewhere that used it, including the `go_dependencies` job in the `static_checks.yml` github workflow.
The list of modules preinstalled was moved from `Dockerfile-go-deps` to a new script `bin/install-deps`. I couldn't find a way to generate that list dynamically, so whenever a slow-to-compile dependency is found, we have to make sure it's included in that list.
Although this simplifies the dev workflow, note that the real motivation behind this was a limitation in buildx's `docker-container` driver that forbids us from depending on images that haven't been pushed to a registry, so we have to resort to building the dependencies as a first stage in the Dockerfiles.
https://github.com/linkerd/linkerd2-proxy/pull/593 changed the proxy
release process to produce platform-specific binaries.
This change modifies the bin/fetch-proxy script to fetch amd64-specific
binaries. The proxy version has been updated to v1.104.1, which includes
no code changes since v1.104.0.
Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
The `tests` variable wasn't being properly initialized, which resulted
in the `helm-deep` tests being repeated; without cleanup in between,
attempting to create resources that were already there caused an error.
This creates a new integration test target that launches the deep suite,
using a linkerd instance installed through Helm.
I've added a `global.proxyInit.ignoreInboundPorts=1234,5678` override
during install and enhanced the injection test to catch problems like
what we saw in #4679.
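The override itself, as passed to Helm, would look roughly like this (release name and chart path are illustrative; note the escaped comma required by `--set`):
```bash
helm install linkerd2 charts/linkerd2 \
  --set 'global.proxyInit.ignoreInboundPorts=1234\,5678'
```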
This fixes the deep integration test which currently only calls `run_test` for
`edges` integration test.
This occurs because `run_test "${tests[@]}"` will pass an entire array of
filenames when `run_test` only expects *one* filename.
The solution is to loop through `tests` and call `run_test` for each file.
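In essence (the helper and array names are those from the description above):
```bash
for test_file in "${tests[@]}"; do
  run_test "$test_file"
done
```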
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The wrong spellings were found, and later fixed, using the following
command:
```
codespell --skip CHANGES.md,.git,go.sum,\
controller/cmd/service-mirror/events_formatting.go,\
controller/cmd/service-mirror/cluster_watcher_test_util.go,\
SECURITY_AUDIT.pdf,.gcp.json.enc,web/app/img/favicon.png \
--ignore-words-list=aks,uint,ans,files\' --check-filenames \
--check-hidden
```
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
The function triggering the test for k8s custom cluster domain was
misnamed, and thus the test wasn't being run.
This also adds some extra error handling to catch this and other
potential issues.
This PR adds a new CLI test to check that installation YAMLs are correctly
generated even on Windows. This is important because of the file path
differences between Windows and Linux; any code using the wrong format
might cause the chart generation commands to fail on Windows.
This creates a separate workflow for both release and integration.
Also, all the existing integration tests are moved into /tests/integration
to separate them from /test/cli, as this test does not fall under the
integration tests category.
## Summary
Change the default behavior of integration tests to be isolated by cluster.
Additionally, make running one or all tests easier than the current process.
These changes are explained more in the [Testing
RFC](https://github.com/linkerd/rfc/blob/master/design/0004-isolated-integration-tests.md)
## Changes
This is a script used only by Linkerd developers, but there are a lot of useful
usage examples and explanations in the `bin/tests --help` output:
```
Run Linkerd integration tests.
Optionally specify one of the following tests: [upgrade helm helm-upgrade uninstall deep external-issuer]
Usage:
tests [--images] [--images-host ssh://linkerd-docker] [--name test-name] [--skip-kind-create] /path/to/linkerd
Examples:
# Run all tests in isolated clusters
tests /path/to/linkerd
# Run single test in isolated clusters
tests --name test-name /path/to/linkerd
# Skip KinD cluster creation and run all tests in default cluster context
tests --skip-kind-create /path/to/linkerd
# Load images from tar files located under the 'image-archives' directory
# Note: This is primarily for CI
tests --images /path/to/linkerd
# Retrieve images from a remote docker instance and then load them into KinD
# Note: This is primarily for CI
tests --images --images-host ssh://linkerd-docker /path/to/linkerd
Available Commands:
--name: the argument to this option is the specific test to run
--skip-kind-create: skip KinD cluster creation step and run tests in an existing cluster.
--images: (Primarily for CI) use 'kind load image-archive' to load the images from local .tar files in the current directory.
--images-host: (Primarily for CI) the argument to this option is used as the remote docker instance from which images are first retrieved (using 'docker save') to be then loaded into KinD. This command requires --images.
```
### Run all tests
Old:
```bash
bin/test-run $PWD/bin/linkerd
```
New:
```bash
bin/tests $PWD/bin/linkerd
```
### Run single test (upgrade for example):
Current:
```bash
. bin/_test-run.sh
init_test_run $PWD/bin/linkerd
upgrade_integration_tests
```
New:
```bash
bin/tests --name upgrade $PWD/bin/linkerd
```
### Run tests in isolated KinD clusters
Current: Not possible without running single tests in newly created clusters
manually
New:
```bash
bin/tests $PWD/bin/linkerd
```
### Run tests in isolated namespaces on an existing cluster
Old:
```bash
bin/test-run $PWD/bin/linkerd
```
New:
```bash
bin/tests --skip-kind-create $PWD/bin/linkerd
```
## CI
`kind_integration` has been updated so that it does not create a KinD cluster as
part of its test setup.
`cloud_integration` passes the `--skip-kind-create` flag so that the tests are
run serially in a non-KinD cluster.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This PR adds multicluster components to the integration tests.
The existing tests have been modified to pass the `--multicluster` flag so that the entire integration test suite runs with multicluster components.
Currently, the upgrade tests do not have multicluster components installed, but this will be done in a follow-up PR.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
The `bin/go-run` script generates a temporary binary, stored in the root
of the repository.
This change moves it into `target/` so that it is included in the
.dockerignore, and so that the repo can be cleaned easily by removing
the `target/` directory.
Using `/bin/env` increases portability for the shell scripts (and often using `/bin/env` is requested by e.g. Mac users). This would also facilitate testing scripts with different Bash versions via the Bash containers, as they have bash in `/usr/local` and not `/bin`. Using `/bin/env`, there is no need to change the script when testing. (I assume the latter was behind c301ea214b (diff-ecec5e3a811f60bc2739019004fa35b0), which would not happen using `/bin/env`.)
Signed-off-by: Joakim Roubert <joakimr@axis.com>
Explicitly shebang `bin/update-go-deps-shas` with `#!/bin/bash` instead
of `#!/bin/sh` because the latter points to `dash` in most Ubuntu-based
distros, and the script's `bin/_tag.sh` dependency requires bash.