* Jaeger injector mutating webhook
Closes #5231. This is based on the `alex/sep-tracing` branch.
This webhook injects the `LINKERD2_PROXY_TRACE_COLLECTOR_SVC_ADDR`,
`LINKERD2_PROXY_TRACE_COLLECTOR_SVC_NAME` and
`LINKERD2_PROXY_TRACE_ATTRIBUTES_PATH` environment vars into the proxy
spec when a pod is created, as well as the podinfo volume and its mount.
If any of these are already present in the pod spec, the webhook exits
without applying a patch.
The `values.yaml` file has been expanded to include config for this
webhook. In particular, one can define a `namespaceSelector` and/or an
`objectSelector` to filter which pods this webhook acts on.
The config entries in `values.yaml` for `collectorSvcAddr` and
`collectorSvcAccount` can be overridden with the
`config.linkerd.io/trace-collector` and
`config.alpha.linkerd.io/trace-collector-service-account` annotations at
the namespace or pod spec level.
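For example, to set the namespace-level override (a sketch; the namespace and collector address here are illustrative):
```bash
kubectl annotate namespace emojivoto \
  config.linkerd.io/trace-collector=collector.linkerd-jaeger:55678
```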
## How to test:
```bash
docker build . -t ghcr.io/linkerd/jaeger-webhook:0.0.1 \
  -f jaeger/proxy-mutator/Dockerfile
k3d image import ghcr.io/linkerd/jaeger-webhook:0.0.1
bin/helm-build
linkerd install
helm install jaeger jaeger/charts/jaeger
linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
kubectl -n emojivoto get po -l app=emoji-svc -oyaml | grep -A1 TRACE
```
## Reinvocation policy
The webhook config resource is configured with `reinvocationPolicy:
IfNeeded` so that if the tracing injector gets triggered before the
proxy injector, it gets triggered a second time after the proxy
injector runs and can act on the injected proxy. By default this won't
be necessary, because webhooks run in alphabetical order (though this is
not documented in the Kubernetes docs), so
`linkerd-proxy-injector-webhook-config` will run before
`linkerd-proxy-mutator-webhook-config`. To test the reinvocation
mechanism, you can rename the former so that the latter gets called
first.
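To confirm the policy is in place (a sketch; this assumes the webhook is the first entry in the config):
```bash
kubectl get mutatingwebhookconfiguration \
  linkerd-proxy-mutator-webhook-config \
  -o jsonpath='{.webhooks[0].reinvocationPolicy}'
# IfNeeded
```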
I versioned the webhook image as `0.0.1`, but we can decide to align
that with linkerd's main version tag.
Using `/usr/bin/env` in the shebangs increases the portability of the shell scripts (and is often requested by e.g. Mac users). It also facilitates testing the scripts with different Bash versions via the Bash Docker containers, as they have bash in `/usr/local` and not `/bin`; with `/usr/bin/env`, there is no need to change the script when testing. (I assume the latter was behind c301ea214b (diff-ecec5e3a811f60bc2739019004fa35b0), which would not have happened using `/usr/bin/env`.)
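Concretely, the shebang form in question:
```bash
#!/usr/bin/env bash
# env resolves bash via $PATH, so this works whether bash lives in
# /bin, /usr/local/bin (as in the Bash Docker images), or elsewhere.
```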
Signed-off-by: Joakim Roubert <joakimr@axis.com>
Handling of whitespace paths was not fully implemented; this patch adds
the missing pieces. Also, bash is now used only where bash-specific
functionality is actually needed.
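A minimal illustration of the class of bug being fixed (not the actual script code):
```bash
src="/tmp/my project/app"   # a path containing whitespace
cp -r $src ./out    # unquoted: word-splits into /tmp/my and project/app
cp -r "$src" ./out  # quoted: the path is passed as a single argument
```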
Signed-off-by: Joakim Roubert <joakimr@axis.com>
Split proxy-init into separate repo
Fixes #2563
The new repo is https://github.com/linkerd/linkerd2-proxy-init, and I
tagged the latest there `v1.0.0`.
Here, I've removed the `/proxy-init` dir and pinned the injected
proxy-init version to `v1.0.0` in the injector code and tests.
`/cni-plugin` depends on proxy-init, so I updated the import paths
there, and could verify CNI is still working (there is some flakiness,
but it's unrelated to this PR).
For consistency, I added a `--init-image-version` flag to `linkerd
inject` along with its corresponding override config annotation.
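For example (a sketch; `v1.0.0` is already the default after this change):
```bash
linkerd inject --init-image-version v1.0.0 emojivoto.yml | kubectl apply -f -
```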
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
When developing on the proxy, it's convenient to build the proxy at a
given tag while the linkerd2 image is building; but because the proxy is
built last, it's difficult to build both at the same tag
simultaneously.
Building the proxy first makes this easier, since the parallel build can
be initiated as soon as it finishes. This shouldn't impact other
development workflows.
* Export RootOptions and BuildFirewallConfiguration so that the cni-plugin can use them.
* Created the cni-plugin based on istio-cni implementation
* Create skeleton files that need to be filled out.
* Create the install scripts and finish up plugin to write iptables
* Added an integration test around install_cni.sh and updated the script to handle the case where it isn't the only plugin. Removed the istio kubernetes.go file in favor of pkg/k8s, with initial usage of this package; found and fixed the typo in the ClusterRole and ClusterRoleBinding; found the docker-build-cni-plugin script.
* Corrected an incorrect name in the docker build file for cni-plugin
* Rename linkerd2-cni to linkerd-cni
* Fix up the Dockerfile, and clean up code and logging statements.
* Update Gopkg.lock after master merge.
* Update test file to remove temporary tag.
* Fixed the command to run during the test while building up the docker run.
* Added attributions to applicable files; in the test file, use a different container for each test scenario and print the docker logs to stdout when there is an error.
* Add the --no-init-container flag to install and inject. This flag omits the initContainer from the output and adds an annotation indicating that the CNI plugin will be used instead (see the sketch after this list).
* Update .travis.yml to build the cni-plugin docker image before running the tests.
* Workaround golint warnings.
* Create a new command to install the linkerd-cni plugin.
* Add the --no-init-container option to linkerd inject
* Use the setup-iptables annotation in the proxy auto-inject webhook to prevent/allow addition of an init container; move cni-plugin tests to the integration-test section of travis
* Gate the cni-plugin tests behind the -integration-tests flag; remove an unnecessary deployment .yaml file.
* Incorporate PR Cleanup suggestions.
* Remove the SetupIPTablesLabel annotation and use config flags plus the presence of the init container to determine whether the cni-plugin writes the iptables rules.
* Fix a logic bug in the cni-plugin code that prevented the iptables rules from being written; address PR comments; make tests pass.
* Update go deps shas
* Changed the single-file install-cni plugin filename to be .conf vs .conflist; incorporated the latest PR comments, including those around spacing with the new renderer.
* Fix an issue with renaming .conf to .conflist when needed.
* Renamed some of the variables to try to make it more clear what is going on.
* Address final PR comments.
* Hide cni flags for the time being.
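Taken together, a CNI-based install would look roughly like this (a sketch; it assumes the new command is invoked as `linkerd install-cni`, and exact names may differ):
```bash
linkerd install-cni | kubectl apply -f -                  # install the linkerd-cni plugin
linkerd install --no-init-container | kubectl apply -f -  # skip the init containers
```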
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
`fast-build` was performing a full Docker build minus the cli, and then
building the cli locally. Separately, `shasum` was called with a `-p`
flag, breaking some builds on Darwin.
Instead, rename `fast-build` to `build-cli-bin`, and restrict it to only
building the cli locally, without any Docker dependencies. Also modify
`bin/linkerd` to call `build-cli-bin` rather than
`docker-build-cli-bin`.
To perform an equivalent of `fast-build`:
`LINKERD_LOCAL_BUILD_CLI=1 bin/docker-build`
`shasum` fix cribbed from #2071.
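The resulting workflow (a sketch; the `bin/` paths follow this repo's conventions):
```bash
bin/build-cli-bin    # compile only the cli locally, no Docker required
bin/linkerd version  # wrapper now builds via build-cli-bin as needed
```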
Relates to #1704
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
PR #978 introduced usage of parallel in docker-build. Unfortunately this
breaks if the system has non-GNU parallel.
Remove usage of parallel until we can do at least one of the following:
- detect version of parallel installed
- make usage of parallel optional and off by default
- confirm this speeds up builds for a majority of use cases
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Enable optional parallel build of docker images
By default, our docker images are built one at a time. For our containers, this is a little slow on my system. Using `parallel` allows for *optional* improvements in speed there (see the sketch below).
Before: 41s
After: 22s
* Move parallel help text to stderr
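The approach was along these lines (a sketch, not the actual script; image names and components are illustrative):
```bash
# GNU parallel runs one `docker build` per component concurrently:
parallel docker build -t "example/{}:dev" -f "{}/Dockerfile" . ::: proxy controller web
```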
Using a vanilla Grafana Docker image as part of `conduit install`
avoided maintaining a conduit-specific Grafana Docker image, but made
packaging dashboard json files cumbersome.
Roll our own Grafana Docker image, that includes conduit-specific
dashboard json files. This significantly decreases the `conduit install`
output size, and enables dashboard integration in the docker-compose
environment.
Fixes #567
Part of #420
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This image isn't used. It references its base image using the `latest` tag, which
is wrong; it should have been using the tag that the base image was built with. It
is likely that the last few iterations of this image that we've published have had
wrong and useless contents.
With that in mind, just remove the image.
Fixes #578.
Signed-off-by: Brian Smith <brian@briansmith.org>
The build scripts assume they are executed from the root of this repo.
This prevents running scripts from other locations, for example,
`cd web && ../bin/go-run .`.
Modify the build scripts to work regardless of current directory.
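A common pattern for this, and roughly the idea applied here (the actual scripts may differ):
```bash
# Resolve paths from the script's own location rather than the caller's cwd:
bindir=$( cd "${0%/*}" && pwd )
rootdir=$( cd "$bindir"/.. && pwd )
cd "$rootdir"
```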
Fixes #301
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The current proxy Dockerfile configuration does not cache dependencies
well, which can increase build times substantially.
By carefully splitting proxy/Dockerfile into several stages that mock
parts of the project, dependencies may be built and cached in Docker
such that changes to the proxy only require building the conduit-proxy
crate.
Furthermore, proxy/Dockerfile now runs the proxy's tests before
producing an artifact, unless the `PROXY_SKIP_TESTS` build-arg is set
and non-empty.
The `PROXY_UNOPTIMIZED` build-arg has been added to support quicker,
debug-friendly builds.
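For example (build-arg names from this change; the tag is illustrative):
```bash
# Quick, debug-friendly proxy image that skips the test run:
docker build -f proxy/Dockerfile . \
  --build-arg PROXY_SKIP_TESTS=1 \
  --build-arg PROXY_UNOPTIMIZED=1 \
  -t conduit-proxy:dev
```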
DOCKER_FORCE_BUILD, combined with symbolic tags, added complexity and
risk of running unintended versions of the code.
This change removes DOCKER_FORCE_BUILD, and sets all Docker tags
programmatically. The decision to pull or build has been moved up the
stack from _docker.sh to the docker-build-* scripts. Workflows that
want to favor docker pulls (like CI) can do so explicitly via
docker-pull, as sketched below.
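A CI-style flow then becomes explicit (a sketch; the scripts' exact arguments are assumptions):
```bash
bin/docker-pull || bin/docker-build   # prefer prebuilt images, build on a miss
```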
Fixes #141
Signed-off-by: Andrew Seigner <andrew@sig.gy>
We’ve built Conduit from the ground up to be the fastest, lightest,
simplest, and most secure service mesh in the world. It features an
incredibly fast and safe data plane written in Rust, a simple yet
powerful control plane written in Go, and a design that’s focused on
performance, security, and usability. Most importantly, Conduit
incorporates the many lessons we’ve learned from over 18 months of
production service mesh experience with Linkerd.
This repository contains a few tightly-related components:
- `proxy` -- an HTTP/2 proxy written in Rust;
- `controller` -- a control plane written in Go with gRPC;
- `web` -- a UI written in React, served by Go.