Fixes issue described in [this comment](https://github.com/linkerd/linkerd2/issues/9310#issuecomment-1247201646)
Rollback #7382
Should be cherry-picked back into 2.12.1
For 2.12.0, #7382 removed the env vars `_l5d_ns` and `_l5d_trustdomain` from the proxy manifest because they were no longer used anywhere. In particular, the jaeger injector had used them when injecting the env var `LINKERD2_PROXY_TAP_SVC_NAME=tap.linkerd-viz.serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)`, but it now uses values.yaml entries instead of these env vars.
The problem is that when upgrading the core control plane (or anything else) to 2.12.0, the 2.11 jaeger extension will still be running and will attempt to inject the old env var into pods, referencing `_l5d_ns` and `_l5d_trustdomain`, which the new proxy container no longer provides. This puts the pod in an error state.
This change restores those env vars. We will be able to remove them for good in 2.13.0, by which point the user will presumably have upgraded the jaeger injector to 2.12.
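For reference, a sketch of the restored variables as they land on the proxy container (values shown assume a default install in the `linkerd` namespace with the `cluster.local` trust domain):
```yaml
env:
- name: _l5d_ns
  value: linkerd         # control plane namespace
- name: _l5d_trustdomain
  value: cluster.local   # identity trust domain
# ...which lets the 2.11 jaeger injector's reference expand:
- name: LINKERD2_PROXY_TAP_SVC_NAME
  value: tap.linkerd-viz.serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
```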
Replication steps:
```bash
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.11.4 sh
$ linkerd install | k apply -f -
$ linkerd jaeger install | k apply -f -
$ linkerd check
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.12.0 sh
$ linkerd upgrade --crds | k apply -f -
$ linkerd upgrade | k apply -f -
$ k get po -n linkerd
NAME                                      READY   STATUS               RESTARTS     AGE
linkerd-identity-58544dfd8-jbgkb          2/2     Running              0            2m19s
linkerd-destination-764bf6785b-v8cj6      4/4     Running              0            2m19s
linkerd-proxy-injector-6d4b8c9689-zvxv2   2/2     Running              0            2m19s
linkerd-identity-55bfbf9cd4-4xk9g         0/2     CrashLoopBackOff     1 (5s ago)   32s
linkerd-proxy-injector-5b67589678-mtklx   0/2     CrashLoopBackOff     1 (5s ago)   32s
linkerd-destination-ff9b5f67b-jw8w5       0/4     PostStartHookError   0 (8s ago)   32s
```
Closes #9312
#9118 introduced the `linkerd.io/trust-root-sha256` annotation which is
automatically added to control plane components.
This change ensures that all injected workloads also receive this annotation.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
In 2.11.x, `proxyInit.runAsRoot` was true by default, which caused
proxy-init's `runAsUser` field to be 0. `proxyInit.runAsRoot` now
defaults to false in 2.12.0, but `runAsUser` still isn't configurable;
when following the upgrade instructions here, helm doesn't change
`runAsUser`, so it conflicts with the new `runAsRoot=false`, and the
pods error with this message:
Error: container's runAsUser breaks non-root policy (pod: "linkerd-identity-bc649c5f9-ckqvg_linkerd(fb3416d2-c723-4664-acf1-80a64a734561)", container: linkerd-init)
This PR adds a new default for `runAsUser` to avoid this issue.
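A minimal sketch of the rendered proxy-init securityContext after this change (65534, the conventional "nobody" user, is shown as an assumed example UID, not necessarily the chart's actual default):
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 65534   # assumed example UID; a non-zero default avoids the conflict
```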
* Bump proxy-init to v2.0.0
New release of proxy-init.
Updated:
* Helm values to use v2.0.0 of proxy-init
* Helm docs
* Tests
Note: go dependencies have not been updated since the new version will
break API compatibility with older versions (source files have been
moved, see issue for more details).
Closes #9164
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Some hosts may not have 'nft' modules available. Currently, proxy-init
defaults to using 'iptables-nft'; if the host does not have support for
nft modules, the init container will crash, blocking all injected
workloads from starting up.
This change defaults the 'iptablesMode' value to 'legacy'.
* Update linkerd-control-plane/values file default
* Update proxy-init partial to default to 'legacy' when no mode is
specified
* Change expected values in 'pkg/charts/linkerd2/values_test.go' and in
'cli/cmd/install_test'
* Update golden files
Fixes #9053
Signed-off-by: Matei David <matei@buoyant.io>
This change introduces a new value to be used at install (or upgrade)
time. The value (`proxyInit.iptablesMode=nft|legacy`) is responsible
for starting the proxy-init container in nft or legacy mode.
By default, the init container will use iptables-nft. When the mode is set to
`legacy`, it will instead use iptables-legacy. Most modern Linux distributions
support both, but a subset (such as RHEL-based families) only support
iptables-nft and nf_tables.
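A sketch of selecting the mode at install time via a values file (the flag-form equivalent would be `--set proxyInit.iptablesMode=legacy`):
```yaml
proxyInit:
  iptablesMode: legacy   # or "nft" on hosts with nf_tables support
```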
Signed-off-by: Matei David <matei@buoyant.io>
This change bumps the proxy-init version from v1.6.1 to the latest
version, v1.6.2. As part of the new release, proxy-init now adds the
`NET_ADMIN` and `NET_RAW` capabilities to xtables-nft-multi so that
nftables mode can be used without requiring root privileges.
* Bump go.mod
* Bump version in helm values
* Bump version in misc files
* Bump version in code
Signed-off-by: Matei David <matei@buoyant.io>
Release v1.6.1 of proxy-init adds support for iptables-nft. This change
bumps up the proxy-init version used in code, chart values, and golden
files.
* Update go.mod dep
* Update CNI plugin with new opts
* Update proxy-init ref in golden files and chart values
* Update policy controller CI workflow
Signed-off-by: Matei David <matei@buoyant.io>
The injector configures the proxy with a set of known inbound ports
which are used (by the proxy) to discover inbound server configuration.
The list of ports is derived from the pod's container ports; container
ports may be optional and thus not present. The proxy supports dynamic
discovery of additional ports at runtime but since they are lazy,
additional ports may be dropped or updated long after pod start-up.
To ensure HTTP probes are handled correctly, this change introduces new
functionality to configure the proxy's list of inbound ports to include
any ports targeted by HTTP healthcheck probes, even if those ports are
not present in the `containerPorts` configuration.
This change also introduces additional liveness (or readiness) probes to
the current injector webhook test fixtures in order to assert that
injected pods will always have their healthcheck target ports included
in the proxy's configuration.
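For illustration, a minimal sketch of the case this covers (container name, image, and port are hypothetical): the probed port is never declared as a containerPort, yet it still lands in the proxy's inbound-ports configuration.
```yaml
# Port 8080 is probed over HTTP but not declared in containerPorts;
# after this change it is still included in the proxy's inbound ports
# (e.g. LINKERD2_PROXY_INBOUND_PORTS=8080).
containers:
- name: app               # hypothetical container
  image: example/app:v1   # hypothetical image
  livenessProbe:
    httpGet:
      path: /live
      port: 8080
```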
Closes #8638
Signed-off-by: Matei David <matei@buoyant.io>
Closes #7980
A pod is considered `Burstable` instead of `Guaranteed` if at least one container in the pod specifies CPU or memory requests and limits that do not match each other.
The `linkerd-init` container falls into this category meaning that even if all other containers in a Pod have matching CPU/memory limits/requests, the Pod will not be considered `Guaranteed` because of `linkerd-init`'s hardcoded values.
This changes the values to match, meaning that `linkerd-init` will not be the culprit container if a Pod is not considered `Guaranteed`. Raising the requests—instead of lowering the limits—felt like the safer option here. This means that the container will now always be guaranteed these amounts _and_ will never use more.
[Docs](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed) explain this in more detail.
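Illustratively (example quantities, not the exact chart values), matching requests and limits keep `linkerd-init` from disqualifying the pod from the `Guaranteed` class:
```yaml
resources:
  limits:
    cpu: 100m
    memory: 20Mi
  requests:
    cpu: 100m      # equal to the limit
    memory: 20Mi   # equal to the limit
```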
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
If the proxy doesn't become ready, `linkerd-await` never succeeds
and the proxy's logs don't become accessible.
This change adds a default 2 minute timeout so that pod startup
continues despite the proxy failing to become ready. `linkerd-await`
fails and `kubectl` will report that a post start hook failed.
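A sketch of the resulting hook, assuming linkerd-await's `--timeout` flag (the exact flag spelling follows linkerd-await's own usage docs):
```yaml
postStart:
  exec:
    command:
    - /usr/lib/linkerd/linkerd-await
    - --timeout=2m   # give up after 2 minutes so pod startup can continue
```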
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Remove the `proxy.disableIdentity` config
Fixes #7724
Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now needs to hit the cluster to retrieve the
trust root.
The Linkerd proxy-init container is currently forced to run as root.
This removes the hardcoded `runAsNonRoot: false` and `runAsUser: 0`, so
the container instead inherits the user ID from the proxy-init image,
which may allow it to run as non-root.
Fixes #5505
Signed-off-by: Schlotter, Christian <christian.schlotter@daimler.com>
We've previously handled inbound connections on 443 as opaque, meaning
that we don't do any TLS detection.
This prevents the proxy from reporting meaningful metadata on these TLS
connections, especially the connection's SNI value.
This change also makes the core control plane's configuration for
skipping outbound connections on 443 much simpler (and documented!).
Updates linkerd2-proxy-init version to v1.4.0
The major change is the removal of the "redirect-non-loopback-traffic" rule: previously, packets on `lo` with a destination other than 127.0.0.1 that originated from the proxy process would be sent to the inbound proxy port (when an application tries to talk to itself). This is no longer the case.
Signed-off-by: Matei David <matei@buoyant.io>
#6719 changed the proxy injector so that it adds the `config.linkerd.io/opaque-ports` annotation to all pods and services if they or their namespace do not already contain the annotation. The value used is the default list of opaque ports—which is `25,443,587,3306,4444,5432,6379,9300,11211` unless otherwise specified by the user during installation.
Closes #6729
The main issue with this is that if a service exposes a service port `9090` that targets `3306`, the service _should_ have `9090` set as opaque since it targets a default opaque port, but it does not. This change ensures that services with this situation have `9090` set as opaque.
Additionally, services and pods do not need an annotation with the entire default opaque ports list if they don't expose those ports in the first place. This change filters out ports from the default list if the service or pod does not expose them.
### tests
I've added some unit tests that demonstrate the change in behavior explained in the original issue #6729.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The proxy injector now adds the `config.linkerd.io/default-inbound-policy` annotation to all injected pods.
Closes #6720.
If the pod has the annotation before injection then that value is used. If the pod does not have the annotation but the namespace does, then it inherits that. If both the pod and the namespace do not have the annotation, then it defaults to `.Values.policyController.defaultAllowPolicy`.
Upon injecting the sidecar container into the pod, this annotation value is used to set the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` environment variable. Additionally, `LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS` is also set to the value of `.Values.clusterNetworks`.
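A sketch of the result on an injected pod, assuming the chart defaults (`all-unauthenticated` as the default allow policy; the cluster networks shown are the usual private ranges):
```yaml
metadata:
  annotations:
    config.linkerd.io/default-inbound-policy: all-unauthenticated
# ...rendered into the proxy container as:
env:
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
  value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
  value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
```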
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
In order to discover how a workload is configured without knowing the global defaults, the `opaque-ports` annotation is now added by the proxy injector to workloads, regardless of the list being the default or user-specified.
Closes #6689
#### core
Because core control plane components do not go through the proxy injector, the annotation is added to the `destination`, `identity`, and `proxy-injector` templates.
The `linkerd-destination` and `linkerd-proxy-injector` deployments both now just have the `opaque-ports: "8443"` annotation. The `linkerd-identity` deployment and service don't need this annotation since they don't expose anything in the default list.
#### non-core
All other resources go through the proxy injector; it decides whether or not services or pods (the two resources that it can add annotations to) should get the default list.
Workloads get the default list of opaque ports added if they and their namespace do not have the annotation already (see the sketch after this list). So this boils down to:
1. If the workload already has the annotation, no patch is created
2. If the namespace has the annotation but the workload does not, a patch is generated
3. If the workload and namespace do not have the annotation, a patch is generated
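For case 3, a sketch of the annotation stamped onto the workload, assuming the default list from this era's charts (unless overridden at install time):
```yaml
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "25,443,587,3306,4444,5432,6379,9300,11211"
```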
#### tests
A unit test has been added and I performed the following manual tests:
1. Injected a pod with the annotation: a patch is generated but there is no change to opaque ports
2. Injected a pod with the namespace annotation: a patch is generated and opaque ports are copied down to the pod
3. Injected a pod with no annotation on it or the namespace: a patch is generated and the default opaque ports are added
4. Created a pod (not injected): a patch is generated (without the proxy) that adds the annotation (this holds true whether the annotation comes from the pod or from the namespace)
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* injector: cleanup env variables in `_proxy.tpl`
This PR updates the `_proxy.tpl` file to remove the usage of the
`_l5d_ns` and `_l5d_trustdomain` env variables, which can be rendered
directly instead. This also moves the reference variables to the top for
simplicity.
The now-unused variables themselves are kept for the time being to
prevent race conditions during upgrades; they will be removed in a
future release.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Variable references are only expanded to previously defined
environment variables, as per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvar-v1-core,
which means that for `LINKERD2_PROXY_POLICY_WORKLOAD` to work correctly,
`_pod_ns` and `_pod_name` must be defined before they are used.
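A minimal ordering sketch (standard Downward API usage; the exact rendered manifest follows the chart):
```yaml
env:
- name: _pod_ns                         # defined first...
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: _pod_name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: LINKERD2_PROXY_POLICY_WORKLOAD  # ...so this reference can expand
  value: $(_pod_ns):$(_pod_name)
```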
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #6688
This PR adds the new `LINKERD2_PROXY_POLICY_SVC_ADDR` and
`LINKERD2_PROXY_POLICY_SVC_NAME` env variables which are used to specify
the address and the identity (which is `linkerd-destination`) of the
policy server respectively.
This also adds the new `LINKERD2_PROXY_POLICY_WORKLOAD` in the format
of `$ns:$pod` which is used to specify the identity of the workload itself.
A new `_pod_name` env variable has been added to get the name of the pod
through the Downward API.
These variables are only set if the `proxy.component` is not
`linkerd-identity`.
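As a sketch, assuming the default `linkerd` namespace and `cluster.local` trust domain (the exact address, port, and names come from the chart):
```yaml
env:
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
  value: linkerd-policy.linkerd.svc.cluster.local:8090   # assumed address/port
- name: LINKERD2_PROXY_POLICY_SVC_NAME
  value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
```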
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Set `LINKERD2_PROXY_INBOUND_PORTS` during injection
Fixes #6267
The `LINKERD2_PROXY_INBOUND_PORTS` env var will be set during injection,
containing a comma-separated list of the ports in the non-proxy containers in
the pod. For the identity, destination and injector pods, the var is set
manually in their Helm templates.
Since the proxy-injector isn't reinvoked, containers injected by another mutating
webhook after the injector has run won't be detected. As an escape hatch, the
`config.linkerd.io/pod-inbound-ports` annotation has been added to allow
explicit overrides.
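A hedged sketch of the escape hatch (the ports are hypothetical):
```yaml
metadata:
  annotations:
    # Overrides the detected list for containers the injector can't see.
    config.linkerd.io/pod-inbound-ports: "8080,9090"
```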
Other changes:
- Removed `controller/proxy-injector/fake/data/inject-sidecar-container-spec.yaml`, which is no longer used.
- Fixed bad indentation in some fixture files under `controller/proxy-injector/fake/data`.
Default Linkerd skip and opaque port configuration: adds the default ports that were missing relative to the docs.
Addresses "Add Redis to default list of Opaque ports" (#6132).
Once merged, the default install values will match the recommendations in Linkerd's TCP ports guide.
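Assuming the values keys of this era, the resulting default would look roughly like (this matches the default list referenced elsewhere in these notes; the exact key placement may differ by chart version):
```yaml
proxy:
  opaquePorts: "25,443,587,3306,4444,5432,6379,9300,11211"
```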
Fixes #6132
Signed-off-by: jasonmorgan <jmorgan@f9vs.com>
Co-authored-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>
List of changes:
- Include more output in the `simulate` mode (thanks @liuerfire!)
- Log to `stdout` instead of `stderr` (thanks @mo4islona!)
Non user-facing changes:
- Added `dependabot.yml` to receive automated dependencies upgrades PRs (both for go and github actions). As a result, also upgraded a bunch of dependencies.
* Skip configuring firewall if rules exist
This change fixes an issue where the `proxy-init` will fail if
`PROXY_INIT_*` chains already exist in the pod's iptables. This then
causes the pod to never start because proxy-init never finishes running
with a non-zero exit code.
In this change, we capture the output of the `iptables-save` command and
check whether it contains the `PROXY_INIT_*` chains. If it does, we skip
configuring the firewall and log a warning stating that the chains
already exist.
Fixes #5786
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
### What
This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.
---
To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.
Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.
Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:
```yaml
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
```
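For reference, opting a workload in (or out) is just an annotation on the pod template; a sketch:
```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-await: "enabled"   # "disabled" opts out
```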
---
### Update after draft
There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).
First, we decided that this feature should be enabled by default. The reason is that, more often than not, it will prevent start-up ordering issues without having any negative effects on the application. Additionally, it will be part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine whether it should be disabled by default because it causes more issues than it prevents.
Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.
### Testing
To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.
---
`sleep.sh`:
```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```
`Dockerfile-proxy`:
```dockerfile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```
---
```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```
Annotate the `emoji` deployment so that it's the only workload that waits for its proxy to be ready, and inject it:
```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```
You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:
```bash
$ kubectl get -n emojivoto pods
NAME                        READY   STATUS            RESTARTS   AGE
voting-ff4c54b8d-sjlnz      1/2     Running           0          9s
emoji-f985459b4-7mkzt       0/2     PodInitializing   0          9s
web-5f86686c4d-djzrz        1/2     Running           0          9s
vote-bot-6d7677bb68-mv452   1/2     Running           0          9s
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Closes #5977
## What
This change adds support for namespace configuration annotation inheritance for pods. Any annotations (e.g. `config.linkerd.io/skip-outbound-ports` or `config.linkerd.io/proxy-await`) that are applied to a namespace will now also be applied by the _proxy-injector_ to pods running in that namespace.
* Pods do not inherit annotations from their namespaces; the exception to this is `opaque-ports`, introduced in #5941. This expands on that work by allowing all config annotations to be inherited.
* The main advantage here is that instead of applying annotations on a workload-by-workload basis, we can just apply them against the namespace and they will be mirrored on all pods within the namespace.
* Through this change, the controller can also check the proxy's configuration directly from the pod's metadata rather than from env variables.
## How
Change is pretty straightforward. We want to make sure that before we apply a JSON patch we first copy all of the namespace annotations to the pod. The logic that was in place takes care of applying the patch.
* One obvious constraint is that we only want valid configuration annotations to be applied. To be "valid", a configuration annotation has to exist and has to be prefixed with `config.linkerd.io`; the easiest way to check this is to go through all of the available proxy configuration options and see whether any of them are included in the namespace's annotations (done in `GetNsConfigKeys()`, where we fetch all annotation keys from the namespace).
* A consideration I had with this change is whether to add `opaque-ports` to the set of config keys; opaque ports is a bit different, though, since it can be applied to a service as well as a pod, and through this change we only want to apply config annotations to pods. I chose to keep the two separate.
* Added a unit test that checks whether a pod inherits config annotations from its namespace; this also includes an invalid annotation, which doesn't show up in the "expected" patch, to test that we validate configuration correctly.
### Tests
---
I injected emojivoto and added an annotation to its namespace:
```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/inject: enabled
```
The deployment specs do not have any additional annotations as part of the pod template metadata. I first tested if the above annotations would be inherited with the current edge release (I expected opaque ports to be).
**Before changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    linkerd.io/created-by: linkerd/proxy-injector edge-21.4.1
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: edge-21.4.1
  creationTimestamp: "2021-04-08T14:33:10Z"
  generateName: emoji-696d9d8f95-
  labels:
    app: emoji-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: emoji
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 696d9d8f95
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is in there, skip outbound isn't; the initContainer nevertheless gets the right argument, since that is already applied from the namespace by the proxy injector).
**After the changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/created-by: linkerd/proxy-injector dev-a7bb62fd-matei
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: dev-a7bb62fd-matei
  creationTimestamp: "2021-04-08T14:42:06Z"
  generateName: web-5f86686c4d-
  labels:
    app: web-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: web
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 5f86686c4d
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is there, and so are skip outbound and the proxy log level; the correct options are still passed to the initContainers).
*Edit*: made a small change; had a look at `GetNsConfigKeys()` and thought it'd be better to keep the slice of keys as a fixed-length array, since we know there will be at most `len(ProxyAnnotations)` keys at any point. Not sure such a big size is warranted, but we can avoid calling append for every element.
Signed-off-by: Matei David <matei@buoyant.io>
### What
When a namespace has the opaque ports annotation, pods and services should
inherit it if they do not have one themselves. Currently, services do this but
pods do not. This can lead to surprising behavior where services are correctly
marked as opaque, but pods are not.
This changes the proxy-injector so that it now passes down the opaque ports
annotation to pods from their namespace if they do not have their own annotation
set. Closes #5736.
### How
The proxy-injector webhook receives admission requests for pods and services.
Regardless of the resource kind, it now checks if the resource should inherit
the opaque ports annotation from its namespace. It should inherit it if the
namespace has the annotation but the resource does not.
If the resource should inherit the annotation, the webhook creates an annotation
patch which is only responsible for adding the opaque ports annotation.
After generating the annotation patch, it checks if the resource is injectable.
From here there are a few scenarios:
1. If no annotation patch was created and the resource is not injectable, then
admit the request with no changes. Examples of this are services with no OP
annotation and inject-disabled pods with no OP annotation.
2. If the resource is a pod and it is injectable, create a patch that includes
the proxy and proxy-init containers—as well as any other annotations and
labels.
3. The above two scenarios lead to a patch being generated at this point, so no
matter the resource the patch is returned.
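For the inheritance case, the generated patch is just an annotation add; a hedged sketch (JSON Patch expressed in YAML, with a hypothetical port value):
```yaml
- op: add
  path: /metadata/annotations/config.linkerd.io~1opaque-ports   # "/" in the key is escaped as "~1"
  value: "3306"
```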
### UI changes
Resources are now reported as either "injected", "skipped", or "annotated".
The first pass at this PR worked around the fact that injection reports consider
services and namespaces injectable. This is not accurate because they don't have
pod templates that could be injected; they can however be annotated.
To fix this, an injection report now considers resources "annotatable" and uses
this to clean up some logic in the `inject` command, as well as avoid a more
complex proxy-injector webhook.
What's cool about this is it fixes some `inject` command output that would label
resources as "injected" when they were not even mutated. For example, namespaces
were always reported as being injected even if annotations were not added. Now,
it will properly report that a namespace has been "annotated" or "skipped".
### Tests
For testing, unit tests and integration tests have been added. Manual testing
can be done by installing linkerd with `debug` controller log levels, and
tailing the proxy-injector's app container when creating pods or services.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This change removes the default ignored inbound and outbound ports from the
proxy init configuration.
These ports have been moved to the `proxy.opaquePorts` configuration so that,
by default, installations will proxy all traffic on these ports opaquely.
Closes #5571. Closes #5595.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This adds namespace inheritance of the opaque ports annotation to services.
This means that the proxy injector now watches services creation in a cluster.
When a new service is created, the webhook receives an admission request for
that service and determines whether a patch needs to be created.
A patch is created if the service does not have the annotation, but the
namespace does. This means the service inherits the annotation from the
namespace.
A patch is not created if the service and the namespace do not have the
annotation, or the service has the annotation. In the case of the service having
the annotation, we don't even need to check the namespace since it would not
inherit it anyway.
If a namespace has the annotation value changed, this will not be reflected on
the service. The service would need to be recreated so that it goes through
another admission request.
None of this applies to the `inject` command which still skips service
injection. We rely on being able to check the namespace annotations, and this is
only possible in the proxy injector webhook when we can query the k8s API.
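A minimal sketch of the inheritance trigger (the namespace name and port are hypothetical): a service created without the annotation in this namespace gets `5432` copied down at admission time.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo   # hypothetical namespace
  annotations:
    config.linkerd.io/opaque-ports: "5432"
```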
Closes #5737
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.
This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.
Fixes #5755; follow-up to #5750 and #5751.
- Unifies the Go version across Docker and CI to be 1.14.15;
- Updates the GitHub Actions base image from ubuntu-18.04 to ubuntu-20.04; and
- Updates the runtime base image from debian:buster-20201117-slim to debian:buster-20210208-slim.
Pods with unusual DNS configurations may not be able to resolve the
control plane's domain names. We can avoid search path shenanigans by
adding a trailing dot to these names.
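For example (illustrative; the exact variable name and address follow the chart), a root-anchored name bypasses the resolver's search path:
```yaml
env:
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
  value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086   # note the trailing dot
```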
## What this changes
This adds a tap-injector component to the `linkerd-viz` extension which is
responsible for adding the tap service name environment variable to the Linkerd
proxy container.
If a pod does not have a Linkerd proxy, no action is taken. If tap is disabled
via annotation on the pod or the namespace, no action is taken.
This also removes the environment variable for explicitly disabling tap. Tap
status for a proxy is now determined only by the presence or absence of the tap
service name environment variable.
Closes #5326
## How it changes
### tap-injector
The tap-injector component determines if `LINKERD2_PROXY_TAP_SVC_NAME` should be
added to a pod's Linkerd proxy container environment. If the pod satisfies the
following, it is added:
- The pod has a Linkerd proxy container
- The pod has not already been mutated
- Tap is not disabled via annotation on the pod or the pod's namespace
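When those conditions hold, the injected variable looks like this (assuming a default linkerd-viz install with the `cluster.local` trust domain):
```yaml
env:
- name: LINKERD2_PROXY_TAP_SVC_NAME
  value: tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
```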
### LINKERD2_PROXY_TAP_DISABLED
Now that tap is an extension of Linkerd and not a core component, it no longer
makes sense to explicitly enable or disable tap through this Linkerd proxy
environment variable. The status of tap is now determined only by whether the
tap-injector adds the `LINKERD2_PROXY_TAP_SVC_NAME` environment variable.
### controller image
The tap-injector has been added to the controller image's several startup
commands, which determine what the controller will do in the cluster.
As a follow-up, I think splitting out the `tap` and `tap-injector` commands from
the controller image into a linkerd-viz image (or something like that) makes
sense.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This upgrades both the proxy-init image itself, and the go dependency on
proxy-init as a library, which fixes CNI in k3s and on any host using
binaries from BusyBox, where `nsenter` has an
issue parsing arguments (see rancher/k3s#1434).
The proxy no longer honors DESTINATION_GET variables, as profile lookups
inform when endpoint resolution is performed. Also, there is no longer
a router capacity limit.
It appears that Amazon can use the `100.64.0.0/10` network, which is
technically private, for a cluster's Pod network.
Wikipedia describes the network as:
> Shared address space for communications between a service provider
> and its subscribers when using a carrier-grade NAT.
In order to avoid requiring additional configuration on EKS clusters, we
should permit discovery for this network by default.
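A hedged sketch of the resulting default, assuming the `clusterNetworks` value of this era (the key name may have differed at the time):
```yaml
clusterNetworks: "10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16"
```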
The proxy has a default, hardcoded set of ports on which it doesn't do
protocol detection (25, 587, and 3306, all of which are server-first
protocols). In a recent change, this default set was removed from
the outbound proxy, since there was no way to configure it to anything
other than the default set. I had thought that there was a default set
applied to proxy-init, but this appears to not be the case.
This change adds these ports to the default Helm values to restore the
prior behavior.
I have also elected to include 443 in this set, as it is generally our
recommendation to avoid proxying HTTPS traffic, since the proxy provides
very little value on these connections today.
Additionally, the memcached port 11211 is skipped by default, as clients
do not issue any sort of preamble that is immediately detectable.
These defaults may change in the future, but seem like good choices for
the 2.9 release.
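Roughly, the restored defaults would look like this (a sketch using the proxy-init ignore-ports keys; 2.9-era charts nested these under `global`, and the exact key names and split between inbound and outbound may differ):
```yaml
proxyInit:
  ignoreInboundPorts: "25,443,587,3306,11211"    # skip protocol detection inbound
  ignoreOutboundPorts: "25,443,587,3306,11211"   # skip proxying outbound
```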
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, as part of
our move to remove the protobuf.
This does not touch any of the flags or install-related code.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to `cloud_integration.yml` and `release.yml`, there was a change to the upgrade-stable integration test so that we run `linkerd upgrade --addon-overwrite` to reset the addon settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in linkerd-config-addons. This will need to be mentioned in the 2.9 upgrade notes.
Also, the egress integration test has a debug container that is now pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search-and-replace (`s/gcr.io\/linkerd-io/ghcr.io\/linkerd/`).