Ensure consistent JSON logging for proxy-injector container
When JSON logging is enabled for the proxy-injector (`controllerLogFormat: json`), some log messages adhere to the JSON format while others do not. This inconsistency makes the logs difficult to parse, especially in automated workflows. For example:
```
{"level":"info","msg":"received admission review request \"83a0ce4d-ab81-42c9-abe4-e0ade0f926e2\"","time":"2024-10-10T21:06:18Z"}
time="2024-10-10T21:06:18Z" level=info msg="received pod/mypod"
```
Modified the logging implementation in `controller/proxy-injector/webhook.go` so that all log messages consistently follow the JSON format. This was achieved by removing the separate logrus logger instance that was being created in that file and replacing it with the global logger instance, ensuring all logs respect the `controllerLogFormat` configuration.
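As a rough sketch of the pattern (not the actual diff; names and messages are illustrative), the difference is between creating a fresh logrus instance, which ignores the globally configured formatter, and using the package-level logger that `controllerLogFormat` configures:
```go
package main

import (
	log "github.com/sirupsen/logrus"
)

// Before (illustrative): a fresh logger ignores the globally configured
// JSON formatter, so its output stays in the default text format.
//   logger := logrus.New()
//   logger.Infof("received %s/%s", kind, name)

func configureLogging(format string) {
	if format == "json" {
		// The global logger is what controllerLogFormat configures.
		log.SetFormatter(&log.JSONFormatter{})
	}
}

func handleAdmission(kind, name string) {
	// After: use the package-level logger so every message honors the
	// formatter set above.
	log.Infof("received %s/%s", kind, name)
}

func main() {
	configureLogging("json")
	handleAdmission("pod", "mypod")
}
```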
Reproduced the issue by enabling JSON logging and observing mixed-format logs when installing the emojivoto sample application. Applied the changes and verified that all logs consistently use the JSON format.
Ran the linkerd check command and confirmed there are no additional warnings or issues.
Tested various scenarios, including pods with and without the injection annotation, to ensure consistent logging behavior.
Fixes #13168
Signed-off-by: Micah See msee@usc.edu
The default value of 30s is enough for the Linux TCP stack to complete about 7 packet retransmissions; after that, the RTO (retransmission timeout) grows rapidly and it does not make much sense to wait any longer.
Setting TCP_USER_TIMEOUT on connections between linkerd-proxy and the outside world is enough, since connections to containers in the same pod are more stable and reliable.
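The proxy itself is written in Rust; purely as an illustration of what the socket option does, here is a minimal, Linux-only Go sketch (helper name and target address are made up) that applies `TCP_USER_TIMEOUT` to an outbound connection:
```go
package main

import (
	"fmt"
	"net"
	"time"

	"golang.org/x/sys/unix"
)

// setTCPUserTimeout applies TCP_USER_TIMEOUT to an established TCP
// connection: if transmitted data stays unacknowledged for longer than
// the timeout, the kernel drops the connection instead of retrying
// indefinitely. Linux-only.
func setTCPUserTimeout(conn *net.TCPConn, d time.Duration) error {
	raw, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	var sockErr error
	err = raw.Control(func(fd uintptr) {
		sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP,
			unix.TCP_USER_TIMEOUT, int(d.Milliseconds()))
	})
	if err != nil {
		return err
	}
	return sockErr
}

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// 30s mirrors the default discussed above.
	if err := setTCPUserTimeout(conn.(*net.TCPConn), 30*time.Second); err != nil {
		fmt.Println("setsockopt failed:", err)
	}
}
```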
Fixes #13023
Signed-off-by: UsingCoding <extendedmoment@outlook.com>
## Problem
When the IPv6 stack in Linux is disabled, the proxy will crash at startup.
## Repro
In a Linux machine, disable IPv6 networking through the `net.ipv6.conf.*` sysctl kernel tunables, and restart the system:
- In /etc/sysctl.conf add:
```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```
- In /etc/default/grub set:
```
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
- Don't forget to update grub before rebooting:
```
sudo update-grub
```
In a default k3d cluster, install Linkerd. You should see the following error in any proxy log:
```
thread 'main' panicked at /__w/linkerd2-proxy/linkerd2-proxy/linkerd/app/src/lib.rs:245:14:
Failed to bind inbound listener: Os { code: 97, kind: Uncategorized, message: "Address family not supported by protocol" }
```
## Cause
Even if a k8s cluster doesn't support IPv6, we were counting on the nodes having an IPv6 stack, which allowed us to bind the inbound proxy to [::] (although not the outbound proxy to [::1], as seen in GKE). This was the case in the major cloud providers we tested, but it turns out some folks run nodes with IPv6 disabled, so we have to cater to that case as well.
## Fix
This change undoes some of the changes from 7cbe2f5ca6 (for the proxy config), 7cbe2f5ca6 (for the policy controller) and 66034099d9 (for linkerd-cni), binding back to 0.0.0.0 unless `disableIPv6` is false.
The Linkerd proxy suppresses all logging of HTTP headers at debug level or higher unless the `proxy.logHTTPHeaders` Helm value is set to `insecure`. However, even when this value is not set, HTTP headers can still be logged if the log level is set to trace.
We update the log directive used to disable logging of HTTP headers from `linkerd_proxy_http::client[{headers}]=off` to the more general `[{headers}]=off,[{request}]=off`. This disables any logging which includes a `headers` or `request` field, which also disables the logging of headers at trace level. As before, these logs can be re-enabled by setting `proxy.logHTTPHeaders=insecure`.
Signed-off-by: Alex Leong <alex@buoyant.io>
Default values for `linkerd-init` (the resources allocated) are not always
the right fit. We offer default values to ensure proxy-init does not get
in the way of QoS Guaranteed pods (`linkerd-init` resource limits and
requests cannot be configured in any other way).
Instead of using default values that can be overridden, we can re-use
the proxy's configuration values. For the pod to be QoS Guaranteed, the
values for the proxy have to be set anyway. If we re-use the same
values for proxy-init, we ensure we'll always request the same amount
of CPU and memory as needed.
* `linkerd-init` now defaults to the proxy's values
* when the proxy has an annotation configuration for resource requests,
it also impacts `linkerd-init`
* Helm chart and docs have been updated to reflect the missing values.
* tests now no longer use `ProxyInit.Resources`
UPGRADE NOTE:
- Deprecates `proxyInit.resources` field in the Helm values.
- It will be a no-op if specified (no hard failures)
Closes #11320
---------
Signed-off-by: Matei David <matei@buoyant.io>
The proxy may expose a /shutdown HTTP endpoint on its admin server that can be used by `linkerd-await --shutdown` to trigger proxy shutdown after a process completes. If an application has an SSRF vulnerability, however, an attacker could use this endpoint to trigger proxy shutdown, causing a denial of service. This admin endpoint is only useful with linkerd-await, and this functionality is supplanted by Kubernetes Native Sidecars.
To address this potential issue, this change disables the proxy's shutdown admin endpoint by default. A Helm value is introduced to support enabling the endpoint cluster-wide, and the `config.linkerd.io/proxy-admin-shutdown: enabled` annotation may be set to enable it on an individual workload.
Signed-off-by: Alex Leong <alex@buoyant.io>
Follow-up to linkerd/linkerd2-proxy#2872, where we swapped the
trust-dns-resolver dependency for hickory-resolver. This change
updates the proxy's default log level setting to account for
that.
These releases ensure that when IPv6 is enabled, the series of ip6tables commands succeeds. If they fail, the proxy-init/linkerd-cni containers now fail as well instead of ignoring the errors.
See linkerd/linkerd2-proxy-init#388
Fixes #12620
When the Linkerd proxy log level is set to `debug` or higher, the proxy logs HTTP headers which may contain sensitive information.
While we want to avoid logging sensitive data by default, logging of HTTP headers can be a helpful debugging tool. Therefore, we add a `proxy.logHTTPHeaders` Helm value which prevents the logging of HTTP headers when set to false. It defaults to false so that headers cannot be logged unless users opt in.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #11773
Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`, in which case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
---------
Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
This changes the default of the Helm value `disableIPv6` to `true`.
Additionally, the proxy's `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS` env var
is now set according to that value.
This addresses an incompatibility with GKE introduced in last week's
edge (`edge-24.5.1`): in default IPv4-only nodes in GKE clusters the
proxy can't bind to `::1`, so we have to make IPv6 opt-in to avoid
surprises.
As part of the ongoing effort to support IPv6/dual-stack networks, this change
enables the proxy to properly forward IPv6 connections:
- Adds the new `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS` environment variable when
injecting the proxy. This is supported as of proxy v2.228.0 which was just
pulled into the linkerd2 repo in #2d5085b56e465ef56ed4a178dfd766a3e16a631d.
This adds the IPv6 loopback address (`[::1]`) to the IPv4 one (`127.0.0.1`)
so the proxy can forward outbound connections received via IPv6. The injector
will still inject `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR` to support the rare
case where the `proxy.image.version` value is overridden with an older
version. The new proxy still considers that variable, but it's superseded by
the new one. The old variable is considered deprecated and should be removed
in the future.
- The values for `LINKERD2_PROXY_CONTROL_LISTEN_ADDR`,
`LINKERD2_PROXY_ADMIN_LISTEN_ADDR` and `LINKERD2_PROXY_INBOUND_LISTEN_ADDR`
have been updated to point to the IPv6 wildcard address (`[::]`) instead of
the IPv4 one (`0.0.0.0`) for the same reason. Unlike with the loopback
address, the IPv6 wildcard address suffices to capture both IPv4 and IPv6
traffic.
- The endpoint translator's `getInboundPort()` has been updated to properly
  parse the IPv6 loopback address retrieved from the proxy container manifest
  (see the sketch below). A unit test was added to validate the behavior.
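As an illustration of the parsing concern in the last point (the helper name below is hypothetical, not the actual `getInboundPort()` code), `net.SplitHostPort` handles plain IPv4 addresses and bracketed IPv6 addresses alike:
```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// parseListenPort extracts the port from a proxy listen address such as
// "0.0.0.0:4143", "[::]:4143" or "[::1]:4191". net.SplitHostPort handles
// the brackets around IPv6 hosts, so the same code covers both families.
func parseListenPort(addr string) (uint16, error) {
	_, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return 0, fmt.Errorf("invalid listen address %q: %w", addr, err)
	}
	port, err := strconv.ParseUint(portStr, 10, 16)
	if err != nil {
		return 0, fmt.Errorf("invalid port in %q: %w", addr, err)
	}
	return uint16(port), nil
}

func main() {
	for _, addr := range []string{"0.0.0.0:4143", "[::]:4143", "[::1]:4191"} {
		port, err := parseListenPort(addr)
		fmt.Println(addr, "->", port, err)
	}
}
```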
HTTP/2 keep-alives enable HTTP/2 servers to issue PING messages to clients to
ensure that the connections are still healthy. This is especially useful when
the OS loses a FIN packet, which causes the connection resources to be held by
the proxy until we attempt to write to the socket again. Keep-alives provide a
mechanism to issue periodic writes on the socket to test the connection health.
5760ed2 updated the proxy to support HTTP/2 server configuration and 9e241a7
updated the proxy's Helm templating to support arbitrary configuration.
Together, these changes enable configuring default HTTP/2 server keep-alives
for all data planes in a cluster.
The default keep-alive configuration uses a 10s interval and a 3s timeout. These
are chosen to be conservative enough that they don't trigger false positive
timeouts, but frequent enough that these connections may be garbage collected
in a timely fashion.
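The proxy's HTTP/2 server is implemented in Rust, so the following is only an analogy: Go's `golang.org/x/net/http2` exposes the same two knobs, a PING interval (`ReadIdleTimeout`) and a PING ack timeout (`PingTimeout`). The certificate paths below are placeholders.
```go
package main

import (
	"log"
	"net/http"
	"time"

	"golang.org/x/net/http2"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	srv := &http.Server{Addr: ":8443", Handler: mux}

	// Analogous to the defaults described above: send an HTTP/2 PING when
	// the connection has been idle for 10s, and tear it down if the ack
	// doesn't arrive within 3s.
	h2 := &http2.Server{
		ReadIdleTimeout: 10 * time.Second,
		PingTimeout:     3 * time.Second,
	}
	if err := http2.ConfigureServer(srv, h2); err != nil {
		log.Fatal(err)
	}

	// cert.pem/key.pem are placeholder paths for this sketch.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```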
A new release has been cut for both. The new release adds a new `GID`
feature that allows iptables to skip traffic originating from a process
running under the specified GID. The CNI plugin also includes a fix for
native sidecar containers.
* Bump proxy-init from `v2.3.0` to `v2.4.0`
* Bump CNI plugin from `v1.4.0` to `v1.5.0`
---------
Signed-off-by: Matei David <matei@buoyant.io>
The proxy-injector package has a `ResourceConfig` type that is
responsible for parsing resources, applying overrides, and serialising a
series of configuration values into a Kubernetes patch. The functionality
is very concrete in its assumptions: it always relies on a pod spec, and
it mutates internal state when deciding which overrides to apply.
This is not a flexible way to handle injection and configuration
overriding for other types of resources. We change this by turning
methods previously defined on `ResourceConfig` into free-standing
functions. These functions can be applied to any type of resource in
order to compute a set of configuration values based on annotation
overrides. Through this change, the functions can be used to compute
static configuration for non-Pod types or can be used in tests.
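A minimal sketch of the shape of such a free-standing function; the function name, the `Values` fields, and the exact annotations handled are illustrative, not the real API:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Values stands in for the chart values that annotation overrides are
// applied to; the real type lives in the linkerd2 charts package.
type Values struct {
	ProxyLogLevel string
	ProxyCPULimit string
}

// ApplyAnnotationOverrides is a hypothetical free-standing function in the
// spirit of this change: it takes the metadata and pod spec explicitly
// instead of mutating state held inside ResourceConfig, so it can be
// reused for non-Pod resources and in tests.
func ApplyAnnotationOverrides(values Values, meta metav1.ObjectMeta, _ *corev1.PodSpec) Values {
	if v, ok := meta.Annotations["config.linkerd.io/proxy-log-level"]; ok {
		values.ProxyLogLevel = v
	}
	if v, ok := meta.Annotations["config.linkerd.io/proxy-cpu-limit"]; ok {
		values.ProxyCPULimit = v
	}
	return values
}

func main() {
	meta := metav1.ObjectMeta{Annotations: map[string]string{
		"config.linkerd.io/proxy-log-level": "debug",
	}}
	fmt.Println(ApplyAnnotationOverrides(Values{ProxyLogLevel: "info"}, meta, &corev1.PodSpec{}))
}
```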
Signed-off-by: Matei David <matei@buoyant.io>
The injector may try to resolve kinds that it does not know about.
When it does so, it logs a warning. In clusters with these unexpected
workload owners, the injector logs these warnings excessively.
This change reduces these logs to trace-level.
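Illustratively (the message text and fields are made up), the change amounts to logging the same information at trace level instead of warning level with logrus:
```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	kind, ns, name := "CronJob", "default", "cleanup"

	// Before (illustrative): a warning for every unrecognized owner kind
	// floods the logs on clusters with many such workloads.
	// log.Warnf("unsupported owner kind %q for %s/%s", kind, ns, name)

	// After: the same message at trace level only shows up when the log
	// level is explicitly raised for debugging.
	log.Tracef("unsupported owner kind %q for %s/%s", kind, ns, name)
}
```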
Signed-off-by: adarsh-jaiss <its.adarshjaiss@gmail.com>
As part of the ongoing effort to support IPv6/dual-stack networks, this
implements the following generalizations to the manifests:
- Expand the `policyController.probeNetworks` config to include the IPv6
wildcard address `::/0`
- Have the policy controller gRPC and admin servers bind to the IPv6
  wildcard address `[::]`. With this the controller still listens on
  IPv4 as well, so we remain backwards-compatible.
- Also expanded the `clusterNetworks` config to include the accepted
  IPv6 ULAs (`fd00::/8`), which is IPv6's equivalent of IPv4's private
  networks.
In certain cases (e.g. high CPU load) kubelets can be slow to read readiness
and liveness responses. Linkerd is configured with a default timeout of `1s`
for its probes. To prevent injected pod restarts under high load, this
change makes probe timeouts configurable.
---------
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Matei David <matei@buoyant.io>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
Currently, the value that is put in the `LINKERD2_PROXY_POLICY_WORKLOAD` env var has the format `pod_ns:pod_name`. This PR changes the format of the policy token into a JSON struct, so it can encode the type of workload and not only its location. For now, we add an additional `external_workload` type.
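For illustration only, a hypothetical rendering of such a token with assumed field names (`ns`, `pod`, `external_workload`); the real encoding is defined by the proxy and policy controller:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// workloadToken is a hypothetical rendering of the JSON policy token
// described above; the actual field names live in the proxy/policy code.
type workloadToken struct {
	NS               string `json:"ns"`
	Pod              string `json:"pod,omitempty"`
	ExternalWorkload string `json:"external_workload,omitempty"`
}

func main() {
	// A pod-based workload...
	tok, _ := json.Marshal(workloadToken{NS: "emojivoto", Pod: "web-0"})
	fmt.Println(string(tok)) // {"ns":"emojivoto","pod":"web-0"}

	// ...and an external workload, which the old "pod_ns:pod_name" format
	// could not express.
	tok, _ = json.Marshal(workloadToken{NS: "emojivoto", ExternalWorkload: "vm-1"})
	fmt.Println(string(tok))
}
```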
Zahari Dichev <zaharidichev@gmail.com>
linkerd/linkerd2-proxy#2587 adds configuration parameters that bound the
lifetime and idle times of control plane streams. This change helps to
mitigate imbalanced control plane replica usage and to generally prevent
scenarios where a stream becomes "stuck," as has been observed when a
control plane replica is unhealthy.
This change adds helm values to control this behavior. Default values
are provided.
When the destination controller logs about receiving or sending messages to a data plane proxy, there is no information in the log about which data plane pod it is communicating with. This can make it difficult to diagnose issues which span the data plane and control plane.
We add a `pod` field to the context token that proxies include in requests to the destination controller, and we add this pod name to the logging context so that it shows up in log messages. In order to accomplish this, we had to plumb the logging context through a few places where it previously had not been available. This gives us a more complete logging context and more information in each log message.
An example log message with this fuller logging context is:
```
time="2023-10-24T00:14:09Z" level=debug msg="Sending destination add: add:{addrs:{addr:{ip:{ipv4:183762990} port:8080} weight:10000 metric_labels:{key:\"control_plane_ns\" value:\"linkerd\"} metric_labels:{key:\"deployment\" value:\"voting\"} metric_labels:{key:\"pod\" value:\"voting-7475cb974c-2crt5\"} metric_labels:{key:\"pod_template_hash\" value:\"7475cb974c\"} metric_labels:{key:\"serviceaccount\" value:\"voting\"} tls_identity:{dns_like_identity:{name:\"voting.emojivoto.serviceaccount.identity.linkerd.cluster.local\"}} protocol_hint:{h2:{}}} metric_labels:{key:\"namespace\" value:\"emojivoto\"} metric_labels:{key:\"service\" value:\"voting-svc\"}}" addr=":8086" component=endpoint-translator context-ns=emojivoto context-pod=web-767f4484fd-wmpvf remote="10.244.0.65:52786" service="voting-svc.emojivoto.svc.cluster.local:8080"
```
Note the `context-pod` field.
Additionally, we have tested this when no pod field is included in the context token (e.g. when handling requests from a pod which does not yet add this field) and confirmed that the `context-pod` log field is empty, but no errors occur.
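A minimal sketch of the idea, with hypothetical type and field names, showing how a context token carrying the pod name could be unmarshalled and attached to the logging context:
```go
package main

import (
	"encoding/json"

	log "github.com/sirupsen/logrus"
)

// contextToken mirrors the idea described above; the exact fields used by
// the destination controller may differ.
type contextToken struct {
	Ns  string `json:"ns"`
	Pod string `json:"pod"`
}

// loggerFor builds a logger whose every message carries the caller's
// namespace and pod, so control plane logs can be correlated with a
// specific data plane proxy. An empty or unparsable token simply yields
// empty fields rather than an error.
func loggerFor(rawToken string) *log.Entry {
	var tok contextToken
	_ = json.Unmarshal([]byte(rawToken), &tok) // best effort
	return log.WithFields(log.Fields{
		"context-ns":  tok.Ns,
		"context-pod": tok.Pod,
	})
}

func main() {
	log.SetLevel(log.DebugLevel)
	logger := loggerFor(`{"ns":"emojivoto","pod":"web-767f4484fd-wmpvf"}`)
	logger.Debug("Sending destination add")
}
```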
Signed-off-by: Alex Leong <alex@buoyant.io>
* Bump CNI plugin to v1.2.1
* Bump proxy-init to v2.2.2
Both dependencies include a fix for CVE-2023-2603. Since alpine is used
as the runtime image, there is a security vulnerability detected in the
produced images (due to an issue with libcap). The alpine images have
been bumped to address the CVE.
Signed-off-by: Matei David <matei@buoyant.io>
When using `linkerd-await` as a preStart hook, we need to explicitly pass in the proxy's admin port if it is not the default (4191). While the admin server listener can be bound to any arbitrary port using `config.linkerd.io/admin-port` as a configuration annotation, `linkerd-await`'s template is not aware of the override resulting in start-up errors.
This change adds the override to `linkerd-await` by always using an explicit `--port` argument.
---------
Signed-off-by: jclegras <11457480+jclegras@users.noreply.github.com>
The proxy caches results in-memory, both for inbound and outbound
service (and policy) discovery. While the proxy's default values are
great in most cases, certain client configurations may require
overrides. The proxy supports overriding the default values, however, it
currently does not offer an easy way for users to configure them.
This PR introduces two new values in Linkerd's control plane chart. The
values control the inbound and outbound cache discovery idle timeout --
the amount of time a result will be kept in the cache if unused. Setting
this value will change the configuration for all injected proxies, but
not for the control plane.
---------
Signed-off-by: Matei David <matei@buoyant.io>
* add `trust_dns=error` to default proxy log level
Since upstream has yet to release a version with PR
bluejekyll/trust-dns#1881, this commit changes the proxy's default log
level to silence warnings from `trust_dns_proto` that are generally
spurious.
Closes #10123.
proxy-init v2.2.1:
* Sanitize `subnets-to-ignore` flag
* Dep bumps
cni-plugin v1.1.0:
* Add support for the `config.linkerd.io/skip-subnets` annotation
* Dep bumps
validator v0.1.2:
* Dep bumps
Also, `linkerd-network-validator` is now released wrapped in a tar file, so this PR also amends `Dockerfile-proxy` to account for that.
Fixes #10150
When we added PodSecurityAdmission in #9719 (included in
edge-23.1.1), we added the entry `seccompProfile.type=RuntimeDefault` to
the containers' SecurityContext.
For PSP to accept that, we need to add the annotation
`seccomp.security.alpha.kubernetes.io/allowedProfileNames:
"runtime/default"` to the PSP resource, which also means we need
to add the entry `seccompProfile.type=RuntimeDefault` to the pod's
SecurityContext as well, not just the container's.
It also turns out the `namespace-metadata` Jobs used by extensions for
the Helm installation method didn't have their ServiceAccount properly
bound to the PSP resource. This resulted in the `helm install` command
failing, and although the extension resources did get deployed, they
were not discoverable by `linkerd check`. This change fixes that as
well; it had been broken since 2.12.0!
* Removed dupe imports
My IDE (vim-gopls) has been complaining for a while, so I decided to take
care of it. Found via
[staticcheck](https://github.com/dominikh/go-tools)
* Add stylecheck to go-lint checks
Closes #9676
This adds the `pod-security.kubernetes.io/enforce` label as described in [Pod Security Admission labels for namespaces](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces).
PSA gives us three different possible values (policies or modes): [privileged, baseline and restricted](https://kubernetes.io/docs/concepts/security/pod-security-standards/).
For non-CNI mode, the proxy-init container relies on granting the NET_RAW and NET_ADMIN capabilities, which places those pods under the `privileged` policy. OTOH for CNI mode we can enforce the `restricted` policy, by setting some defaults on the containers' `securityContext` as done in this PR.
Note this change also adds the `cniEnabled` entry in the `values.yaml` file for all the extension charts, which determines what policy to use.
Final note: this includes the fix from #9717, since otherwise an empty gateway UID prevents the pod from being created under the `restricted` policy.
## How to test
As this is only enforced as of k8s 1.25, here are the instructions to run 1.25 with k3d using Calico as CNI:
```bash
# launch k3d with k8s v1.25, with no flannel CNI
$ k3d cluster create --image='+v1.25' --k3s-arg '--disable=local-storage,metrics-server@server:0' --no-lb --k3s-arg --write-kubeconfig-mode=644 --k3s-arg --flannel-backend=none --k3s-arg --cluster-cidr=192.168.0.0/16 --k3s-arg '--disable=servicelb,traefik@server:0'
# install Calico
$ k apply -f https://k3d.io/v5.1.0/usage/advanced/calico.yaml
# load all the images
$ bin/image-load --k3d proxy controller policy-controller web metrics-api tap cni-plugin jaeger-webhook
# install linkerd-cni
$ bin/go-run cli install-cni|k apply -f -
# install linkerd-crds
$ bin/go-run cli install --crds|k apply -f -
# install linkerd-control-plane in CNI mode
$ bin/go-run cli install --linkerd-cni-enabled|k apply -f -
# Pods should come up without issues. You can also try the viz and jaeger extensions.
# Try removing one of the securityContext entries added in this PR, and the Pod
# won't come up. You should be able to see the PodSecurity error in the associated
# ReplicaSet.
```
To test the multicluster extension using CNI, check this [gist](https://gist.github.com/alpeb/4cbbd5ad87538b9e0d39a29b4e3f02eb) with a patch to run the multicluster integration test with CNI in k8s 1.25.
* edge-22.11.3 change notes
Besides the notes, this corrects a small point in `RELEASE.md` and
bumps the proxy-init image tag to `v2.1.0`. Note that the entry under
`go.mod` wasn't bumped because moving it past v2 requires changes to
`linkerd2-proxy-init`'s `go.mod` file, and we're going to drop that
dependency soon anyway. Finally, all the charts got their patch version
bumped, except for `linkerd2-cni`, which got its minor version bumped
because of the tolerations default change.
## edge-22.11.3
This edge release fixes connection errors to pods using a `hostPort` different
than their `containerPort`. Also the `network-validator` init container improves
its logging, and the `linkerd-cni` DaemonSet now gets deployed in all nodes by
default.
* Fixed `destination` service to properly discover targets using a `hostPort`
different than their `containerPort`, which was causing 502 errors
* Upgraded the `network-validator` with better logging allowing users to
determine whether failures occur as a result of their environment or the tool
itself
* Added default `Exists` toleration to the `linkerd-cni` DaemonSet, allowing it
to be deployed in all nodes by default, regardless of taints
Co-authored-by: Oliver Gould <ver@buoyant.io>
* Use metadata API in the proxy and tap injectors
Part of #9485
This adds a new `MetadataAPI` similar to the current `k8s.API` hosting informers, but backed by k8s' `metadatainformer` shared informers, which retrieves only the objects metadata, resulting in less memory consumption by its clients. Currently this is only implemented for the proxy and tap injectors. Usage by the destination controller will be implemented as a follow-up.
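For reference, a minimal sketch of a metadata-only informer using `client-go`'s `metadatainformer` package; this is not the linkerd2 `MetadataAPI` itself, and the resync interval and watched resource are arbitrary:
```go
package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// A metadata-only informer: the watch delivers *metav1.PartialObjectMetadata,
// so only labels, annotations and owner references are kept in memory.
func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := metadata.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
	deployGVR := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	informer := factory.ForResource(deployGVR).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			meta := obj.(*metav1.PartialObjectMetadata)
			_ = meta.Name // only metadata is available here
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever in this sketch
}
```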
## Existing API enhancements
Shared objects and logic required by API and MetadataAPI have been moved to the new `k8s.go`, `api_resource.go` and `prometheus.go` files. That includes the `isValidRSParent()` function whose arg is now more generic.
## Unit tests
`/controller/k8s/api_test.go` now also instantiates a MetadataAPI, used in the augmented `TestGetObjects()` and `TestGetOwnerKindAndName()` tests. The `resources` struct was introduced to capture the common fields among tests and simplify `newMockAPI()`'s signature.
## Other Changes
The injector no longer watches Pods. It only needs to watch the workload resources that own pods (and also namespaces), so a Pod informer is not required.
## Testing Memory Consumption
Install linkerd, inject emojivoto and check the injector memory consumption with `kubectl -n linkerd top pod linkerd-proxy-injector-xxx`. It'll start consuming about 16Mi. Then ramp up emojivoto's `voting` deployment replicas to 2000. After 5 minutes memory will stabilize around 32Mi using the current branch. Using the latest edge, it'll stabilize around 110Mi.
Fixes issue described in [this comment](https://github.com/linkerd/linkerd2/issues/9310#issuecomment-1247201646)
Rollback #7382
Should be cherry-picked back into 2.12.1
For 2.12.0, #7382 removed the env vars `_l5d_ns` and `_l5d_trustdomain` from the proxy manifest because they were no longer used anywhere. In particular, the jaeger injector used them when injecting the env var `LINKERD2_PROXY_TAP_SVC_NAME=tap.linkerd-viz.serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)` but then started using values.yaml entries instead of these env vars.
The problem is that when upgrading the core control plane (or anything else) to 2.12.0, the 2.11 jaeger extension will still be running and will attempt to inject the old env var into the pods, referencing `_l5d_ns` and `_l5d_trustdomain`, which the new proxy container no longer offers. This puts the pod in an error state.
This change restores those env vars. We will be able to remove them for good in 2.13.0, by which point the jaeger injector will presumably have been upgraded to 2.12 by the user.
Replication steps:
```bash
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.11.4 sh
$ linkerd install | k apply -f -
$ linkerd jaeger install | k apply -f -
$ linkerd check
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.12.0 sh
$ linkerd upgrade --crds | k apply -f -
$ linkerd upgrade | k apply -f -
$ k get po -n linkerd
NAME READY STATUS RESTARTS AGE
linkerd-identity-58544dfd8-jbgkb 2/2 Running 0 2m19s
linkerd-destination-764bf6785b-v8cj6 4/4 Running 0 2m19s
linkerd-proxy-injector-6d4b8c9689-zvxv2 2/2 Running 0 2m19s
linkerd-identity-55bfbf9cd4-4xk9g 0/2 CrashLoopBackOff 1 (5s ago) 32s
linkerd-proxy-injector-5b67589678-mtklx 0/2 CrashLoopBackOff 1 (5s ago) 32s
linkerd-destination-ff9b5f67b-jw8w5 0/4 PostStartHookError 0 (8s ago) 32s
```
Closes #9312
#9118 introduced the `linkerd.io/trust-root-sha256` annotation which is
automatically added to control plane components.
This change ensures that all injected workloads also receive this annotation.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
In 2.11.x, proxyInit.runAsRoot was true by default, which caused
proxy-init's runAsUser field to be 0. proxyInit.runAsRoot now defaults
to false in 2.12.0, but runAsUser still isn't configurable, and when
following the upgrade instructions here, helm doesn't change runAsUser,
so it conflicts with the new value for runAsRoot=false, resulting in the
pods erroring with this message:
```
Error: container's runAsUser breaks non-root policy (pod: "linkerd-identity-bc649c5f9-ckqvg_linkerd(fb3416d2-c723-4664-acf1-80a64a734561)", container: linkerd-init)
```
This PR adds a new default for runAsUser to avoid this issue.
* Bump proxy-init to v2.0.0
New release of proxy-init.
Updated:
* Helm values to use v2.0.0 of proxy-init
* Helm docs
* Tests
Note: go dependencies have not been updated since the new version will
break API compatibility with older versions (source files have been
moved, see issue for more details).
Closes #9164
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Some hosts may not have 'nft' modules available. Currently, proxy-init
defaults to using 'iptables-nft'; if the host does not have support for
nft modules, the init container will crash, blocking all injected
workloads from starting up.
This change defaults the 'iptablesMode' value to 'legacy'.
* Update linkerd-control-plane/values file default
* Update proxy-init partial to default to 'legacy' when no mode is
specified
* Change expected values in 'pkg/charts/linkerd2/values_test.go' and in
'cli/cmd/install_test'
* Update golden files
Fixes #9053
Signed-off-by: Matei David <matei@buoyant.io>
This change introduces a new value to be used at install (or upgrade)
time. The value (`proxyInit.iptablesMode=nft|legacy`) is responsible
for starting the proxy-init container in nft or legacy mode.
By default, the init container will use iptables-nft. When the mode is set to
`legacy`, it will instead use iptables-legacy. Most modern Linux distributions
support both, but a subset (such as RHEL-based families) only support
iptables-nft and nf_tables.
Signed-off-by: Matei David <matei@buoyant.io>
This change bumps the proxy-init version from v1.6.1 to the latest
version, v1.6.2. As part of the new release, proxy-init now adds
net_admin and net_raw sys caps to xtables-nft-multi so that nftables
mode can be used without requiring root privileges.
* Bump go.mod
* Bump version in helm values
* Bump version in misc files
* Bump version in code
Signed-off-by: Matei David <matei@buoyant.io>
Release v1.6.1 of proxy-init adds support for iptables-nft. This change
bumps up the proxy-init version used in code, chart values, and golden
files.
* Update go.mod dep
* Update CNI plugin with new opts
* Update proxy-init ref in golden files and chart values
* Update policy controller CI workflow
Signed-off-by: Matei David <matei@buoyant.io>
The injector configures the proxy with a set of known inbound ports
which are used (by the proxy) to discover inbound server configuration.
The list of ports is derived from the pod's container ports; container
ports are optional and thus may not be present. The proxy supports dynamic
discovery of additional ports at runtime, but since that discovery is lazy,
additional ports may be dropped or updated long after pod start-up.
To ensure HTTP probes are handled correctly, this change introduces new
functionality that adds to the proxy's list of inbound ports any ports
targeted by healthcheck probes, as long as they are HTTP, even if they
are not present in the containerPorts configuration.
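A simplified sketch of the idea (the helper name is hypothetical and named-port resolution is omitted): walk each container's HTTP liveness and readiness probes and collect their numeric target ports:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthcheckPorts is an illustrative helper (not the injector's actual
// function) that collects the numeric ports targeted by HTTP liveness and
// readiness probes, even when those ports are absent from containerPorts.
func healthcheckPorts(spec *corev1.PodSpec) []int32 {
	var ports []int32
	for _, c := range spec.Containers {
		for _, probe := range []*corev1.Probe{c.LivenessProbe, c.ReadinessProbe} {
			if probe == nil || probe.HTTPGet == nil {
				continue
			}
			if p := probe.HTTPGet.Port; p.Type == intstr.Int {
				ports = append(ports, p.IntVal)
			}
			// Named ports would need to be resolved against the
			// container's port list; omitted here for brevity.
		}
	}
	return ports
}

func main() {
	liveness := &corev1.Probe{}
	liveness.HTTPGet = &corev1.HTTPGetAction{Port: intstr.FromInt(8080)}

	spec := &corev1.PodSpec{Containers: []corev1.Container{{
		Name:          "app",
		LivenessProbe: liveness,
	}}}
	fmt.Println(healthcheckPorts(spec)) // [8080]
}
```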
This change also introduces additional liveness (or readiness) probes to
the current injector webhook test fixtures in order to assert that
injected pods will always have their healthcheck target ports included
in the proxy's configuration.
Closes #8638
Signed-off-by: Matei David <matei@buoyant.io>
* Fix injector not emitting skip events properly
A resource that cannot be injected -- for a variety of reasons, such as
using the host network, or not mounting a SA token when token projection
is disabled -- may still have an annotation patch if the resource's
endpoints have ports marked as opaque.
When an annotation patch is generated but an injection patch is not, the
injector will not emit an event and considers the resource "injected",
when in fact it does not have the sidecar. This can make it hard to
investigate issues where resources that are supposed to be injected are
not, because the failure is not obvious.
This change refactors and simplifies the logic by emitting an event
whenever a resource is not injected, regardless of whether an annotation
patch is generated for it. This should provide better visibility in case
of failures. Furthermore, the change refactors the code to avoid too
many early returns and make it easier to trace the codepath through the
function. The initial assumption that an annotation patch should not
increment the injection-skipped admission responses counter has been
left intact; in other words, an annotation patch will not count as a
skip in the metrics, but an event with the skip reason is still emitted.
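As a sketch of the resulting behavior (the reason and message strings are made up), an event is recorded against the workload whenever injection is skipped, e.g. via a `record.EventRecorder`:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// emitSkipEvent shows the shape of the behavior described above
// (hypothetical reason/message strings): whenever a workload is not
// injected, an event is recorded against it regardless of whether an
// annotation-only patch was produced.
func emitSkipEvent(recorder record.EventRecorder, obj *corev1.Pod, reason string) {
	recorder.Eventf(obj, corev1.EventTypeNormal, "InjectionSkipped",
		"Linkerd sidecar proxy not injected: %s", reason)
}

func main() {
	recorder := record.NewFakeRecorder(10)
	pod := &corev1.Pod{}
	pod.Name, pod.Namespace = "web-0", "emojivoto"
	emitSkipEvent(recorder, pod, "host networking is enabled")
	fmt.Println(<-recorder.Events) // print the recorded event for demonstration
}
```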
Fixes #8634
Signed-off-by: Matei David <matei@buoyant.io>
Fixes #8624
When the proxy-injector encounters a resource with an owner ref, it calls `api.GetObjects` to fetch the owner. If the owner is a kind which is not supported by the proxy-injector, we will panic.
We add a condition so that we only attempt to fetch the owner resource if it is a kind we support.
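A simplified sketch of such a guard (the set of supported kinds shown here is illustrative):
```go
package main

import "fmt"

// supportedOwnerKinds lists the owner kinds the injector knows how to
// fetch; the exact set is illustrative.
var supportedOwnerKinds = map[string]bool{
	"Deployment":            true,
	"ReplicaSet":            true,
	"StatefulSet":           true,
	"DaemonSet":             true,
	"Job":                   true,
	"CronJob":               true,
	"ReplicationController": true,
}

// ownerToFetch mirrors the guard described above: only look up the owner
// via the API cache when its kind is one the injector supports, instead of
// panicking on unknown kinds.
func ownerToFetch(ownerKind, ownerName string) (string, bool) {
	if !supportedOwnerKinds[ownerKind] {
		return "", false
	}
	return ownerName, true
}

func main() {
	if name, ok := ownerToFetch("Rollout", "web"); ok {
		fmt.Println("fetch owner", name)
	} else {
		fmt.Println("skipping unsupported owner kind")
	}
}
```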
Signed-off-by: Alex Leong <alex@buoyant.io>
We frequently compare data structures--sometimes very large data
structures--that are difficult to compare visually. This change replaces
uses of `reflect.DeepEqual` with `deep.Equal`. `go-test`'s `deep.Equal`
returns a diff of values that are not equal.
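For example (types and values invented for illustration), `deep.Equal` pinpoints which field differs instead of returning a bare boolean:
```go
package main

import (
	"fmt"

	"github.com/go-test/deep"
)

type endpoint struct {
	Host string
	Port int
}

func main() {
	want := []endpoint{{"10.0.0.1", 8080}, {"10.0.0.2", 8080}}
	got := []endpoint{{"10.0.0.1", 8080}, {"10.0.0.2", 9090}}

	// Instead of a bare true/false from reflect.DeepEqual, deep.Equal
	// returns a list of human-readable differences (nil when equal).
	if diff := deep.Equal(want, got); diff != nil {
		fmt.Println(diff) // e.g. [slice[1].Port: 8080 != 9090]
	}
}
```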
Signed-off-by: Oliver Gould <ver@buoyant.io>