Updates linkerd2-proxy-init version to v1.4.0
The major change is the removal of the "redirect-non-loopback-traffic" rule: previously, packets on `lo` with a destination other than 127.0.0.1 that originated from the proxy process were redirected to the inbound proxy port (i.e., when the application tries to talk to itself). This is no longer the case.
Signed-off-by: Matei David <matei@buoyant.io>
Part of https://github.com/linkerd/linkerd2/issues/6647
This PR adds a new warning that is displayed when `linkerd viz stat ts`
is used, since TrafficSplits will not be supported without the SMI
extension starting with `2.12`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
#6719 changed the proxy injector so that it adds the `config.linkerd.io/opaque-ports` annotation to all pods and services if they or their namespace do not already contain the annotation. The value used is the default list of opaque ports—which is `25,443,587,3306,4444,5432,6379,9300,11211` unless otherwise specified by the user during installation.
Closes #6729
The main issue with this is that if a service exposes a service port `9090` that targets `3306`, the service _should_ have `9090` set as opaque since it targets a default opaque port, but it does not. This change ensures that services with this situation have `9090` set as opaque.
Additionally, services and pods do not need an annotation with the entire default opaque ports list if they don't expose those ports in the first place. This change filters out ports from the default list that the service or pod does not expose.
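The port-handling behavior described above can be sketched in Go; `filterDefaultOpaquePorts` and its inputs are illustrative names, not the injector's actual code:

```go
package main

import "fmt"

// filterDefaultOpaquePorts is a sketch (not the real injector logic) of the
// described behavior: a service port is marked opaque when its *target* port
// is in the default opaque list, and defaults the service doesn't expose are
// dropped entirely.
func filterDefaultOpaquePorts(defaults map[int]struct{}, svcPorts map[int]int) []int {
	var opaque []int
	for port, target := range svcPorts {
		if _, ok := defaults[target]; ok {
			opaque = append(opaque, port)
		}
	}
	return opaque
}

func main() {
	defaults := map[int]struct{}{25: {}, 443: {}, 3306: {}}
	// Service exposes 9090 targeting 3306: 9090 must be marked opaque;
	// 8080 targets a non-default port, so nothing from the default list
	// is kept for it.
	fmt.Println(filterDefaultOpaquePorts(defaults, map[int]int{9090: 3306, 8080: 8080}))
}
```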
### tests
I've added some unit tests that demonstrate the change in behavior explained in the original issue #6729.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The identity controller cannot depend on the policy controller; but we
can use a more restrictive default policy here. This change updates the
identity controller's default policy to be `cluster-unauthenticated` (so
that health checking is permitted) and sets the identity service port to
require TLS so that unmeshed connections may not reach the identity
controller.
This change marks more golden files with a `linguist-generated=true`
setting so that they are not displayed in PRs.
Cargo.lock files are set with `linguist-generated=false` so that these
changes are not hidden in PRs.
* Speed improvements for `integration_tests.yml` and changes to the CLI docker targets
Fixes #6735
`cli/Dockerfile` has been refactored to have different possible final targets, one per os/arch, while keeping the old `multi-arch` target that builds for everything.
The `DOCKER_MULTIARCH` env var has been replaced with `DOCKER_TARGET`, that should match any of the targets in that Dockerfile. If not set, its value is set to the host's os/arch automatically in `bin/_docker.sh`.
`bin/_docker.sh` is consumed by `bin/docker-build-cli-bin`, whose logic is now simpler and which can be invoked as `DOCKER_TARGET=xxx bin/docker-build-cli-bin` to build the CLI inside docker for a specific os/arch (again, if `DOCKER_TARGET` is unset, it builds for the host's os/arch).
`bin/docker-pull-binaries` was also simplified in the same way.
The `integration_tests.yml` workflow now sets `DOCKER_TARGET=linux-amd64` because that's all that's required. As a result, the `Docker build (cli-bin)` job is no longer the longest one, which speeds up the `docker_build` part of the workflow by about 5 minutes.
The `release.yml` workflow continues to work as before, now with `DOCKER_TARGET=multi-arch`, since besides the `linux-amd64` CLI we also need the `linux-arm64` and Windows CLIs to be available there.
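The host-platform fallback can be sketched in Go (a hypothetical helper; the real logic lives in `bin/_docker.sh` as shell):

```go
package main

import (
	"fmt"
	"runtime"
)

// dockerTarget mimics how a default DOCKER_TARGET could be derived from the
// host platform when the env var is unset; the override is passed through
// untouched (e.g. "multi-arch" in release.yml).
func dockerTarget(override string) string {
	if override != "" {
		return override
	}
	// Compose an os-arch pair matching the Dockerfile's target names.
	return fmt.Sprintf("%s-%s", runtime.GOOS, runtime.GOARCH)
}

func main() {
	fmt.Println(dockerTarget(""))           // host platform, e.g. linux-amd64
	fmt.Println(dockerTarget("multi-arch")) // explicit override
}
```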
We add a validating admission controller to the policy controller which validates `Server` resources. When a `Server` admission request is received, we look at all existing `Server` resources in the cluster and ensure that no other `Server` has an identical selector and port.
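A minimal sketch of that uniqueness check, with illustrative types (not the real CRD structs or the controller's Rust code):

```go
package main

import "fmt"

// server captures only the fields the admission check compares: a pod
// selector and a port. Field names here are illustrative.
type server struct {
	name     string
	selector string // canonicalized label selector
	port     string // port name or number
}

// validate rejects a candidate Server when any *other* existing Server has an
// identical selector and port, mirroring the described admission rule.
func validate(existing []server, candidate server) error {
	for _, s := range existing {
		if s.name != candidate.name && s.selector == candidate.selector && s.port == candidate.port {
			return fmt.Errorf("identical Server already exists: %s", s.name)
		}
	}
	return nil
}

func main() {
	existing := []server{{name: "emoji-grpc", selector: "app=emoji", port: "grpc"}}
	fmt.Println(validate(existing, server{name: "dup", selector: "app=emoji", port: "grpc"}))
	fmt.Println(validate(existing, server{name: "ok", selector: "app=web", port: "grpc"}))
}
```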
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
This edge release continues to build on the policy feature by adding support for
cluster-scoped default policies and exposing policy labels on various prometheus
metrics. The proxy has been updated to return HTTP-level authorization errors
at the time that the request is processed, instead of when the connection is
established.
In addition, the proxy-injector has been updated to set the `opaque-ports`
annotation on a workload to make sure that controllers can discover how the
workload was configured. Also, the `sleep` binary has been added to the proxy
image in order to restore the functionality required for `waitBeforeExitSeconds`
to work.
* Added `default-inbound-policy` annotation to the proxy-injector
* Updated the proxy-injector to always add the `opaque-ports` annotation
* Added `sleep` binary to proxy image
* Updated inbound traffic metrics to include server and authorization labels
* Updated the policy-controller to honor pod level port annotations when a
`Server` resource definition does not match the ports defined for the workload
* Updated the point at which the proxy returns HTTP-level authorization errors
* Exposed permit and policy labels on HTTP metrics
* Added support for cluster-scoped default policies
* Dropped `nonroot` variant from the policy-controller's distroless base image
to avoid erroring in some environments.
This release improves policy handling for HTTP connections so that
requests are failed with a 403 Forbidden status (or a PERMISSION_DENIED
grpc-status, if appropriate).
Inbound metrics now include labels indicating the server and/or
authorization used to allow a connection or request to the proxy. Error
metrics now include an `unauthorized` error reason for traffic that is
denied by policy.
Finally, the outbound proxy no longer initializes mTLS or HTTP/2
upgrades when the target is the proxy itself. This is done in preparation
for changes that will allow the proxy to stop forwarding connections on
`localhost` so that servers bound only on the loopback interface are not
exposed by Linkerd.
---
* build(deps): bump h2 from 0.3.3 to 0.3.4 (linkerd/linkerd2-proxy#1212)
* build(deps): bump libc from 0.2.99 to 0.2.100 (linkerd/linkerd2-proxy#1213)
* Use `ExtractParam` in transport metrics (linkerd/linkerd2-proxy#1211)
* policy: Add support for cluster-scoped default policies (linkerd/linkerd2-proxy#1210)
* Expose policy labels on inbound transport metrics (linkerd/linkerd2-proxy#1215)
* inbound: Expose permit labels on HTTP metrics (linkerd/linkerd2-proxy#1216)
* build(deps): bump tokio from 1.10.0 to 1.10.1 (linkerd/linkerd2-proxy#1218)
* build(deps): bump codecov/codecov-action from 2.0.2 to 2.0.3 (linkerd/linkerd2-proxy#1217)
* build(deps): bump hyper from 0.14.11 to 0.14.12 (linkerd/linkerd2-proxy#1221)
* build(deps): bump bytes from 1.0.1 to 1.1.0 (linkerd/linkerd2-proxy#1222)
* inbound: Return HTTP-level authorization errors (linkerd/linkerd2-proxy#1220)
* Skip TLS and H2 when target is inbound IP (linkerd/linkerd2-proxy#1219)
The proxy injector now adds the `config.linkerd.io/default-inbound-policy` annotation to all injected pods.
Closes #6720.
If the pod has the annotation before injection then that value is used. If the pod does not have the annotation but the namespace does, then it inherits that. If both the pod and the namespace do not have the annotation, then it defaults to `.Values.policyController.defaultAllowPolicy`.
Upon injecting the sidecar container into the pod, this annotation value is used to set the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` environment variable. Additionally, `LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS` is also set to the value of `.Values.clusterNetworks`.
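The precedence described above amounts to a simple fallback chain; this Go sketch uses illustrative names, not the injector's actual code:

```go
package main

import "fmt"

// defaultInboundPolicy resolves the annotation value with the precedence the
// injector applies: pod annotation, then namespace annotation, then the
// chart's defaultAllowPolicy value. Signature is illustrative.
func defaultInboundPolicy(podAnn, nsAnn, chartDefault string) string {
	if podAnn != "" {
		return podAnn
	}
	if nsAnn != "" {
		return nsAnn
	}
	return chartDefault
}

func main() {
	fmt.Println(defaultInboundPolicy("deny", "all-authenticated", "all-unauthenticated")) // deny
	fmt.Println(defaultInboundPolicy("", "all-authenticated", "all-unauthenticated"))     // all-authenticated
	fmt.Println(defaultInboundPolicy("", "", "all-unauthenticated"))                      // all-unauthenticated
}
```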
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
In order to discover how a workload is configured without knowing the global defaults, the `opaque-ports` annotation is now added to workloads by the proxy injector, whether the list is the default or user-specified.
Closes #6689
#### core
Because core control plane components do not go through the proxy injector the annotation is added to the `destination`, `identity`, and `proxy-injector` templates.
The `linkerd-destination` and `linkerd-proxy-injector` deployments now both have the `opaque-ports: "8443"` annotation. The `linkerd-identity` deployment and service don't need this annotation since they don't expose anything in the default list.
#### non-core
All other resources go through the proxy injector; it decides whether or not services or pods (the two resources that it can add annotations to) should get the default list.
Workloads get the default list of opaque ports added if they and their namespace do not have the annotation already. So this boils down to:
1. If the workload already has the annotation, no patch is created
2. If the namespace has the annotation but the workload does not, a patch is generated
3. If the workload and namespace do not have the annotation, a patch is generated
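The three cases can be sketched as a small decision function (illustrative, not the injector's actual code):

```go
package main

import "fmt"

// opaquePortsPatch sketches the three cases above: value is the annotation
// the patch would set, and ok reports whether an annotation patch is needed.
func opaquePortsPatch(workloadAnn, nsAnn, defaultPorts string) (value string, ok bool) {
	switch {
	case workloadAnn != "": // case 1: workload already annotated, no patch
		return "", false
	case nsAnn != "": // case 2: copy the namespace value down
		return nsAnn, true
	default: // case 3: fall back to the (filtered) default list
		return defaultPorts, true
	}
}

func main() {
	fmt.Println(opaquePortsPatch("", "8443", "25,443,587"))
	fmt.Println(opaquePortsPatch("", "", "25,443,587"))
}
```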
#### tests
A unit test has been added and I performed the following manual tests:
1. Injected a pod with the annotation: a patch is generated but there is no change to opaque ports
2. Injected a pod with the namespace annotation: a patch is generated and opaque ports are copied down to the pod
3. Injected a pod with no annotation on it or the namespace: a patch is generated and the default opaque ports are added
4. Created a pod (not injected): a patch is generated (without the proxy) that adds the annotation (this holds true whether the pod or the namespace has the annotation)
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Fixes #6743
As in #6392 for the proxy image (fixed by #6451), using the
`distroless/cc:nonroot` base image breaks the policy container in some
environments. So we're changing that to `distroless/cc`. The policy
container is already being run using a non-root user, so we're not
compromising on security.
* injector: cleanup env variables in `_proxy.tpl`
This PR updates the `_proxy.tpl` file to remove the usage of the `_l5d_ns`
and `l5d_trustDomain` env variables, whose values can be rendered directly
instead. It also moves the reference variables to the top for
simplicity.
These unused variables will be removed in a future release to
prevent race conditions during upgrades.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Addresses part of #6735
The job remains in the `release.yml` workflow, which should continue
doing more complete checks (yet with lower probability of failure) than
the integration tests.
* test: Fix `rabbitmq-server` manifests in `externalresources` test
Started to notice the following problems in the `externalresources` tests
that cause them to retry and take a lot of time:
- the liveness and readiness probes don't seem to be working, causing
restarts. This is addressed by using the [suggested probes from
the rabbitmq docs](https://github.com/rabbitmq/diy-kubernetes-examples/blob/master/gke/statefulset.yaml#L118).
- `linkerd-proxy` errors because `LINKERD2_PROXY_INBOUND_PORTS` is not
set. This is fixed by adding the container ports that are being
used.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Policy controller API responses include a set of labels. These labels
are to be used in proxy metrics to indicate why traffic is permitted to
a pod. This permits metrics to be associated with `Server` and
`ServerAuthorization` resources (i.e. for `stat`).
This change updates the response API to include a `name` label
referencing the server's name. When the policy is derived from a default
configuration (and not a `Server` instance), the name takes the form
'default:<policy>'.
This change also updates authorization labels. Defaults are encoded as
servers are, otherwise the authorization's name is set as a label. The
`tls` and `authn` labels have been removed, as they're redundant with
other labels that are already present.
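A sketch of the naming scheme, with illustrative function and argument names:

```go
package main

import "fmt"

// serverNameLabel sketches the naming scheme described above: policies
// derived from a Server use that Server's name; policies derived from a
// default configuration are encoded as "default:<policy>".
func serverNameLabel(serverName, defaultPolicy string) string {
	if serverName != "" {
		return serverName
	}
	return "default:" + defaultPolicy
}

func main() {
	fmt.Println(serverNameLabel("emoji-grpc", ""))
	fmt.Println(serverNameLabel("", "all-unauthenticated"))
}
```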
Pods may be annotated with annotations like
`config.linkerd.io/opaque-ports` and
`config.linkerd.io/proxy-require-identity-inbound-ports`--these
annotations configure default behavior that should be honored when a
`Server` does not match the workload's ports. As it stands now, the
policy controller would break opaque-ports configurations that aren't
reflected in a `Server`.
This change reworks the pod indexer to create a default server watch for
each _port_ (rather than for each pod). The cache of default server
watches is now lazy, creating watches as needed for all used
combinations of default policies. These watches are never dropped, but
there are only a few possible combinations of port configurations, so
this doesn't pose any concerns re: memory usage.
While doing this, the names used to describe these default policies are
updated to be prefixed with `default:`. This generally makes these names
more descriptive and easier to understand.
Fixes broken link in readme and values files:
helmcustomizing-the-namespace
should be
helm#customizing-the-namespace
The former gives a 404.
Signed-off-by: Andrew Hemming <andrew@footprintmedia.net>
We currently build all of our CLI binaries serially, but if we have a
docker stage for each platform, we can parallelize builds for each
platform, reducing build times significantly.
This change renames `cli/Dockerfile-bin` to `cli/Dockerfile` (so
that we get syntax highlighting in editors, etc) and restructures the
Dockerfile to have a docker stage for each platform. Then, there are
two final stages: 'basic' and 'multi-arch'. The `bin/docker-build-cli-bin`
utility typically only builds the `basic` target; when
`DOCKER_MULTIARCH` is set, it also builds the target that includes
arm binaries.
## edge-21.8.3
This release adds support for dynamic inbound policies. The proxy now discovers
policies from the policy-controller API for all application ports documented in
a pod spec. Rejected connections are logged. Policies are not yet reflected in
the proxy's metrics.
These policies also allow the proxy to skip protocol detection when a server is
explicitly annotated as HTTP/2 or when the server is documented to be opaque or
application-terminated TLS.
* Added a new section to linkerd-viz's dashboard that lists installed extensions
(thanks @sannimichaelse!)
* Added the `enableHeadlessServices` Helm flag to the `linkerd multicluster
link` command for enabling headless service mirroring (thanks @knutgoetz!)
* Removed some unused and duplicate constants in the codebase (thanks
@xichengliudui!)
* Added support for exposing service metadata from exported to mirrored services
in multicluster installations (thanks @importhuman!)
* Fixed an issue where the policy controller's liveness checks would fail after
the controller was disconnected but had successfully resumed its watches
* Fixed the `linkerd-policy` service selector to properly select `destination`
control plane components
* Added additional environment variables to the proxy container to allow support
for dynamic policy configuration
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Co-authored-by: Oliver Gould <ver@buoyant.io>
This release adds support for dynamic inbound policies. The proxy now
discovers policies from Linkerd's policy-controller API for all
application ports documented in a pod spec. Rejected connections are
logged. Policies are not yet reflected in the proxy's metrics.
These policies also allow the proxy to skip protocol detection when a
server is explicitly annotated as HTTP/2 or when the server is
documented to be opaque or application-terminated TLS.
---
* inbound: Use policy protocol configurations (linkerd/linkerd2-proxy#1203)
* build(deps): bump tokio from 1.9.0 to 1.10.0 (linkerd/linkerd2-proxy#1204)
* build(deps): bump tracing-subscriber from 0.2.19 to 0.2.20 (linkerd/linkerd2-proxy#1207)
* inbound: Discover policies from the control plane (linkerd/linkerd2-proxy#1205)
* build(deps): bump httparse from 1.4.1 to 1.5.1 (linkerd/linkerd2-proxy#1208)
Variable references are only expanded to previously defined
environment variables, as per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvar-v1-core.
This means that for `LINKERD2_PROXY_POLICY_WORKLOAD` to work correctly,
`_pod_ns` and `_pod_name` must be defined before they are used.
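As an illustration, the required ordering looks roughly like the following abbreviated pod-spec fragment (the exact value format is illustrative):

```yaml
env:
  - name: _pod_ns
    valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
  - name: _pod_name
    valueFrom: {fieldRef: {fieldPath: metadata.name}}
  # Defined after _pod_ns and _pod_name, so the $(...) references expand;
  # if it appeared first, the references would be left as literal strings.
  - name: LINKERD2_PROXY_POLICY_WORKLOAD
    value: "$(_pod_ns):$(_pod_name)"
```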
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>