Fixes #9965
Adds a `path` property to the RedirectRequestFilter in all versions. This property was absent from the CRD even though it appears in the Gateway API documentation and is represented in the internal types. Adding this property to the CRD will also allow users to specify it.
Add a new version to the HTTPRoute CRD: v1beta2. This new version includes two changes from v1beta1:
* Added `port` property to `parentRef` for use when the parentRef is a Service
* Added `backendRefs` property to HTTPRoute rules
We switch the storage version of the HTTPRoute CRD from v1alpha1 to v1beta2 so that these new fields may be persisted.
We also update the policy admission controller to allow an HTTPRoute parentRef type to be Service (in addition to Server).
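As a rough sketch of the new fields (the route, namespace, and Service names below are hypothetical), a v1beta2 route can now bind to a Service parent on a specific port and forward traffic to backends:
```bash
# Hypothetical example exercising the new v1beta2 fields.
kubectl apply -f - <<EOF
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: web-route
  namespace: emojivoto
spec:
  parentRefs:
    - group: core
      kind: Service
      name: web-svc
      port: 80            # new: port on a Service parentRef
  rules:
    - backendRefs:        # new: backendRefs on HTTPRoute rules
        - name: web-svc-v2
          port: 80
EOF
```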
Signed-off-by: Alex Leong <alex@buoyant.io>
This change expands on existing shortnames while adding others for various policy resources. This improves the user experience when issuing commands via kubectl.
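For example, assuming the familiar `srv` (Server) and `saz` (ServerAuthorization) shortnames:
```bash
# List policy resources by shortname; srv and saz are examples of
# existing shortnames, the authoritative set lives in the CRD definitions.
kubectl get srv,saz -n emojivoto
```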
Fixes #9322
Signed-off-by: Paul Balogh <javaducky@gmail.com>
* Removed dupe imports
My IDE (vim-gopls) has been complaining for a while, so I decided to take
care of it. Found via
[staticcheck](https://github.com/dominikh/go-tools)
* Add stylecheck to go-lint checks
The Helm chart has an `identity.externalCA` value.
The CLI code sets `identity.issuer.externalCA` and fails to produce the desired configuration. This change aligns everything to `identity.externalCA`.
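A minimal sketch of the now-consistent key (other required identity values omitted for brevity):
```bash
# Both the chart and CLI-generated values now agree on identity.externalCA.
linkerd install --set identity.externalCA=true | kubectl apply -f -
```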
Signed-off-by: Dmitry Mikhaylov <anoxape@gmail.com>
Closes #9676
This adds the `pod-security.kubernetes.io/enforce` label as described in [Pod Security Admission labels for namespaces](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces).
PSA gives us three different possible values (policies or modes): [privileged, baseline and restricted](https://kubernetes.io/docs/concepts/security/pod-security-standards/).
For non-CNI mode, the proxy-init container relies on granting the NET_RAW and NET_ADMIN capabilities, which places those pods under the `privileged` policy. OTOH for CNI mode we can enforce the `restricted` policy, by setting some defaults on the containers' `securityContext` as done in this PR.
Note that this change also adds the `cniEnabled` entry in the `values.yaml` file for all the extension charts, which determines what policy to use.
Final note: this includes the fix from #9717, otherwise an empty gateway UID prevents the pod from being created under the `restricted` policy.
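For reference, PSA enforcement is driven by a namespace label like the following (namespace name illustrative):
```bash
# Enforce the "restricted" Pod Security Standard on a namespace;
# pods that add NET_ADMIN/NET_RAW will be rejected under it.
kubectl label namespace emojivoto pod-security.kubernetes.io/enforce=restricted
```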
## How to test
As this is only enforced as of k8s 1.25, here are the instructions to run 1.25 with k3d using Calico as CNI:
```bash
# launch k3d with k8s v1.25, with no flannel CNI
$ k3d cluster create --image='+v1.25' --k3s-arg '--disable=local-storage,metrics-server@server:0' --no-lb --k3s-arg --write-kubeconfig-mode=644 --k3s-arg --flannel-backend=none --k3s-arg --cluster-cidr=192.168.0.0/16 --k3s-arg '--disable=servicelb,traefik@server:0'
# install Calico
$ k apply -f https://k3d.io/v5.1.0/usage/advanced/calico.yaml
# load all the images
$ bin/image-load --k3d proxy controller policy-controller web metrics-api tap cni-plugin jaeger-webhook
# install linkerd-cni
$ bin/go-run cli install-cni|k apply -f -
# install linkerd-crds
$ bin/go-run cli install --crds|k apply -f -
# install linkerd-control-plane in CNI mode
$ bin/go-run cli install --linkerd-cni-enabled|k apply -f -
# Pods should come up without issues. You can also try the viz and jaeger extensions.
# Try removing one of the securityContext entries added in this PR, and the Pod
# won't come up. You should be able to see the PodSecurity error in the associated
# ReplicaSet.
```
To test the multicluster extension using CNI, check this [gist](https://gist.github.com/alpeb/4cbbd5ad87538b9e0d39a29b4e3f02eb) with a patch to run the multicluster integration test with CNI in k8s 1.25.
When CNI plugins run in ebpf mode, they may rewrite the packet
destination when doing socket-level load balancing (i.e in the
`connect()` call). In these cases, skipping `443` on the outbound side
for control plane components becomes ineffective; the packet is rewritten
to target the actual Kubernetes API Server backend (which typically
listens on port `6443`, but may be overridden when the cluster is
created).
This change adds port `6443` to the list of skipped ports for control
plane components. On the linkerd-cni plugin side, the ports are
non-configurable. Whenever a pod with the control plane component label
is handled by the plugin, we look up the `kubernetes` service in the
default namespace and append the port values (of both ClusterIP and
backend) to the list.
On the initContainer side, we make this value configurable in Helm and
provide a sensible default (`443,6443`). Users may override this value
if the ports do not correspond to what they have in their cluster. In
the CLI, if no override is given, we look up the service in the same way
that we do for linkerd-cni; if failures are encountered we fall back to
the default list of ports from the values file.
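For reference, the lookup described above amounts to reading the ClusterIP port and the backend (target) port of the `kubernetes` Service:
```bash
# Typically prints "443 6443", matching the default skip list.
kubectl get svc kubernetes -n default \
  -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].targetPort}{"\n"}'
```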
Closes #9817
Signed-off-by: Matei David <matei@buoyant.io>
This change aims to solve two distinct issues that have cropped up in
the proxy-init configuration.
First, it decouples `allowPrivilegeEscalation` from running proxy-init
as root. At the moment, whenever the container is run as root, privilege
escalation is also allowed. In more restrictive environments, this will
prevent the pod from coming up (e.g. security policies may complain about
`allowPrivilegeEscalation=true`). Worth noting that privilege escalation
is not necessary in many scenarios since the capabilities are passed to
the iptables child process at build time.
Second, it introduces a new `privileged` value that will allow users to
run the proxy-init container without any restrictions (meaning all
capabilities are inherited). This is essentially the same as mapping
root on host to root in the container. This value may solve issues in
distributions that run security enhanced linux, since iptables will be
able to load kernel modules that it may otherwise not be able to load
(privileged mode allows the container nearly the same privileges as
processes running outside of a container on a host, this further allows
the container to set configurations in AppArmor or SELinux).
Privileged mode is independent from running the container as root. This
gives users more control over the security context in proxy-init. The
value may still be used with `runAsRoot: false`.
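A minimal sketch of combining the two settings at install time (assuming both live under `proxyInit`, as the other init-container values do):
```bash
# Run proxy-init unrestricted, but not as root.
linkerd install \
  --set proxyInit.privileged=true \
  --set proxyInit.runAsRoot=false \
  | kubectl apply -f -
```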
Fixes #9718
Signed-off-by: Matei David <matei@buoyant.io>
* edge-22.11.3 change notes
Besides the notes, this corrects a small point in `RELEASE.md`, and
bumps the proxy-init image tag to `v2.1.0`. Note that the entry under
`go.mod` wasn't bumped because moving it past v2 requires changes on
`linkerd2-proxy-init`'s `go.mod` file, and we're gonna drop that
dependency soon anyways. Finally, all the charts got their patch version
bumped, except for `linkerd2-cni` that got its minor bumped because of
the tolerations default change.
## edge-22.11.3
This edge release fixes connection errors to pods using a `hostPort` different
than their `containerPort`. Also the `network-validator` init container improves
its logging, and the `linkerd-cni` DaemonSet now gets deployed on all nodes by
default.
* Fixed `destination` service to properly discover targets using a `hostPort`
different than their `containerPort`, which was causing 502 errors
* Upgraded the `network-validator` with better logging allowing users to
determine whether failures occur as a result of their environment or the tool
itself
* Added default `Exists` toleration to the `linkerd-cni` DaemonSet, allowing it
to be deployed on all nodes by default, regardless of taints
Co-authored-by: Oliver Gould <ver@buoyant.io>
When calling `linkerd upgrade`, if the `linkerd-config-overrides` Secret is not found then we ask the user to run `linkerd repair`, but that has long been removed from the CLI.
Also removed code comment as the error is explicit enough.
Fix upgrade when using --from-manifests
When the `--from-manifests` flag is used to upgrade through the CLI,
the kube client used to fetch existing configuration (from the
ConfigMap) is a "fake" client. The fake client returns values from a
local source. The two clients are used interchangeably to perform the
upgrade; which one is initialized depends on whether a value has been
passed to `--from-manifests`.
Unfortunately, this breaks CLI upgrades to any stable-2.12.x version
when the flag is used. Since a fake client is used, the upgrade will
fail when checking for the existence of CRDs, even if the CRDs have been
previously installed in the cluster.
This change fixes the issue by first initializing an actual Kubernetes
client (that will be used to check for CRDs). If the values should be
read from a local source, the client is replaced with a fake one. Since
this takes place after the CRD check, the upgrade will not fail on the
CRD precondition.
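For context, a typical invocation that exercises this code path looks like (file name illustrative, assumed to be the output of a previous `linkerd install`):
```bash
# The real Kubernetes client now performs the CRD check; the fake client
# is only used to read the existing configuration from the local manifests.
linkerd upgrade --from-manifests linkerd-manifests.yaml | kubectl apply -f -
```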
Fixes #9788
Signed-off-by: Matei David <matei@buoyant.io>
When users use CNI, we want to ensure that network rewriting inside the pod is set up before allowing linkerd to start. When rewriting isn't happening, we want to exit with a clear error message and enough information in the container log for the administrator to either file a bug report with us or fix their configuration.
This change adds a validator initContainer to all injected workloads when linkerd is installed with `cniEnabled=true`. The validator replaces the noop init container, and will prevent pods from starting up if iptables is not configured.
Part of #8120
Signed-off-by: Steve Jenson <stevej@buoyant.io>
Add a "noop" init container which uses the proxy image and runs `/bin/sleep 0` to injected pods. This init container is only added when the linkerd-cni-plugin is enabled. The idea here is that by running an init container, we trigger kubernetes to update the pod status. In particular, this ensures that the pod status IP is populated, which is necessary in certain cases where other CNIs such as Calico are involved.
Therefore, this may fix https://github.com/linkerd/linkerd2/issues/9310, but I don't have a reproduction and therefore am not able to verify.
Signed-off-by: Alex Leong <alex@buoyant.io>
Add PodMonitor resources to the Helm chart
With an external Prometheus setup installed using prometheus-operator the Prometheus instance scraping can be configured using Service/PodMonitor resources.
By adding PodMonitor resource into Linkerd Helm chart we can mimic the configuration of bundled Prometheus, see https://github.com/linkerd/linkerd2/blob/main/viz/charts/linkerd-viz/templates/prometheus.yaml#L47-L151, that comes with linkerd-viz extension. The PodMonitor resources are based on https://github.com/linkerd/website/issues/853#issuecomment-913234295 which are proven to be working. The only problem we face is that bundled Grafana charts will need to look at different jobs when querying metrics.
When enabled via the `podMonitor.enabled` value in the Helm chart, PodMonitors for Linkerd resources are installed alongside Linkerd, and Linkerd metrics should then be present in Prometheus.
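A sketch of enabling it, assuming the value lives in the linkerd-control-plane chart:
```bash
# Create the PodMonitor resources so a prometheus-operator managed
# Prometheus can scrape Linkerd without the bundled viz Prometheus.
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane \
  -n linkerd --reuse-values \
  --set podMonitor.enabled=true
```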
Fixes #6596
Signed-off-by: Martin Odstrcilik <martin.odstrcilik@gmail.com>
Fixes issue described in [this comment](https://github.com/linkerd/linkerd2/issues/9310#issuecomment-1247201646)
Rollback #7382
Should be cherry-picked back into 2.12.1
For 2.12.0, #7382 removed the env vars `_l5d_ns` and `_l5d_trustdomain` from the proxy manifest because they were no longer used anywhere. In particular, the jaeger injector used them when injecting the env var `LINKERD2_PROXY_TAP_SVC_NAME=tap.linkerd-viz.serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)` but then started using values.yaml entries instead of these env vars.
The problem is that when upgrading the core control plane (or anything else) to 2.12.0, the 2.11 jaeger extension will still be running and will attempt to inject the old env var into the pods, making reference to `_l5d_ns` and `_l5d_trustdomain`, which the new proxy container won't offer anymore. This will put the pod in an error state.
This change restores those env vars. We will be able to remove them in 2.13.0, when presumably the jaeger injector will already have been upgraded to 2.12 by the user.
Replication steps:
```bash
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.11.4 sh
$ linkerd install | k apply -f -
$ linkerd jaeger install | k apply -f -
$ linkerd check
$ curl -sL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.12.0 sh
$ linkerd upgrade --crds | k apply -f -
$ linkerd upgrade | k apply -f -
$ k get po -n linkerd
NAME READY STATUS RESTARTS AGE
linkerd-identity-58544dfd8-jbgkb 2/2 Running 0 2m19s
linkerd-destination-764bf6785b-v8cj6 4/4 Running 0 2m19s
linkerd-proxy-injector-6d4b8c9689-zvxv2 2/2 Running 0 2m19s
linkerd-identity-55bfbf9cd4-4xk9g 0/2 CrashLoopBackOff 1 (5s ago) 32s
linkerd-proxy-injector-5b67589678-mtklx 0/2 CrashLoopBackOff 1 (5s ago) 32s
linkerd-destination-ff9b5f67b-jw8w5 0/4 PostStartHookError 0 (8s ago) 32s
```
The identity controller currently has permission to read all Deployments. This
isn't necessary.
When these permissions were added in #3600, we incorrectly assumed that
we must pass a whole Deployment resource as a _parent_ when recording
events. The [EventRecorder docs] say:
> 'object' is the object this event is about. Event will make a
> reference--or you may also pass a reference to the object directly.
We can confirm this by reviewing the source for [GetReference]: we can
simply construct an ObjectReference without fetching it from the API.
This change lets us drop unnecessary privileges in the identity
controller.
[EventRecorder docs]: https://pkg.go.dev/k8s.io/client-go/tools/record#EventRecorder
[GetReference]: ab826d2728/tools/reference/ref.go (L38-L45)
Signed-off-by: Oliver Gould <ver@buoyant.io>
policy/v1beta1 PodDisruptionBudget is deprecated in K8s v1.21+ and unavailable in v1.25+.
This change updates the API version to policy/v1.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
In #6635 (f9f3ebe), we removed the `Namespace` resources from the
linkerd Helm charts. But this change also removed the `namespace` field
from all generated metadata, adding conditional logic to only include it
when being installed via the CLI.
This conditional logic currently causes spurious whitespace in output
YAML. This doesn't cause problems but is aesthetically
inconsistent/distracting.
This change removes the `partials.namespace` helper and instead inlines
the value in our templates. This makes our CLI- and Helm-generated
manifests slightly more consistent and removes needless indirection.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Closes #9312
#9118 introduced the `linkerd.io/trust-root-sha256` annotation which is
automatically added to control plane components.
This change ensures that all injected workloads also receive this annotation.
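One way to confirm the annotation lands on an injected workload (namespace and selector illustrative):
```bash
# The injected pod should now carry the trust root checksum annotation.
kubectl get pods -n emojivoto -l app=web -o yaml \
  | grep 'linkerd.io/trust-root-sha256'
```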
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Signed-off-by: William Morgan <william@buoyant.io>
Closes #9230
#9202 prepped the release candidate for `stable-2.12.0` by removing the `-edge`
suffix and adding the `-rc2` suffix.
This preps the chart versions for the stable release by removing that `-rc2`
suffix.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
The `linkerd authz` command help indicates that it shows server authorizations, but it shows authorization policies too.
We update the help text to indicate that all authorizations are shown.
Signed-off-by: Alex Leong <alex@buoyant.io>
In 2.11.x, proxyInit.runAsRoot was true by default, which caused the
proxy-init's runAsUser field to be 0. proxyInit.runAsRoot is now
defaulted to false in 2.12.0, but runAsUser still isn't
configurable, and when following the upgrade instructions
here, helm doesn't change runAsUser and so it conflicts with the new value
for runAsRoot=false, resulting in the pods erroring with this message:
Error: container's runAsUser breaks non-root policy (pod: "linkerd-identity-bc649c5f9-ckqvg_linkerd(fb3416d2-c723-4664-acf1-80a64a734561)", container: linkerd-init)
This PR adds a new default for runAsUser to avoid this issue.
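If the default doesn't suit a given environment, the UID can also be overridden explicitly; for example (UID illustrative, any non-zero value satisfies the non-root policy):
```bash
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane \
  -n linkerd --reuse-values \
  --set proxyInit.runAsUser=65534
```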
This release is the second release candidate for stable-2.12.0.
At this point the Helm charts can be retrieved from the stable repo:
```
helm repo add linkerd https://helm.linkerd.io/stable
helm repo up
helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds
helm install linkerd-control-plane \
-n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
linkerd/linkerd-control-plane
```
The following lists all the changes since edge-22.8.2:
* Fixed inheritance of the `linkerd.io/inject` annotation from Namespace to
Workloads when its value is `ingress`
* Added the `config.linkerd.io/default-inbound-policy: all-authenticated`
annotation to linkerd-multicluster’s Gateway deployment so that all clients
are required to be authenticated
* Added a `ReadHeaderTimeout` of 10s to all the Go `http.Server` instances, to
avoid being vulnerable to "slowloris" attacks
* Added check in `linkerd viz check --proxy` to warn in case namespaces have the
`config.linkerd.io/default-inbound-policy: deny` annotation, which would not
authorize scrapes coming from the linkerd-viz Prometheus instance
* Added validation for accepted values for the `--default-inbound-policy` flag
* Fixed invalid URL in the `linkerd install --help` output
* Added `--destination-pod` flag to `linkerd diagnostics endpoints` subcommand
* Added `proxyInit.runAsUser` in `values.yaml` defaulting to non-zero, to
complement the new default `proxyInit.runAsRoot: false` that was recently
changed
Depends on #9195
Currently, `linkerd inject --default-inbound-policy` does not set the
`config.linkerd.io/default-inbound-policy` annotation on the injected
resource(s).
The `inject` command does _try_ to set that annotation if it's set in
the `Values` generated by `proxyFlagSet`:
14d1dbb3b7/cli/cmd/inject.go (L485-L487)
...but, the flag in the proxy `FlagSet` doesn't set
`Values.Proxy.DefaultInboundPolicy`, it sets
`Values.PolicyController.DefaultAllowPolicy`:
7c5e3aaf40/cli/cmd/options.go (L375-L379)
This is because the flag set is shared across `linkerd inject` and
`linkerd install` subcommands, and in `linkerd install`, we want to set
the default policy for the whole cluster by configuring the policy
controller. In `linkerd inject`, though, we want to add the annotation
to the injected pods only.
This branch fixes this issue by changing the flag so that it sets the
`Values.Proxy.DefaultInboundPolicy` instead of the
`Values.PolicyController.DefaultAllowPolicy` value. In `linkerd
install`, we then set `Values.PolicyController.DefaultAllowPolicy` based
on the value of `Values.Proxy.DefaultInboundPolicy`, while in `inject`,
we will now actually add the annotation.
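With this change, an invocation like the following (resource names illustrative) annotates the injected workload rather than only setting the cluster-wide policy controller default:
```bash
kubectl get deploy web -n emojivoto -o yaml \
  | linkerd inject --default-inbound-policy=all-authenticated - \
  | kubectl apply -f -
# The pod template should now carry:
#   config.linkerd.io/default-inbound-policy: all-authenticated
```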
This branch is based on PR #9195, which adds validation to reject
invalid values for `--default-inbound-policy`, rather than on `main`.
This is because the validation code added in that PR had to be moved
around a bit, since it now needs to validate the
`Values.Proxy.DefaultInboundPolicy` value rather than the
`Values.PolicyController.DefaultAllowPolicy` value. I thought using
#9195 as a base branch was better than basing this on `main` and then
having to resolve merge conflicts later. When that PR merges, this can
be rebased onto `main`.
Fixes #9168
Closes #9141
This introduces the `--destination-pod` flag to the `linkerd diagnostics
endpoints` command which allows users to target a specific destination Pod when
there are multiple running in a cluster.
This can be useful for issues like #8956, where Linkerd HA is installed and
there seem to be stale endpoints in the destination service. Being able to run
this command helps identify which destination Pods (if not all) have an incorrect
view of the cluster.
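Example usage (pod name and authority illustrative):
```bash
# Query the endpoint state held by one specific destination pod.
linkerd diagnostics endpoints web-svc.emojivoto.svc.cluster.local:80 \
  --destination-pod linkerd-destination-764bf6785b-v8cj6
```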
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Closes #9148
With this change, the value of `--default-inbound-policy` is verified to be one
of the accepted values.
When the value is not an accepted value we now error
```shell
$ linkerd install --default-inbound-policy=everybody
Error: --default-inbound-policy must be one of: all-authenticated, all-unauthenticated, cluster-authenticated, cluster-unauthenticated, deny (got everybody)
Usage:
  linkerd install [flags]
...
```
A unit test has also been added.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Bump proxy-init to v2.0.0
New release of proxy-init.
Updated:
* Helm values to use v2.0.0 of proxy-init
* Helm docs
* Tests
Note: go dependencies have not been updated since the new version will
break API compatibility with older versions (source files have been
moved, see issue for more details).
Closes #9164
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Oliver Gould <ver@buoyant.io>
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
When Linkerd is installed with the `--default-inbound-policy` flag, this value gets propagated to the `proxy.defaultInboundPolicy` value which sets the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` proxy env var, but not to the `policyController.defaultAllowPolicy` value which sets the `--default-policy` flag on the policy-controller.
Since the policy-controller returns default servers when a server resource does not exist, this causes the `--default-inbound-policy` value to be effectively ignored. We update this to set the `PolicyController.DefaultAllowPolicy` value which is used by the proxy as the default when `proxy.defaultInboundPolicy` is not set.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #9022
When updating the Linkerd trust root, for example by running a command like `linkerd upgrade --identity-trust-anchors-file=./bundle.crt | kubectl apply -f -` as described in the [trust root rotation docs](https://linkerd.io/2.11/tasks/manually-rotating-control-plane-tls-credentials/#rotating-the-trust-anchor), the trust root is updated in the Linkerd config, but the identity controller does not restart and does not pick up the new root.
We add a trust root checksum annotation which causes the control plane deployments to change when the trust anchor changes, and thus causes them to restart.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Allows RSA signed trust anchors on linkerd cli (#7771)
Linkerd currently forces using an ECDSA P-256
issuer certificate along with an ECDSA trust
anchor. Still, it's cryptographically valid
to have an ECDSA P-256 issuer certificate issued
by an RSA-signed CA.
CheckCertAlgoRequirements checks whether the CA cert uses
an ECDSA or RSA 2048/4096 signing algorithm.
Fixes #7771
Signed-off-by: Baeyens, Daniel <daniel.baeyens@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
Some hosts may not have 'nft' modules available. Currently, proxy-init
defaults to using 'iptables-nft'; if the host does not have support for
nft modules, the init container will crash, blocking all injected
workloads from starting up.
This change defaults the 'iptablesMode' value to 'legacy'.
* Update linkerd-control-plane/values file default
* Update proxy-init partial to default to 'legacy' when no mode is
specified
* Change expected values in 'pkg/charts/linkerd2/values_test.go' and in
'cli/cmd/install_test'
* Update golden files
Fixes #9053
Signed-off-by: Matei David <matei@buoyant.io>
Closes #8945
This adds the `policyController.probeNetworks` configuration value so that users
can configure the networks from which probes are expected to be performed.
By default, we allow all networks (`0.0.0.0/0`). Additionally, this value
differs from `clusterNetworks` in that it is a list of networks, and thus we
have to join the values in the Helm templating.
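A sketch of overriding it at install time (networks illustrative; note the Helm list syntax):
```bash
# Only authorize probes originating from private ranges instead of 0.0.0.0/0.
linkerd install \
  --set 'policyController.probeNetworks={10.0.0.0/8,172.16.0.0/12,192.168.0.0/16}' \
  | kubectl apply -f -
```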
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
> In some circumstances, the lifecycle.postStart hook can cause the linkerd-proxy
> container to get stuck waiting for identity verification. After the
> linkerd-await timeout, the container will be restarted and the proxy starts
> without further incident. The linkerd-control-plane helm chart currently has a
> way to disable the lifecycle hook for injected proxies, but not for proxies on
> the control plane pods.
>
> This commit adds a new value to the linkerd-control-plane chart of
> proxy.controlPlaneAwait that can be used to disable the postStart lifecycle hook
> on the destination and proxy-injector pods. This is defaulted to true to
> maintain current behavior.
>
> The linkerd-control-plane chart was templated, setting proxy.controlPlaneAwait
> to true and false, verifying that the postStart lifecycle hook was either
> present or absent depending on the proxy.controlPlaneAwait value.
>
> Fixes #8738
This continues the now stale #8739 and removes the version bumps that were
requested.
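Disabling the hook on the control plane would then look roughly like:
```bash
# Skip the linkerd-await postStart hook on destination and proxy-injector.
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane \
  -n linkerd --reuse-values \
  --set proxy.controlPlaneAwait=false
```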
Signed-off-by: Jacob Lambert <calrisian777@gmail.com>
Co-authored-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Previously, the `linkerd authz` command would list all ServerAuthorization resources which targeted the specified resource. With the addition of AuthorizationPolicies, we update this command to also show all AuthorizationPolicies which target the specified resource. In cases where the AuthorizationPolicy targets an HTTPRoute which belongs to the resource, we also print the HTTPRoute name.
Sample output:
```
linkerd authz -n emojivoto po
ROUTE            SERVER         AUTHORIZATION_POLICY  SERVER_AUTHORIZATION
*                emoji-grpc                           emoji-grpc
linkerd-metrics  linkerd-admin  linkerd-metrics
linkerd-probes   linkerd-admin  linkerd-probes
*                prom                                 prom
*                voting-grpc                          voting-grpc
*                web-http                             web-public
```
Signed-off-by: Alex Leong <alex@buoyant.io>
* Update CRD chart version in golden file
Signed-off-by: Alex Leong <alex@buoyant.io>
* Run go tests on charts or golden changes
Signed-off-by: Alex Leong <alex@buoyant.io>
This change introduces a new value to be used at install (or upgrade)
time. The value (`proxyInit.iptablesMode=nft|legacy`) is responsible
for starting the proxy-init container in nft or legacy mode.
By default, the init container will use iptables-nft. When the mode is set to
`legacy`, it will instead use iptables-legacy. Most modern Linux distributions
support both, but a subset (such as RHEL based families) only support
iptables-nft and nf_tables.
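Selecting the mode explicitly at install time would look like (sketch):
```bash
# Force the legacy iptables backend on hosts without nft module support.
linkerd install --set proxyInit.iptablesMode=legacy | kubectl apply -f -
```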
Signed-off-by: Matei David <matei@buoyant.io>
Go 1.18 features a number of important changes, notably removing client
support for defunct TLS versions: https://tip.golang.org/doc/go1.18
This change updates our Go version in CI and development.
Signed-off-by: Oliver Gould <ver@buoyant.io>
There are several undocumented Helm values that configure control
plane resource constraints. This change updates the default values files to
include the missing documentation.
Fixes#8933
Signed-off-by: Dominik Táskai <dtaskai@pm.me>
As discussed in #8944, Linkerd's current use of the
`gateway.networking.k8s.io` `HTTPRoute` CRD is not a spec-compliant use
of the Gateway API, because we don't support some "core" features of the
Gateway API that don't make sense in Linkerd's use-case. Therefore,
we've chosen to replace the `gateway.networking.k8s.io` `HTTPRoute` CRD
with our own `HTTPRoute` CRD in the `policy.linkerd.io` API group, which
removes the unsupported features.
PR #8949 added the Linkerd versions of those CRDs, but did not remove
support for the Gateway API CRDs. This branch removes the Gateway API
CRDs from the policy controller and `linkerd install`/Helm charts.
The various helper functions for converting the Gateway API resource
binding types from `k8s-gateway-api` to the policy controller's internal
representation are kept in place, but the actual use of that code in the
indexer is disabled. This way, we can add support for the Gateway API
CRDs again easily. Similarly, I've kept the validation code for Gateway
API types in the policy admission controller, but the admission
controller no longer actually tries to validate those resources.
Depends on #8949. Closes #8944.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This change bumps the proxy-init version from v1.6.1 to the latest
version, v1.6.2. As part of the new release, proxy-init now adds
net_admin and net_raw sys caps to xtables-nft-multi so that nftables
mode can be used without requiring root privileges.
* Bump go.mod
* Bump version in helm values
* Bump version in misc files
* Bump version in code
Signed-off-by: Matei David <matei@buoyant.io>
Our use of the `gateway.networking.k8s.io` types is not compliant with
the gateway API spec in at least a few ways:
1. We do not support the `Gateway` types. This is considered a "core"
feature of the `HTTPRoute` type.
2. We do not currently update `HTTPRoute` status fields as dictated by
the spec.
3. Our use of Linkerd-specific `parentRef` types may not work well with
the gateway project's admission controller (untested).
Issue #8944 proposes solving this by replacing our use of
`gateway.networking.k8s.io`'s `HTTPRoute` type with our own
`policy.linkerd.io` version of the same type. That issue suggests that
the new `policy.linkerd.io` types be added separately from the change
that removes support for the `gateway.networking.k8s.io` versions, so
that the migration can be done incrementally.
This branch does the following:
* Add new `HTTPRoute` CRDs. These are based on the
`gateway.networking.k8s.io` CRDs, with the following changes:
- The group is `policy.linkerd.io`,
- The API version is `v1alpha1`,
- `backendRefs` fields are removed, as Linkerd does not support them,
- filter types Linkerd does not support (`RequestMirror` and
`ExtensionRef`), are removed.
* Add Rust bindings for the new `policy.linkerd.io` versions of
`HTTPRoute` types in `linkerd-policy-controller-k8s-api`.
The Rust bindings define their own versions of the `HttpRoute`,
`HttpRouteRule`, and `HttpRouteFilter` types, because these types'
structures are changed from the Gateway API versions (due to the
removal of unsupported filter types and fields). For other types,
which are identical to the upstream Gateway API versions (such as the
various match types and filter types), we re-export the existing
bindings from the `k8s-gateway-api` crate to minimize duplication.
* Add conversions to `InboundRouteBinding` from the `policy.linkerd.io`
`HTTPRoute` types.
When possible, I tried to factor out the code that was shared between
the conversions for Linkerd's `HTTPRoute` types and the upstream
Gateway API versions.
* Implement `kubert`'s `IndexNamespacedResource` trait for
`linkerd_policy_controller_k8s_api::policy::HttpRoute`, so that the
policy controller can index both versions of the `HTTPRoute` CRD.
* Adds validation for `policy.linkerd.io` `HTTPRoute`s to the policy
controller's validating admission webhook.
* Updated the policy controller tests to test both versions of
`HTTPRoute`.
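For illustration, a minimal instance of the new CRD might look like this (names are hypothetical; the structure mirrors the Gateway API `HTTPRoute` minus the removed fields):
```bash
kubectl apply -f - <<EOF
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  name: web-get-route
  namespace: emojivoto
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: web-http
  rules:
    - matches:
        - method: GET
          path:
            value: /api/list
EOF
```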
## Notes
A couple questions I had about this approach:
- Is re-using bindings from the `k8s-gateway-api` crate appropriate
here, when the type has not changed from the Gateway API version? If
not, I can change this PR to vendor those types as well, but it will
result in a lot more code duplication.
- Right now, the indexer stores all `HTTPRoute`s in the same index.
This means that applying a `policy.linkerd.io` version of `HTTPRoute`
and then applying the Gateway API version with the same ns/name will
update the same value in the index. Is this what we want? I wasn't
entirely sure...
See #8944.
Dependencies like `kubert` may emit INFO level logs that are useful to
see (e.g., when the serviceaccount has insufficient RBAC). This change
updates the default policy controller log level to simply be `info`.
Signed-off-by: Oliver Gould <ver@buoyant.io>