* Add timeout and failureThreshold to multicluster probe
- This adds the `probeSpec.failureThreshold` and `probeSpec.timeout` fields to the Link CRD spec.
- Likewise, the `gateway.probe.failureThreshold` and `gateway.probe.timeout` fields are added to the linkerd-multicluster chart, that are used to populate the new `mirror.linkerd.io/probe-failure-threshold` and `mirror.linkerd.io/probe-timeout` annotations in the gateway service (consumed by `linkerd mc link` to populate probe spec).
- In the probe worker, the hard-coded 50s timeout is replaced with the new timeout config (which defaults to 30s), and the probe loop is refactored so that the gateway is not marked as unhealthy until the consecutive-failures threshold is reached.
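For illustration, a Link probe spec using the new fields might look like the following sketch (the resource names and values are placeholders):
```yaml
apiVersion: multicluster.linkerd.io/v1alpha1
kind: Link
metadata:
  name: west
  namespace: linkerd-multicluster
spec:
  targetClusterName: west
  probeSpec:
    path: /ready
    period: 3s
    port: "4191"
    failureThreshold: "3"  # consecutive failures before the gateway is marked unhealthy
    timeout: 30s           # replaces the previous hard-coded 50s timeout
```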
* Make probeTicker.C synchronous
* New "audit" value for default inbound policy
As a preliminary for audit-mode support, this change just adds "audit" to the allowed values for the `proxy.defaultInboundPolicy` helm entry, and to the `--default-inbound-policy` flag for the install CLI. It also adds it to the allowed values for the `config.linkerd.io/default-inbound-policy` annotation.
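For example, a namespace can opt into the new value through the existing annotation (the namespace name is a placeholder):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    config.linkerd.io/default-inbound-policy: audit
```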
The proxy may expose a /shutdown HTTP endpoint on its admin server that may be used by `linkerd-await --shutdown` to trigger proxy shutdown after a process completes. If an application has an SSRF vulnerability, however, an attacker could use this endpoint to trigger proxy shutdown, causing a denial of service. This admin endpoint is only useful with linkerd-await; and this functionality is supplanted by Kubernetes Native Sidecars.
To address this potential issue, this change disables the proxy's admin shutdown endpoint by default. A Helm value is introduced to support enabling the endpoint cluster-wide, and the `config.linkerd.io/proxy-admin-shutdown: enabled` annotation may be set to enable it on an individual workload.
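A minimal pod-template fragment re-enabling the endpoint for a single workload, using the annotation above:
```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-admin-shutdown: enabled
```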
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #12620
When the Linkerd proxy log level is set to `debug` or higher, the proxy logs HTTP headers which may contain sensitive information.
While we want to avoid logging sensitive data by default, logging of HTTP headers can be a helpful debugging tool. Therefore, we add a `proxy.logHTTPHeaders` Helm value which prevents HTTP headers from being logged when set to false. It defaults to false so that headers cannot be logged unless users opt in.
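As a sketch, opting back into header logging at install time might look like the following values override; the boolean form simply follows the description above and should be treated as an assumption:
```yaml
proxy:
  # assumption: boolean-style value as described above; opts in to header logging
  logHTTPHeaders: true
```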
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #11773
Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`, in which case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
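A sketch of the corresponding values override (the GID shown is an arbitrary example):
```yaml
proxy:
  gid: 65532  # any non-negative value; -1 (the default) leaves the GID unset
```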
---------
Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
Fixes: https://github.com/linkerd/linkerd2/issues/12233
When Linkerd is installed in HA mode, `linkerd check` warns if the `admission-webhooks=disabled` annotation is not set on `kube-system`, but the admission webhooks already exclude `kube-system`, making the annotation unnecessary.
Signed-off-by: Alex Leong <alex@buoyant.io>
The ExternalWorkload resource we introduced has a minor naming
inconsistency; `Tls` in `meshTls` is not capitalised. Other resources
that we have (e.g. authentication resources) capitalise TLS (and so does
Go, it follows a similar naming convention).
We fix this in the workload resource by changing the field's name and
bumping the version to `v1beta1`.
Upgrading the control plane version will continue to work without
downtime. However, if an existing resource exists, the policy controller
will not completely initialise. It will not enter a crashloop backoff,
but it will also not become ready until the resource is edited or
deleted.
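A minimal sketch of the renamed field in the bumped version; the API group and surrounding fields are assumptions based on the existing resource:
```yaml
apiVersion: workload.linkerd.io/v1beta1
kind: ExternalWorkload
metadata:
  name: external-vm
  namespace: default
spec:
  meshTLS:              # `meshTls` in the previous version
    identity: spiffe://root.linkerd.cluster.local/external-vm
    serverName: external-vm.cluster.local
  workloadIPs:
    - ip: 192.0.2.10
  ports:
    - port: 8080
```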
Signed-off-by: Matei David <matei@buoyant.io>
* Add native sidecar support
Kubernetes will be providing beta support for native sidecar containers in version 1.29. This feature improves network proxy sidecar compatibility for jobs and initContainers.
Introduce a new annotation `config.alpha.linkerd.io/proxy-enable-native-sidecar` and configuration option `Proxy.NativeSidecar` that causes the proxy container to run as an init container.
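For example, a workload can opt in with a pod-template annotation:
```yaml
spec:
  template:
    metadata:
      annotations:
        config.alpha.linkerd.io/proxy-enable-native-sidecar: "true"
```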
Fixes: #11461
Signed-off-by: TJ Miller <millert@us.ibm.com>
Prior to setting an enormous value to disable protocol detection, the
field was meant to be configurable. In the refactor, the annotation name
stayed the same instead of reflecting the change in the contract (i.e.
not configurable but toggled). Additionally, there were two typos in the
proxy partials.
Signed-off-by: Matei David <matei@buoyant.io>
This change allows users to configure protocol detection timeout values
(outbound and inbound). Certain environments may find that protocol
detection inhibits debugging and makes it harder to reason about a
client's behaviour. In such cases (and not only) it may be desirable to
change the default protocol detection timeout to a higher value than the
default 10s.
Through this change, users may configure their timeout values either
with install-time settings or through annotations; this follows our
usual proxy configuration model. The proxy uses different timeout values
for the inbound and outbound stacks (even though they use the same
default value) and this change respects that by adding two separate
fields.
Signed-off-by: Matei David <matei@buoyant.io>
We add the ability to mirror services in "remote discovery" mode where no Endpoints are created for the service in the source cluster, but instead the `multicluster.linkerd.io/remote-discovery` and `multicluster.linkerd.io/remote-service` labels are set on the mirror service to indicate that the control plane should perform remote discovery for this service.
To accomplish this, we add a new field to the Link resource, `remoteDiscoverySelector`, which parallels `selector` but selects Services to export in remote-discovery mode. Since this field is purely additive, we do not change the Link CRD version. By treating an empty selector as "Nothing", we remain backwards compatible (an unset `remoteDiscoverySelector` will not export any services in remote discovery mode).
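A hedged sketch of a Link using the new selector; the label keys and values are illustrative:
```yaml
apiVersion: multicluster.linkerd.io/v1alpha1
kind: Link
metadata:
  name: west
  namespace: linkerd-multicluster
spec:
  targetClusterName: west
  selector:                  # services mirrored with Endpoints, as before
    matchLabels:
      mirror.linkerd.io/exported: "true"
  remoteDiscoverySelector:   # services mirrored in remote-discovery mode
    matchLabels:
      mirror.linkerd.io/exported: remote-discovery
```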
Signed-off-by: Alex Leong <alex@buoyant.io>
The proxy caches discovery results in-memory. Linkerd supports
overriding the default eviction timeout for cached discovery results
through install (i.e. helm) values. However, it is currently not
possible to configure timeouts on a workload-per-workload basis, or to
configure the values after Linkerd has been installed (or upgraded).
This change adds support for annotation based configuration. Workloads
and namespaces now support two new configuration annotations that will
override the install values when specified.
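For example, as a pod-template fragment (the exact annotation names here are assumptions mirroring the outbound/inbound install values, and the durations are illustrative):
```yaml
metadata:
  annotations:
    # assumed annotation names; durations are illustrative
    config.linkerd.io/proxy-outbound-discovery-cache-unused-timeout: 60s
    config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout: 90s
```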
Additionally, a typo has been fixed on the internal type representation.
This is not a breaking change since the type itself is not exposed to
users and is parsed correctly in the values.yaml file (or CLI).
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Closes #9312
#9118 introduced the `linkerd.io/trust-root-sha256` annotation, which is
automatically added to control plane components.
This change ensures that all injected workloads also receive this annotation.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
PR linkerd/linkerd2-proxy#1815 added support for a
`LINKERD2_PROXY_SHUTDOWN_GRACE_PERIOD` environment variable that
configures the proxy's maximum grace period for graceful shutdown. This
is intended to ensure that if a proxy is shut down, it will eventually
terminate in a relatively timely manner, even if some stubborn
connections don't close gracefully.
This branch adds support for a `config.linkerd.io/shutdown-grace-period`
annotation that can be used to override the default grace period
duration.
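For example, as a pod-template fragment (the duration is illustrative):
```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/shutdown-grace-period: 30s
```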
Hopefully I've added this everywhere it needs to be added --- please let
me know if I've missed anything!
* Remove the `proxy.disableIdentity` config
Fixes #7724
Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now needs to hit the cluster to retrieve the
trust root.
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
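For example, a workload can enable JSON access logs with a pod-template annotation:
```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/access-log: json
```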
I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!
Closes #7662, #1913
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The goal is to support configuring the
`--subnets-to-ignore` flag in proxy-init.
This change adds a new annotation, `/skip-subnets`, which
takes a comma-separated list of valid CIDRs.
The argument will map to the `--subnets-to-ignore`
flag in the proxy-init initContainer.
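A sketch of the annotation on a pod template; the full name is assumed to use the usual `config.linkerd.io/` prefix and the CIDRs are placeholders:
```yaml
metadata:
  annotations:
    config.linkerd.io/skip-subnets: 172.17.0.0/16,10.42.0.0/16
```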
Fixes #6758
Signed-off-by: Michael Lin <mlzc@hey.com>
Fixes #3307
Add support for annotations `config.linkerd.io/proxy-ephemeral-storage-limit` and `config.linkerd.io/proxy-ephemeral-storage-request`
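For example, as a pod-template fragment (the quantities are illustrative):
```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-ephemeral-storage-request: 100Mi
    config.linkerd.io/proxy-ephemeral-storage-limit: 1Gi
```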
Signed-off-by: Michael Lin <mlzc@hey.com>
Fixes #3260
## Summary
Currently, Linkerd uses a service account token to validate a pod
during the `Certify` request with identity, through which identity
is established on the proxy. This works well, as Kubernetes
attaches the `default` service account token of a namespace as a volume
(unless the user overrides it with a specific service account). The catch
is that this token is aimed at letting the application talk to the
Kubernetes API and is not specifically for Linkerd. This means that there
are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which
users might want to use, [causing problems with Linkerd](https://github.com/linkerd/linkerd2/issues/3183)
as Linkerd might expect it to be present.
To have a more granular control over the token, and not rely on the
service token that can be managed externally, [Bound Service Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens)
can be used to generate tokens that are specifically for Linkerd,
that are bound to a specific pod, along with an expiry.
## Background on Bound Service Account Tokens
This feature has been GA’ed in Kubernetes 1.20, and is enabled by default
in most cloud provider distributions. Using this feature, Kubernetes can
be asked to issue specific tokens for linkerd usage (through audience bound
configuration), with a specific expiry time (as the validation happens every
24 hours when establishing identity, we can follow the same), bounded to
a specific pod (meaning verification fails if the pod object isn’t available).
Because of all these bounds, and because the token cannot be used for
anything else, this feels like the right thing to rely on to validate
a pod before issuing a certificate.
### Pod Identity Name
We still use the same service account name as the pod identity
(used with metrics, etc) as these tokens are all generated from the
same base service account attached to the pod (either the default or
the user-overridden one). This can be verified by looking at the `user`
field in the `TokenReview` response.
<details>
<summary>Sample TokenReview response</summary>
Here, the new token was created for the `vault` audience for a pod which
had a serviceAccount token volume projection and was using the `mine`
serviceAccount in the default namespace.
```json
"kind": "TokenReview",
"apiVersion": "authentication.k8s.io/v1",
"metadata": {
"creationTimestamp": null,
"managedFields": [
{
"manager": "curl",
"operation": "Update",
"apiVersion": "authentication.k8s.io/v1",
"time": "2021-10-19T19:21:40Z",
"fieldsType": "FieldsV1",
"fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
}
]
},
"spec": {
"token": "....",
"audiences": [
"vault"
]
},
"status": {
"authenticated": true,
"user": {
"username": "system:serviceaccount:default:mine",
"uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
"groups": [
"system:serviceaccounts",
"system:serviceaccounts:default",
"system:authenticated"
],
"extra": {
"authentication.kubernetes.io/pod-name": [
"nginx"
],
"authentication.kubernetes.io/pod-uid": [
"ebf36f80-40ee-48ee-a75b-96dcc21466a6"
]
}
},
"audiences": [
"vault"
]
}
```
</details>
## Changes
- Update `proxy-injector` and install scripts to include the new
projected Volume and VolumeMount.
- Update the `identity` pod to validate the token with the linkerd
audience key.
- Added `identity.serviceAccountTokenProjection` to disable this
feature.
- Updated err'ing logic with `autoMountServiceAccount: false`
to fail only when this feature is disabled.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
We add a validating admission controller to the policy controller which validates `Server` resources. When a `Server` admission request is received, we look at all existing `Server` resources in the cluster and ensure that no other `Server` has an identical selector and port.
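As a sketch of the rule being enforced (the API version and selector values are illustrative), the second resource below would be rejected because it repeats the first one's selector and port:
```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-http
spec:
  podSelector:
    matchLabels:
      app: web
  port: 8080
---
# Rejected by the validating webhook: identical podSelector and port.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-http-dup
spec:
  podSelector:
    matchLabels:
      app: web
  port: 8080
```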
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
The proxy injector now adds the `config.linkerd.io/default-inbound-policy` annotation to all injected pods.
Closes #6720.
If the pod has the annotation before injection then that value is used. If the pod does not have the annotation but the namespace does, then it inherits that. If both the pod and the namespace do not have the annotation, then it defaults to `.Values.policyController.defaultAllowPolicy`.
Upon injecting the sidecar container into the pod, this annotation value is used to set the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` environment variable. Additionally, `LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS` is also set to the value of `.Values.clusterNetworks`.
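For example, annotating a namespace sets the default for every pod injected in it that lacks its own annotation (the policy value shown is one of the existing allowed values, and the namespace name is a placeholder):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    config.linkerd.io/default-inbound-policy: cluster-authenticated
```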
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The purpose of this PR is to mirror StatefulSets in a multicluster setting. Currently, it isn't possible to communicate with a specific pod in a StatefulSet across clusters without manually creating clusterIP services for each pod backing the StatefulSet in the target cluster.
After some brainstorming, we decided that one way to solve this problem is to have the Service Mirror component create a "root" headless service in our source cluster along with clusterIP services (one for each pod backing the StatefulSet in the target cluster). The idea here is that each individual clusterIP service will also have an Endpoints object whose only
host is the Gateway IP -- this is the way mirrored services are constructed in a multicluster environment. The Endpoints object for the root service will contain pairs of hostnames and IP addresses; each hostname maps to the name of a pod in the StatefulSet, its IP corresponds to the clusterIP service that the Service Mirror would create in the source cluster.
To exemplify, assume a StatefulSet `foo` in a target cluster `west` with 2 pods (foo-0, foo-1). In the source cluster `east`, we create a headless root service `foo-west` and 2 services (`foo-0-west`, `foo-1-west`) whose Endpoints point to the Gateway IP. `foo-west`'s Endpoints will contain an AddressSet with two hosts:
```yaml
# foo-west Endpoints
- hostname: foo-0
  ip: <clusterIP of foo-0-west>
- hostname: foo-1
  ip: <clusterIP of foo-1-west>
```
By making these changes, we solve the concerns associated with manually creating these services since the Service Mirror would reconcile, create and delete the clusterIP services (as opposed to requiring any interaction from the end user). Furthermore, by having a "root" headless service we can also configure DNS -- for an end user, there wouldn't be any difference in addressing a specific pod in the StatefulSet as far as syntax goes (i.e the host `foo-0.foo-west.default.svc.cluster.local` would point to the pod foo-0 in cluster west).
Closes #5162
Fixes #6452
We add a `linkerd-identity-trust-roots` ConfigMap which contains the configured trust root bundle. The proxy template partial is modified so that core control plane components load this bundle from the configmap through the downward API.
The identity controller is updated to mount this new configmap as a volume and read the trust root bundle from it at startup.
Similarly, the proxy-injector also mounts this new configmap. For each pod it injects, it reads the trust root bundle file and sets it on the injected pod.
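A sketch of the ConfigMap's shape; the data key shown is an assumption:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-identity-trust-roots
  namespace: linkerd
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```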
Signed-off-by: Alex Leong <alex@buoyant.io>
* Set `LINKERD2_PROXY_INBOUND_PORTS` during injection
Fixes #6267
The `LINKERD2_PROXY_INBOUND_PORTS` env var will be set during injection,
containing a comma-separated list of the ports in the non-proxy containers in
the pod. For the identity, destination and injector pods, the var is set
manually in their Helm templates.
Since the proxy-injector isn't reinvoked, containers injected by a mutating
webhook after the injector has run won't be detected. As an escape hatch, the
`config.linkerd.io/pod-inbound-ports` annotation has been added to allow
explicit overrides.
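For example, as a pod-template fragment (the ports are illustrative):
```yaml
metadata:
  annotations:
    config.linkerd.io/pod-inbound-ports: "8080,8443"
```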
Other changes:
- Removed `controller/proxy-injector/fake/data/inject-sidecar-container-spec.yaml`, which is no longer used.
- Fixed bad indentation in some fixture files under `controller/proxy-injector/fake/data`.
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
The misspellings have been reported at 0d56327e6f (commitcomment-51603624)
The action reports that the changes in this PR would make it happy: 03a9c310aa
Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
### What
This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.
---
To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.
Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.
Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:
```yaml
postStart:
  exec:
    command:
      - /usr/lib/linkerd/linkerd-await
```
---
### Update after draft
There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).
First, we decided that this feature should be enabled by default. The reason is that, more often than not, it will prevent start-up ordering issues from occurring without having any negative effects on the application. Additionally, it will be part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine if it should be disabled by default because it causes more issues than it prevents.
Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.
### Testing
To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.
---
`sleep.sh`:
```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```
`Dockerfile-proxy`:
```dockerfile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```
---
```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```
Annotate the `emoji` deployment so that it's the only workload that should wait for its proxy to be ready, and inject it:
```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```
You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:
```bash
$ kubectl get -n emojivoto pods
NAME READY STATUS RESTARTS AGE
voting-ff4c54b8d-sjlnz 1/2 Running 0 9s
emoji-f985459b4-7mkzt 0/2 PodInitializing 0 9s
web-5f86686c4d-djzrz 1/2 Running 0 9s
vote-bot-6d7677bb68-mv452 1/2 Running 0 9s
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This reverts commit f9ab867cbc which renamed the
multicluster label name from `mirror.linkerd.io` to `multicluster.linkerd.io`.
While this change was made to follow similar namings in other extensions, it
complicates the multicluster upgrade process due to the secret creation.
`mirror.linkerd.io` is not that important of a label to change and this will
allow a smoother upgrade process for `stable-2.10.x`.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This renames the multicluster annotation prefix from `mirror.linkerd.io` to
`multicluster.linkerd.io` in order to reflect other extension naming patterns.
Additionally, it moves labels only used in the Multicluster extension into their
own labels file—again to reflect other extensions.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.
This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.
## What this changes
This adds a tap-injector component to the `linkerd-viz` extension which is
responsible for adding the tap service name environment variable to the Linkerd
proxy container.
If a pod does not have a Linkerd proxy, no action is taken. If tap is disabled
via annotation on the pod or the namespace, no action is taken.
This also removes the environment variable for explicitly disabling tap.
Tap status for a proxy is now determined only by the
presence or absence of the tap service name environment variable.
Closes #5326
## How it changes
### tap-injector
The tap-injector component determines if `LINKERD2_PROXY_TAP_SVC_NAME` should be
added to a pod's Linkerd proxy container environment. If the pod satisfies the
following, it is added:
- The pod has a Linkerd proxy container
- The pod has not already been mutated
- Tap is not disabled via annotation on the pod or the pod's namespace
### LINKERD2_PROXY_TAP_DISABLED
Now that tap is an extension of Linkerd and not a core component, it no longer
made sense to explicitly enable or disable tap through this Linkerd proxy
environment variable. The status of tap is now determined only by whether the
tap-injector adds the `LINKERD2_PROXY_TAP_SVC_NAME` environment
variable.
### controller image
The tap-injector has been added as one of the controller image's several startup
commands, which determine what the image will do in the cluster.
As a follow-up, I think splitting out the `tap` and `tap-injector` commands from
the controller image into a linkerd-viz image (or something like that) makes
sense.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* jaeger: add check sub command
This adds a new `linkerd jaeger check` command that runs checks for the
jaeger extension, similar to the `linkerd check` cmd.
As jaeger is a separate package, it was a bit complex to make this work
because not all types and fields from the healthcheck pkg are public; helper
funcs were used to mitigate this.
This has the following changes:
- Adds a new `check.go` file under the jaeger extension pkg
- Moves some commonly needed funcs and types from `cli/cmd/check.go`
and `pkg/healthcheck/health.go` into
`pkg/healthcheck/healthcheck_output.go`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:
* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests
We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension. To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.
Note that for now, only the control plane components send traces; the proxies in the control plane do not. This is because the linkerd-jaeger injector is not yet available. However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.
I tested this by doing the following:
1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane
Signed-off-by: Alex Leong <alex@buoyant.io>
* Have webhooks refresh their certs automatically
Partially fixes #5272
In 2.9 we introduced the ability to provide the certs for `proxy-injector` and `sp-validator` through some external means like cert-manager, via the new helm setting `externalSecret`.
We forgot however to have those services watch for changes in their secrets, so whenever the secrets were rotated the services would fail with a cert error, and the only workaround was to restart those pods to pick up the new secrets.
This addresses that by first abstracting out `FsCredsWatcher` from the identity controller, which now lives under `pkg/tls`.
The webhook's logic in `launcher.go` no longer reads the certs before starting the https server, moving that instead into `server.go` which in a similar way as identity will receive events from `FsCredsWatcher` and update `Server.cert`. We're leveraging `http.Server.TLSConfig.GetCertificate` which allows us to provide a function that will return the current cert for every incoming request.
### How to test
```bash
# Create some root cert
$ step certificate create linkerd-proxy-injector.linkerd.svc ca.crt ca.key \
--profile root-ca --no-password --insecure --san linkerd-proxy-injector.linkerd.svc
# configure injector's caBundle to be that root cert
$ cat > linkerd-overrides.yaml << EOF
proxyInjector:
  externalSecret: true
  caBundle: |
    < ca.crt contents>
EOF
# Install linkerd. The injector won't start until we create the secret below
$ bin/linkerd install --controller-log-level debug --config linkerd-overrides.yaml | k apply -f -
# Generate an intermediate cert with a short lifespan
step certificate create linkerd-proxy-injector.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-proxy-injector.linkerd.svc
# Create the secret using that intermediate cert
$ kubectl create secret tls \
linkerd-proxy-injector-k8s-tls \
--cert=ca-int.crt \
--key=ca-int.key \
--namespace=linkerd
# start following the injector log
$ k -n linkerd logs -f -l linkerd.io/control-plane-component=proxy-injector -c proxy-injector
# Inject emojivoto. The pods should be injected normally
$ bin/linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
# Wait about 5 minutes and delete a pod
$ k -n emojivoto delete po -l app=emoji-svc
# You'll see it won't be injected, and something like "remote error: tls: bad certificate" will appear in the injector logs.
# Regenerate the intermediate cert
$ step certificate create linkerd-proxy-injector.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-proxy-injector.linkerd.svc
# Delete the secret and recreate it
$ k -n linkerd delete secret linkerd-proxy-injector-k8s-tls
$ kubectl create secret tls \
linkerd-proxy-injector-k8s-tls \
--cert=ca-int.crt \
--key=ca-int.key \
--namespace=linkerd
# Wait a couple of minutes and you'll see some filesystem events in the injector log along with a "Certificate has been updated" entry
# Then delete the pod again and you'll see it gets injected this time
$ k -n emojivoto delete po -l app=emoji-svc
```
Fixes #5118
This PR adds a new supported value for the `linkerd.io/inject` annotation. In addition to `enabled` and `disabled`, this annotation may now be set to `ingress`. This functions identically to `enabled`, but it also causes the `LINKERD2_PROXY_INGRESS_MODE="true"` environment variable to be set on the proxy, which makes the proxy operate in ingress mode as described in #5118.
With this set, ingresses are able to properly load service profiles based on the l5d-dst-override header.
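For example, on an ingress controller's pod template:
```yaml
metadata:
  annotations:
    linkerd.io/inject: ingress
```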
Signed-off-by: Alex Leong <alex@buoyant.io>
In #5110 the `global.proxy.destinationGetNetworks` configuration is
renamed to `global.clusterNetworks` to better reflect its purpose.
The `config.linkerd.io/proxy-destination-get-networks` annotation allows
this configuration to be overridden per-workload, but there's no real use
case for this. I don't think we want to support this value differing
between pods in a cluster. No good can come of it.
This change removes support for the `proxy-destination-get-networks`
annotation.
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, as part of our move
toward removing the protobuf.
This does not touch any of the flags or install-related code.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
Currently the secrets for the proxy-injector and sp-validator webhooks and the tap API service use the Opaque secret type and linkerd-specific field names. This makes it impossible to use cert-manager (https://github.com/jetstack/cert-manager) to provision and rotate the secrets for these services. This change converts the secrets defined in the linkerd2 helm charts and used by the controller to the kubernetes.io/tls format instead, which is the format cert-manager uses for the secrets it generates.
Signed-off-by: Lutz Behnke <lutz.behnke@finleap.com>
## Motivation
Closes #4950
## Solution
Add the `config.linkerd.io/opaque-ports` annotation to either a namespace or pod
spec to set the proxy `LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION`
environment variable.
Currently this environment variable is not used by the proxy, but will be
addressed by #4938.
## Valid values
Ports: `config.linkerd.io/opaque-ports: 4322,3306`
Port ranges: `config.linkerd.io/opaque-ports: 4320-4325`
Mixed ports and port ranges: `config.linkerd.io/opaque-ports: 3306,4320-4325`
If the pod has named ports such as:
```yaml
- name: nginx
  image: nginx:latest
  ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
```
The name can also be used as a value: `config.linkerd.io/opaque-ports:
nginx-port`
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to cloud_integration.yml and release.yml, there was a change to the upgrade-stable integration test so that we run `linkerd upgrade --addon-overwrite` to reset the add-on settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in linkerd-config-addons. This will need to be mentioned in the 2.9 upgrade notes.
Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
Fixes #4790
This PR removes both the SMI-Metrics templates along with the
experimental sub-commands. This also removes pkg `smi-metrics`
as there is no direct use of it without the commands.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
The misspellings have been reported at aaf440489e (commitcomment-41423663)
The action reports that the changes in this PR would make it happy: 5b82c6c5ca
Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
* Support overriding inbound and outbound connect timeouts
* Add validation on user-provided TCP connect timeouts
* Convert valid time values into ms
Signed-off-by: Matt Miller <mamiller@rosettastone.com>
Using the following command, the misspellings were found and then fixed:
```
codespell --skip CHANGES.md,.git,go.sum,\
controller/cmd/service-mirror/events_formatting.go,\
controller/cmd/service-mirror/cluster_watcher_test_util.go,\
SECURITY_AUDIT.pdf,.gcp.json.enc,web/app/img/favicon.png \
--ignore-words-list=aks,uint,ans,files\' --check-filenames \
--check-hidden
```
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
* feat: add log format annotation and helm value
JSON log formatting has been added via https://github.com/linkerd/linkerd2-proxy/pull/500,
but wiring the option through as an annotation/helm value is still
necessary.
This PR adds the annotation and helm value to configure log format.
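A sketch of the workload-level annotation; the annotation name follows the existing proxy-log conventions and should be treated as an assumption:
```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-log-format: json
```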
Closes #2491
Signed-off-by: Naseem <naseem@transit.app>