Commit Graph

125 Commits

Author SHA1 Message Date
Alex Leong db495d6765
refactor(proxy-injector): make injection code take a value overrider as a parameter (#14037)
To make the way the proxy injector generates patches more flexible, we adjust the method signature of `ResourceConfig.GetPodPatch` to accept a `ValueOverrider`.  The type of `ValueOverrider` is:

```
func(values *l5dcharts.Values, overrides map[string]string, namedPorts map[string]int32) (*l5dcharts.Values, error)
``` 

and specifies how overrides (in the form of pod and namespace annotations) get translated into values for the proxy patch template.

The current override behavior, specified in `GetOverriddenValues`, is supplied in all cases, making this a refactor with no functional changes.

Signed-off-by: Alex Leong <alex@buoyant.io>
2025-05-23 15:30:58 -07:00
Scott Fleener 838f2fd222
feat(policy): Configure outbound hostname labels in metrics (#13822)
Linkerd proxies omit `hostname` labels for outbound policy metrics by default (due to their potential for high cardinality).

This change adds Helm templates and annotations to control this behavior, allowing users to opt-in to these outbound hostname labels.
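
A hypothetical sketch of the opt-in (the exact Helm value name is not given here, so the key below is a placeholder):

```yaml
# Hypothetical values.yaml sketch; the real key introduced by this
# commit may be named differently.
proxy:
  metrics:
    hostnameLabels: true  # placeholder name
```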

Signed-off-by: Scott Fleener <scott@buoyant.io>
2025-03-25 16:39:36 -07:00
vishal tewatia bd577deb54
fix(injector): use annotated values for debug container (#13778)
Issue #13636 was opened stating that custom debug container annotations
had no effect.

A quick investigation confirmed the issue, and further debugging revealed a
bug where the final values for the helm chart were not using the values
processed by the GetOverriddenValues function, which is why annotations had
no effect for debug containers. This is now fixed.

Added a unit test covering the added code. Manual testing was also done; the
issue appears to be resolved.

Fixes #13636
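
For reference, a sketch of the debug-container annotations that this fix makes effective again (annotation names assumed from the Linkerd annotation set; values are examples):

```yaml
metadata:
  annotations:
    config.linkerd.io/enable-debug-sidecar: "true"
    config.linkerd.io/debug-image: cr.l5d.io/linkerd/debug  # example image
    config.linkerd.io/debug-image-version: edge-25.3.1      # example version
```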

Signed-off-by: Vishal Tewatia <tewatiavishal3@gmail.com>
Co-authored-by: Vishal Tewatia <tewatiavishal3@gmail.com>
2025-03-18 14:25:16 -05:00
Oliver Gould cb86d669ea
feat(inject): replace proxy.cores with proxy.runtime.workers (#13767)
The proxy.cores helm value is overly restrictive: it enforces a hard upper
limit. In some scenarios, a fixed limit is not practical: for example, when the
proxy is meshing an application that configures no limits.

This change replaces the proxy.cores value with a new proxy.runtime.workers
structure, with members:

- `minimum`: configures the minimum number of worker threads a proxy may use.
- `maximumCPURatio`: optionally allows the proxy to use a larger
  number of CPUs, relative to the number of available CPUs on the node.

So with a minimum of 2 and a ratio of 0.1, a proxy would run 2 worker threads
(the minimum) on an 8-core node, but allocate 10 worker threads on a 96-core
node.

When the `config.linkerd.io/proxy-cpu-limit` is used, that will continue to set
the maximum number of worker threads to a fixed value.

When it is not set, however, the minimum worker pool size is derived from the
`config.linkerd.io/proxy-cpu-request`.

An additional `config.linkerd.io/proxy-cpu-ratio-limit` annotation is introduced
to allow workload-level configuration.
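
A values.yaml sketch matching the example above (at least 2 workers, at most 10% of the node's CPUs):

```yaml
proxy:
  runtime:
    workers:
      minimum: 2
      maximumCPURatio: 0.1
```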
2025-03-17 18:54:20 +00:00
Oliver Gould 9c29655e5e
refactor(inject): group resource requests/limits by type (#13769)
In preparation for changes to CPU resource configuration, this commit reorders
the internals of the annotation overriding logic to group resource requests and
limits by type.
2025-03-11 17:34:07 +00:00
Oliver Gould 9bd16f3b3b
chore: update Go code for new lints (#13437)
Before updating our dev image with a new version of golangci-lint, this change
updates our Go code to satisfy new lints.

No functional changes.
2024-12-06 07:14:17 -08:00
Alejandro Pedraza aeadb63340
New "audit" value for default inbound policy (#12844)
* New "audit" value for default inbound policy

As a preliminary for audit-mode support, this change just adds "audit" to the allowed values for the `proxy.defaultInboundPolicy` helm entry, and to the `--default-inbound-policy` flag for the install CLI. It also adds it to the allowed values for the `config.linkerd.io/default-inbound-policy` annotation.
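
A minimal sketch of opting a namespace into the new mode via the annotation named above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example  # placeholder namespace
  annotations:
    config.linkerd.io/default-inbound-policy: audit
```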
2024-07-17 15:54:27 -05:00
Alex Leong 35fb2d6d11
feat!: Add config to disable proxy /shutdown admin endpoint (#12705)
The proxy may expose a /shutdown HTTP endpoint on its admin server that can be used by `linkerd-await --shutdown` to trigger proxy shutdown after a process completes. If an application has an SSRF vulnerability, however, an attacker could use this endpoint to trigger proxy shutdown, causing a denial of service. This admin endpoint is only useful with linkerd-await, and this functionality is supplanted by Kubernetes Native Sidecars.

To address this potential issue, this change disables the proxy's /shutdown admin endpoint by default. A Helm value is introduced to support enabling the endpoint cluster-wide, and the `config.linkerd.io/proxy-admin-shutdown: enabled` annotation may be set to enable it on an individual workload.
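
A sketch of re-enabling the endpoint for a single workload via the annotation named above:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-admin-shutdown: enabled
```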

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-06-14 09:55:15 -07:00
Alex Leong e0fe0248d5
Add config to disable HTTP proxy logging (#12665)
Fixes #12620

When the Linkerd proxy log level is set to `debug` or higher, the proxy logs HTTP headers which may contain sensitive information.

While we want to avoid logging sensitive data by default, logging of HTTP headers can be a helpful debugging tool.  Therefore, we add a `proxy.logHTTPHeaders` Helm value which prevents the logging of HTTP headers when set to false.  It defaults to false so that headers cannot be logged unless users opt in.
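
A minimal values.yaml sketch of the opt-in; the boolean form follows the description above and is an assumption about the final value type:

```yaml
proxy:
  logHTTPHeaders: true  # assumed opt-in form; defaults to false per this change
```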

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-06-11 17:46:54 -07:00
Nico Feulner 3d674599b3
make group ID configurable (#11924)
Fixes #11773

Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`, in which case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
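
A minimal values.yaml sketch (2102 is an arbitrary example GID):

```yaml
proxy:
  gid: 2102  # example; the default -1 leaves the GID unset
```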

---------

Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
2024-05-23 15:54:21 -05:00
Matei David 38c6d11832
Change injector overriding logic to be more generic (#12405)
The proxy-injector package has a `ResourceConfig` type that is
responsible for parsing resources, applying overrides, and serialising a
series of configuration values to a Kubernetes patch. The functionality
is very concrete in its assumptions; it always relies on a pod spec and
it mutates inner state when deciding which overrides to apply.

This is not a flexible way to handle injection and configuration
overriding for other types of resources. We change this by turning
methods previously defined on `ResourceConfig` into free-standing
functions. These functions can be applied to any type of resource in
order to compute a set of configuration values based on annotation
overrides. Through this change, the functions can be used to compute
static configuration for non-Pod types or can be used in tests.


Signed-off-by: Matei David <matei@buoyant.io>
2024-04-10 15:51:58 +01:00
occupyhabit 6eeaea4d94
chore: Remove repetitive words (#12330)
Signed-off-by: occupyhabit <wangmengjiao@outlook.com>
2024-03-25 09:33:39 -07:00
TJ Miller 1b37e1989f
Add native sidecar support (#11465)
* Add native sidecar support

Kubernetes will be providing beta support for native sidecar containers in version 1.29.  This feature improves network proxy sidecar compatibility for jobs and initContainers.

Introduce a new annotation config.alpha.linkerd.io/proxy-enable-native-sidecar and configuration option Proxy.NativeSidecar that causes the proxy container to run as an init-container.

Fixes: #11461
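
A sketch of opting a workload in; the "true" value is an assumption about the annotation's accepted form:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.alpha.linkerd.io/proxy-enable-native-sidecar: "true"
```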

Signed-off-by: TJ Miller <millert@us.ibm.com>
2023-11-22 12:23:24 -05:00
Matei David 1e6a019b31
Introduce configurable values for protocol detection (#11536)
This change allows users to configure protocol detection timeout values
(outbound and inbound). Certain environments may find that protocol
detection inhibits debugging and makes it harder to reason about a
client's behaviour. In such cases (and not only then) it may be desirable to
change the protocol detection timeout to a higher value than the
default 10s.

Through this change, users may configure their timeout values either
with install-time settings or through annotations; this follows our
usual proxy configuration model. The proxy uses different timeout values
for the inbound and outbound stacks (even though they use the same
default value) and this change respects that by adding two separate
fields.
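
Illustrative only: the annotation names are not given in this message, so the keys below are hypothetical placeholders for the two new fields (durations are examples):

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-inbound-detect-timeout: "30s"   # hypothetical name
    config.linkerd.io/proxy-outbound-detect-timeout: "30s"  # hypothetical name
```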

Signed-off-by: Matei David <matei@buoyant.io>
2023-11-02 14:03:50 +00:00
Matei David 6bea77d89b
Add cache configuration annotation support (#10871)
The proxy caches discovery results in-memory. Linkerd supports
overriding the default eviction timeout for cached discovery results
through install (i.e. helm) values. However, it is currently not
possible to configure timeouts on a workload-per-workload basis, or to
configure the values after Linkerd has been installed (or upgraded).

This change adds support for annotation based configuration. Workloads
and namespaces now support two new configuration annotations that will
override the install values when specified.
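
A sketch of the workload-level override; the annotation names are assumed from the Linkerd annotation set and the durations are examples:

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-outbound-discovery-cache-unused-timeout: "5s"
    config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout: "90s"
```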

Additionally, a typo has been fixed in the internal type representation.
This is not a breaking change since the type itself is not exposed to
users and is parsed correctly in the values.yaml file (or CLI).


Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2023-05-10 16:27:37 +01:00
Eliza Weisman 34df5aa606
inject: don't expand opaque port ranges (#10827)
Currently, the proxy injector will expand lists of opaque port ranges
into lists of individual port numbers. This is because the proxy has
historically not accepted port ranges in the
`LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION` environment
variable. However, when very large ranges are used, the size of the
injected manifest can be quite large, since each individual port number
in a range must be listed separately.

Proxy PR linkerd/linkerd2-proxy#2395 changed the proxy to accept ranges
as well as individual port numbers in the opaque ports environment
variable, and this change was included in the latest proxy release
(v2.200.0). This means that the proxy-injector no longer needs to expand
large port ranges into individual port numbers, and can now simply
forward the list of ranges to the proxy. This branch changes the proxy
injector to do this, resolving issues with manifest size due to large
port ranges.

Closes #9803
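
For example, an annotation like the following is now forwarded to the proxy as a single range instead of being expanded into ~10,000 individual ports (sketch):

```yaml
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "10000-20000"
```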
2023-04-27 11:27:40 -07:00
Zahari Dichev ea84a70f32
inject: avoid extra serialization when building ResourceConfig (#10589)
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2023-03-22 17:16:08 -05:00
Alejandro Pedraza 7428d4aa51
Removed dupe imports (#10049)
* Removed dupe imports

My IDE (vim-gopls) has been complaining for a while, so I decided to take
care of it. Found via
[staticcheck](https://github.com/dominikh/go-tools)

* Add stylecheck to go-lint checks
2023-01-10 14:34:56 -05:00
Alejandro Pedraza 4dbb027f48
Use metadata API in the proxy and tap injectors (#9650)
* Use metadata API in the proxy and tap injectors

Part of #9485

This adds a new `MetadataAPI` similar to the current `k8s.API` hosting informers, but backed by k8s' `metadatainformer` shared informers, which retrieve only the objects' metadata, resulting in less memory consumption by its clients. Currently this is only implemented for the proxy and tap injectors. Usage by the destination controller will be implemented as a follow-up.

## Existing API enhancements

Shared objects and logic required by API and MetadataAPI have been moved to the new `k8s.go`, `api_resource.go` and `prometheus.go` files. That includes the `isValidRSParent()` function whose arg is now more generic.

## Unit tests

`/controller/k8s/api_test.go` now also instantiates a MetadataAPI, used in the augmented `TestGetObjects()` and `TestGetOwnerKindAndName()` tests. The `resources` struct was introduced to capture the common fields among tests and simplify `newMockAPI()`'s signature.

## Other Changes

The injector no longer watches for Pods. It only requires watching workloads that own resources (and also watching namespaces), so watching Pods is not required.

## Testing Memory Consumption

Install linkerd, inject emojivoto and check the injector memory consumption with `kubectl -n linkerd top pod linkerd-proxy-injector-xxx`. It'll start consuming about 16Mi. Then ramp up emojivoto's `voting` deployment replicas to 2000. After 5 minutes memory will stabilize around 32Mi using the current branch. Using the latest edge, it'll stabilize around 110Mi.
2022-11-16 09:21:39 -05:00
Jeremy Chase 32b4ac4f3a
Populate empty proxy-version annotation (#9382)
Addresses: #9311 

* Set injected `proxy-version` annotation to `values.LinkerdVersion` when image version is empty.
* Set `Proxy.Image.Version` consistently between CLI and Helm

Tested when installed via CLI:

```
$ k get po -o yaml -n emojivoto | grep proxy-version
      linkerd.io/proxy-version: dev-0911ad92-jchase
      linkerd.io/proxy-version: dev-0911ad92-jchase
      linkerd.io/proxy-version: dev-0911ad92-jchase
      linkerd.io/proxy-version: dev-0911ad92-jchase
```

Untested when installed via Helm.

Signed-off-by: Jeremy Chase <jeremy.chase@gmail.com>
Co-authored-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
2022-10-11 13:05:59 -06:00
Kevin Leimkuhler b7387820c3
Add trust-root-sha256 annotation to injected workloads (#9361)
Closes #9312

#9118 introduced the `linkerd.io/trust-root-sha256` annotation which is
automatically added to control plane components.

This change ensures that all injected workloads also receive this annotation.
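
A sketch of the resulting workload annotation (the digest is a placeholder):

```yaml
metadata:
  annotations:
    linkerd.io/trust-root-sha256: 4d8c2f0e...  # placeholder digest
```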

Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
2022-09-08 22:22:57 -06:00
Alejandro Pedraza fd98c064c6
Properly inherit `linkerd.io/inject: ingress` from NS to workload (#9114)
* Properly inherit `linkerd.io/inject: ingress` from NS to workload

Workloads were inheriting it as the default `enabled` mode.

Introduced a new entry in the inject integration test to catch this.

This fix is paired with the ingress doc clarification PR linkerd/website#1398
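
For reference, the namespace-level setting that workloads should now inherit verbatim (sketch):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example  # placeholder namespace
  annotations:
    linkerd.io/inject: ingress
```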
2022-08-12 17:17:34 -05:00
Eliza Weisman 85e5ab3b38
inject: add `config.linkerd.io/shutdown-grace-period` annotation (#8923)
PR linkerd/linkerd2-proxy#1815 added support for a
`LINKERD2_PROXY_SHUTDOWN_GRACE_PERIOD` environment variable that
configures the proxy's maximum grace period for graceful shutdown. This
is intended to ensure that if a proxy is shut down, it will eventually
terminate in a relatively timely manner, even if some stubborn
connections don't close gracefully.

This branch adds support for a `config.linkerd.io/shutdown-grace-period`
annotation that can be used to override the default grace period
duration.
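
A sketch of the override (the duration value is an example):

```yaml
metadata:
  annotations:
    config.linkerd.io/shutdown-grace-period: "30s"
```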

Hopefully I've added this everywhere it needs to be added --- please let
me know if I've missed anything!
2022-07-19 14:43:38 -07:00
Alex Leong b7a0b8adb4
Bump minimum kubernetes version to 1.21 (#8647)
Fixes #8592

Increase the minimum supported kubernetes version from 1.20 to 1.21.  This allows us to drop support for batch/v1beta1/CronJob and discovery/v1beta1/EndpointSlices, instead using only v1 of those resources.  This fixes the deprecation warnings about these resources printed by the CLI.

Signed-off-by: Alex Leong <alex@buoyant.io>
2022-06-14 15:15:28 -07:00
Matei David 574cd49b3a
Include pod probe ports in inbound proxy config (#8645)
The injector configures the proxy with a set of known inbound ports
which are used (by the proxy) to discover inbound server configuration.
The list of ports is derived from the pod's container ports; container
ports may be optional and thus not present. The proxy supports dynamic
discovery of additional ports at runtime but since they are lazy,
additional ports may be dropped or updated long after pod start-up.

To ensure HTTP probes are handled correctly, this change introduces new
functionality to configure the list of inbound ports for the proxy with
any ports targeted by healthcheck probes, as long as they are HTTP, and
even if they are not present in the containerPorts configuration.
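
A sketch of the case this handles, with hypothetical names: an HTTP probe port that is absent from containerPorts but is now still included in the proxy's inbound port list:

```yaml
containers:
- name: app              # hypothetical container
  image: example/app:v1  # hypothetical image
  livenessProbe:
    httpGet:
      path: /live
      port: 8086  # not declared under containerPorts
```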

This change also introduces additional liveness (or readiness) probes to
the current injector webhook test fixtures in order to assert that
injected pods will always have their healthcheck target ports included
in the proxy's configuration.

Closes #8638

Signed-off-by: Matei David <matei@buoyant.io>
2022-06-13 18:33:56 +01:00
Oliver Gould 425a43def5
Enable gocritic linting (#7906)
[gocritic][gc] helps to enforce some consistency and check for potential
errors. This change applies linting changes and enables gocritic via
golangci-lint.

[gc]: https://github.com/go-critic/go-critic

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-17 22:45:25 +00:00
Oliver Gould f5876c2a98
go: Enable `errorlint` checking (#7885)
Since Go 1.13, errors may "wrap" other errors. [`errorlint`][el] checks
that error formatting and inspection is wrapping-aware.

This change enables `errorlint` in golangci-lint and updates all error
handling code to pass the lint. Some comparisons in tests have been left
unchanged (using `//nolint:errorlint` comments).

[el]: https://github.com/polyfloyd/go-errorlint

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-16 18:32:19 -07:00
Alejandro Pedraza 68b63269d9
Remove the `proxy.disableIdentity` config (#7729)
* Remove the `proxy.disableIdentity` config

Fixes #7724

Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
  made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now requires hitting the cluster to retrieve the
  trust root.
2022-01-31 10:17:10 -05:00
Eliza Weisman 9e9c9457ae
inject: support `config.linkerd.io/access-log` annotation (#7689)
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
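
A sketch of the annotation; per the above, `apache` and `json` are the supported values:

```yaml
metadata:
  annotations:
    config.linkerd.io/access-log: json
```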

I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!

Closes #7662 #1913

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-01-24 14:02:19 -08:00
Michael Lin 99f3e087e1
Introduce annotation to skip subnets (#7631)
The goal is to support configuring the
`--subnets-to-ignore` flag in proxy-init

This change adds a new annotation `/skip-subnets` which
takes a comma-separated list of valid CIDRs.
The argument maps to the `--subnets-to-ignore`
flag in the proxy-init initContainer.
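
A sketch with example CIDRs, assuming the usual `config.linkerd.io` annotation prefix:

```yaml
metadata:
  annotations:
    config.linkerd.io/skip-subnets: "10.0.0.0/8,192.168.1.0/24"
```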

Fixes #6758

Signed-off-by: Michael Lin <mlzc@hey.com>
2022-01-20 16:53:59 +00:00
Alejandro Pedraza f9f3ebefa9
Remove namespace from charts and split them into `linkerd-crd` and `linkerd-control-plane` (#6635)
Fixes #6584 #6620 #7405

# Namespace Removal

With this change, the `namespace.yaml` template is rendered only for CLI installs and not Helm, and likewise the `namespace:` entry in the namespace-level objects (using a new `partials.namespace` helper).

The `installNamespace` and `namespace` entries in `values.yaml` have been removed.

In the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace` which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.

The proxy-injector has a new `linkerd-namespace` argument given the namespace is no longer persisted in the `linkerd-config` ConfigMap, so it has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.

------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:

Stop rendering `namespace.yaml` in the `linkerd-viz` chart.

The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook, consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace already has annotations/labels entries; if it doesn't, extra ops are added to the patch to create them.

---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts. 

Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` nor `linkerd viz install` anyways, and it would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can tap on the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects must live in a separate namespace from linkerd. It's still good practice, and that's the default for the CLI install, but I removed that validation.


Finally, as a side-effect, the `linkerd mc allow` subcommand was fixed; it has been broken for a while apparently:

```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best-practice

# Core Helm Charts Split

This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not concerned by this change.

Also note the original `values.yaml` file has been split into both charts accordingly.

### UX

```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```

### Upgrade

As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).

### CLI

The CLI install/upgrade code was updated to be able to pick the templates from these new charts, but the CLI UX remains identical as before.

### Other changes

- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.

### Followups

- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
2021-12-10 15:53:08 -05:00
Michael Lin 0e39017807
Improve ephemeral-storage resource config (#7218)
Fixes #3307

Add support for annotations `config.linkerd.io/proxy-ephemeral-storage-limit` and `config.linkerd.io/proxy-ephemeral-storage-request`
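
A sketch with example sizes:

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-ephemeral-storage-request: "100Mi"
    config.linkerd.io/proxy-ephemeral-storage-limit: "1Gi"
```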

Signed-off-by: Michael Lin <mlzc@hey.com>
2021-11-05 14:04:57 -05:00
Kevin Leimkuhler d611af3647
Filter default opaque ports for pods and services (#6774)
#6719 changed the proxy injector so that it adds the `config.linkerd.io/opaque-ports` annotation to all pods and services if they or their namespace do not already contain the annotation. The value used is the default list of opaque ports—which is `25,443,587,3306,4444,5432,6379,9300,11211` unless otherwise specified by the user during installation.

Closes #6729

The main issue with this is that if a service exposes a service port `9090` that targets `3306`, the service _should_ have `9090` set as opaque since it targets a default opaque port, but it does not. This change ensures that services with this situation have `9090` set as opaque.

Additionally, services and pods do not need an annotation with the entire default opaque ports list if they don't expose those ports in the first place. This change will filter out ports from the default list if the service or pod does not expose them.
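
A sketch combining both points: the service exposes 9090 targeting the default-opaque port 3306, so the injector sets only "9090" as opaque rather than the whole default list:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example  # placeholder service
  annotations:
    config.linkerd.io/opaque-ports: "9090"  # added by the injector
spec:
  ports:
  - port: 9090
    targetPort: 3306
```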

### tests
I've added some unit tests that demonstrate the change in behavior and explained in the original issue #6729.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-08-31 16:11:42 -06:00
Kevin Leimkuhler 152290e58d
proxy-injector: add `default-inbound-policy` annotation (#6750)
The proxy injector now adds the `config.linkerd.io/default-inbound-policy` annotation to all injected pods.

Closes #6720.

If the pod has the annotation before injection then that value is used. If the pod does not have the annotation but the namespace does, then it inherits that. If both the pod and the namespace do not have the annotation, then it defaults to `.Values.policyController.defaultAllowPolicy`.

Upon injecting the sidecar container into the pod, this annotation value is used to set the `LINKERD2_PROXY_INBOUND_DEFAULT_POLICY` environment variable. Additionally, `LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS` is also set to the value of `.Values.clusterNetworks`.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-08-26 12:46:40 -06:00
Kevin Leimkuhler c7d54bb826
proxy-injector: always add the `opaque-ports` annotation (#6719)
In order to discover how a workload is configured without knowing the global defaults, the `opaque-ports` annotation is now added by the proxy injector to workloads, regardless of the list being the default or user-specified.

Closes #6689

#### core
Because core control plane components do not go through the proxy injector, the annotation is added to the `destination`, `identity`, and `proxy-injector` templates.

The `linkerd-destination` and `linkerd-proxy-injector` deployments both now just have the `opaque-ports: "8443"` annotation. The `linkerd-identity` deployment and service don't need this annotation since they don't expose anything in the default list.

#### non-core
All other resources go through the proxy injector; it decides whether or not services or pods (the two resources that it can add annotations to) should get the default list.

Workloads get the default list of opaque ports added if they and their namespace do not have the annotation already. So this boils down to:
1. If the workload already has the annotation, no patch is created
2. If the namespace has the annotation but the workload does not, a patch is generated
3. If the workload and namespace do not have the annotation, a patch is generated

#### tests
A unit test has been added and I performed the following manual tests:
1. Injected a pod with the annotation: a patch is generated but there is no change to opaque ports
2. Injected a pod with the namespace annotation: a patch is generated and opaque ports are copied down to the pod
3. Injected a pod with no annotation on it or the namespace: a patch is generated and the default opaque ports are added
4. Created a pod (not injected): a patch is generated (without the proxy) that adds the annotation (this holds true whether the pod or the namespace has the annotation)

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-08-26 11:38:40 -06:00
Alejandro Pedraza a4e35b7cc8
Set `LINKERD2_PROXY_INBOUND_PORTS` during injection (#6445)
* Set `LINKERD2_PROXY_INBOUND_PORTS` during injection

Fixes #6267

The `LINKERD2_PROXY_INBOUND_PORTS` env var will be set during injection,
containing a comma-separated list of the ports in the non-proxy containers in
the pod. For the identity, destination and injector pods, the var is set
manually in their Helm templates.

Since the proxy-injector isn't reinvoked, containers injected by a mutating
webhook after the injector has run won't be detected. As an escape hatch, the
`config.linkerd.io/pod-inbound-ports` annotation has been added to allow
explicit overrides.
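
A sketch of the escape hatch (ports are examples):

```yaml
metadata:
  annotations:
    config.linkerd.io/pod-inbound-ports: "8080,9090"
```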

Other changes:

- Removed `controller/proxy-injector/fake/data/inject-sidecar-container-spec.yaml`,
  which is no longer used.
- Fixed bad indentation in some fixture files under
  `controller/proxy-injector/fake/data`.
2021-07-09 11:52:20 -05:00
Sanskar Jaiswal 2599a04a8c
Skip NET_RAW & NET_ADMIN while copying pod drop capabilities. (#6258)
* Skip NET_RAW & NET_ADMIN while copying pod drop capabilities.

While copying the container drop capabilities into the sidecars we should
skip NET_RAW and NET_ADMIN as the init containers require them to set up
iptables.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2021-06-18 11:44:14 -05:00
Josh Soref 0be792fadc
Spelling (#6215)
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).

The misspellings have been reported at 0d56327e6f (commitcomment-51603624)

The action reports that the changes in this PR would make it happy: 03a9c310aa

Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2021-06-07 15:16:59 -06:00
Matei David 6ea497fdb0
injector: Fix ns opaque ports annotation overriding all service annotations (#6119)
Fixes #6117 

### What
---

> Applying a config.linkerd.io/opaque-ports annotation on a namespace will override all service level annotations such as those used for external-dns

When applying a service with existing annotations to a namespace that has an opaque ports annotation, we would expect the service to inherit the opaque ports annotation but keep its existing annotations. At the moment, this is not the case; when inheriting its annotation from the namespace, all of the service annotations are overridden.

The issue comes from how the annotation patch is generated in the `proxy-injector`. When we generate a patch for services, we first check whether we should add just the opaque annotation or if we should also create the annotations map. When checking for the length of the service's existing annotations map, we check the `resourceConfig`'s pod object -- for a service, the resourceConfig struct will always have an empty annotations map in the embedded pod object.

The fix is quite simple: since we know we only create the annotation patch for services, we check the workload object instead of the pod. Pods already inherit all namespace config values at injection time, so it's safe to assume the annotation patch wouldn't be called on a pod.

I think we could also do a quick check on the underlying object's kind here if we want to also preserve the pod annotation lookup, something like:
```
sz := 0
if resourceConfig.Kind == "Pod" {
    sz = len(podAnnotations) // hypothetical: however pod annotations are looked up
} else {
    sz = len(svcAnnotations) // hypothetical: however service annotations are looked up
}
```

For simplicity, I changed it to the workload instead of checking the kind but I don't mind adding more logic in.

### Tests
---



``` yaml
# for this service
apiVersion: v1
kind: Service
metadata:
  annotations:
    acme.io/foo: bar
    external-dns.alpha.kubernetes.io/internal-hostname: nginx.internal.example.org.
    external-dns.alpha.kubernetes.io/ttl: "10"
  creationTimestamp: "2021-05-13T09:14:06Z"
  name: web-svc
  namespace: emojivoto
  resourceVersion: "105676"
  uid: d2ec67aa-18e8-417f-9ae0-1a9107786201
spec:
---
# applied to this namespace
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "22"
    linkerd.io/inject: enabled
  name: emojivoto
  resourceVersion: "105722"
status:
  phase: Active
```

* *Before*
```sh
# proxy injector generates 2 Adds in the patch
time="2021-05-13T09:21:03Z" level=info msg="received service/web-svc"
time="2021-05-13T09:21:03Z" level=debug msg="using namespace emojivoto config.linkerd.io/opaque-ports annotation value"
time="2021-05-13T09:21:03Z" level=info msg="annotation patch generated for: service/web-svc"
time="2021-05-13T09:21:03Z" level=debug msg="annotation patch: [\n  {\n    \"op\": \"add\",\n    \"path\": \"/metadata/annotations\",\n    \"value\": {}\n  },\n  {\n    \"op\": \"add\",\n    \"path\": \"/metadata/annotations/config.linkerd.io~1opaque-ports\",\n    \"value\": \"22\"\n  }\n]"


# which gives us a service with missing annotations
 k get svc web-svc -o yaml -n emojivoto
apiVersion: v1
kind: Service
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "22"
  name: web-svc
  namespace: emojivoto
```


* *After the change*:
```sh
# for the same manifests, we only get one Add (an append to the existing annotations map)
time="2021-05-13T09:44:19Z" level=info msg="received service/web-svc"
time="2021-05-13T09:44:19Z" level=debug msg="using namespace emojivoto config.linkerd.io/opaque-ports annotation value"
time="2021-05-13T09:44:19Z" level=info msg="annotation patch generated for: service/web-svc"
time="2021-05-13T09:44:19Z" level=debug msg="annotation patch: [\n  {\n    \"op\": \"add\",\n    \"path\": \"/metadata/annotations/config.linkerd.io~1opaque-ports\",\n    \"value\": \"22\"\n  }\n]"

# which gives us a service with existing and new annotation(s)
apiVersion: v1
kind: Service
metadata:
  annotations:
    acme.io/foo: bar
    config.linkerd.io/opaque-ports: "22"
    external-dns.alpha.kubernetes.io/internal-hostname: nginx.internal.example.org.
    external-dns.alpha.kubernetes.io/ttl: "10"
  name: web-svc
  namespace: emojivoto

```

### Update:
---

There may be cases when we want a pod to be annotated but not injected. Added a check to see what the underlying resource type is -- if we deal with a pod, check pod annotations, otherwise check the embedded object.

```
# in the same namespace, remove linkerd inject annotation
# this makes sure resources that shouldn't be injected can still be annotated
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "22"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"linkerd.io/inject":"enabled"},"name":"emojivoto"}}
  creationTimestamp: "2021-05-13T09:14:06Z"

# get a random pod from a random deployment, grep the annotation field (it won't have any opaque annotations)
$ kubectl get po vote-bot-69754c864f-xzqf2 -o yaml -n emojivoto | rg annotations -A 4
  annotations:
    linkerd.io/created-by: linkerd/proxy-injector dev-30fa4354-matei
    linkerd.io/identity-mode: default
    linkerd.io/proxy-version: dev-30fa4354-matei
  creationTimestamp: "2021-05-13T09:14:06Z"

# restart deployment, this time it won't be injected since injection was disabled on the namespace
# it should still inherit opaque ports from namespace
$ k get po vote-bot-c9fb5bbbd-86hrm -n emojivoto -o yaml | rg 'annotations' -A 2
  annotations:
    config.linkerd.io/opaque-ports: "22"
    kubectl.kubernetes.io/restartedAt: "2021-05-13T15:29:54Z"
--
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/restartedAt: {}

# inherited opaque ports but is not injected, what we'd expect in this case
# since we want it to be _just_ annotated.
```

Signed-off-by: Matei David <matei@buoyant.io>
2021-05-14 09:59:53 -04:00
Kevin Leimkuhler 1071ec2e77
Add support for awaiting proxy readiness (#5967)
### What

This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.

---

To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.

Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.

Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:

```yaml
postStart:
  exec:
    command:
    - /usr/lib/linkerd/linkerd-await
```

---

### Update after draft

There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).

First, we decided that this feature should be enabled by default. The reason for this is that, more often than not, this feature will prevent start-up ordering issues from occurring without having any negative effects on the application. Additionally, it will be part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine if this should be disabled by default because it causes more issues than it prevents.

Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.

### Testing

To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.

---

`sleep.sh`:

```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```

`Dockerfile-proxy`:

```textile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```

---

```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```

Annotate the `emoji` deployment so that it's the only workload that waits for its proxy to be ready, and inject it:

```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```

You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:

```bash
$ kubectl get -n emojivoto pods
NAME                        READY   STATUS            RESTARTS   AGE
voting-ff4c54b8d-sjlnz      1/2     Running           0          9s
emoji-f985459b4-7mkzt       0/2     PodInitializing   0          9s
web-5f86686c4d-djzrz        1/2     Running           0          9s
vote-bot-6d7677bb68-mv452   1/2     Running           0          9s
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-04-21 17:43:23 -04:00
Matei David 99d15f8877
Add ns annotation inheritance to pods (#6002)
Closes #5977  

## What

This change adds support for namespace configuration annotation inheritance for pods. Any annotations (e.g. `config.linkerd.io/skip-outbound-ports` or `config.linkerd.io/proxy-await`) that are applied against a namespace will now also be applied to pods running in that namespace by the _proxy-injector_.

* Pods do not inherit annotations from their namespaces; the exception to this is `opaque-ports` introduced in #5941. This expands on the work by allowing all config annotations to be inherited.
* Main advantage here is that instead of applying annotations on a workload-by-workload basis we can just apply them against the namespace and it will be mirrored on all pods within the namespace.
* Through this change the controller can also check the proxy's configuration directly from the pod's meta rather than from env variables.

## How

Change is pretty straightforward. We want to make sure that before we apply a JSON patch we first copy all of the namespace annotations to the pod. The logic that was in place takes care of applying the patch.

* One obvious constraint is that we only want valid configuration annotations to be applied. To be "valid", a configuration annotation has to exist and has to be prefixed with `config.linkerd.io` -- the easiest way to do this is to go through all of the available proxy configuration options and check whether any of them are included in the namespace's annotations (done in `GetNsConfigKeys()` where we fetch all annotation keys from the namespace).
* A consideration I had with this change is whether to add `opaque-ports` as part of all of the config keys; opaque ports is a bit different though since it can be applied on a pod as well as a service -- through this change we only want to apply config annotations to pods. I chose to keep the two separate.
* Added a unit test that checks if a pod inherits config annotations from its namespace; this also includes an invalid annotation which doesn't show up in the "expected" patch to test we validate configuration correctly.

### Tests
---

I injected emojivoto and added an annotation to its namespace:

```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/inject: enabled
```

The deployment specs do not have any additional annotations as part of the pod template metadata. I first tested if the above annotations would be inherited with the current edge release (I expected opaque ports to be).

**Before changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    linkerd.io/created-by: linkerd/proxy-injector edge-21.4.1
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: edge-21.4.1
  creationTimestamp: "2021-04-08T14:33:10Z"
  generateName: emoji-696d9d8f95-
  labels:
    app: emoji-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: emoji
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 696d9d8f95
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is in there, skip outbound isn't -- although the initContainer gets the right argument since this is already applied from the namespace by the proxy injector).

**After the changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/created-by: linkerd/proxy-injector dev-a7bb62fd-matei
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: dev-a7bb62fd-matei
  creationTimestamp: "2021-04-08T14:42:06Z"
  generateName: web-5f86686c4d-
  labels:
    app: web-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: web
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 5f86686c4d
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is there and so is skip outbound and the proxy log level, correct options still passed to the initContainers).

*Edit*: made a small change, had a look at `GetNsConfigKeys()` and thought it'd be better to keep the slice of keys as a fixed length array since we know there will be at most `len(ProxyAnnotations)` at any point. Not sure such a big size is warranted but we can avoid calling append for every element.

Signed-off-by: Matei David <matei@buoyant.io>
2021-04-20 22:25:02 -04:00
Kevin Leimkuhler a11012819c
Add opaque ports namespace inheritance to pods (#5941)
### What

When a namespace has the opaque ports annotation, pods and services should
inherit it if they do not have one themselves. Currently, services do this but
pods do not. This can lead to surprising behavior where services are correctly
marked as opaque, but pods are not.

This changes the proxy-injector so that it now passes down the opaque ports
annotation to pods from their namespace if they do not have their own annotation
set. Closes #5736.

### How

The proxy-injector webhook receives admission requests for pods and services.
Regardless of the resource kind, it now checks if the resource should inherit
the opaque ports annotation from its namespace. It should inherit it if the
namespace has the annotation but the resource does not.

If the resource should inherit the annotation, the webhook creates an annotation
patch which is only responsible for adding the opaque ports annotation.

After generating the annotation patch, it checks if the resource is injectable.
From here there are a few scenarios:

1. If no annotation patch was created and the resource is not injectable, then
   admit the request with no changes. Examples of this are services with no OP
   annotation and inject-disabled pods with no OP annotation.
2. If the resource is a pod and it is injectable, create a patch that includes
   the proxy and proxy-init containers—as well as any other annotations and
   labels.
3. The above two scenarios lead to a patch being generated at this point, so no
   matter the resource the patch is returned.

### UI changes

Resources are now reported to either be "injected", "skipped", or "annotated".

The first pass at this PR worked around the fact that injection reports consider
services and namespaces injectable. This is not accurate because they don't have
pod templates that could be injected; they can however be annotated.

To fix this, an injection report now considers resources "annotatable" and uses
this to clean up some logic in the `inject` command, as well as avoid a more
complex proxy-injector webhook.

What's cool about this is it fixes some `inject` command output that would label
resources as "injected" when they were not even mutated. For example, namespaces
were always reported as being injected even if annotations were not added. Now,
it will properly report that a namespace has been "annotated" or "skipped".

### Tests

For testing, unit tests and integration tests have been added. Manual testing
can be done by installing linkerd with `debug` controller log levels, and
tailing the proxy-injector's app container when creating pods or services.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-29 19:41:15 -04:00
Alex Leong abb1e69fbd
CLI: add `--opaque-ports` flag to `inject` (#5851)
Continuation of https://github.com/linkerd/linkerd2/pull/5721/

The `config.linkerd.io/opaque-ports` annotation can now be set using the `--opaque-ports` flag on `inject`

Example

```bash
$ linkerd inject /path/to/manifest.yaml --opaque-ports 3000,5000-6000,mysql
```

This annotation is the only one which is applied to services.

Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Mayank Shah <mayankshah1614@gmail.com>
2021-03-02 08:59:09 -08:00
Kevin Leimkuhler ff93d2d317
Mirror opaque port annotations on services (#5770)
This change introduces an opaque ports annotation watcher that will send
destination profile updates when a service has its opaque ports annotation
change.

The user facing change introduced by this is that the opaque ports annotation is
now required on services when using the multicluster extension. This is because
the service mirror will create mirrored services in the source cluster, and
destination lookups in the source cluster need to discover that the workloads in
the target cluster are opaque protocols.

### Why

Closes #5650

### How

The destination server now has a new opaque ports annotation watcher. When a
client subscribes to updates for a service name or cluster IP, the `GetProfile`
method creates a profile translator stack that passes updates through resource
adaptors such as: traffic split adaptor, service profile adaptor, and now opaque
ports adaptor.

When the annotation on a service changes, the update is passed through to the
client where the `opaque_protocol` field will either be set to true or false.

A few scenarios to consider are:

  - If the annotation is removed from the service, the client should receive
    an update with no opaque ports set.
  - If the service is deleted, the stream stays open so the client should
    receive an update with no opaque ports set.
  - If the service has the annotation added, the client should receive that
    update.

### Testing

Unit tests have been added to the watcher as well as the destination server.

An integration test has been added that tests the opaque port annotation on a
service.

For manual testing, using the destination server scripts is easiest:

```
# install Linkerd

# start the destination server
$ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config

# Create a service or namespace with the annotation and inject it

# get the destination profile for that service and observe the opaque protocol field
$ go run controller/script/destination-client/main.go -method getProfile -path test-svc.default.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]                                              
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-23 13:36:17 -05:00
Kevin Leimkuhler edd3812f30
Add services to proxy injector for opaque ports annotation (#5766)
This adds namespace inheritance of the opaque ports annotation to services. 

This means that the proxy injector now watches service creation in a cluster.
When a new service is created, the webhook receives an admission request for
that service and determines whether a patch needs to be created.

A patch is created if the service does not have the annotation, but the
namespace does. This means the service inherits the annotation from the
namespace.

A patch is not created if the service and the namespace do not have the
annotation, or the service has the annotation. In the case of the service having
the annotation, we don't even need to check the namespace since it would not
inherit it anyways.

If a namespace has the annotation value changed, this will not be reflected on
the service. The service would need to be recreated so that it goes through
another admission request.

None of this applies to the `inject` command which still skips service
injection. We rely on being able to check the namespace annotations, and this is
only possible in the proxy injector webhook when we can query the k8s API.

Closes #5737

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-17 20:58:18 -05:00
Tarun Pothulapati a393c42536
values: removal of .global field (#5699)
* values: removal of .global field

Fixes #5425

With the new extension model, we no longer need the `Global` field
as we don't rely on chart dependencies anymore. This helps us
further clean up Values and makes configuration simpler.

To make upgrades and the usage of the new CLI with older configs work,
we add a new method called `config.RemoveGlobalFieldIfPresent` that
is used in the upgrade and `FetchCurrentConfiguration` paths to remove
the global field and attach its child nodes if global is present. This is
verified by `TestFetchCurrentConfiguration`'s older test that has the
global field.

We also don't yet remove .global in some helm stable-upgrade tests, so
that the initial install keeps working.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-02-11 23:38:34 +05:30
Mayank Shah 96e078421c
CLI: Remove the `--disable-tap` flag from inject (#5671)
Fixes https://github.com/linkerd/linkerd2/issues/5664

- Remove `--disable-tap` from `inject`
- Move `config.linkerd.io/disable-tap` to `viz.linkerd.io/disable-tap`

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>
2021-02-11 10:01:53 -05:00
Matei David a0e51fdfb5
Change injector proxy version annotation (#5338) (#5469)
### What

When overriding the proxy version using annotations, the respective annotation displays the wrong information (`linkerd.io/proxy-version`). This is a simple fix to display the correct version in the annotation; instead of using the proxy image from the config for the annotation's value, we take it from the overridden values instead.

Based on the discussion from #5338 I understood that when the image is updated it is reflected in the container image version but not the annotation. Alex's proposed fix seems to work like a charm so I can't really take credit for anything. I have attached below some before/after snippets of the deployments & pods. If there are any additional changes required (or if I misunderstood anything) let me know and I'll gladly get it sorted :)

#### Tests
---

Didn't add any new tests, I built the images and just tested the annotation displays the correct version. 

To test:
* I first injected an emojivoto-web deployment, its respective pod had the proxy version set to `dev-...`;
* I then re-injected the same deployment using a different proxy version and restarted the pods, its respective pod displayed the expected annotation value `stable-2.9.0` (whereas before it would have still been `dev-...`)

`Before`
```
# Deployment
apiVersion: apps/v1
kind: Deployment
...
   template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-01-04T12:41:47Z"
        linkerd.io/inject: enabled

# Pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/restartedAt: "2021-01-04T12:41:47Z"
    linkerd.io/created-by: linkerd/proxy-injector dev-8d506317-madavid
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: dev-8d506317-madavid
```
 
`After`
```sh
$ linkerd inject --proxy-version stable-2.9.0 - | kubectl apply -f -  

# Deployment
apiVersion: apps/v1
kind: Deployment
...
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-version: stable-2.9.0
        
# Pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/proxy-version: stable-2.9.0
    kubectl.kubernetes.io/restartedAt: "2021-01-04T12:41:47Z"
    linkerd.io/created-by: linkerd/proxy-injector dev-8d506317-madavid
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: stable-2.9.0

# linkerd.io/proxy-version changed after injection and now matches the config (and the proxy img)
```

Fixes #5338

Signed-off-by: Matei David <matei.david.35@gmail.com>
2021-01-06 11:13:11 -05:00
Kevin Leimkuhler 7c0843a823
Add opaque ports to destination service updates (#5294)
## Summary

This changes the destination service to start indicating whether a profile is an
opaque protocol or not.

Currently, profiles returned by the destination service are built by chaining
together updates coming from watching Profile and Traffic Split updates.

With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.

## Details

Watching updates to Profiles and Traffic Splits is straightforward--we watch
those resources and if an update occurs on one associated to a service we care
about then the update is passed through.

For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.

### When host is a Pod IP

When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, we'll indicate if the
profile is an opaque protocol if the requested port is in the annotation.

We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.

### When host is a Service

When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.

Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-12-18 12:38:59 -05:00
Alex Leong cdc57d1af0
Use linkerd-jaeger extension for control plane tracing (#5299)
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:

* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests

We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension.  To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.

Note that for now, only the control plane components send traces; the proxies in the control plane do not.  This is because the linkerd-jaeger injector is not yet available.  However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.

I tested this by doing the following:

1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-12-08 14:34:26 -08:00