The `proxy.cores` Helm value is overly restrictive: it enforces a hard upper
limit. In some scenarios a fixed limit is not practical: for example, when the
proxy is meshing an application that configures no limits.
This change replaces the `proxy.cores` value with a new `proxy.runtime.workers`
structure, with members:
- `minimum`: configures the minimum number of worker threads a proxy may use.
- `maximumCPURatio`: optionally allows the proxy to use a larger number of
  worker threads, expressed as a ratio of the CPUs available on the node.
So with a minimum of 2 and a ratio of 0.1, a proxy would run 2 worker threads
(the minimum) on an 8-core node, but 10 worker threads on a 96-core node.
When the `config.linkerd.io/proxy-cpu-limit` annotation is used, it continues
to set the maximum number of worker threads to a fixed value.
When it is not set, however, the minimum worker pool size is derived from the
`config.linkerd.io/proxy-cpu-request` annotation.
An additional `config.linkerd.io/proxy-cpu-ratio-limit` annotation is introduced
to allow workload-level configuration.
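As a rough sketch, the corresponding Helm values would look like the following (key names as described above; the numbers are only examples):

```yaml
proxy:
  runtime:
    workers:
      # Never run fewer than 2 worker threads.
      minimum: 2
      # Optionally allow up to 10% of the node's available CPUs,
      # so a 96-core node yields ~10 worker threads.
      maximumCPURatio: 0.1
```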
Fixes #11773
Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`, in which case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
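For example, a minimal Helm values sketch (the GID shown is arbitrary and only illustrative):

```yaml
proxy:
  # Run the proxy with this group ID; -1 (the default) leaves the GID unset.
  gid: 65534
```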
---------
Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
The proxy-injector package has a `ResourceConfig` type that is
responsible for parsing resources, applying overrides, and serialising a
series of configuration values to a Kubernetes patch. The functionality
is very concrete in its assumptions: it always relies on a pod spec, and
it mutates inner state when deciding which overrides to apply.
This is not a flexible way to handle injection and configuration
overriding for other types of resources. We change this by turning
methods previously defined on `ResourceConfig` into free-standing
functions. These functions can be applied to any type of resource in
order to compute a set of configuration values based on annotation
overrides. Through this change, the functions can be used to compute
static configuration for non-Pod types or can be used in tests.
Signed-off-by: Matei David <matei@buoyant.io>
* Add native sidecar support
Kubernetes will be providing beta support for native sidecar containers in version 1.29. This feature improves network proxy sidecar compatibility for jobs and initContainers.
Introduce a new annotation, `config.alpha.linkerd.io/proxy-enable-native-sidecar`, and a configuration option, `Proxy.NativeSidecar`, that causes the proxy container to run as an init-container.
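A minimal sketch of opting a workload in via the new annotation (the value is shown as a string, as Kubernetes annotations require):

```yaml
metadata:
  annotations:
    # Run the proxy container as an init-container (native sidecar).
    config.alpha.linkerd.io/proxy-enable-native-sidecar: "true"
```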
Fixes: #11461
Signed-off-by: TJ Miller <millert@us.ibm.com>
This change allows users to configure protocol detection timeout values
(outbound and inbound). Certain environments may find that protocol
detection inhibits debugging and makes it harder to reason about a
client's behaviour. In such cases (and not only) it may be desirable to
raise the protocol detection timeout above the default value of 10s.
Through this change, users may configure their timeout values either
with install-time settings or through annotations; this follows our
usual proxy configuration model. The proxy uses different timeout values
for the inbound and outbound stacks (even though they use the same
default value) and this change respects that by adding two separate
fields.
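As an illustration, the workload-level form could look like the sketch below; the annotation names here are assumptions following the usual `config.linkerd.io/` scheme and may not match the final names:

```yaml
metadata:
  annotations:
    # Hypothetical annotation names: raise both detection timeouts above 10s.
    config.linkerd.io/proxy-outbound-detect-timeout: "30s"
    config.linkerd.io/proxy-inbound-detect-timeout: "30s"
```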
Signed-off-by: Matei David <matei@buoyant.io>
The proxy caches discovery results in-memory. Linkerd supports
overriding the default eviction timeout for cached discovery results
through install (i.e. helm) values. However, it is currently not
possible to configure timeouts on a per-workload basis, or to
configure the values after Linkerd has been installed (or upgraded).
This change adds support for annotation based configuration. Workloads
and namespaces now support two new configuration annotations that will
override the install values when specified.
Additionally, a typo has been fixed in the internal type representation.
This is not a breaking change, since the type itself is not exposed to
users and is parsed correctly from the values.yaml file (or the CLI).
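A sketch of the workload- or namespace-level form (the annotation keys are my assumption of the new names and may differ):

```yaml
metadata:
  annotations:
    # Assumed annotation names: override the eviction timeout for cached
    # outbound and inbound discovery results.
    config.linkerd.io/proxy-outbound-discovery-cache-unused-timeout: "50s"
    config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout: "90s"
```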
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Currently, the proxy injector will expand lists of opaque port ranges
into lists of individual port numbers. This is because the proxy has
historically not accepted port ranges in the
`LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION` environment
variable. However, when very large ranges are used, the size of the
injected manifest can be quite large, since each individual port number
in a range must be listed separately.
Proxy PR linkerd/linkerd2-proxy#2395 changed the proxy to accept ranges
as well as individual port numbers in the opaque ports environment
variable, and this change was included in the latest proxy release
(v2.200.0). This means that the proxy-injector no longer needs to expand
large port ranges into individual port numbers, and can now simply
forward the list of ranges to the proxy. This branch changes the proxy
injector to do this, resolving issues with manifest size due to large
port ranges.
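For example, with this change an annotation like the following is forwarded to the proxy as a single range instead of being expanded into thousands of individual port numbers (the range is illustrative):

```yaml
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "10000-20000"
```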
Closes #9803
* Properly inherit `linkerd.io/inject: ingress` from NS to workload
Workloads were inheriting it as the default `enabled` mode.
Introduced a new entry in the inject integration test to catch this.
This fix is paired with the ingress doc clarification PR linkerd/website#1398
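For reference, this is the namespace-level annotation whose value must now be inherited verbatim (rather than falling back to `enabled`); the namespace name is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
  annotations:
    linkerd.io/inject: ingress
```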
PR linkerd/linkerd2-proxy#1815 added support for a
`LINKERD2_PROXY_SHUTDOWN_GRACE_PERIOD` environment variable that
configures the proxy's maximum grace period for graceful shutdown. This
is intended to ensure that if a proxy is shut down, it will eventually
terminate in a relatively timely manner, even if some stubborn
connections don't close gracefully.
This branch adds support for a `config.linkerd.io/shutdown-grace-period`
annotation that can be used to override the default grace period
duration.
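A minimal sketch of the workload-level override (the duration is just an example):

```yaml
metadata:
  annotations:
    # Give stubborn connections up to 2 minutes before the proxy is forced to exit.
    config.linkerd.io/shutdown-grace-period: "120s"
```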
Hopefully I've added this everywhere it needs to be added --- please let
me know if I've missed anything!
We frequently compare data structures--sometimes very large data
structures--that are difficult to compare visually. This change replaces
uses of `reflect.DeepEqual` with `deep.Equal`. `go-test`'s `deep.Equal`
returns a diff of values that are not equal.
Signed-off-by: Oliver Gould <ver@buoyant.io>
* Remove the `proxy.disableIdentity` config
Fixes #7724
Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now requires hitting the cluster to retrieve the
  trust root.
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
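For example, a pod template annotated as below (the value may be `apache` or `json`) causes the injector to set `LINKERD2_PROXY_ACCESS_LOG` accordingly:

```yaml
metadata:
  annotations:
    config.linkerd.io/access-log: apache
```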
I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!
Closes #7662, #1913
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The goal is to support configuring the
`--subnets-to-ignore` flag in proxy-init
This change adds a new annotation `/skip-subnets` which
takes a comma-separated list of valid CIDR.
The argument will map to the `--subnets-to-ignore`
flag in the proxy-init initContainer.
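A sketch of the annotation on a workload (the full annotation name is assumed to carry the usual `config.linkerd.io/` prefix; the CIDRs are illustrative):

```yaml
metadata:
  annotations:
    # Traffic to these subnets bypasses the proxy entirely.
    config.linkerd.io/skip-subnets: "10.0.0.0/8,192.168.1.0/24"
```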
Fixes #6758
Signed-off-by: Michael Lin <mlzc@hey.com>
Fixes #6584, #6620, #7405
# Namespace Removal
With this change, the `namespace.yaml` template is rendered only for CLI installs and not for Helm, and likewise the `namespace:` entry in namespace-scoped objects (via a new `partials.namespace` helper).
The `installNamespace` and `namespace` entries in `values.yaml` have been removed.
In the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace`, which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.
The proxy-injector has a new `linkerd-namespace` argument given the namespace is no longer persisted in the `linkerd-config` ConfigMap, so it has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.
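As a rough sketch of the template change, where a resource previously read `.Values.namespace` it now relies on Helm's built-in release object (the resource name is illustrative):

```yaml
metadata:
  name: linkerd-destination
  namespace: {{ .Release.Namespace }}
```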
------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:
Stop rendering `namespace.yaml` in the `linkerd-viz` chart.
The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace already has annotations/labels entries, in which case it has to add extra ops to that patch.
---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts.
Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` or `linkerd viz install` anyway, and would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can tap into the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects live in a separate namespace from linkerd. It's still good practice, and it's the default for the CLI install, but I removed that validation.
Finally, as a side-effect, the `linkerd mc allow` subcommand was fixed; it has been broken for a while apparently:
```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best-practice
# Core Helm Charts Split
This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not concerned by this change.
Also note the original `values.yaml` file has been split into both charts accordingly.
### UX
```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```
### Upgrade
As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).
### CLI
The CLI install/upgrade code was updated to be able to pick the templates from these new charts, but the CLI UX remains identical to before.
### Other changes
- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.
### Followups
- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
Fixes #3307
Add support for annotations `config.linkerd.io/proxy-ephemeral-storage-limit` and `config.linkerd.io/proxy-ephemeral-storage-request`
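A minimal sketch of the two annotations on a workload (the sizes are just examples):

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-ephemeral-storage-request: "100Mi"
    config.linkerd.io/proxy-ephemeral-storage-limit: "500Mi"
```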
Signed-off-by: Michael Lin <mlzc@hey.com>
* Set `LINKERD2_PROXY_INBOUND_PORTS` during injection
Fixes #6267
The `LINKERD2_PROXY_INBOUND_PORTS` env var will be set during injection,
containing a comma-separated list of the ports in the non-proxy containers in
the pod. For the identity, destination and injector pods, the var is set
manually in their Helm templates.
Since the proxy-injector isn't reinvoked, containers injected by a mutating
webhook after the injector has run won't be detected. As an escape hatch, the
`config.linkerd.io/pod-inbound-ports` annotation has been added to allow
explicit overrides.
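A sketch of that escape hatch, listing the ports of containers the injector cannot see (the port list is illustrative):

```yaml
metadata:
  annotations:
    # Explicitly declare inbound ports when a later webhook adds containers.
    config.linkerd.io/pod-inbound-ports: "8080,9090"
```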
Other changes:
- Removed `controller/proxy-injector/fake/data/inject-sidecar-container-spec.yaml`, which is no longer used.
- Fixed bad indentation in some fixture files under `controller/proxy-injector/fake/data`.
### What
This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.
---
To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.
Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.
Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:
```yaml
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
```
---
### Update after draft
There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).
First, we decided that this feature should be enabled by default. The reason for this is that more often than not, this feature will prevent start-up ordering issues from occurring without having any negative effects on the application. Additionally, this will be part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine if this should be disabled by default because it causes more issues than it prevents.
Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.
### Testing
To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.
---
`sleep.sh`:
```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```
`Dockerfile-proxy`:
```dockerfile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```
---
```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```
Annotate the `emoji` deployment so that it's the only workload that waits for its proxy to be ready, then inject the manifest:
```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```
You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:
```bash
$ kubectl get -n emojivoto pods
NAME READY STATUS RESTARTS AGE
voting-ff4c54b8d-sjlnz 1/2 Running 0 9s
emoji-f985459b4-7mkzt 0/2 PodInitializing 0 9s
web-5f86686c4d-djzrz 1/2 Running 0 9s
vote-bot-6d7677bb68-mv452 1/2 Running 0 9s
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Closes #5977
## What
This change adds support for namespace-level configuration annotation inheritance for pods. Any annotations (e.g. `config.linkerd.io/skip-outbound-ports` or `config.linkerd.io/proxy-await`) that are applied against a namespace will now also be applied to pods running in that namespace by the _proxy-injector_.
* Pods do not inherit annotations from their namespaces; the exception to this is `opaque-ports` introduced in #5941. This expands on the work by allowing all config annotations to be inherited.
* Main advantage here is that instead of applying annotations on a workload-by-workload basis we can just apply them against the namespace and it will be mirrored on all pods within the namespace.
* Through this change the controller can also check the proxy's configuration directly from the pod's meta rather than from env variables.
## How
Change is pretty straightforward. We want to make sure that before we apply a JSON patch we first copy all of the namespace annotations to the pod. The logic that was in place takes care of applying the patch.
* One obvious constraint is that we only want valid configuration annotations to be applied. To be a "valid" configuration it has to exist and it has to be prefixed with `config.linkerd.io` -- the easiest way to do this is to go through all of the available proxy configuration options and check whether any of the options are included in the namespace's annotations (done in `GetNsConfigKeys()`, where we fetch all annotation keys from the namespace).
* A consideration I had with this change is whether to add `opaque-ports` as part of all of the config keys; opaque ports is a bit different though since it can be applied on a pod as well as a service -- through this change we only want to apply config annotations to pods. I chose to keep the two separate.
* Added a unit test that checks if a pod inherits config annotations from its namespace; this also includes an invalid annotation which doesn't show up in the "expected" patch to test we validate configuration correctly.
### Tests
---
I injected emojivoto and added an annotation to its namespace:
```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/inject: enabled
```
The deployment specs do not have any additional annotations as part of the pod template metadata. I first tested if the above annotations would be inherited with the current edge release (I expected opaque ports to be).
**Before changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    linkerd.io/created-by: linkerd/proxy-injector edge-21.4.1
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: edge-21.4.1
  creationTimestamp: "2021-04-08T14:33:10Z"
  generateName: emoji-696d9d8f95-
  labels:
    app: emoji-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: emoji
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 696d9d8f95
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is in there, skip outbound isn't -- although the initContainer gets the right argument since this is already applied from the namespace by the proxy injector).
**After the changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/created-by: linkerd/proxy-injector dev-a7bb62fd-matei
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: dev-a7bb62fd-matei
  creationTimestamp: "2021-04-08T14:42:06Z"
  generateName: web-5f86686c4d-
  labels:
    app: web-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: web
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 5f86686c4d
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is there and so is skip outbound and the proxy log level, correct options still passed to the initContainers).
*Edit*: made a small change, had a look at `GetNsConfigKeys()` and thought it'd be better to keep the slice of keys as a fixed length array since we know there will be at most `len(ProxyAnnotations)` at any point. Not sure such a big size is warranted but we can avoid calling append for every element.
Signed-off-by: Matei David <matei@buoyant.io>
We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.
This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.
* values: removal of .global field
Fixes #5425
With the new extension model, we no longer need the `Global` field,
as we don't rely on chart dependencies anymore. This helps us
further clean up Values and makes configuration simpler.
To make upgrades and the usage of the new CLI with older configs work,
we add a new method called `config.RemoveGlobalFieldIfPresent` that
is used in the upgrade and `FetchCurrentConfiguration` paths to remove
the global field and attach its child nodes if global is present. This is
verified by `TestFetchCurrentConfiguration`'s older test that has the global
field.
We also don't yet remove `.global` in some Helm stable-upgrade tests, so
that the initial install continues to work.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #5385
## The problems
- `linkerd install --ha` isn't honoring flags
- `linkerd upgrade --ha` is overriding existing configs silently or failing with an error
- *Upgrading HA instances from before 2.9 to version 2.9.1 results in configs being overridden silently, or the upgrade fails with an error*
## The cause
The change in #5358 attempted to fix `linkerd install --ha`, which was only applying some of the `values-ha.yaml` defaults, by calling `charts.NewValues(true)` and merging that with the values built from `values.yaml` overridden by the flags. It turns out the `charts.NewValues()` implementation was itself merging against `values.yaml`, and as a result any flag was getting overridden by its default.
This also happened when doing `linkerd upgrade --ha` on an existing instance, which could result in silently overriding settings, or it could also fail loudly, for example when upgrading a setup that has an external issuer (in this case the issuer cert can't be read during upgrade and an error occurs as described in #5385).
Finally, doing `linkerd upgrade` (no --ha flag) on an HA install from before 2.9 results in configs getting overridden as well (silently or with an error) because, in order to generate the `linkerd-config-overrides` secret, the original install flags are retrieved from `linkerd-config` via the `loadStoredValuesLegacy()` function, which then effectively ends up performing a `linkerd upgrade` with all the flags used for `linkerd install` and falls into the same trap as above.
## The fix
In `values.go` the faulty merging logic is no longer used, so now `NewValues()` only returns the default values from `values.yaml` and doesn't require an argument anymore. It calls `readDefaults()` which now only returns the appropriate values depending on whether we're on HA or not.
There's a new function `MergeHAValues()` that merges `values-ha.yaml` into the current values (it doesn't look into `values.yaml` anymore), which is only used when processing the `--ha` flag in `options.go`.
## How to test
To replicate the issue try setting a custom setting and check it's not applied:
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=info
```
## Followup
This wasn't caught because we don't have HA integration tests. Now that our test infra is based on k3d, it should be easy to write such a test using a cluster with multiple nodes. Either that, or issuing `linkerd install --ha` with additional configs and comparing against a golden file.
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:
* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests
We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension. To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.
Note that for now, only the control plane components send traces; the proxies in the control plane do not. This is because the linkerd-jaeger injector is not yet available. However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.
I tested this by doing the following:
1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane
Signed-off-by: Alex Leong <alex@buoyant.io>
CLI crashes if linkerd-config contains unexpected values.
Add a safe accessor that initializes an empty Global on the first
access. Refactor all accesses to use the newly introduced accessor using
gopls.
Add test for linkerd-config data without Global.
Fixes #5215
Co-authored-by: Itai Schwartz <yitai27@gmail.com>
Signed-off-by: Hod Bin Noon <bin.noon.hod@gmail.com>
Per #5165, Kubernetes does not necessarily limit the proxy's access to
cores via `cgroups` when a CPU limit is set. As of #5168, the proxy now
supports a `LINKERD2_PROXY_CORES` environment configuration that
augments CPU detection from the host operating system.
This change modifies the proxy injector to ensure that this environment
is configured from the `Values.proxy.cores` Helm value, the
`config.linkerd.io/proxy-cpu-limit` annotation, and the `--proxy-cpu-limit`
install flag.
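For example, annotating a workload as below (the value is illustrative) makes the injector set the environment variable described above:

```yaml
metadata:
  annotations:
    # Caps the proxy's CPU and informs LINKERD2_PROXY_CORES.
    config.linkerd.io/proxy-cpu-limit: "2"
```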
In #5110 the `global.proxy.destinationGetNetworks` configuration is
renamed to `global.clusterNetworks` to better reflect its purpose.
The `config.linkerd.io/proxy-destination-get-networks` annotation allows
this configuration to be overridden per-workload, but there's no real use
case for this. I don't think we want to support this value differing
between pods in a cluster. No good can come of it.
This change removes support for the `proxy-destination-get-networks`
annotation.
There is no longer a proxy config `DESTINATION_GET_NETWORKS`. Instead of
reflecting this implementation detail in our values.yaml, this change renames
the variable to the more general `clusterNetworks` to emphasize its
similarity to `clusterDomain` for the purposes of discovery.
This is a major refactor of the install/upgrade code which removes the config protobuf and replaces it with a config overrides secret which stores overrides to the values struct. Further background on this change can be found here: https://github.com/linkerd/linkerd2/discussions/4966
Note: as-is this PR breaks injection. There is work to move injection onto a Values-based config which must land before this can be merged.
A summary of the high level changes:
* the install, global, and proxy fields of linkerd-config ConfigMap are no longer populated
* the CLI install flow now follows these simple steps:
* load default Values from the chart
* update the Values based on the provided CLI flags
* render the chart with these values
* also render a Secret/linkerd-config-overrides which describes the values which have been changed from their defaults
* the CLI upgrade flow now follows these simple steps:
* load the default Values from the chart
* if Secret/linkerd-config-overrides exists, apply the overrides onto the values
* otherwise load the legacy ConfigMap/linkerd-config and use it to update the values
* further update the values based on the provided CLI flags
* render the chart and the Secret/linkerd-config-overrides as above
* Helm install and upgrade is unchanged
Signed-off-by: Alex Leong <alex@buoyant.io>
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, as part of our
move to remove the protobuf.
This does not touch any of the flags or install-related code.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
## Motivation
Closes #4950
## Solution
Add the `config.linkerd.io/opaque-ports` annotation to either a namespace or pod
spec to set the proxy `LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION`
environment variable.
Currently this environment variable is not used by the proxy, but will be
addressed by #4938.
## Valid values
Ports: `config.linkerd.io/opaque-ports: 4322,3306`
Port ranges: `config.linkerd.io/opaque-ports: 4320-4325`
Mixed ports and port ranges: `config.linkerd.io/opaque-ports: 4322,4320-4325`
If the pod has named ports such as:
```
- name: nginx
  image: nginx:latest
  ports:
  - name: nginx-port
    containerPort: 80
    protocol: TCP
```
The name can also be used as a value: `config.linkerd.io/opaque-ports:
nginx-port`
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to cloud_integration.yml and release.yml, there was a change to the upgrade-stable integration test so that we do `linkerd upgrade --addon-overwrite` to reset the addons settings, because in stable-2.8.1 the Grafana image was pegged to gcr.io/linkerd-io/grafana in linkerd-config-addons. This will need to be mentioned in the 2.9 upgrade notes.
Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
* support overriding inbound and outbound connect timeouts.
* add validation on user provided TCP connect timeouts
* convert valid time values into ms
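A sketch of the workload-level form of the first item above (the annotation names are assumptions following Linkerd's usual scheme; the durations are examples and are converted to milliseconds for the proxy):

```yaml
metadata:
  annotations:
    # Assumed annotation names for the new connect-timeout overrides.
    config.linkerd.io/proxy-outbound-connect-timeout: "1000ms"
    config.linkerd.io/proxy-inbound-connect-timeout: "100ms"
```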
Signed-off-by: Matt Miller <mamiller@rosettastone.com>
* feat: add log format annotation and helm value
JSON log formatting has been added via https://github.com/linkerd/linkerd2-proxy/pull/500,
but wiring the option through as an annotation/helm value is still
necessary.
This PR adds the annotation and helm value to configure log format.
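A sketch of the workload-level form (the annotation name is assumed to follow the usual prefix; the Helm value sets the same option globally):

```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-log-format: json
```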
Closes #2491
Signed-off-by: Naseem <naseem@transit.app>
In #4585 we are observing an issue where a loop is encountered when using nginx ingress. The problem is that the outbound proxy does a dst lookup on the IP address which happens to be the very same address the ingress is listening on.
In order to avoid situations like that, this PR introduces a way to modify the set of networks for which the proxy shall do IP-based discovery. The change introduces a Helm value `.Values.global.proxy.destinationGetNetworks` that can be used to modify this setting. There are two ways a user can affect this setting:
- setting the `destinationGetNetworks` field in values during a Helm install, which changes the default on all injected pods
- using an annotation `config.linkerd.io/proxy-destination-get-networks` for injected workloads to override this value
Note that this setting cannot be tweaked through the `install` or `inject` command
Fix: #4585
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Go test failure message wrappers to create GH Annotations
First part of #4176
## Problem
Failures in go tests need to be properly formatted as Github annotations
so that we can fetch them through Github's API for aggregation and
analysis.
## Solution
A wrapper for error messages has been created in `testutil/annotations.go`.
The idea is that instead of throwing test failures like this:
```go
t.Failf("error retrieving data;\nExpected: %#v\nActual: %#v", expected,
actual)
```
We'd throw them like this:
```go
testutil.AnnotationFatalf("error retrieving data", "error retrieving data;\nExpected: %#v\nActual: %#v", expected,
actual)
```
That will continue reporting the error as before (when using `go test`
or another test runner), but as a side-effect it will also send to
stdout something like:
```
::error file=pkg/inject_test.go,line=133::error retrieving data
```
Which becomes a GH annotation, visible in the CI run summary screen.
The first string argument is used to have the GH annotation be a generic error message
that can be aggregated and counted across multiple test runs. If `testutil.Fatalf(str, args...)`
is called instead, the original error message will be used.
Note that the output will be produced only when the env var
`GH_ANNOTATION` is set (which it will be when tests are triggered from a
GitHub Actions workflow).
Besides `testutil/annotation.go` and its accompanying unit test file,
other changes were made in other tests as examples, the plan being that
in a further PR _all_ the tests will use these wrappers.
* Enable mixed configuration of skip-[inbound|outbound]-ports using port numbers and ranges (#3752)
* included tests for generated output given proxy-ignore configuration options
* renamed "validate" method to "parseAndValidate" given mutation
* updated documentation to denote inclusiveness of ranges
* Updates for expansion of ignored inbound and outbound port ranges to be handled by the proxy-init rather than CLI (#3766)
This change maintains the configured ports and ranges as strings rather than unsigned integers, while still providing validation at the command layer.
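For example, after this change the skip annotations accept individual ports and inclusive ranges mixed together (the values are illustrative):

```yaml
metadata:
  annotations:
    config.linkerd.io/skip-outbound-ports: "3306,8400-8500"
    config.linkerd.io/skip-inbound-ports: "25,587"
```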
* Bump versions for proxy-init to v1.3.0
Signed-off-by: Paul Balogh <javaducky@gmail.com>
* Inject preStop hook into the proxy sidecar container to stop it last
This commit adds support for a Graceful Shutdown technique that is used
by some Kubernetes administrators while a more prescriptive
configuration is being discussed in
https://github.com/kubernetes/kubernetes/issues/65502
The problem is that RollingUpdate strategy does not guarantee that all
traffic will be sent to a new pod _before_ the previous pod is removed.
Kubernetes is internally an event-driven system, and when a pod is
terminating, several components can receive the event simultaneously.
If an Ingress Controller gets the event too late, or processes it
more slowly than Kubernetes removes the pod from its Service, user requests
will continue flowing into a black hole.
According [to the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods)
> 1. If one of the Pod’s containers has defined a `preStop` hook,
> it is invoked inside of the container. If the `preStop` hook is still
> running after the grace period expires, step 2 is then invoked with
> a small (2 second) extended grace period.
>
> 2. The container is sent the `TERM` signal. Note that not all
> containers in the Pod will receive the `TERM` signal at the same time
> and may each require a preStop hook if the order in which
> they shut down matters.
This commit adds support for the `preStop` hook that can be configured
in three forms:
1. As the command-line argument `--wait-before-exit-seconds` for the
`linkerd inject` command.
2. As the `linkerd2` Helm chart value `Proxy.WaitBeforeExitSeconds`.
3. As the `config.alpha.linkerd.io/wait-before-exit-seconds` annotation.
If configured, it will add the following `preStop` hook to the proxy container
definition:
```yaml
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep {{.Values.Proxy.WaitBeforeExitSeconds}}
```
To achieve the maximum benefit from the option, the main container should have
its own `preStop` hook with a `sleep` command inside, set to a smaller
period than the one set for the proxy sidecar, and neither of them
must be bigger than the `terminationGracePeriodSeconds` configured for the
entire pod.
An example of a rendered Kubernetes resource where
`.Values.Proxy.WaitBeforeExitSeconds` is equal to `40`:
```yaml
# application container
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 20
# linkerd-proxy container
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 40
terminationGracePeriodSeconds: 160 # for entire pod
```
Fixes #3747
Signed-off-by: Eugene Glotov <kivagant@gmail.com>
* rework annotations doc generation from godoc parsing to map[string]string and get rid of unused yaml tags
* move annotations doc function from pkg/k8s to cli/cmd
Signed-off-by: StupidScience <tonysignal@gmail.com>
* Add the tracing environment variables to the proxy spec
* Add tracing event
* Remove unnecessary CLI change
* Update log message
* Handle single segment service name
* Use default service account if not provided
The injector doesn't read the defaults from the values.yaml
* Remove references to conf.workload.ownerRef in log messages
This nested field isn't always set.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
* Check for Namespace level config override annotations
* Add unit tests for namespace level config overrides
* add integration test for namespace level config override
* use different namespace for override tests
* check resource requests for integration tests
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Refactor proxy injection to use Helm charts
Fixes #3128
A new chart `/charts/patch` was created, that generates the JSON patch
payload that is to be returned to the k8s API when doing the injection
through the proxy injector, and it's also leveraged by the `linkerd
inject --manual` CLI.
The VFS was used by `linkerd install` to access the old chart under
`/chart`. Now the proxy injection also uses the Helm charts to generate
the JSON patch (see above) so we've moved the VFS from `cli/static` to a
new common place under `/pkg/charts/static`, and the new root for the VFS is
now `/charts`.
`linkerd install` hasn't yet migrated to use the new charts (that'll
happen in #3127), so the only change in that regard was the creation of
`/charts/chart` which is a symlink pointing to `/chart` that
`install.go` now uses, so that the VFS contains both the old and new
charts, as a temporary measure.
You can see that `/bin/Dockerfile-bin`, `/controller/Dockerfile` and
`/bin/build-cli-bin` do now `go generate` pointing to the new location
(and the `go generate` annotation was moved from `/cli/main.go` to
`pkg/charts/static/templates.go`).
The symlink trick doesn't work when building the binaries through
Docker, so `/bin/Dockerfile-bin` replaces the symlink with an actual
copy of `/chart`.
Also note that in `/controller/Dockerfile` we now need to include the
`prod` tag in `go install` like we do in `/bin/Dockerfile-bin` so that
the proxy injector does use the VFS instead of the local file system.
- The common logic to parse a chart has been moved from `install.go` to
`/pkg/charts/util.go`.
- The special ENV var in the proxy for "outbound router capacity" that
only applies to the Prometheus pod is now handled directly in the proxy
partial and all the associated go code could be removed.
- The `patch.go` lib for generating the JSON patch in go along
with its tests `patch_test.go` are no longer needed.
- Lots of functions in `/pkg/inject/inject.go` got removed/simplified
with their logic being moved into the charts themselves. As a
consequence lots of things in `inject_test.go` became irrelevant.
- Moved `template-values.go` from `/pkg/inject` to `pkg/charts` as that
contains the go structs representation of the chart variables that
will be leveraged in #3127.
Don't forget to run `/bin/helm.sh` whenever you make changes to charts
;-)
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
The patch provided by @ihcsim applies correct values for the securityContext during injection, namely: `allowPrivilegeEscalation = false`, `readOnlyRootFilesystem = true`, and the capabilities are copied from the primary container. Additionally, the proxy-init container securityContext has been updated with appropriate values.
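A sketch of the resulting proxy container securityContext with the values named above (capabilities are elided here since they are copied from the primary container):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
```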
Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>
Split proxy-init into separate repo
Fixes #2563
The new repo is https://github.com/linkerd/linkerd2-proxy-init, and I
tagged the latest there `v1.0.0`.
Here, I've removed the `/proxy-init` dir and pinned the injected
proxy-init version to `v1.0.0` in the injector code and tests.
`/cni-plugin` depends on proxy-init, so I updated the import paths
there, and could verify CNI is still working (there is some flakiness
but unrelated to this PR).
For consistency, I added a `--init-image-version` flag to `linkerd
inject` along with its corresponding override config annotation.
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
This commit refactors the changes introduced by #2842 where the debug
container spec is created in the 'cli' and 'pkg' packages. This change
follows the existing pattern of annotating the YAML in the CLI code,
and injecting the sidecar spec in the shared library.
Signed-off-by: Ivan Sim <ivan@buoyant.io>
This new annotation is used by the proxy injector to determine if the
debug container needs to be injected.
When using 'linkerd inject', the 'pkg/inject' library will only inject
annotations into the workload YAML. Even though 'conf.debugSidecar'
is set in the CLI, the 'injectPodSpec()' function is never invoked on
the proxy injector side. Once the workload YAML got picked up by the
proxy injector, 'conf.debugSidecar' is already nil, since it's a different,
new 'conf' object. The new annotation ensures that the proxy injector
injects the debug container.
Signed-off-by: Ivan Sim <ivan@buoyant.io>