Since Go 1.13, errors may "wrap" other errors. [`errorlint`][el] checks
that error formatting and inspection is wrapping-aware.
This change enables `errorlint` in golangci-lint and updates all error
handling code to pass the lint. Some comparisons in tests have been left
unchanged (using `//nolint:errorlint` comments).
[el]: https://github.com/polyfloyd/go-errorlint
Signed-off-by: Oliver Gould <ver@buoyant.io>
Closes #7816.
With this change, the `LINKERD2_PROXY_INBOUND_PORTS` env var is always rendered in install/inject output. This means that if a workload does not expose any ports, the env var is rendered as the empty string.
Coupled with linkerd/linkerd2-proxy#1478, no error is printed upon proxy startup.
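A minimal sketch of the rendering behavior (a hypothetical helper, not the injector's actual code): the value is a comma-joined port list, and an empty port list yields an empty string rather than omitting the variable.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// inboundPortsEnv renders the LINKERD2_PROXY_INBOUND_PORTS value.
// With no container ports it returns "", so the env var is still
// rendered, just empty. (Hypothetical helper for illustration.)
func inboundPortsEnv(ports []int) string {
	ss := make([]string, 0, len(ports))
	for _, p := range ports {
		ss = append(ss, strconv.Itoa(p))
	}
	return strings.Join(ss, ",")
}

func main() {
	fmt.Printf("LINKERD2_PROXY_INBOUND_PORTS=%q\n", inboundPortsEnv([]int{8080, 9090}))
	fmt.Printf("LINKERD2_PROXY_INBOUND_PORTS=%q\n", inboundPortsEnv(nil)) // empty, not omitted
}
```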
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Edge-22.2.2 change notes
## edge-22.2.2
This edge release updates the jaeger extension to be available in ARM
architectures as well, and applies some security-oriented amendments.
* Upgraded jaeger and the opentelemetry-collector to their latest versions,
which now support ARM architectures
* Fixed `linkerd multicluster check` which was reporting false warnings
* Started enforcing TLS v1.2 as a minimum in the webhook servers
* Had the identity controller emit SHA256 certificate fingerprints in its
logs/events, instead of MD5
The CLI's diagnostic command that dumps a proxy's certificate
information does not (and should not) verify the proxy's certificate.
This change documents why verification is disabled.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Currently both the `Server` and `ServerAuthorization` CRDs are defined
in a single file. As additional CRDs are introduced, this becomes
unwieldy to navigate.
This change splits `policy-crd.yaml` into `policy/sever.yaml` and
`policy/serverauthorization.yaml`. It also renames
`serviceprofile-crd.yaml` to `serviceprofile.yaml` (since it's already
under the `linkerd-crds` chart).
No functional changes.
Signed-off-by: Oliver Gould <ver@buoyant.io>
If the proxy doesn't become ready, `linkerd-await` never succeeds
and the proxy's logs don't become accessible.
This change adds a default 2 minute timeout so that pod startup
continues despite the proxy failing to become ready. `linkerd-await`
fails and `kubectl` will report that a post start hook failed.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
## edge-22.2.1
This edge release removes the `disableIdentity` configuration now that the proxy
no longer supports running without identity.
* Added a `privileged` configuration to linkerd-cni which is required by some
environments
* Fixed an issue where the TLS credentials used by the policy validator were not
updated when the credentials were rotated
* Removed the `disableIdentity` configurations now that the proxy no longer
supports running without identity
* Fixed an issue where `linkerd jaeger check` would needlessly fail for BYO
Jaeger or collector installations
* Fixed a Helm HA installation race condition introduced by the stoppage of
namespace creation
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Fix HA race when installing through Helm
Fixes #7699
The problem didn't affect 2.11, only latest edges since the Helm charts
got split into `linkerd-crds` and `linkerd-control-plane` and we stopped
creating the linkerd namespace.
Since we no longer create the namespace, we can no longer
guarantee the existence of the `config.linkerd.io/admission-webhooks`
label, so this PR adds an `objectSelector` to the injector that
filters out control-plane components, based on the presence of the
`linkerd.io/control-plane-component` label.
Given we still want the multicluster components to be injected, we had
to rename their `linkerd.io/control-plane-component` label to
`component`, following the same convention used by the other extensions.
The corresponding Prometheus rule for scraping the service mirrors was
updated accordingly.
A similar filter was added for the linkerd-cni DaemonSet.
Also, now that the `kubernetes.io/metadata.name` label is prevalent, we're
using it to filter out the kube-system and cert-manager namespaces.
The former namespace was already mentioned in the docs; the latter is
also included to avoid having races with cert-manager-cainjector which
can be used to provision the injector's cert.
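The selection logic above can be sketched in plain Go (the real filter is an `objectSelector` in the webhook config; this just mimics its DoesNotExist semantics for illustration):

```go
package main

import "fmt"

// shouldInject mimics the injector webhook's objectSelector: pods
// carrying the linkerd.io/control-plane-component label are filtered
// out (a DoesNotExist match); everything else is considered.
func shouldInject(labels map[string]string) bool {
	_, isControlPlane := labels["linkerd.io/control-plane-component"]
	return !isControlPlane
}

func main() {
	fmt.Println(shouldInject(map[string]string{"app": "web"}))
	fmt.Println(shouldInject(map[string]string{"linkerd.io/control-plane-component": "destination"}))
}
```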
* Remove the `proxy.disableIdentity` config
Fixes #7724
Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now requires hitting the cluster to retrieve the
trust root.
Fixes #7391
Supersedes #7527
Some environments require privileged access in order to deploy the
linkerd-cni config under `/host/etc/cni/net.d/`.
Co-authored-by: Kim Christensen <kimworking@gmail.com>
## edge-22.1.5
This edge release adds support for per-request Access Logging for HTTP inbound
requests in Linkerd. A new annotation, `config.linkerd.io/access-log`, has been added,
which configures the proxies to emit access logs to stderr. `apache` and `json`
are the supported configuration options, emitting access logs in Apache Common
Log Format and JSON respectively.
Special thanks to @tustvold for all the initial work around this!
* Updated injector to support the new `config.linkerd.io/access-log` annotation
* Added a new `LINKERD2_PROXY_ACCESS_LOG` proxy environment variable to configure
the access log format (thanks @tustvold)
* Updated service mirror controller to emit relevant events when
mirroring is skipped for a service
* Updated various dependencies across the project (thanks @dependabot)
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
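A minimal sketch of the annotation-to-env-var mapping (illustrative, not the injector's actual code; the supported values come from the release notes above):

```go
package main

import (
	"fmt"
	"strings"
)

// accessLogEnv maps the config.linkerd.io/access-log annotation value
// to the LINKERD2_PROXY_ACCESS_LOG environment variable, accepting
// only the two supported formats.
func accessLogEnv(annotation string) (string, error) {
	switch strings.ToLower(annotation) {
	case "apache", "json":
		return "LINKERD2_PROXY_ACCESS_LOG=" + strings.ToLower(annotation), nil
	case "":
		return "", nil // annotation absent: no env var is set
	default:
		return "", fmt.Errorf("unknown access log format: %q", annotation)
	}
}

func main() {
	for _, v := range []string{"apache", "json", "", "xml"} {
		env, err := accessLogEnv(v)
		fmt.Printf("%q -> env=%q err=%v\n", v, env, err)
	}
}
```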
I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!
Closes #7662 #1913
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
## edge-22.1.4
This edge release features a new configuration annotation, support for
externally hosted Grafana instances, and other improvements in the CLI,
dashboard and Helm charts. To learn more about using an external Grafana
instance with Linkerd, you can refer to our
[docs](0c3c5cd5ae/linkerd.io/content/2.12/tasks/grafana.md).
* Added a new annotation to configure skipping subnets in the init container
(`config.linkerd.io/skip-subnets`). This configuration option is ideal for
Docker-in-Docker (dind) workloads (thanks @michaellzc!)
* Added support in the dashboard for externally hosted Grafana instances
(thanks @jackgill!)
* Introduced resource block to `linkerd-jaeger` Helm chart (thanks
@yuriydzobak!)
* Introduced parametrized datasource (`DS_PROMETHEUS`) in all Grafana
dashboards. This allows pointing to the right Prometheus datasource when
importing a dashboard
* Introduced a consistent `--ignore-cluster` flag in the CLI for the base
installation and extensions; manifests will now be rendered even if there is
an existing installation in the current Kubernetes context (thanks
@krzysztofdrys!)
* Updated the service mirror controller to skip mirroring services whose
namespaces do not yet exist in the source cluster; previously, the service
mirror would create the namespace itself.
Signed-off-by: Matei David <matei@buoyant.io>
The goal is to support configuring the
`--subnets-to-ignore` flag in proxy-init.
This change adds a new annotation, `config.linkerd.io/skip-subnets`, which
takes a comma-separated list of valid CIDRs.
The argument will map to the `--subnets-to-ignore`
flag in the proxy-init initContainer.
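A sketch of the CIDR-list validation the annotation implies (illustrative only; the injector's actual parsing may differ):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseSkipSubnets validates the comma-separated CIDR list from the
// config.linkerd.io/skip-subnets annotation before it is handed to
// proxy-init as --subnets-to-ignore.
func parseSkipSubnets(v string) ([]string, error) {
	var subnets []string
	for _, s := range strings.Split(v, ",") {
		s = strings.TrimSpace(s)
		if _, _, err := net.ParseCIDR(s); err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %w", s, err)
		}
		subnets = append(subnets, s)
	}
	return subnets, nil
}

func main() {
	ok, err := parseSkipSubnets("172.17.0.0/16, 10.0.0.0/8")
	fmt.Println(ok, err)
	_, err = parseSkipSubnets("not-a-cidr")
	fmt.Println(err != nil)
}
```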
Fixes #6758
Signed-off-by: Michael Lin <mlzc@hey.com>
* release notes for `edge-22.1.3`
## edge-22.1.3
This release removes the Grafana component in the linkerd-viz extension.
Users can now import Linkerd dashboards into Grafana from the [Linkerd org](https://grafana.com/orgs/linkerd)
on grafana.com. Users can also follow the instructions in the [docs](https://github.com/linkerd/website/pull/1273)
to install a separate Grafana that can be integrated with the Linkerd dashboard.
* Stopped shipping grafana-based image in the linkerd-viz extension
* Removed `repair` sub-command in the CLI
* Updated various dependencies across the project (thanks @dependabot)
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes#7594
The `linkerd repair` command was specifically used to fix an issue in Linkerd 2.9.x and is no longer necessary. Users experiencing the issue should use the repair command with CLI version 2.9.x before upgrading to a later version.
Signed-off-by: Alex Leong <alex@buoyant.io>
* Stop shipping grafana-based image
Fixes #6045 #7358
With this change we stop building a Grafana-based image preloaded with the Linkerd Grafana dashboards.
Instead, we recommend users install Grafana themselves; we provide a `grafana/values.yaml` file with a default config that points to the same Grafana dashboards we had, which are now hosted at https://grafana.com/orgs/linkerd/dashboards.
The new file `grafana/README.md` contains instructions for installing the official Grafana Helm chart, and mentions other available methods.
The `grafana.enabled` flag has been removed, and `grafanaUrl` has been moved to `grafana.url`. This will help consolidate other Grafana settings that might emerge, in particular when #7429 gets addressed.
## Dashboards definitions changes
The dashboard definitions under `grafana/dashboards` (which should be kept in sync with what's published at https://grafana.com/orgs/linkerd/dashboards) were updated, adding the `__inputs`, `__elements` and `__requires` entries at the beginning, which were required for publishing.
This release adds support for using the cert-manager CA Injector to configure
Linkerd's webhooks.
* Fixed a rare issue when a Service's opaque ports annotation does not match
that of the pods in the service
* Disallowed privilege escalation in control plane containers (thanks @kichristensen!)
* Updated the multicluster extension's service mirror controller to make mirror
services empty when the exported service is empty
* Added support for injecting Webhook CA bundles with cert-manager CA Injector
(thanks @bdun1013!)
Signed-off-by: Alex Leong <alex@buoyant.io>
Disabling privilege escalation is a security best practice. But
currently this is not supported when installing from Helm.
A parameter called `privilegeEscalationEnabled` is added to the Helm
chart. The default value is `true` to avoid breaking changes to the Helm
chart.
Fixes #7282
Signed-off-by: Kim Christensen <kimworking@gmail.com>
* Adding support for injecting Webhook CA bundles with cert-manager CA Injector (#7353)
Currently, users need to pass in the caBundle when doing a helm/CLI install. If the user is already using cert-manager to generate webhook certs, they can use the cert-manager CA injector to populate the caBundle for the Webhooks.
Adding `injectCaFrom` and `injectCaFromSecret` options to every webhook alongside every `caBundle` option gives users the ability to add the `cert-manager.io/inject-ca-from` or `cert-manager.io/inject-ca-from-secret` annotations to the webhooks, specifying the Certificate or Secret to pull the CA from, in order to accomplish CA bundle injection.
Signed-off-by: Brian Dunnigan <bdun1013dev@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
* update chart version for `edge-21.12.4`
This PR updates the chart version of the `linkerd-control-plane`
chart for the latest edge.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When installing Linkerd on a cluster with the Docker container runtime, `proxyInit.runAsRoot` must be set to `true` in order for Linkerd to operate. This is checked in two different ways: `linkerd check --pre` and `linkerd check`.
#7457 discussed whether it's better to emit this as a warning or an error, but after some further discussion it makes more sense as a `linkerd install` runtime error so that a user cannot miss this configuration.
It still remains as part of `linkerd check` in case more nodes are added that do not satisfy this condition, or Linkerd is installed through Helm.
```sh
$ linkerd install
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=false
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=""
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=true
...
$ linkerd install --set proxyInit.runAsRoot=1
...
```
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Fixes #6740
\#6711 removed the usage of unnecessary reference variables
in the proxy template. Their definitions were left in place
because of race conditions with extension installs.
As `2.11` was released with that change, now is a good time to
remove the definitions too, since no usages should remain after a
`2.11` upgrade.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #6584 #6620 #7405
# Namespace Removal
With this change, the `namespace.yaml` template is rendered only for CLI installs and not for Helm; likewise for the `namespace:` entry in the namespace-level objects (handled via a new `partials.namespace` helper).
The `installNamespace` and `namespace` entries in `values.yaml` have been removed.
In the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace`, which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.
The proxy-injector has a new `linkerd-namespace` argument given the namespace is no longer persisted in the `linkerd-config` ConfigMap, so it has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.
------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:
Stop rendering `namespace.yaml` in the `linkerd-viz` chart.
The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace already has annotations/labels entries; if not, it has to add extra ops to that patch.
---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts.
Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` nor `linkerd viz install` anyway, and would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can tap into the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects must live in a separate namespace from linkerd. It's still good practice, and that's the default for the CLI install, but I removed that validation.
Finally, as a side effect, the `linkerd mc allow` subcommand was fixed; it had apparently been broken for a while:
```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best-practice
# Core Helm Charts Split
This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not concerned by this change.
Also note the original `values.yaml` file has been split into both charts accordingly.
### UX
```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```
### Upgrade
As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).
### CLI
The CLI install/upgrade code was updated to pick the templates from these new charts, but the CLI UX remains identical to before.
### Other changes
- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.
### Followups
- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
The proxy-metrics diagnostics include potentially private information (service names, pod names, etc.).
This commit adds an `--obfuscate` flag to the `diagnostics proxy-metrics` command to obfuscate this data:
`diagnostics proxy-metrics --obfuscate`
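One way such obfuscation can work is replacing sensitive label values with short, stable digests, so time series stay distinguishable without revealing names. A hypothetical sketch (the CLI's actual obfuscation may differ):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// obfuscateLabel replaces a potentially sensitive label value (a pod
// or service name) with a short stable digest: the same input always
// maps to the same token, different inputs to different tokens.
func obfuscateLabel(value string) string {
	sum := sha256.Sum256([]byte(value))
	return fmt.Sprintf("obf-%x", sum[:4])
}

func main() {
	fmt.Println(obfuscateLabel("web-5d4f7c9b6-abcde"))
	fmt.Println(obfuscateLabel("web-5d4f7c9b6-abcde")) // stable: same token
	fmt.Println(obfuscateLabel("emoji-voting"))        // distinct token
}
```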
Closes #6073
Signed-off-by: ahmedalhulaibi <ahmed.alhulaibi41@gmail.com>
PR #6750 adds the `config.linkerd.io/default-inbound-policy` annotation for setting the default inbound policy for an injected proxy.
This commit adds support for a `default-inbound-policy` flag in `makeProxyFlags` so that it can be set with the `linkerd inject` command.
Closes #6754
Signed-off-by: ahmedalhulaibi <ahmed.alhulaibi41@gmail.com>
Now that SMI functionality is being fully moved into the
[linkerd-smi](https://github.com/linkerd/linkerd-smi) extension, we can
stop supporting it by default.
This means that the `destination` component will stop reacting
to `TrafficSplit` objects. When `linkerd-smi` is installed,
it converts `TrafficSplit` objects into `ServiceProfiles`
that the destination component can understand and react to accordingly.
Also, whenever a `ServiceProfile` with traffic splitting is associated
with a service, the same information (i.e. splits and weights) is also
surfaced through the UI (in the new `services` tab) and the viz CLI,
so we are not really losing any UI functionality here.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When running `linkerd check -o short` there can still be formatting issues:
- When there are no core warnings but there are extension warnings, a newline is printed at the start of the output
- When there are warnings, there is no newline printed between the warnings and the result
Below you can see the extra newline (before `Linkerd extensions checks`) and the lack of a newline on the line before `Status check results ...`.
Old:
```shell
$ linkerd check -o short

Linkerd extensions checks
=========================
linkerd-viz
-----------
...
Status check results are √
```
New:
```shell
$ linkerd check -o short
Linkerd extensions checks
=========================
linkerd-viz
-----------
...

Status check results are √
```
---
This fixes the above issues by moving the newline printing to the end of a category, which right now is Core and Extension.
If there is no output for either, then no newline is printed. This results in no stray newlines when running in short output and there are no warnings.
```shell
$ linkerd check -o short
Status check results are √
```
If there is output for a category, then the category handles the newline printing itself, meaning we don't need to track whether a newline needs to be printed _before_ a category or _before_ the results.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
EndpointSlices are enabled by default in our Kubernetes minimum version of 1.20. Thus we can change the default behavior of the destination controller to use EndpointSlices instead of Endpoints. This unblocks any functionality which is specific to EndpointSlices such as topology aware hints.
Signed-off-by: Alex Leong <alex@buoyant.io>
* build: upgrade to Go 1.17
This commit introduces three changes:
1. Update the `go` directive in `go.mod` to 1.17
2. Update all Dockerfiles from `golang:1.16.2` to
`golang:1.17.3`
3. Update all CI to use Go 1.17
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
* chore: run `go fmt ./...`
This commit synchronizes `//go:build` lines with `// +build` lines.
Reference: https://go.googlesource.com/proposal/+/master/design/draft-gobuild.md
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
The policy APIs are currently at v1beta1, though we continue to support
the (identical) v1alpha1 APIs. This change marks the v1alpha1 variants
as deprecated so that kubectl will emit warnings if they are used.
In our chart values and (some) integration tests, we're using a deprecated
label for node selection. According to the warning messages we get during
installation, the label has been deprecated since k8s `v1.14`:
```
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
Warning: spec.jobTemplate.spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
```
This PR replaces all occurrences of `beta.kubernetes.io/os` with
`kubernetes.io/os`.
Fixes #7225
This PR adds changes in partials and `values.yaml` to allow passing the optional flags `log-level` and `log-format` to proxy-init.
This approach was used to have backwards compatibility between different linkerd-proxy-init images without needing to change helm charts.
Related to https://github.com/linkerd/linkerd2-proxy-init/pull/47
Fixes #6881
Signed-off-by: Gustavo Carvalho <gusfcarvalho@gmail.com>
The policy controller synthesizes identity strings based on service account
names; but it assumed that `linkerd` was the name of the control plane
namespace. This change updates the policy controller to take a
`--control-plane-namespace` command-line argument to set this value in
identity strings. The helm templates have been updated to configure the policy
controller appropriately.
Fixes #7204
Co-authored-by: Oliver Gould <ver@buoyant.io>
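The identity string construction can be sketched as follows, parameterized on the control-plane namespace instead of a hardcoded `linkerd` (the `<sa>.<ns>.serviceaccount.identity.<cp-ns>.<trust-domain>` layout follows Linkerd's convention; the trust domain shown is an assumption for illustration):

```go
package main

import "fmt"

// proxyIdentity builds the identity string synthesized for a
// workload's service account, with the control-plane namespace
// supplied rather than assumed to be "linkerd".
func proxyIdentity(sa, ns, controlPlaneNS, trustDomain string) string {
	return fmt.Sprintf("%s.%s.serviceaccount.identity.%s.%s", sa, ns, controlPlaneNS, trustDomain)
}

func main() {
	fmt.Println(proxyIdentity("default", "emojivoto", "linkerd", "cluster.local"))
	// With a custom control-plane namespace:
	fmt.Println(proxyIdentity("default", "emojivoto", "linkerd-cp", "cluster.local"))
}
```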
The Linkerd proxy-init container is currently forced to run as root.
This change removes the hardcoded `runAsNonRoot: false` and `runAsUser: 0`,
so the container inherits the user ID from the proxy-init image instead,
which may allow it to run as non-root.
Fixes #5505
Signed-off-by: Schlotter, Christian <christian.schlotter@daimler.com>
The resource configuration does not support `ephemeral-storage`.
The [partials.resources](main/charts/partials/templates/_resources.tpl) named template should be updated to support such configuration.
The change can be validated by running under `linkerd2/viz/charts/linkerd-viz` directory
```bash
helm template --set prometheus.resources.ephemeral-storage.limit=4Gi .
```
```bash
helm template --set prometheus.resources.ephemeral-storage.request=4Gi .
```
```bash
helm template \
--set prometheus.resources.ephemeral-storage.limit=4Gi \
--set prometheus.resources.ephemeral-storage.request=4Gi .
```
Make sure it doesn't affect existing resources configuration
```bash
helm template --set prometheus.resources.cpu.limit=4Gi .
```
Fixes #3307
Signed-off-by: Michael Lin <mlzc@hey.com>
Fixes #3260
## Summary
Currently, Linkerd uses a service account token to validate a pod
during the `Certify` request with identity, through which identity
is established on the proxy. This works well, as Kubernetes
attaches the `default` service account token of a namespace as a volume
(unless overridden with a specific service account by the user). The
catch is that this token is aimed at the application talking to the
Kubernetes API and not specifically at Linkerd. This means that there
are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which
users might want to use, [causing problems with Linkerd](https://github.com/linkerd/linkerd2/issues/3183)
as Linkerd might expect it to be present.
To have a more granular control over the token, and not rely on the
service token that can be managed externally, [Bound Service Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens)
can be used to generate tokens that are specifically for Linkerd,
that are bound to a specific pod, along with an expiry.
## Background on Bound Service Account Tokens
This feature was GA'ed in Kubernetes 1.20 and is enabled by default
in most cloud provider distributions. Using this feature, Kubernetes can
be asked to issue tokens specifically for Linkerd's use (through audience-bound
configuration), with a specific expiry time (as the validation happens every
24 hours when establishing identity, we can follow the same), bound to
a specific pod (meaning verification fails if the pod object isn't available).
Because of all these bounds, and because the token can't be used for
anything else, this feels like the right thing to rely on to validate
a pod when issuing a certificate.
### Pod Identity Name
We still use the same service account name as the pod identity
(used with metrics, etc.) as these tokens are all generated from the
same base service account attached to the pod (could be the default, or
the one overridden by the user). This can be verified by looking at the `user`
field in the `TokenReview` response.
<details>
<summary>Sample TokenReview response</summary>
Here, the new token was created for the vault audience for a pod which
had a serviceAccount token volume projection and was using the `mine`
serviceAccount in the default namespace.
```json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null,
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "authentication.k8s.io/v1",
        "time": "2021-10-19T19:21:40Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
      }
    ]
  },
  "spec": {
    "token": "....",
    "audiences": [
      "vault"
    ]
  },
  "status": {
    "authenticated": true,
    "user": {
      "username": "system:serviceaccount:default:mine",
      "uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:default",
        "system:authenticated"
      ],
      "extra": {
        "authentication.kubernetes.io/pod-name": [
          "nginx"
        ],
        "authentication.kubernetes.io/pod-uid": [
          "ebf36f80-40ee-48ee-a75b-96dcc21466a6"
        ]
      }
    },
    "audiences": [
      "vault"
    ]
  }
}
```
</details>
## Changes
- Update `proxy-injector` and install scripts to include the new
projected Volume and VolumeMount.
- Update the `identity` pod to validate the token with the linkerd
audience key.
- Added `identity.serviceAccountTokenProjection` to disable this
feature.
- Updated the erroring logic with `autoMountServiceAccount: false`
to fail only when this feature is disabled.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When no default policy is configured, the identity controller uses
`cluster-unauthenticated` by default; but this may not permit
connections from node IPs. This causes installations to fail in some
environments.
This change updates the identity controller's default policy to
`all-unauthenticated` to match the behavior before policy was
introduced.
Fixes#7104
Fixes#7067
Currently, the policy controller doesn't serve Prometheus
scrapes on the `admin-http` port at the `/metrics` endpoint, causing
Prometheus to show the target as down.
For now, we can skip the Prometheus scrape on the `admin-http` port
by renaming the `admin-http` port to `admin`, without having to
update the Prometheus config. Once we start serving the metrics
endpoint, the port can be renamed back.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
The expiry date was not used anywhere in the code, and yet it was required on
install. All occurrences of `crtExpiry` (template variable) and `identity-issuer-expiry` (annotation) have been removed.
## Validation
It seems that `identity-issuer-expiry` was only set and never read. After this change there are no mentions of `identity-issuer-expiry` (`rg "identity-issuer-expiry"` returns nothing).
There are occurrences of `crtExpiry`, but they are not relevant:
```
> rg crtExpiry
pkg/tls/cred.go
99: if crtExpiryError(err) {
234:func crtExpiryError(err error) bool {
```
## Backward compatibility
Helm accepts "unknown" values. This change will not break existing pipelines installing/upgrading Linkerd using Helm. When someone specifies `identity.issuer.crtExpiry` (`--set identity.issuer.crtExpiry=$(date -v+8760H +"%Y-%m-%dT%H:%M:%SZ")`), it will simply be ignored.
Fixes #7024
Signed-off-by: Krzysztof Dryś <krzysztofdrys@gmail.com>