Linkerd proxies no longer emit `hostname` labels for outbound policy metrics, due to their potential for high cardinality.
This change adds Helm templates and annotations to control this behavior, allowing users to opt-in to these outbound hostname labels.
Signed-off-by: Scott Fleener <scott@buoyant.io>
The Helm `default` function treats a boolean `false` value as unset and substitutes the default, even when the `false` is explicitly set. When rendering CRDs during install or upgrade, this can cause Linkerd to fall back to the `installGatewayAPI` value even when `enableHttpRoutes` is explicitly set to false.
We replace the `default` function with a ternary that checks whether the key is present. We also add tests for both CLI and Helm.
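Illustratively, a sketch of the pattern (not the exact chart template):
```yaml
# Broken: `default` treats an explicit `false` as unset, so an explicit
# `enableHttpRoutes: false` still falls back to `installGatewayAPI`:
#   {{ .Values.enableHttpRoutes | default .Values.installGatewayAPI }}
# Fixed: key on presence, so an explicit false is respected:
{{- if hasKey .Values "enableHttpRoutes" | ternary .Values.enableHttpRoutes .Values.installGatewayAPI }}
# ...render the HTTPRoute CRDs...
{{- end }}
```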
Signed-off-by: Alex Leong <alex@buoyant.io>
We add a new value to the `linkerd-crds` Helm chart called `installGatewayAPI` which acts as a default value for `enableHttpRoutes`, `enableTlsRoutes`, and `enableTcpRoutes`.
We also update the logic of the `linkerd install` and `linkerd upgrade` commands to set this `installGatewayAPI` value to true if there are any gateway API CRDs managed by Linkerd on the cluster and false otherwise. Users can still override this setting by specifying the `--set installGatewayAPI=(false|true)` flag.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #13389
Values added:
- `destinationController.podAnnotations`: annotations only for `linkerd-destination`
- `identity.podAnnotations`: annotations only for `linkerd-identity`
- `proxyInjector.podAnnotations`: annotations only for `linkerd-proxy-injector`
Each deployment's `podAnnotations` take precedence over the global one by means of [mergeOverwrite](https://helm.sh/docs/chart_template_guide/function_list/#mergeoverwrite-mustmergeoverwrite).
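For example (the annotation keys here are hypothetical):
```yaml
# Global annotations applied to every control-plane pod.
podAnnotations:
  example.com/owner: platform-team
# Deployment-scoped annotations; on key conflicts these win via mergeOverwrite.
destinationController:
  podAnnotations:
    example.com/owner: destination-team
```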
Signed-off-by: Takumi Sue <u630868b@alumni.osaka-u.ac.jp>
In a previous PR (#13246) we introduced an egress networks namespace that is used to create `EgressNetwork` objects that affect all client workloads.
This change makes this namespace configurable through Helm values. Additionally, we unify the naming convention of the arguments to use **egress** as opposed to **external**.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* Configure network-validator and repair-controller to work with IPv6
Fixes #12864
The linkerd-cni network-validator container was binding to the IPv4 wildcard and connecting to an IPv4 address. This wasn't breaking things in IPv6 clusters, but it meant only the iptables rules were being validated, not the ip6tables ones. This change introduces logic to choose addresses according to the value of `disableIPv6`: if IPv6 is enabled, the ip6tables rules get exercised. Note that a more complete change would exercise both iptables and ip6tables, but for now we're defaulting to ip6tables.
The repair-controller had a similar problem, but because its admin server bound to the IPv4 wildcard, in IPv6 clusters the kubelet couldn't reach the probe endpoints and the container was failing. Here the fix is simply to have the admin server bind to `[::]`, which works for both IPv4 and IPv6 clusters.
* New "audit" value for default inbound policy
As a preliminary for audit-mode support, this change just adds "audit" to the allowed values for the `proxy.defaultInboundPolicy` helm entry, and to the `--default-inbound-policy` flag for the install CLI. It also adds it to the allowed values for the `config.linkerd.io/default-inbound-policy` annotation.
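For example, a workload could opt into audit mode via the annotation named above (a sketch):
```yaml
metadata:
  annotations:
    config.linkerd.io/default-inbound-policy: audit
```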
Default values for the resources allocated to `linkerd-init` are not always
the right fit. We offer default values to ensure proxy-init does not get
in the way of QoS Guaranteed (`linkerd-init` resource limits and
requests cannot be configured in any other way).
Instead of using default values that can be overridden, we can re-use
the proxy's configuration values. For the pod to be QoS Guaranteed, the
values for the proxy have to be set anyway. If we re-use the same
values for proxy-init, we can ensure we'll always request the same amount
of CPU and memory as needed.
* `linkerd-init` now defaults to the proxy's values
* when the proxy has an annotation configuration for resource requests,
it also impacts `linkerd-init`
* Helm chart and docs have been updated to reflect the missing values.
* tests now no longer use `ProxyInit.Resources`
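For example, the existing proxy resource annotations (values here are illustrative) now size `linkerd-init` as well:
```yaml
metadata:
  annotations:
    config.linkerd.io/proxy-cpu-request: "100m"
    config.linkerd.io/proxy-cpu-limit: "100m"
    config.linkerd.io/proxy-memory-request: "64Mi"
    config.linkerd.io/proxy-memory-limit: "64Mi"
```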
UPGRADE NOTE:
- Deprecates `proxyInit.resources` field in the Helm values.
- It will be a no-op if specified (no hard failures)
Closes #11320
---------
Signed-off-by: Matei David <matei@buoyant.io>
Fixes #12620
When the Linkerd proxy log level is set to `debug` or higher, the proxy logs HTTP headers which may contain sensitive information.
While we want to avoid logging sensitive data by default, logging HTTP headers can be a helpful debugging tool. We therefore add a `proxy.logHTTPHeaders` Helm value which prevents the logging of HTTP headers when set to false. It defaults to false, so that headers cannot be logged unless users opt in.
Signed-off-by: Alex Leong <alex@buoyant.io>
We add support for the `--output/-o` flag in linkerd install and related commands. The supported output formats are yaml (default) and json. Kubectl is able to accept both of these formats which means that the output can be piped into kubectl regardless of which output format is used.
The full list of install related commands which we add json support to is:
* linkerd install
* linkerd prune
* linkerd upgrade
* linkerd uninstall
* linkerd viz install
* linkerd viz prune
* linkerd viz uninstall
* linkerd multicluster install
* linkerd multicluster prune
* linkerd multicluster uninstall
* linkerd jaeger install
* linkerd jaeger prune
* linkerd jaeger uninstall
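For example:
```console
# JSON output can be piped into kubectl just like the default YAML
$ linkerd install -o json | kubectl apply -f -
```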
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes#11773
Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`; in that case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
---------
Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
In certain cases (e.g. high CPU load) kubelets can be slow to read readiness
and liveness responses. Linkerd is configured with a default timeout of `1s`
for its probes. To prevent injected pod restarts under high load, this
change makes probe timeouts configurable.
---------
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Matei David <matei@buoyant.io>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
The `--log-level` flag did not support a `trace` level, despite the
underlying `logrus` library supporting it. Also, at `debug` level, the
Control Plane components were setting klog at v=12, which includes
sensitive data.
Introduce a `trace` log level. Keep klog at `v=12` for `trace`, change
it to `v=6` for `debug`.
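For example, assuming the chart's standard `controllerLogLevel` value feeds this flag (illustrative):
```console
$ linkerd install --set controllerLogLevel=trace | kubectl apply -f -
```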
Fixes linkerd/linkerd2#11132
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
* Removed dupe imports
My IDE (vim-gopls) has been complaining for a while, so I decided to take
care of it. Found via
[staticcheck](https://github.com/dominikh/go-tools)
* Add stylecheck to go-lint checks
When users use CNI, we want to ensure that network rewriting inside the pod is setup before allowing linkerd to start. When rewriting isn't happening, we want to exit with a clear error message and enough information in the container log for the administrator to either file a bug report with us or fix their configuration.
This change adds a validator initContainer to all injected workloads when linkerd is installed with "cniEnabled=true". The validator replaces the noop init container, and will prevent pods from starting up if iptables is not configured.
Part of #8120
Signed-off-by: Steve Jenson <stevej@buoyant.io>
In 2.11.x, `proxyInit.runAsRoot` was true by default, which caused
proxy-init's `runAsUser` field to be 0. `proxyInit.runAsRoot` now
defaults to false in 2.12.0, but `runAsUser` still isn't configurable.
When following the upgrade instructions, Helm doesn't change `runAsUser`,
so it conflicts with the new `runAsRoot=false` value, resulting in the
pods erroring with this message:
Error: container's runAsUser breaks non-root policy (pod: "linkerd-identity-bc649c5f9-ckqvg_linkerd(fb3416d2-c723-4664-acf1-80a64a734561)", container: linkerd-init)
This PR adds a new default for runAsUser to avoid this issue.
Depends on #9195
Currently, `linkerd inject --default-inbound-policy` does not set the
`config.linkerd.io/default-inbound-policy` annotation on the injected
resource(s).
The `inject` command does _try_ to set that annotation if it's set in
the `Values` generated by `proxyFlagSet`:
14d1dbb3b7/cli/cmd/inject.go (L485-L487)
...but, the flag in the proxy `FlagSet` doesn't set
`Values.Proxy.DefaultInboundPolicy`, it sets
`Values.PolicyController.DefaultAllowPolicy`:
7c5e3aaf40/cli/cmd/options.go (L375-L379)
This is because the flag set is shared across `linkerd inject` and
`linkerd install` subcommands, and in `linkerd install`, we want to set
the default policy for the whole cluster by configuring the policy
controller. In `linkerd inject`, though, we want to add the annotation
to the injected pods only.
This branch fixes this issue by changing the flag so that it sets the
`Values.Proxy.DefaultInboundPolicy` instead of the
`Values.PolicyController.DefaultAllowPolicy` value. In `linkerd
install`, we then set `Values.PolicyController.DefaultAllowPolicy` based
on the value of `Values.Proxy.DefaultInboundPolicy`, while in `inject`,
we will now actually add the annotation.
This branch is based on PR #9195, which adds validation to reject
invalid values for `--default-inbound-policy`, rather than on `main`.
This is because the validation code added in that PR had to be moved
around a bit, since it now needs to validate the
`Values.Proxy.DefaultInboundPolicy` value rather than the
`Values.PolicyController.DefaultAllowPolicy` value. I thought using
#9195 as a base branch was better than basing this on `main` and then
having to resolve merge conflicts later. When that PR merges, this can
be rebased onto `main`.
Fixes #9168
Closes #9148
With this change, the value of `--default-inbound-policy` is verified to be one
of the accepted values.
When the value is not an accepted value, we now error:
```shell
$ linkerd install --default-inbound-policy=everybody
Error: --default-inbound-policy must be one of: all-authenticated, all-unauthenticated, cluster-authenticated, cluster-unauthenticated, deny (got everybody)
Usage:
  linkerd install [flags]
...
```
A unit test has also been added.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Allow RSA-signed trust anchors in the linkerd CLI (#7771)
Linkerd currently forces using an ECDSA P-256 issuer certificate along
with an ECDSA trust anchor. However, it is still cryptographically valid
to have an ECDSA P-256 issuer certificate issued by an RSA-signed CA.
`CheckCertAlgoRequirements` checks that the CA cert uses an ECDSA or
RSA 2048/4096 signing algorithm.
Fixes #7771
Signed-off-by: Baeyens, Daniel <daniel.baeyens@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
Some hosts may not have 'nft' modules available. Currently, proxy-init
defaults to using 'iptables-nft'; if the host does not have support for
nft modules, the init container will crash, blocking all injected
workloads from starting up.
This change defaults the 'iptablesMode' value to 'legacy'.
* Update linkerd-control-plane/values file default
* Update proxy-init partial to default to 'legacy' when no mode is
specified
* Change expected values in 'pkg/charts/linkerd2/values_test.go' and in
'cli/cmd/install_test'
* Update golden files
Fixes #9053
Signed-off-by: Matei David <matei@buoyant.io>
Closes #8945
This adds the `policyController.probeNetworks` configuration value so that users
can configure the networks from which probes are expected to be performed.
By default, we allow all networks (`0.0.0.0/0`). Additionally, this value
differs from `clusterNetworks` in that it is a list of networks, and thus we
have to join the values in the Helm templating.
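A hypothetical override restricting probes to private ranges:
```yaml
policyController:
  probeNetworks:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16
```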
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
> In some circumstances, the lifecycle.postStart hook can cause the linkerd-proxy
> container to get stuck waiting for identity verification. After the
> linkerd-await timeout, the container will be restarted and the proxy starts
> without further incident. The linkerd-control-plane helm chart currently has a
> way to disable the lifecycle hook for injected proxies, but not for proxies on
> the control plane pods.
>
> This commit adds a new value to the linkerd-control-plane chart of
> proxy.controlPlaneAwait that can be used to disable the postStart lifecycle hook
> on the destination and proxy-injector pods. This is defaulted to true to
> maintain current behavior.
>
> The linkerd-control-plane chart was templated, setting proxy.controlPlaneAwait
> to true and false, verifying that the postStart lifecycle hook was either
> present or absent depending on the proxy.controlPlaneAwait value.
>
> Fixes #8738
This continues the now stale #8739 and removes the version bumps that were
requested.
Signed-off-by: Jacob Lambert <calrisian777@gmail.com>
Co-authored-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
This change introduces a new value to be used at install (or upgrade)
time. The value (`proxyInit.iptablesMode=nft|legacy`) is responsible
for starting the proxy-init container in nft or legacy mode.
By default, the init container will use iptables-nft. When the mode is set to
`legacy`, it will instead use iptables-legacy. Most modern Linux distributions
support both, but a subset (such as RHEL-based families) only support
iptables-nft and nf_tables.
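For example:
```console
$ linkerd install --set proxyInit.iptablesMode=nft | kubectl apply -f -
```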
Signed-off-by: Matei David <matei@buoyant.io>
Fixes #8660
We add the HttpRoute CRD to the CRDs installed with `linkerd install --crds` and `linkerd upgrade --crds`. You can use `--set installHttpRoute=false` to skip installing this CRD.
Signed-off-by: Alex Leong <alex@buoyant.io>
Some autoscalers, namely Karpenter, don't allow podAntiAffinity, and the enablePodAntiAffinity flag is
currently overloaded with other HA requirements. This commit splits out the PDB and updateStrategy
configuration into separate value inputs.
Fixes #8062
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Evan Hines <evan@firebolt.io>
When we compare generated manifests against fixtures, we do a simple
string comparison to compare output. The diffed data can be pretty hard
to understand.
This change adds a new test helper, `DiffTestYAML`, that parses strings
as arbitrary YAML data structures and uses `deep.Equal` to generate a
diff of the data structures.
Now, when a test fails, we'll get output like:
```
install_test.go:244: YAML mismatches install_output.golden:
slice[32].map[spec].map[template].map[spec].map[containers].slice[3].map[image]: PolicyControllerImageName:PolicyControllerVersion != SomeOtherImage:PolicyControllerVersion
```
While testing this, it became apparent that several of our generated
golden files were not actually valid YAML, due to the `LinkerdVersion`
value being unset. This has been fixed.
Signed-off-by: Oliver Gould <ver@buoyant.io>
This change follows on 4f3c374, which split the install logic for CRDs
and the core control plane, by splitting the upgrade logic for the CRDs
and the core control plane.
Signed-off-by: Oliver Gould <ver@buoyant.io>
We currently have singular `install` and `render` functions, each of
which takes a `crds` bool that completely alters the behavior of the
function. This change splits this behavior into distinct functions so
we have `installCRDs`/`renderCRDs` and `installControlPlane`/
`renderControlPlane`.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Fixes #8364
When `linkerd install` is called with the `--ignore-cluster` flag, we pass `nil` for the `k8sAPI`. This causes a panic when using this client for validation. We add a conditional so that we skip this validation when the `k8sAPI` is `nil`.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes: #8173
In order to support having custom resources in the default Linkerd installation, it is necessary to add a separate install step to install CRDs before the core install. The Linkerd Helm charts already accomplish this by having CRDs in a separate chart.
We add this functionality to the CLI by adding a `--crds` flag to `linkerd install` and `linkerd upgrade` which outputs manifests for the CRDs only and remove the CRD manifests when the `--crds` flag is not set. To avoid a compounding of complexity, we remove the `config` and `control-plane` stages from install/upgrade. The effect of this is that we drop support for splitting up an install by privilege level (cluster admin vs Linkerd admin).
The Linkerd install flow is now always a 2-step process where `linkerd install --crds` must be run first to install CRDs only and then `linkerd install` is run to install everything else. This more closely aligns the CLI install flow with the Helm install flow where the CRDs are a separate chart. Attempting to run `linkerd install` before the CRDs are installed will result in a helpful error message.
Similarly, upgrade is also a 2-step process: `linkerd upgrade --crds` followed by `linkerd upgrade`.
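The resulting flow:
```console
# Step 1: install the CRDs
$ linkerd install --crds | kubectl apply -f -
# Step 2: install the control plane
$ linkerd install | kubectl apply -f -
```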
Signed-off-by: Alex Leong <alex@buoyant.io>
[gocritic][gc] helps to enforce some consistency and check for potential
errors. This change applies linting changes and enables gocritic via
golangci-lint.
[gc]: https://github.com/go-critic/go-critic
Signed-off-by: Oliver Gould <ver@buoyant.io>
Remove usage of controllerImageVersion values field
This change removes the unused `controllerImageVersion` field, first
from the tests, and then from the actual chart values structure. Note
that at this point in time, it is impossible to use
`--controller-image-version` through Helm, yet it still seems to be
working for the CLI.
* We configure the charts to use `linkerdVersionValue` instead of
`controlPlaneImageVersion` (or default to it where appropriate).
* We add the stringslicevar flag (i.e. `--set`) to the flagset we use in
upgrade tests. This means that instead of testing value overrides through a
dedicated flag, we can now make use of `--set` in upgrade tests. We
first set the linkerdVersionValue in the install option and then
override the policy controller image version and the linkerd
controller image version to test that the flags work as expected.
* We remove hardcoded values from healthcheck test.
* We remove field from chart values struct.
Signed-off-by: Matei David <matei@buoyant.io>
* Adding support for injecting Webhook CA bundles with cert-manager CA Injector (#7353)
Currently, users need to pass in the caBundle when doing a helm/CLI install. If the user is already using cert-manager to generate webhook certs, they can use the cert-manager CA injector to populate the caBundle for the Webhooks.
Adding `injectCaFrom` and `injectCaFromSecret` options to every webhook alongside every `caBundle` option gives users the ability to add the `cert-manager.io/inject-ca-from` or `cert-manager.io/inject-ca-from-secret` annotations to the webhooks, specifying the Certificate or Secret to pull the CA from, to accomplish CA bundle injection.
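A sketch of the intended usage (the value path and Certificate reference are illustrative):
```yaml
proxyInjector:
  # Renders cert-manager.io/inject-ca-from: <namespace>/<Certificate>
  # on the webhook config instead of requiring an explicit caBundle.
  injectCaFrom: linkerd/linkerd-proxy-injector
```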
Signed-off-by: Brian Dunnigan <bdun1013dev@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
When installing Linkerd on a cluster with the Docker container runtime, `proxyInit.runAsRoot` must be set to `true` in order for Linkerd to operate. This is checked in two different ways: `linkerd check --pre` and `linkerd check`.
#7457 discussed if it's better to emit this as a warning or error, but after some further discussion it makes more sense as a `linkerd install` runtime error so that a user cannot miss this configuration.
It still remains as part of `linkerd check` in case more nodes are added that do not satisfy this condition, or Linkerd is installed through Helm.
```sh
$ linkerd install
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=false
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=""
there are nodes using the docker container runtime and proxy-init container must run as root user.
try installing linkerd via --set proxyInit.runAsRoot=true
$ linkerd install --set proxyInit.runAsRoot=true
...
$ linkerd install --set proxyInit.runAsRoot=1
...
```
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Fixes #6584 #6620 #7405
# Namespace Removal
With this change, the `namespace.yaml` template is rendered only for CLI installs and not Helm, and likewise the `namespace:` entry in the namespace-level objects (using a new `partials.namespace` helper).
The `installNamespace` and `namespace` entries in `values.yaml` have been removed.
Then, in the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace`, which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.
The proxy-injector has a new `linkerd-namespace` argument, given the namespace is no longer persisted in the `linkerd-config` ConfigMap and has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.
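For example, a namespaced object's metadata now renders along these lines (simplified; the charts use the new `partials.namespace` helper for this):
```yaml
metadata:
  name: linkerd-destination
  namespace: {{ .Release.Namespace }}
```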
------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:
Stop rendering `namespace.yaml` in the `linkerd-viz` chart.
The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace already has annotations/labels entries, in which case it has to add extra ops to the patch.
---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts.
Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` or `linkerd viz install` anyway, and would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can tap into the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects must live in a separate namespace from linkerd. It's still good practice, and that's the default for the CLI install, but I removed that validation.
Finally, as a side-effect, the `linkerd mc allow` subcommand was fixed; it has been broken for a while apparently:
```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best-practice
# Core Helm Charts Split
This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not concerned by this change.
Also note the original `values.yaml` file has been split into both charts accordingly.
### UX
```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```
### Upgrade
As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).
### CLI
The CLI install/upgrade code was updated to be able to pick the templates from these new charts, but the CLI UX remains identical as before.
### Other changes
- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.
### Followups
- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
Fixes #3260
## Summary
Currently, Linkerd uses a service account token to validate a pod
during the `Certify` request with identity, through which identity
is established on the proxy. This works well, as Kubernetes
attaches the `default` service account token of a namespace as a volume
(unless the user overrides it with a specific service account). The catch
is that this token is aimed at letting the application talk to the
Kubernetes API, not at Linkerd specifically. This means that there
are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which
users might want to use, [causing problems with Linkerd](https://github.com/linkerd/linkerd2/issues/3183)
as Linkerd might expect it to be present.
To have a more granular control over the token, and not rely on the
service token that can be managed externally, [Bound Service Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens)
can be used to generate tokens that are specifically for Linkerd,
that are bound to a specific pod, along with an expiry.
## Background on Bound Service Account Tokens
This feature was GA'ed in Kubernetes 1.20 and is enabled by default
in most cloud provider distributions. Using this feature, Kubernetes can
be asked to issue specific tokens for Linkerd usage (through audience-bound
configuration), with a specific expiry time (as the validation happens every
24 hours when establishing identity, we can follow the same), bound to
a specific pod (meaning verification fails if the pod object isn't available).
Because of all these bounds, and because this token can't be used for
anything else, this feels like the right thing to rely on to validate
a pod to issue a certificate.
### Pod Identity Name
We still use the same service account name as the pod identity
(used with metrics, etc.) as these tokens are all generated from the
same base service account attached to the pod (could be the default, or
the one overridden by the user). This can be verified by looking at the `user`
field in the `TokenReview` response.
<details>
<summary>Sample TokenReview response</summary>
Here, the new token was created for the `vault` audience for a pod which
had a serviceAccount token volume projection and was using the `mine`
serviceAccount in the default namespace.
```json
"kind": "TokenReview",
"apiVersion": "authentication.k8s.io/v1",
"metadata": {
"creationTimestamp": null,
"managedFields": [
{
"manager": "curl",
"operation": "Update",
"apiVersion": "authentication.k8s.io/v1",
"time": "2021-10-19T19:21:40Z",
"fieldsType": "FieldsV1",
"fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
}
]
},
"spec": {
"token": "....",
"audiences": [
"vault"
]
},
"status": {
"authenticated": true,
"user": {
"username": "system:serviceaccount:default:mine",
"uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
"groups": [
"system:serviceaccounts",
"system:serviceaccounts:default",
"system:authenticated"
],
"extra": {
"authentication.kubernetes.io/pod-name": [
"nginx"
],
"authentication.kubernetes.io/pod-uid": [
"ebf36f80-40ee-48ee-a75b-96dcc21466a6"
]
}
},
"audiences": [
"vault"
]
}
```
</details>
## Changes
- Update `proxy-injector` and install scripts to include the new
projected Volume and VolumeMount.
- Update the `identity` pod to validate the token with the linkerd
audience key.
- Added `identity.serviceAccountTokenProjection` to disable this
feature.
- Updated err'ing logic with `autoMountServiceAccount: false`
to fail only when this feature is disabled.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Expiry date was not used anywhere in the code, yet it was required on
install. All occurrences of `crtExpiry` (template variable) and `identity-issuer-expiry` (annotation) were removed.
## Validation
It seems that `identity-issuer-expiry` was only set and never read. After this change there are no mentions of `identity-issuer-expiry` (`rg "identity-issuer-expiry"`).
There are occurrences of `crtExpiry`, but they are not relevant:
```
> rg crtExpiry
pkg/tls/cred.go
99: if crtExpiryError(err) {
234:func crtExpiryError(err error) bool {
```
## Backward compatibility
Helm accepts "unknown" values. This change will not break existing pipelines installing/upgrading Linkerd using Helm. When someone specifies `identity.issuer.crtExpiry` (`--set identity.issuer.crtExpiry=$(date -v+8760H +"%Y-%m-%dT%H:%M:%SZ"`) it will be "just" ignored.
Fixes #7024
Signed-off-by: Krzysztof Dryś <krzysztofdrys@gmail.com>
* Remove `omitWebhookSideEffects` flag/setting
This was introduced back in #2963 to support k8s versions before 1.12 that didn't support the `sideEffects` property in webhooks. Since we no longer support 1.12, we can safely drop this.
A few small improvements to our docker build scripts:
* Centralized the list of docker images to a DOCKER_IMAGES variable defined in _docker.sh
* Build scripts now honor the TAG variable, if defined
* Unused docker-images script has been removed
We also update the `--control-plane-version` Linkerd install flag to affect the policy controller version as well.
Taken together, this enables the following workflow for building and deploying changes to individual Linkerd components. For example, suppose you wish to deploy changes which only affect the controller image:
```console
# Begin by building all images at main with a dev tag
> TAG=alex-dev bin/docker-build
# OR begin by retagging all images from a recent release
> bin/docker-retag-all edge-21.8.4 alex-dev
# Make changes and then rebuild specific component
> TAG=alex-dev bin/docker-build-controller
# Load images into kind
> TAG=alex-dev bin/image-load --kind --cluster alex
# Install Linkerd
> bin/linkerd install --control-plane-version alex-dev --proxy-version alex-dev | k apply -f -
```
Signed-off-by: Alex Leong <alex@buoyant.io>
We add a validating admission controller to the policy controller which validates `Server` resources. When a `Server` admission request is received, we look at all existing `Server` resources in the cluster and ensure that no other `Server` has an identical selector and port.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
We've implemented a new controller--in Rust!--that implements discovery
APIs for inbound server policies. This change imports this code from
linkerd/polixy@25af9b5e.
This policy controller watches nodes, pods, and the recently-introduced
`policy.linkerd.io` CRD resources. It indexes these resources and serves
a gRPC API that will be used by proxies to configure the inbound proxy
for policy enforcement.
This change introduces a new policy-controller container image and adds a
container to the `linkerd-destination` pod, along with a `linkerd-policy` service
to be used by proxies.
This change adds a `policyController` object to the Helm `values.yaml` that
supports configuring the policy controller at runtime.
Proxies are not currently configured to use the policy controller at runtime. This
will change in an upcoming proxy release.
* Schedule heartbeat 10 mins after install
... for the Helm installation method, thus aligning it with the CLI
installation method, to reduce the midnight peak on the receiving end.
The logic added into the chart is now reused by the CLI as well.
Also, set `concurrencyPolicy=Replace` so that when a job fails and it's
retried, the retries get canceled when the next scheduled job is triggered.
Finally, the Go client only failed when the connection failed;
successful connections with a non-200 response status were considered
successful and thus the job wasn't retried. Fixed that as well.
* destination: pass opaque-ports through cmd flag
Fixes #5817
Currently, default opaque ports are stored in two places, i.e.
`values.yaml` and `opaqueports/defaults.go`. As these
ports are used only in destination, we can instead pass these
values as a cmd flag for the destination component from values.yaml
and remove defaultPorts in `defaults.go`.
This means that if users override the `opaquePorts` field in
`values.yaml`, that change is propagated both for injection and for
discovery, as expected.
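A sketch of the wiring in the destination deployment template (the flag name is assumed for illustration):
```yaml
containers:
  - name: destination
    args:
      # Assumed flag name; the defaults now come from values.yaml
      - -default-opaque-ports={{ .Values.proxy.opaquePorts }}
```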
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #5574 and supersedes #5660
- Removed from all the `values.yaml` files all those "do not edit" entries for annotation/label names, hard-coding them in the templates instead.
- The `values.go` files got simplified as a result.
- The `created-by` annotation was also refactored into a reusable partial. This means we had to add a `partials` dependency to multicluster.
* values: removal of .global field
Fixes #5425
With the new extension model, we no longer need the `Global` field
as we don't rely on chart dependencies anymore. This helps us
further clean up Values and makes configuration simpler.
To make upgrades and the usage of the new CLI with older configs work,
we add a new method called `config.RemoveGlobalFieldIfPresent` that
is used in the upgrade and `FetchCurrentConfiguration` paths to remove
the global field and attach its child nodes if global is present. This is
verified by `TestFetchCurrentConfiguration`'s older test that has the global
field.
We also don't yet remove .global in some helm stable-upgrade tests for
the initial install to work.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* cli: add helm customization flags to core install
Fixes #5506
This branch adds Helm-style customization through the
`set`, `set-string`, `values`, and `set-files` flags for the
`linkerd install` cmd, along with unit tests.
For this to work, the Helm v3 engine rendering helpers
had to be used instead of our own wrapper type.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>