Fixes #7904
Allow the `Server` CRD to have the `PodSelector` entry be an empty object, by removing the `omitempty` tag from its Go type definition and the `oneof` section in the CRD. No update to the CRD version is required, as this is a backwards-compatible change; overriding the existing CRD was tested and works fine.
Also added some unit tests to confirm podSelector conditions are ANDed, and some minor refactorings in the `Selector` constructors.
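For illustration, a minimal sketch of a `Server` that now passes validation with an empty `podSelector` (the resource names below are hypothetical):
```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: all-pods-admin      # hypothetical name
  namespace: my-app         # hypothetical namespace
spec:
  podSelector: {}           # empty selector: matches every pod in the namespace
  port: linkerd-admin
  proxyProtocol: HTTP/1
```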
Co-authored-by: Oliver Gould <ver@buoyant.io>
This edge release fixes some `Instant`-related proxy panics that occur on Amazon
Linux. It also includes many behind-the-scenes improvements to the project's
CI and linting.
* Removed the `--controller-image-version` install flag to simplify the way that
image versions are handled. The controller image version can be set using the
`--set linkerdVersion` flag or Helm value
* Lowercased logs and removed redundant lines from the Linkerd2 proxy init
container
* Prevented the proxy from logging spurious errors when its pod does not define
any container ports
* Added workarounds to reduce the likelihood of `Instant`-related proxy panics
that occur on Amazon Linux
Signed-off-by: Alex Leong <alex@buoyant.io>
Remove usage of controllerImageVersion values field
This change removes the unused `controllerImageVersion` field, first
from the tests, and then from the actual chart values structure. Note
that at this point in time, it is impossible to use
`--controller-image-version` through Helm, yet it still seems to be
working for the CLI.
* We configure the charts to use `linkerdVersionValue` instead of
`controlPlaneImageVersion` (or default to it where appropriate).
* We add the stringslicevar flag (i.e. `--set`) to the flagset we use in
upgrade tests. This means instead of testing value overrides through a
dedicated flag, we can now make use of `--set` in upgrade tests. We
first set the linkerdVersionValue in the install option and then
override the policy controller image version and the linkerd
controller image version to test flags work as expected.
* We remove hardcoded values from healthcheck test.
* We remove field from chart values struct.
Signed-off-by: Matei David <matei@buoyant.io>
* Edge-22.2.2 change notes
## edge-22.2.2
This edge release updates the jaeger extension to be available in ARM
architectures as well, and applies some security-oriented amendments.
* Upgraded jaeger and the opentelemetry-collector to their latest versions,
which now support ARM architectures
* Fixed `linkerd multicluster check` which was reporting false warnings
* Started enforcing TLS v1.2 as a minimum in the webhook servers
* Had the identity controller emit SHA256 certificate fingerprints in its
logs/events, instead of MD5
If the proxy doesn't become ready, `linkerd-await` never succeeds
and the proxy's logs never become accessible.
This change adds a default 2-minute timeout so that pod startup
continues despite the proxy failing to become ready. `linkerd-await`
fails and `kubectl` will report that a post-start hook failed.
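For illustration, a rough sketch of what the wrapped container command looks like with the timeout spelled out explicitly (the application container and binary are hypothetical):
```yaml
containers:
- name: my-app                  # hypothetical application container
  command:
  - /linkerd-await
  - --timeout=2m                # pod startup continues once this elapses
  - --
  - /usr/local/bin/my-app       # hypothetical application binary
```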
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
## edge-22.2.1
This edge release removes the `disableIdentity` configuration now that the proxy
no longer supports running without identity.
* Added a `privileged` configuration to linkerd-cni which is required by some
environments
* Fixed an issue where the TLS credentials used by the policy validator were not
updated when the credentials were rotated
* Removed the `disableIdentity` configurations now that the proxy no longer
supports running without identity
* Fixed an issue where `linkerd jaeger check` would needlessly fail for BYO
Jaeger or collector installations
* Fixed a Helm HA installation race condition introduced by the stoppage of
namespace creation
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
* Fix HA race when installing through Helm
Fixes #7699
The problem didn't affect 2.11, only the latest edges, since the Helm charts
were split into `linkerd-crds` and `linkerd-control-plane` and we stopped
creating the linkerd namespace.
Since we no longer create the namespace, we can no longer guarantee
the existence of the `config.linkerd.io/admission-webhooks` label, so
this PR adds an `objectSelector` to the injector that filters out
control-plane components based on the presence of the
`linkerd.io/control-plane-component` label.
Given we still want the multicluster components to be injected, we had
to rename their `linkerd.io/control-plane-component` label to
`component`, following the same convention used by the other extensions.
The corresponding Prometheus rule for scraping the service mirrors was
updated accordingly.
A similar filter was added for the linkerd-cni DaemonSet.
Also, now that the `kubernetes.io/metadata.name` label is prevalent, we
use it to filter out the kube-system and cert-manager namespaces.
The former namespace was already mentioned in the docs; the latter is
also included to avoid having races with cert-manager-cainjector which
can be used to provision the injector's cert.
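For illustration, the resulting injector webhook selectors look roughly like the following sketch (exact keys and values may differ from the rendered chart):
```yaml
objectSelector:
  matchExpressions:
  - key: linkerd.io/control-plane-component
    operator: DoesNotExist
namespaceSelector:
  matchExpressions:
  - key: config.linkerd.io/admission-webhooks
    operator: NotIn
    values: ["disabled"]
  - key: kubernetes.io/metadata.name
    operator: NotIn
    values: ["kube-system", "cert-manager"]
```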
* Remove the `proxy.disableIdentity` config
Fixes #7724
Also:
- Removed the `linkerd.io/identity-mode` annotation.
- Removed the `config.linkerd.io/disable-identity` annotation.
- Removed the `linkerd.proxy.validation` template partial, which only
made sense when `proxy.disableIdentity` was `true`.
- TestInjectManualParams now requires hitting the cluster to retrieve the
trust root.
## edge-22.1.5
This edge release adds support for per-request access logging for inbound HTTP
requests in Linkerd. A new annotation, `config.linkerd.io/access-log`, has been
added; it configures the proxies to emit access logs to stderr. `apache` and `json`
are the supported configuration options, emitting access logs in Apache Common
Log Format and JSON respectively.
Special thanks to @tustvold for all the initial work around this!
* Updated injector to support the new `config.linkerd.io/access-log` annotation
* Added a new `LINKERD2_PROXY_ACCESS_LOG` proxy environment variable to configure
the access log format (thanks @tustvold)
* Updated service mirror controller to emit relevant events when
mirroring is skipped for a service
* Updated various dependencies across the project (thanks @dependabot)
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
With #7661, the proxy supports a `LINKERD2_PROXY_ACCESS_LOG`
configuration with the values `apache` or `json`. This configuration
causes the proxy to emit access logs to stderr. This branch makes it
possible for users to enable access logging by adding an annotation,
`config.linkerd.io/access-log`, that tells the proxy injector to set
this environment variable.
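For example, annotating a workload's pod template as sketched below causes the injector to set `LINKERD2_PROXY_ACCESS_LOG=apache` on the injected proxy:
```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/access-log: apache   # or "json"
```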
I've also added some tests to ensure that the annotation and the
environment variable are set correctly. I tried to follow the existing
tests as examples of how we do this, but please let me know if I've
overlooked anything!
Closes #7662 #1913
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
## edge-22.1.4
This edge release features a new configuration annotation, support for
externally hosted Grafana instances, and other improvements in the CLI,
dashboard and Helm charts. To learn more about using an external Grafana
instance with Linkerd, you can refer to our
[docs](0c3c5cd5ae/linkerd.io/content/2.12/tasks/grafana.md).
* Added a new annotation to configure skipping subnets in the init container
(`config.linkerd.io/skip-subnets`). This configuration option is ideal for
Docker-in-Docker (dind) workloads (thanks @michaellzc!)
* Added support in the dashboard for externally hosted Grafana instances
(thanks @jackgill!)
* Introduced a resources block to the `linkerd-jaeger` Helm chart (thanks
@yuriydzobak!)
* Introduced parametrized datasource (`DS_PROMETHEUS`) in all Grafana
dashboards. This allows pointing to the right Prometheus datasource when
importing a dashboard
* Introduced a consistent `--ignore-cluster` flag in the CLI for the base
installation and extensions; manifests will now be rendered even if there is
an existing installation in the current Kubernetes context (thanks
@krzysztofdrys!)
* Updated the service mirror controller to skip mirroring services whose
namespaces do not yet exist in the source cluster; previously, the service
mirror would create the namespace itself.
Signed-off-by: Matei David <matei@buoyant.io>
The goal is to support configuring the
`--subnets-to-ignore` flag in proxy-init.
This change adds a new annotation, `config.linkerd.io/skip-subnets`, which
takes a comma-separated list of valid CIDRs.
The argument maps to the `--subnets-to-ignore`
flag in the proxy-init initContainer.
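For example, a workload annotated as sketched below (the CIDRs are illustrative) would have those subnets passed to proxy-init via `--subnets-to-ignore`:
```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-subnets: "172.17.0.0/16,10.90.0.0/24"   # illustrative CIDRs
```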
Fixes #6758
Signed-off-by: Michael Lin <mlzc@hey.com>
* release notes for `edge-22.1.3`
## edge-22.1.3
This release removes the Grafana component in the linkerd-viz extension.
Users can now import the Linkerd dashboards into Grafana from the [Linkerd org](https://grafana.com/orgs/linkerd)
on grafana.com. Users can also follow the instructions in the [docs](https://github.com/linkerd/website/pull/1273)
to install a separate Grafana instance that can be integrated with the Linkerd Dashboard.
* Stopped shipping grafana-based image in the linkerd-viz extension
* Removed `repair` sub-command in the CLI
* Updated various dependencies across the project (thanks @dependabot)
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Stop shipping grafana-based image
Fixes #6045 #7358
With this change we stop building a Grafana-based image preloaded with the Linkerd Grafana dashboards.
Instead, we recommend that users install Grafana themselves, and we provide a file `grafana/values.yaml` with a default config that points to all the same Grafana dashboards we had, which are now hosted at https://grafana.com/orgs/linkerd/dashboards .
The new file `grafana/README.md` contains instructions for installing the official Grafana Helm chart, and mentions other available methods.
The `grafana.enabled` flag has been removed, and `grafanaUrl` has been moved to `grafana.url`. This will help consolidate other Grafana settings that might emerge, in particular when #7429 gets addressed.
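For illustration, pointing the dashboard at a self-hosted Grafana might look like the following values snippet (the address is hypothetical):
```yaml
grafana:
  url: grafana.grafana.svc.cluster.local:3000   # hypothetical in-cluster Grafana address
```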
## Dashboard definition changes
The dashboard definitions under `grafana/dashboards` (which should be kept in sync with what's published at https://grafana.com/orgs/linkerd/dashboards) were updated to add the `__inputs`, `__elements` and `__requires` entries at the beginning, which are required in order to be published.
This release adds support for using the cert-manager CA Injector to configure
Linkerd's webhooks.
* Fixed a rare issue when a Service's opaque ports annotation does not match
that of the pods in the service
* Disallowed privilege escalation in control plane containers (thanks @kichristensen!)
* Updated the multicluster extension's service mirror controller to make mirror
services empty when the exported service is empty
* Added support for injecting Webhook CA bundles with cert-manager CA Injector
(thanks @bdun1013!)
Signed-off-by: Alex Leong <alex@buoyant.io>
Disabling privilege escalation is a security best practice, but
currently this is not supported when installing from Helm.
A parameter called `privilegeEscalationEnabled` is added to the Helm
chart. The default value is `true` to avoid breaking changes to the Helm
chart.
Fixes #7282
Signed-off-by: Kim Christensen <kimworking@gmail.com>
* Adding support for injecting Webhook CA bundles with cert-manager CA Injector (#7353)
Currently, users need to pass in the caBundle when doing a helm/CLI install. If the user is already using cert-manager to generate webhook certs, they can use the cert-manager CA injector to populate the caBundle for the Webhooks.
Adding `injectCaFrom` and `injectCaFromSecret` options to every webhook alongside every `caBundle` option gives users the ability to add the `cert-manager.io/inject-ca-from` or `cert-manager.io/inject-ca-from-secret` annotations to the webhooks, specifying the Certificate or Secret to pull the CA from, to accomplish CA bundle injection.
Signed-off-by: Brian Dunnigan <bdun1013dev@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
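For illustration, the rendered webhook configuration would then carry the standard cert-manager cainjector annotation, roughly as sketched here (the object and Certificate names are illustrative):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: linkerd-proxy-injector-webhook-config     # illustrative name
  annotations:
    cert-manager.io/inject-ca-from: linkerd/linkerd-proxy-injector   # hypothetical namespace/Certificate
```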
* update chart version for `edge-21.12.4`
This PR updates the chart version of the `linkerd-control-plane`
chart for the latest edge.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #6740
\#6711 removed the usage of unnecessary reference variables
in the proxy template, as they are not needed. Their definitions
were left in place because there were race conditions with extension installs.
As `2.11` was released with that change, now is a good time to
remove the definitions too, as no usages should be present after a
`2.11` upgrade.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #6584 #6620 #7405
# Namespace Removal
With this change, the `namespace.yaml` template is rendered only for CLI installs and not for Helm, and likewise for the `namespace:` entry in the namespace-level objects (using a new `partials.namespace` helper).
The `installNamespace` and `namespace` entries in `values.yaml` have been removed.
In the templates where the namespace is required, we moved from `.Values.namespace` to `.Release.Namespace`, which is filled in automatically by Helm. For the CLI, `install.go` now explicitly defines the contents of the `Release` map alongside `Values`.
The proxy-injector has a new `linkerd-namespace` argument given the namespace is no longer persisted in the `linkerd-config` ConfigMap, so it has to be passed in. To pass it further down to `injector.Inject()` without modifying the `Handler` signature, a closure was used.
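For illustration, the substitution in a namespaced object's template looks roughly like this sketch (the object name is illustrative):
```yaml
metadata:
  name: linkerd-destination                  # illustrative object
  namespace: {{ .Release.Namespace }}        # previously {{ .Values.namespace }}
```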
------------
Update: Merged-in #6638: Similar changes for the `linkerd-viz` chart:
Stop rendering `namespace.yaml` in the `linkerd-viz` chart.
The additional change here is the addition of the `namespace-metadata.yaml` template (and its RBAC), _not_ rendered in CLI installs, which is a Helm `post-install` hook consisting of a Job that executes a script adding the required annotations and labels to the viz namespace using a PATCH request against kube-api. The script first checks whether the namespace lacks annotations/labels entries, in which case it has to add extra ops to that patch.
---------
Update: Merged-in the approved #6643, #6665 and #6669 which address the `linkerd2-cni`, `linkerd-multicluster` and `linkerd-jaeger` charts.
Additional changes from what's already mentioned above:
- Removes the install-namespace option from `linkerd install-cni`, which isn't found in `linkerd install` nor `linkerd viz install` anyway, and would add some complexity to support.
- Added a dependency on the `partials` chart to the `linkerd-multicluster-link` chart, so that we can tap into the `partials.namespace` helper.
- We no longer have the restriction that the multicluster objects must live in a separate namespace from linkerd. It's still good practice, and that's the default for the CLI install, but I removed that validation.
Finally, as a side-effect, the `linkerd mc allow` subcommand was fixed; it has been broken for a while apparently:
```console
$ linkerd mc allow --service-account-name foobar
Error: template: linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml:16:7: executing "linkerd-multicluster/templates/remote-access-service-mirror-rbac.yaml" at <include "partials.annotations.created-by" $>: error calling include: template: no template "partials.annotations.created-by" associated with template "gotpl"
```
---------
Update: see helm/helm#5465 describing the current best practice
# Core Helm Charts Split
This removes the `linkerd2` chart, and replaces it with the `linkerd-crds` and `linkerd-control-plane` charts. Note that the viz and other extension charts are not affected by this change.
Also note the original `values.yaml` file has been split into both charts accordingly.
### UX
```console
$ helm install linkerd-crds --namespace linkerd --create-namespace linkerd/linkerd-crds
...
# certs.yaml should contain identityTrustAnchorsPEM and the identity issuer values
$ helm install linkerd-control-plane --namespace linkerd -f certs.yaml linkerd/linkerd-control-plane
```
### Upgrade
As explained in #6635, this is a breaking change. Users will have to uninstall the `linkerd2` chart and install these two, and eventually roll out the proxies (they should continue to work during the transition anyway).
### CLI
The CLI install/upgrade code was updated to pick the templates from these new charts, but the CLI UX remains identical to before.
### Other changes
- The `linkerd-crds` and `linkerd-control-plane` charts now carry a version scheme independent of linkerd's own versioning, as explained in #7405.
- These charts are Helm v3, which is reflected in the `Chart.yaml` entries and in the removal of the `requirements.yaml` files.
- In the integration tests, replaced the `helm-chart` arg with `helm-charts` containing the path `./charts`, used to build the paths for both charts.
### Followups
- Now it's possible to add a `ServiceProfile` instance for Destination in the `linkerd-control-plane` chart.
Now that SMI functionality is fully being moved into the
[linkerd-smi](www.github.com/linkerd/linkerd-smi) extension, we can
stop supporting it by default.
This means that the `destination` component will stop reacting
to `TrafficSplit` objects. When `linkerd-smi` is installed,
it converts `TrafficSplit` objects into `ServiceProfiles`
that the destination component can understand, and reacts accordingly.
Also, whenever a `ServiceProfile` with traffic splitting is associated
with a service, the same information (i.e. splits and weights) is also
surfaced through the UI (in the new `services` tab) and the `viz` command,
so we are not really losing any UI functionality here.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
EndpointSlices are enabled by default in our Kubernetes minimum version of 1.20. Thus we can change the default behavior of the destination controller to use EndpointSlices instead of Endpoints. This unblocks any functionality which is specific to EndpointSlices such as topology aware hints.
Signed-off-by: Alex Leong <alex@buoyant.io>
The policy APIs are currently at v1beta1, though we continue to support
the (identical) v1alpha1 APIs. This change marks the v1alpha1 variants
as deprecated so that kubectl will emit warnings if they are used.
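For reference, marking a served version as deprecated in an `apiextensions.k8s.io/v1` CRD looks roughly like this sketch (the warning text is illustrative):
```yaml
versions:
- name: v1alpha1
  served: true
  storage: false
  deprecated: true
  deprecationWarning: "policy.linkerd.io/v1alpha1 Server is deprecated; use policy.linkerd.io/v1beta1"  # illustrative wording
- name: v1beta1
  served: true
  storage: true
```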
In our chart values and (some) integration tests, we're using a deprecated
label for node selection. According to the warning messages we get during
installation, the label has been deprecated since k8s `v1.14`:
```
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
Warning: spec.jobTemplate.spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
```
This PR replaces all occurrences of `beta.kubernetes.io/os` with
`kubernetes.io/os`.
Fixes #7225
This PR adds changes to partials and values.yaml to allow passing the optional flags `log-level` and `log-format` to proxy-init.
This approach was used to keep backwards compatibility between different linkerd-proxy-init images without needing to change the Helm charts.
Related to https://github.com/linkerd/linkerd2-proxy-init/pull/47. Fixes #6881
Signed-off-by: Gustavo Carvalho <gusfcarvalho@gmail.com>
The policy controller synthesizes identity strings based on service account
names; but it assumed that `linkerd` was the name of the control plane
namespace. This change updates the policy controller to take a
`--control-plane-namespace` command-line argument to set this value in
identity strings. The helm templates have been updated to configure the policy
controller appropriately.
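For illustration only, the chart would pass the namespace along roughly like this sketch (the container name and surrounding fields are illustrative, and the actual templating in the chart may differ):
```yaml
containers:
- name: policy                              # illustrative container name
  args:
  - --control-plane-namespace=linkerd       # value supplied by the chart
```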
Fixes #7204
Co-authored-by: Oliver Gould <ver@buoyant.io>
The Linkerd proxy-init container is currently forced to run as root.
This removes the hardcoded `runAsNonRoot: false` and `runAsUser: 0`. This way
the container inherits the user ID from the proxy-init image instead, which
may allow it to run as non-root.
Fixes #5505
Signed-off-by: Schlotter, Christian <christian.schlotter@daimler.com>
The resource configuration does not support `ephemeral-storage`.
The [partials.resources](main/charts/partials/templates/_resources.tpl) named template is updated to support this configuration.
The change can be validated by running the following under the `linkerd2/viz/charts/linkerd-viz` directory:
```bash
helm template --set prometheus.resources.ephemeral-storage.limit=4Gi .
```
```bash
helm template --set prometheus.resources.ephemeral-storage.request=4Gi .
```
```bash
helm template \
--set prometheus.resources.ephemeral-storage.limit=4Gi \
--set prometheus.resources.ephemeral-storage.request=4Gi .
```
Make sure it doesn't affect existing resources configuration
```bash
helm template --set prometheus.resources.cpu.limit=4Gi .
```
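The same overrides expressed as a values file would look like:
```yaml
prometheus:
  resources:
    ephemeral-storage:
      limit: 4Gi
      request: 4Gi
```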
Fixes #3307
Signed-off-by: Michael Lin <mlzc@hey.com>
Fixes #3260
## Summary
Currently, Linkerd uses a service account token to validate a pod
during the `Certify` request with identity, through which identity
is established on the proxy. This works well, as Kubernetes
attaches the `default` service account token of a namespace as a volume
(unless overridden with a specific service account by the user). The catch
here is that this token is aimed at letting the application talk to the
Kubernetes API and is not specifically for Linkerd. This means that there
are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which
users might want to use, [causing problems with Linkerd](https://github.com/linkerd/linkerd2/issues/3183)
since Linkerd expects the token to be present.
To have more granular control over the token, and not rely on a
service token that can be managed externally, [Bound Service Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens)
can be used to generate tokens that are specifically for Linkerd,
bound to a specific pod, and carry an expiry.
## Background on Bound Service Tokens
This feature has been GA'ed in Kubernetes 1.20, and is enabled by default
in most cloud provider distributions. Using this feature, Kubernetes can
be asked to issue specific tokens for Linkerd usage (through audience-bound
configuration), with a specific expiry time (as the validation happens every
24 hours when establishing identity, we can follow the same), bound to
a specific pod (meaning verification fails if the pod object isn't available).
Because of all these constraints, and because this token cannot be used for
anything else, this feels like the right thing to rely on to validate
a pod before issuing a certificate.
### Pod Identity Name
We still use the same service account name as the pod identity
(used with metrics, etc.) as these tokens are all generated from the
same base service account attached to the pod (could be the default, or
the user-overridden one). This can be verified by looking at the `user`
field in the `TokenReview` response.
<details>
<summary>Sample TokenReview response</summary>
Here, the new token was created for the `vault` audience for a pod that
had a serviceAccount token volume projection and was using the `mine`
serviceAccount in the default namespace.
```json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null,
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "authentication.k8s.io/v1",
        "time": "2021-10-19T19:21:40Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
      }
    ]
  },
  "spec": {
    "token": "....",
    "audiences": [
      "vault"
    ]
  },
  "status": {
    "authenticated": true,
    "user": {
      "username": "system:serviceaccount:default:mine",
      "uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:default",
        "system:authenticated"
      ],
      "extra": {
        "authentication.kubernetes.io/pod-name": [
          "nginx"
        ],
        "authentication.kubernetes.io/pod-uid": [
          "ebf36f80-40ee-48ee-a75b-96dcc21466a6"
        ]
      }
    },
    "audiences": [
      "vault"
    ]
  }
}
```
</details>
## Changes
- Updated `proxy-injector` and install scripts to include the new
projected Volume and VolumeMount (see the sketch below).
- Updated the `identity` pod to validate the token with the linkerd
audience key.
- Added `identity.serviceAccountTokenProjection` to disable this
feature.
- Updated the erroring logic around `autoMountServiceAccount: false`
to fail only when this feature is disabled.
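For illustration, the projected volume added by the injector looks roughly like this sketch (the audience, expiry and path values are illustrative of the description above, not confirmed names):
```yaml
volumes:
- name: linkerd-identity-token
  projected:
    sources:
    - serviceAccountToken:
        audience: identity.l5d.io    # assumption: a Linkerd-specific audience
        expirationSeconds: 86400     # matches the ~24h identity re-validation cadence
        path: linkerd-identity-token
```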
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When no default policy is configured, the identity controller uses
`cluster-unauthenticated` by default; but this may not permit
connections from node IPs. This causes installations to fail in some
environments.
This change updates the identity controller's default policy to
`all-unauthenticated` to match the behavior before policy was
introduced.
Fixes #7104
Fixes #7067
Currently, the policy controller doesn't serve Prometheus
scrapes on the `admin-http` port's `/metrics` endpoint, causing
Prometheus to show the target as down.
For now, we can skip the Prometheus scrape on that port
by renaming the `admin-http` port to `admin`, without having to
update the Prometheus config. Once we start serving the metrics
endpoint, the port can be renamed back.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
The expiry date was not used anywhere in the code, yet it was required on
install. All occurrences of `crtExpiry` (template variable) and `identity-issuer-expiry` (annotation) were removed.
## Validation
It seems that `identity-issuer-expiry` was only ever set and never read. After this change there are no mentions of `identity-issuer-expiry` (rg "identity-issuer-expiry").
There are occurrences of `crtExpiry`, but they are not relevant:
```
> rg crtExpiry
pkg/tls/cred.go
99: if crtExpiryError(err) {
234:func crtExpiryError(err error) bool {
```
## Backward compatibility
Helm accepts "unknown" values. This change will not break existing pipelines installing/upgrading Linkerd using Helm. When someone specifies `identity.issuer.crtExpiry` (`--set identity.issuer.crtExpiry=$(date -v+8760H +"%Y-%m-%dT%H:%M:%SZ"`) it will be "just" ignored.
Fixes #7024
Signed-off-by: Krzysztof Dryś <krzysztofdrys@gmail.com>
The policy CRD changes (#6943) merged and changed `destination-rbac.yml`, but didn't have as an ancestor the changes from #6954 that calculate that file's SHA. This PR updates that SHA in the golden files.
Fixes #6827
We upgrade the Server and ServerAuthorization CRD versions from v1alpha1 to v1beta1. This version update does not change the schema at all and the v1alpha1 versions will continue to be served for now. We also update the CLI and control plane to use the v1beta1 versions.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #6940
Upon every upgrade the certs used by the policy controller validator
change, but the controller doesn't detect that, which breaks the webhook
requests. As a temporary solution, we force the pod to restart by adding
a `checksum/config` annotation to the manifest, like the injector
currently has.
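For reference, this uses the common Helm pattern of hashing the rendered credentials into a pod-template annotation, roughly as sketched here (the template path is hypothetical):
```yaml
template:
  metadata:
    annotations:
      checksum/config: {{ include (print $.Template.BasePath "/policy-validator.yaml") . | sha256sum }}  # hypothetical path
```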
* Remove `omitWebhookSideEffects` flag/setting
This was introduced back in #2963 to support k8s versions before 1.12 that didn't support the `sideEffects` property in webhooks. It's been a while since we dropped support for 1.12, so we can safely remove this.
The policy-controller's validating admission controller only handles
`Server` resources; but its webhook configuration processes
`ServerAuthorization` resources as well, which results in errors (and
rejections in HA).
This change modifies the webhook configuration so that
`ServerAuthorization` resources are not validated by the admission
controller. This can be restored if the admission controller grows
the ability to validate these resources.
Fixes #6933
We've previously handled inbound connections on 443 as opaque, meaning
that we don't do any TLS detection.
This prevents the proxy from reporting meaningful metadata on these TLS
connections--especially the connection's SNI value.
This change also makes the core control plane's configuration for
skipping outbound connections on 443 much simpler (and
documented!).
The policy controller only emitted logs in the default plain format.
This change adds new CLI flags to the policy-controller: `--log-format`
and `--log-level` that configure logging (replacing the `RUST_LOG`
environment variable). The helm chart is updated to configure these
flags--the `controllerLogLevel` variable is used to configure the policy
controller as well.
Example:
```
{"timestamp":"2021-09-15T03:30:49.552704Z","level":"INFO","fields":{"message":"HTTP admin server listening","addr":"0.0.0.0:8080"},"target":"linkerd_policy_controller::admin","spans":[{"addr":"0.0.0.0:8080","name":"serve"}]}
{"timestamp":"2021-09-15T03:30:49.552689Z","level":"INFO","fields":{"message":"gRPC server listening","addr":"0.0.0.0:8090"},"target":"linkerd_policy_controller","spans":[{"addr":"0.0.0.0:8090","cluster_networks":"[10.0.0.0/8, 100.64.0.0/10, 172.16.0.0/12, 192.168.0.0/16]","name":"grpc"}]}
{"timestamp":"2021-09-15T03:30:49.567734Z","level":"DEBUG","fields":{"message":"Ready"},"target":"linkerd_policy_controller_k8s_index"}
^C{"timestamp":"2021-09-15T03:30:51.245387Z","level":"DEBUG","fields":{"message":"Received ctrl-c"},"target":"linkerd_policy_controller"}
{"timestamp":"2021-09-15T03:30:51.245473Z","level":"INFO","fields":{"message":"Shutting down"},"target":"linkerd_policy_controller"}
```
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
In #6873 we made it so that linkerd-identity also discovers its own
policy, using the default policy at startup. So we need to force the
default policy to be `all-unauthenticated` just like we do for
destination and the injector; otherwise when installing linkerd with a
`deny` default policy the linkerd-identity pod won't start.
Now that the proxy uses its default policy at startup and can discover
its policies lazily, the identity controller no longer must be exempt
from policy discovery. This enables the identity controller to enforce
admin server policies, in particular.
This change enables policy discovery on the identity controller.
We initially implemented a mechanism to automatically authorize
unauthenticated traffic from each pod's Kubelet's IP. Our initial method
of determining a pod's Kubelet IP--using the first IP from its node's
pod CIDRs--is not a generally usable solution. In particular, CNIs
complicate matters (and EKS doesn't even set the podCIDRs field).
This change removes the policy controller's node watch and removes the
`default:kubelet` authorization. When using a restrictive default
policy, users will have to define `serverauthorization` resources that
permit kubelet traffic. It's probably possible to programmatically
generate these authorizations (i.e. by inspecting pod probe
configurations), but this is out of scope for the core control plane
functionality.
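For illustration, such an authorization might look roughly like the sketch below, assuming a `Server` already covers the probed port and that kubelet traffic originates from the given network (all names and CIDRs are hypothetical):
```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: allow-kubelet-probes     # hypothetical
  namespace: my-app              # hypothetical
spec:
  server:
    name: my-app-http            # hypothetical Server covering the probe port
  client:
    unauthenticated: true
    networks:
    - cidr: 10.0.0.0/8           # hypothetical node/kubelet network
```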