Followup to #12844
This new field defines the default policy for Servers, i.e. if a request doesn't match the policy associated with a Server then this policy applies. The values are the same as for `proxy.defaultInboundPolicy` and the `config.linkerd.io/default-inbound-policy` annotation (all-unauthenticated, all-authenticated, cluster-authenticated, cluster-unauthenticated, deny), plus a new value "audit". The default is "deny", thus remaining backwards-compatible.
This field is also exposed as an additional printer column.
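As a hedged illustration (the field and type names below are assumed for the sketch, not taken from the actual CRD code), the semantics are roughly:
```
package policy

// Server is a pared-down stand-in for the real CRD type; the field
// name is assumed for illustration.
type Server struct {
	Spec struct{ AccessPolicy string }
}

// unauthorizedPolicy sketches what the new field means for a request
// that matched a Server but none of its authorization policies.
func unauthorizedPolicy(server *Server) string {
	if server.Spec.AccessPolicy != "" {
		return server.Spec.AccessPolicy // e.g. "audit"
	}
	return "deny" // the field's default, keeping existing behavior
}
```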
Default values for `linkerd-init` (resources allocated) are not always
the right fit. We offer default values to ensure proxy-init does not get
in the way of QoS Guaranteed (`linkerd-init` resource limits and
requests cannot be configured in any other way).
Instead of using default values that can be overridden, we can re-use
the proxy's configuration values. For the pod to be QoS Guaranteed, the
values for the proxy have to be set anyway. If we re-use the same
values for proxy-init, we ensure we'll always request the same amount
of CPU and memory as needed (see the sketch after the list below).
* `linkerd-init` now defaults to the proxy's values
* when the proxy has an annotation configuration for resource requests,
it also impacts `linkerd-init`
* Helm chart and docs have been updated to reflect the removed values.
* tests now no longer use `ProxyInit.Resources`
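A minimal sketch of the inheritance, using the upstream Kubernetes resource type (function name assumed):
```
package inject

import corev1 "k8s.io/api/core/v1"

// initResources is a sketch (name assumed): linkerd-init simply
// inherits the proxy's requests/limits; the deprecated
// proxyInit.resources value is ignored. Identical requests keep the
// whole pod QoS Guaranteed.
func initResources(proxy corev1.ResourceRequirements) corev1.ResourceRequirements {
	return proxy
}
```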
UPGRADE NOTE:
- Deprecates the `proxyInit.resources` field in the Helm values.
- If specified, it is a no-op (no hard failures).
Closes #11320
---------
Signed-off-by: Matei David <matei@buoyant.io>
We add an -o/--output flag to the remaining commands which render Kubernetes resources and do not yet have this flag. The supported values for this flag are "yaml" (default) and "json". The commands are listed below, followed by a sketch of the flag wiring:
linkerd multicluster allow
linkerd multicluster link
linkerd multicluster unlink
linkerd viz allow-scrapes
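A hedged sketch of the flag wiring with cobra (variable and helper names assumed):
```
package cmd

import (
	"encoding/json"
	"fmt"

	"github.com/spf13/cobra"
	"sigs.k8s.io/yaml"
)

// addOutputFlag registers the flag once per command (names assumed).
func addOutputFlag(cmd *cobra.Command, format *string) {
	cmd.Flags().StringVarP(format, "output", "o", "yaml", "Output format. One of: yaml, json")
}

// render serializes the rendered resources accordingly.
func render(obj interface{}, format string) ([]byte, error) {
	switch format {
	case "yaml":
		return yaml.Marshal(obj)
	case "json":
		return json.MarshalIndent(obj, "", "  ")
	default:
		return nil, fmt.Errorf("unsupported output format: %s", format)
	}
}
```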
Signed-off-by: Alex Leong <alex@buoyant.io>
* IPv6 integration tests
This adds a new test `TestDualStack` to the deep suite that ensures requests to a dual-stack service are always routed to the IPv6 endpoint.
It also amends other tests in the suite so they work in IPv6-only clusters:
- skipports: replaced the booksapp with emojivoto, given the servers in the former don't bind to IPv6 addresses
- endpoints: amended the regexes to include IPv6 addresses
- localhost: bumped nginx for it to bind to the IPv6 loopback as well
Note the `TestDualStack` test is disabled by default because GitHub runners don't support IPv6. To run it locally, first deploy a dual-stack cluster via:
```
kind create cluster --config test/integration/deep/kind-dualstack.yml
```
(for testing IPv6-only clusters, use the `kind-ipv6.yml` config)
Then load the images and trigger the test with:
```
bin/tests --name deep-dual-stack --skip-cluster-create $PWD/target/cli/linux-amd64/linkerd
```
Fixes #11773
Make the proxy's GID configurable via `proxy.gid`, which defaults to `-1`, in which case the GID is not set.
Also added the ability to set the GID for proxy-init and the core and extension controllers.
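A minimal sketch of how this could be applied in the injector, using the upstream Kubernetes types (helper name assumed):
```
package inject

import corev1 "k8s.io/api/core/v1"

// applyGID is a sketch (name assumed): with the default of -1 the
// group is left unset; any non-negative value becomes the container's
// RunAsGroup.
func applyGID(c *corev1.Container, gid int64) {
	if gid < 0 {
		return
	}
	if c.SecurityContext == nil {
		c.SecurityContext = &corev1.SecurityContext{}
	}
	c.SecurityContext.RunAsGroup = &gid
}
```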
---------
Signed-off-by: Nico Feulner <nico.feulner@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
This changes the default of the Helm value `disableIPv6` to `true`.
Additionally, the proxy's `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS` env var
is now set according to that value (see the sketch below).
This addresses an incompatibility with GKE introduced in last week's
edge (`edge-24.5.1`): on default IPv4-only nodes in GKE clusters the
proxy can't bind to `::1`, so we have to make IPv6 opt-in to avoid
surprises.
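A minimal sketch of the derivation (function name assumed; the port shown is the proxy's conventional outbound port):
```
package inject

// outboundListenAddrs sketches how the env var could be derived from
// disableIPv6; names and exact formatting are assumed.
func outboundListenAddrs(disableIPv6 bool) string {
	if disableIPv6 {
		return "127.0.0.1:4140"
	}
	// IPv6 enabled: listen on both loopbacks so outbound connections
	// received via IPv6 can be forwarded too.
	return "127.0.0.1:4140,[::1]:4140"
}
```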
As part of the ongoing effort to support IPv6/dual-stack networks, this change
enables the proxy to properly forward IPv6 connections:
- Adds the new `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS` environment variable when
injecting the proxy. This is supported as of proxy v2.228.0, which was just
pulled into the linkerd2 repo in commit 2d5085b56e465ef56ed4a178dfd766a3e16a631d.
This adds the IPv6 loopback address (`[::1]`) to the IPv4 one (`127.0.0.1`)
so the proxy can forward outbound connections received via IPv6. The injector
will still inject `LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR` to support the rare
case where the `proxy.image.version` value is overridden with an older
version. The new proxy still considers that variable, but it's superseded by
the new one. The old variable is considered deprecated and should be removed
in the future.
- The values for `LINKERD2_PROXY_CONTROL_LISTEN_ADDR`,
`LINKERD2_PROXY_ADMIN_LISTEN_ADDR` and `LINKERD2_PROXY_INBOUND_LISTEN_ADDR`
have been updated to point to the IPv6 wildcard address (`[::]`) instead of
the IPv4 one (`0.0.0.0`) for the same reason. Unlike with the loopback
address, the IPv6 wildcard address suffices to capture both IPv4 and IPv6
traffic.
- The endpoint translator's `getInboundPort()` has been updated to properly
parse the IPv6 loopback address retrieved from the proxy container manifest.
A unit test was added to validate the behavior.
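For reference, a minimal sketch of the parsing concern (function name assumed), leaning on the standard library's handling of bracketed IPv6 hosts:
```
package destination

import (
	"net"
	"strconv"
)

// parseListenPort sketches the parsing involved: net.SplitHostPort
// already handles bracketed IPv6 hosts such as "[::1]:4143".
func parseListenPort(addr string) (uint16, error) {
	_, port, err := net.SplitHostPort(addr)
	if err != nil {
		return 0, err
	}
	p, err := strconv.ParseUint(port, 10, 16)
	return uint16(p), err
}
```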
Added the test `deep-native-sidecar` which runs the `deep` test with the
new flag `--native-sidecar`.
Also replaced the final `WaitRollout` call in `install_test.go` with a
`linkerd check` call, which also lets us verify that command works
as intended.
This commit adds destination controller configuration that enables default
keep-alives for meshed HTTP/2 clients.
This is accomplished by encoding the raw protobuf message structure into the
helm values, and then encoding that as JSON in the destination controller's
command-line options. This allows operators to set any supported HTTP/2 client
configuration without having to modify the destination controller.
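As a hedged sketch of the mechanism, with illustrative field names rather than the real protobuf ones:
```
package main

import (
	"encoding/json"
	"fmt"
)

// h2ClientParams mirrors the shape of such a message; the field names
// here are assumed, the real ones come from the protobuf definition.
type h2ClientParams struct {
	KeepAlive struct {
		Interval  string `json:"interval,omitempty"`
		Timeout   string `json:"timeout,omitempty"`
		WhileIdle bool   `json:"whileIdle,omitempty"`
	} `json:"keepAlive"`
}

func main() {
	var p h2ClientParams
	p.KeepAlive.Interval = "10s"
	p.KeepAlive.Timeout = "3s"
	p.KeepAlive.WhileIdle = true
	b, _ := json.Marshal(p)
	// The resulting JSON is passed verbatim on the controller's command line.
	fmt.Println(string(b))
}
```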
This removes the `upgrade-stable` integration test and refactors the
`helm-upgrade` one to upgrade from the last published edge helm charts
instead of the last stable.
The proxy-injector package has a `ResourceConfig` type that is
responsible for parsing resources, applying overrides, and serialising a
series of configuration values to a Kubernetes patch. The functionality
is very concrete in its assumptions: it always relies on a pod spec,
and it mutates inner state when deciding which overrides to apply.
This is not a flexible way to handle injection and configuration
overriding for other types of resources. We change this by turning
methods previously defined on `ResourceConfig` into free-standing
functions. These functions can be applied to any type of resource in
order to compute a set of configuration values based on annotation
overrides. Through this change, the functions can be used to compute
static configuration for non-Pod types or can be used in tests.
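A minimal sketch of the free-function style (types pared down, names assumed):
```
package inject

// Values is a pared-down stand-in for the chart values type.
type Values struct {
	Proxy struct{ LogLevel string }
}

// GetOverriddenValues sketches the free-function style (signature
// assumed): no receiver and no mutated state, so it can be applied to
// any resource type or called directly from tests.
func GetOverriddenValues(base Values, annotations map[string]string) Values {
	out := base // work on a copy; the input is never mutated
	if v, ok := annotations["config.linkerd.io/proxy-log-level"]; ok {
		out.Proxy.LogLevel = v
	}
	return out
}
```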
Signed-off-by: Matei David <matei@buoyant.io>
We keep track of our proxy-init and CNI plugin versions in two exported
variables in `pkg/version/version.go`. As part of our release process,
we require these versions to be bumped when the iptables dependencies
are bumped.
In our multicluster test, we provide a proxy-init version that's
hardcoded. Instead of relying on the release coordinator to bump the
image in the test (which can be easily missed), use the already exported
version.
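A sketch of the substitution (helper name and registry path assumed):
```
package multicluster

import (
	"fmt"

	"github.com/linkerd/linkerd2/pkg/version"
)

// proxyInitImage sketches the change: the tag tracks the exported
// version, so release bumps are picked up automatically instead of
// being hardcoded in the test.
func proxyInitImage() string {
	return fmt.Sprintf("cr.l5d.io/linkerd/proxy-init:%s", version.ProxyInitVersion)
}
```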
Signed-off-by: Matei David <matei@buoyant.io>
When we do a `GetProfile` lookup for an unmeshed pod, we set the `weightedAddr.ProtocolHint` to an empty value `&pb.ProtocolHint{}` to indicate that the address is unmeshed and has no protocol hint. However, when the looked up port is in the default opaque list, we erroneously check if `weightedAddr.ProtocolHint != nil` to determine if we should attempt to get the inbound listen port for that pod. Since `&pb.ProtocolHint{} != nil`, we attempt to get the inbound listen port for the unmeshed pod. This results in an error, preventing any valid `GetProfile` responses from being returned.
We update the initialization logic for `weightedAddr.ProtocolHint` to only create a struct when a protocol hint is present and to leave it as `nil` if the pod is unmeshed.
We add a simple unit test for this behavior as well.
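Roughly, the fix looks like this (a fragment; surrounding code elided):
```
// Before: hint was always non-nil, so the later check never filtered
// unmeshed pods. After (sketch): leave it nil unless the pod is meshed.
var hint *pb.ProtocolHint
if meshed {
	hint = &pb.ProtocolHint{} // populated with the real hint for meshed pods
}
weightedAddr.ProtocolHint = hint

// Default-opaque port handling:
if weightedAddr.ProtocolHint != nil {
	// only meshed pods reach the inbound-listen-port lookup now
}
```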
Signed-off-by: Alex Leong <alex@buoyant.io>
* Re-enable cni-calico-deep integration test
Fixes #11567
The trick is to run the test under k8s `v1.27.6-k3s1` as the following
versions break Calico in k3s (see k3d-io/k3d#1375).
Also removed the `continue-on-error: true` directive in the integration
workflow because it was hiding this problem.
The multicluster extension has always allowed the extension to be
installed without a gateway; the idea being that users would provide
their own. With p2p, we extended this to allow links that do not specify
a gateway at all, but in the process we missed changing a key check
-- `multicluster-gateways-endpoints` -- that asserts all links have a
probe service.
Without a gateway on the other end, a link will not have a probe spec
(or a gateway address), so it makes no sense to run this check; no
probe service will ever be created in the source cluster. To fix this
issue, we skip the check when the link is missing either a gateway
address or a probe spec (see the sketch below).
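A sketch of the skip, with field names assumed from the Link type:
```
package healthcheck

// Link is a pared-down stand-in; field names assumed from the real type.
type Link struct {
	GatewayAddress string
	ProbeSpec      struct{ Path string }
}

// probeCheckTargets sketches the skip: gateway-less links are excluded,
// since no probe service is ever created for them in the source cluster.
func probeCheckTargets(links []Link) []Link {
	var targets []Link
	for _, l := range links {
		if l.GatewayAddress == "" || l.ProbeSpec.Path == "" {
			continue
		}
		targets = append(targets, l)
	}
	return targets
}
```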
Fixes #11428
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
64b66f921 changed the behavior of `healthcheck.CheckProxyVersionsUpToDate`
so that it errors when there are no channels provided. The viz tracing
test uses this utility to generate the expected error message, and it
did so without providing any channels.
This regression is fixed by instantiating the `Channels` struct with data.
* Bump CNI plugin to v1.2.1
* Bump proxy-init to v2.2.2
Both dependencies include a fix for CVE-2023-2603. Since Alpine is used
as the runtime image, a security vulnerability was detected in the
produced images (due to an issue with libcap). The Alpine images have
been bumped to address the CVE.
Signed-off-by: Matei David <matei@buoyant.io>
Add an integration test that exercises the direct pod-to-pod multicluster mode.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
Adds support for remote discovery to the destination controller.
When the destination controller gets a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` label, this is an indication that the destination controller should discover the endpoints for this service from a remote cluster. The destination controller will look for a remote cluster which has been linked to it (using the `linkerd multicluster link` command) with that name. It will look at the `multicluster.linkerd.io/remote-discovery` label for the service name to look up in that cluster. It then streams back the endpoint data for that remote service.
Since we now have multiple client-go informers for the same resource types (one for the local cluster and one for each linked remote cluster) we add a `cluster` label onto the prometheus metrics for the informers and EndpointWatchers to ensure that each of these components' metrics are correctly tracked and don't overwrite each other.
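A minimal sketch of the labeling, with an assumed metric name:
```
package watcher

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// lagGauge is a sketch (metric name assumed): the extra "cluster"
// label keeps per-cluster informer metrics from overwriting each other.
var lagGauge = promauto.NewGaugeVec(
	prometheus.GaugeOpts{Name: "endpoints_informer_lag_seconds"},
	[]string{"cluster"},
)

func record(cluster string, lag float64) {
	lagGauge.With(prometheus.Labels{"cluster": cluster}).Set(lag)
}
```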
---------
Signed-off-by: Alex Leong <alex@buoyant.io>
Traffic may be routed over the loopback interface for a pod when the pod
tries to communicate with itself using its IP, or when it communicates
with itself using its logical address. In the latter case, a proportion
of the traffic may be resolved to the pod's own IP by the balancer, in
which case the traffic is again routed over loopback.
This change adds an integration test to assert that locally routed
traffic does not result in any unexpected errors.
---------
Signed-off-by: Matei David <matei@buoyant.io>
* edge-23.4.2
This edge release contains a number of bug fixes.
* CLI
* Fixed `linkerd uninstall` issue for HTTPRoute
* The `linkerd diagnostics policy` command now displays outbound policy when
the target resource is a Service
* CNI
* Fixed an incompatibility with the AWS CNI addon in EKS that was
preventing pods from acquiring networking after scaling up nodes.
(thanks @frimik!)
* Added the `--set` flag to the `install-cni` plugin (thanks @amit-62!)
* Control Plane
* Fixed an issue where the policy controller always used the default
`cluster.local` domain
* Send Opaque protocol hint for opaque ports in destination controller
* Helm
* Fixed an issue in the viz Helm chart where the namespace metadata template
would throw `unexpected argument found` errors
* Fixed Jaeger chart installation failure
* Multicluster
* Removed the namespace field from cluster-scoped resources to fix pruning
* Proxy
* Updated `h2` dependency to include a patch for a theoretical
denial-of-service vulnerability discovered in CVE-2023-26964
* Handle Opaque protocol hints on endpoints
* Changed the proxy's default log level to silence warnings from
`trust_dns_proto` that are generally spurious.
* Added `outbound_http_balancer_endpoints` metric
* Fixed missing `route_*` metrics for requests with ServiceProfiles
* Viz
* Bumped the prometheus image to v2.43.0
* Added the `kubelet` NetworkAuthentication back since it is used by the
`linkerd viz allow-scrapes` subcommand.
---------
Signed-off-by: David McLaughlin <david@dmclaughlin.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
We have a number of tests in the `test/integration/install` directory which exercise basic functionality such as injecting pods and sending traffic. These tests are not currently run at all.
We update a number of tests which were previously just installing Linkerd to also run these basic tests.
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Matei David <matei@buoyant.io>
proxy-init v2.2.1:
* Sanitize `subnets-to-ignore` flag
* Dep bumps
cni-plugin v1.1.0:
* Add support for the `config.linkerd.io/skip-subnets` annotation
* Dep bumps
validator v0.1.2:
* Dep bumps
Also, `linkerd-network-validator` is now released wrapped in a tar file, so this PR also amends `Dockerfile-proxy` to account for that.
Our multicluster integration tests used to depend on viz. Viz was used
to check the state of the gateways (`linkerd multicluster gateways`
required it). Since this is no longer the case, we can remove this
dependency to get back a few seconds of execution time (multicluster
tests are famously slow).
---------
Signed-off-by: Matei David <matei@buoyant.io>
The existing `linkerd check` command runs extension checks based on extension namespaces already on-cluster. This approach does not permit running extension checks without cluster-side components.
Introduce "CLI Checks". These extensions run as part of `linkerd check`, if they satisfy the following criteria:
1) executable in PATH
2) prefixed by `linkerd-`
3) supports an `_extension-metadata` subcommand, that outputs self-identifying
JSON, for example:
```
$ linkerd-foo _extension-metadata
{
"name": "linkerd-foo",
"checks": "always"
}
```
4) the `name` value from `_extension-metadata` must match the filename, and `checks` must equal `always`
If a CLI Check is found that would also have run as an on-cluster extension check, it is run as a CLI Check only.
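A minimal sketch of this discovery rule (helper name assumed):
```
package healthcheck

import (
	"encoding/json"
	"os/exec"
	"path/filepath"
	"strings"
)

// isCLICheck sketches the discovery rule (name assumed).
func isCLICheck(path string) bool {
	name := filepath.Base(path)
	if !strings.HasPrefix(name, "linkerd-") {
		return false
	}
	out, err := exec.Command(path, "_extension-metadata").Output()
	if err != nil {
		return false // criterion 3: the subcommand must succeed
	}
	var md struct {
		Name   string `json:"name"`
		Checks string `json:"checks"`
	}
	if err := json.Unmarshal(out, &md); err != nil {
		return false
	}
	return md.Name == name && md.Checks == "always" // criterion 4
}
```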
Fixes #10544
To support Gateway API-style routes in the outbound proxy, we need to begin
discovering this route configuration from the control plane (via the new
`OutboundPolicies` API).
This change updates the proxy as follows:
1. Policy controller configuration is now required for the proxy.
Previously, the policy API was optionally configured for the inbound
proxy.
2. The sidecar and ingress proxies are updated to use client policies.
Service profile configurations continue to be used when they include
HTTP routes and/or traffic split. Otherwise, a client policy is used
to route traffic.
Outbound policies are currently discovered for *all* outbound IP addresses. Over
time, the policy controller will assume responsibility for making *all* routing
decisions. It does not yet serve responses for all cases, however, so some
fallback behavior exists to use endpoint metadata from profile discovery,
when it exists.
The multi-cluster gateway configuration does not yet use policies for
outbound routing. Furthermore, the proxy reports an IP logical address for
policy routes (instead of a named address, as is done with profiles). There
are no new metrics or labels introduced in this PR. Metrics changes will be made
in follow-up changes.
---
* outbound: Decouple backend caching from request distribution (linkerd/linkerd2-proxy#2284)
* build(deps): bump socket2 from 0.4.7 to 0.4.9 (linkerd/linkerd2-proxy#2290)
* README: comment just-cargo and make it more clear (linkerd/linkerd2-proxy#2292)
* build(deps): bump prettyplease from 0.1.23 to 0.1.24 (linkerd/linkerd2-proxy#2293)
* build(deps): bump tokio from 1.25.0 to 1.26.0 (linkerd/linkerd2-proxy#2286)
* build(deps): bump petgraph from 0.6.2 to 0.6.3 (linkerd/linkerd2-proxy#2285)
* client-policy: add protobuf conversion (linkerd/linkerd2-proxy#2289)
* integration: add test policy controller (linkerd/linkerd2-proxy#2288)
* outbound: change `push_discover` to take a `Service` (linkerd/linkerd2-proxy#2291)
* build(deps): bump rustix from 0.36.7 to 0.36.9 (linkerd/linkerd2-proxy#2295)
* build(deps): bump serde_json from 1.0.93 to 1.0.94 (linkerd/linkerd2-proxy#2296)
* build(deps): bump async-trait from 0.1.64 to 0.1.66 (linkerd/linkerd2-proxy#2297)
* build(deps): bump thiserror from 1.0.38 to 1.0.39 (linkerd/linkerd2-proxy#2298)
* build(deps): bump mio from 0.8.5 to 0.8.6 (linkerd/linkerd2-proxy#2299)
* separate policy client config from `inbound::Config` (linkerd/linkerd2-proxy#2307)
* outbound: Require ClientPolicy discovery (linkerd/linkerd2-proxy#2265)
* just: Fix docker tag formatting (linkerd/linkerd2-proxy#2312)
* outbound: Report concrete authorities for policies (linkerd/linkerd2-proxy#2313)
Signed-off-by: Oliver Gould <ver@buoyant.io>
While integrating a new proxy version, we needed to make a few test
changes to improve diagnostics. These changes are probably worthwhile in
general:
1. We have unused test resources in the trafficsplit tests. These can be
removed.
2. We can simply inline our ServiceProfile configurations in the
trafficsplit tests. There's not a lot of value in having that
decoupled from the test.
3. We now enable verbose proxy logs and emit proxy logs when the
trafficsplit test fails.
4. The norelay test is also updated for clarity and to include
additional proxy logs on failure.
Fixes #9965
Adds a `path` property to the RedirectRequestFilter in all versions. This property was absent from the CRD even though it appears in the Gateway API documentation and is represented in the internal types. Adding this property to the CRD will also allow users to specify it.
Add a new version to the HTTPRoute CRD: v1beta2. This new version includes two changes from v1beta1:
* Added `port` property to `parentRef` for use when the parentRef is a Service
* Added `backendRefs` property to HTTPRoute rules
We switch the storage version of the HTTPRoute CRD from v1alpha1 to v1beta2 so that these new fields may be persisted.
We also update the policy admission controller to allow an HTTPRoute parentRef type to be Service (in addition to Server).
Signed-off-by: Alex Leong <alex@buoyant.io>
Supersedes #9856, now that the `linkerd check` logic in the integration tests got cleaned up via #9989.
The helm-upgrade test had been commented out when we jumped to the new 2.12 Helm charts. It can be used again to test upgrades from 2.12.x.
- Some of the logic in `test/integration/install/install_test.go` still hadn't considered the need to upgrade both the `linkerd-crds` and `linkerd-control-plane` charts, so that got fixed.
- Removed references to the now-deprecated `linkerd2` chart.
- Improved the `helm_cleanup()` function by uninstalling the charts in reverse order (extensions first, core last). We delete the namespaces afterwards because helm sometimes doesn't remove them, and we shouldn't fail if we attempt to delete one that is already gone. Also removed unneeded `kubectl wait`s because `kubectl delete ns` should be blocking.
Fixes #10003
When endpoints are removed from an EndpointSlice resource, the destination controller builds a list of addresses to remove. However, if any of the removed endpoints have a Pod as their targetRef, we will attempt to fetch that pod to build the address to remove. If that pod has already been removed from the informer cache, this will fail and the endpoint will be skipped in the list of endpoints to be removed. This results in stale endpoints being stuck in the address set and never being removed.
We update the endpoint watcher to construct only a list of endpoint IDs for endpoints to remove, rather than fetching the entire pod object. Since we no longer attempt to fetch the pod, this operation is now infallible and endpoints will no longer be skipped during removal.
We also add a `TestEndpointSliceScaleDown` test to exercise this.
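Sketched under assumed type names, the new removal path looks roughly like:
```
package watcher

// ID and endpoint are pared-down stand-ins for the real types.
type ID struct{ Namespace, Name string }
type endpoint struct{ TargetRefKind, TargetRefName string }

// idsToRemove sketches the new path: removals are built from the slice
// alone, so there is no fallible pod lookup that could skip endpoints.
func idsToRemove(ns string, endpoints []endpoint) map[ID]struct{} {
	toRemove := make(map[ID]struct{})
	for _, ep := range endpoints {
		if ep.TargetRefKind != "Pod" {
			continue
		}
		toRemove[ID{Namespace: ns, Name: ep.TargetRefName}] = struct{}{}
	}
	return toRemove
}
```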
Signed-off-by: Alex Leong <alex@buoyant.io>
* Refactor `linkerd check` calls in the integration tests
Extracted logic into the new file `testutil/test_helper_check.go` which exposes the functions `TestCheckPre`, `TestCheck` and `TestCheckProxy`.
`linkerd check --output json` is called so its output is properly captured without the need for golden files.
Besides checking that there are no errors (although warnings are allowed), we check that the expected check categories are returned.
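A sketch of the capture (a test fragment; the JSON field names are assumed to follow the `linkerd check -o json` output):
```
// Parse the JSON rather than comparing against golden files.
var report struct {
	Success    bool `json:"success"`
	Categories []struct {
		CategoryName string `json:"categoryName"`
	} `json:"categories"`
}
if err := json.Unmarshal([]byte(out), &report); err != nil {
	t.Fatal(err)
}
// assert report.Success and that the expected categories are present
```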
The plan is to leverage this in #9856 when re-enabling the helm-upgrade test.
Addresses: #9311
* Set injected `proxy-version` annotation to `values.LinkerdVersion` when image version is empty.
* Set `Proxy.Image.Version` consistently between CLI and Helm
Tested when installed via CLI:
```
$ k get po -o yaml -n emojivoto | grep proxy-version
linkerd.io/proxy-version: dev-0911ad92-jchase
linkerd.io/proxy-version: dev-0911ad92-jchase
linkerd.io/proxy-version: dev-0911ad92-jchase
linkerd.io/proxy-version: dev-0911ad92-jchase
```
Untested when installed via Helm.
Signed-off-by: Jeremy Chase <jeremy.chase@gmail.com>
Co-authored-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
The root cause of https://github.com/linkerd/linkerd2/issues/9521 was that there were ClusterIP Services which were not in Linkerd's cluster networks. This means that Linkerd was not performing discovery when connecting to these services and therefore was not doing mTLS. This issue was difficult to detect and diagnose.
We add a check which verifies that all clusterIP services in the cluster have their clusterIP in the cluster networks. This is very similar to the existing check which verifies that all pods have a podIP in the cluster networks.
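A minimal sketch of the check's core loop (helper name assumed):
```
package healthcheck

import "net"

// offendingServices sketches the new check: any ClusterIP outside the
// configured cluster networks is reported.
func offendingServices(clusterIPs []string, nets []*net.IPNet) []string {
	var bad []string
	for _, raw := range clusterIPs {
		ip := net.ParseIP(raw)
		if ip == nil {
			continue // headless ("None") or unset
		}
		contained := false
		for _, n := range nets {
			if n.Contains(ip) {
				contained = true
				break
			}
		}
		if !contained {
			bad = append(bad, raw)
		}
	}
	return bad
}
```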
Signed-off-by: Alex Leong <alex@buoyant.io>
The upgrade stable test starts by installing the latest stable release of Linkerd. Previously, that was stable-2.11.4 which did not require installing the CRDs as a separate step. Now the latest is stable-2.12.0 which does require installing the CRDs first. This was causing the install step to fail in this test.
We update the test to first install the CRDs.
Signed-off-by: Alex Leong <alex@buoyant.io>
Depends on #9195
Currently, `linkerd inject --default-inbound-policy` does not set the
`config.linkerd.io/default-inbound-policy` annotation on the injected
resource(s).
The `inject` command does _try_ to set that annotation if it's set in
the `Values` generated by `proxyFlagSet`:
14d1dbb3b7/cli/cmd/inject.go (L485-L487)
...but, the flag in the proxy `FlagSet` doesn't set
`Values.Proxy.DefaultInboundPolicy`, it sets
`Values.PolicyController.DefaultAllowPolicy`:
7c5e3aaf40/cli/cmd/options.go (L375-L379)
This is because the flag set is shared across `linkerd inject` and
`linkerd install` subcommands, and in `linkerd install`, we want to set
the default policy for the whole cluster by configuring the policy
controller. In `linkerd inject`, though, we want to add the annotation
to the injected pods only.
This branch fixes this issue by changing the flag so that it sets the
`Values.Proxy.DefaultInboundPolicy` instead of the
`Values.PolicyController.DefaultAllowPolicy` value. In `linkerd
install`, we then set `Values.PolicyController.DefaultAllowPolicy` based
on the value of `Values.Proxy.DefaultInboundPolicy`, while in `inject`,
we will now actually add the annotation.
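In rough terms (a fragment; flag wiring and usage text assumed, not copied from the code), the change looks like:
```
// Bind the shared flag to the proxy value instead of the policy
// controller value.
flags.StringVar(
	&values.Proxy.DefaultInboundPolicy, // previously: &values.PolicyController.DefaultAllowPolicy
	"default-inbound-policy", defaultPolicy,
	"inbound policy to use when no authorization applies",
)

// linkerd install then derives the cluster-wide default from it:
values.PolicyController.DefaultAllowPolicy = values.Proxy.DefaultInboundPolicy
```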
This branch is based on PR #9195, which adds validation to reject
invalid values for `--default-inbound-policy`, rather than on `main`.
This is because the validation code added in that PR had to be moved
around a bit, since it now needs to validate the
`Values.Proxy.DefaultInboundPolicy` value rather than the
`Values.PolicyController.DefaultAllowPolicy` value. I thought using
#9195 as a base branch was better than basing this on `main` and then
having to resolve merge conflicts later. When that PR merges, this can
be rebased onto `main`.
Fixes #9168
This branch adds a check to `linkerd viz check --proxy` that checks if
the data plane namespace (or any namespace, if the check is run without
a namespace) has the `config.linkerd.io/default-inbound-policy: deny`
annotation, indicating that the `linkerd-viz` Prometheus instance may
not be authorized to scrape proxies in that namespace.
For example, after installing emojivoto with the default-deny
annotation:
```
linkerd-viz-data-plane
----------------------
√ data plane namespace exists
‼ prometheus is authorized to scrape data plane pods
prometheus may not be authorized to scrape the following pods:
* emojivoto/emoji-699d77c79-77w7f
* emojivoto/voting-55d76f4bcb-6lsml
* emojivoto/web-6c54d9554d-md2sd
* emojivoto/vote-bot-b57689ffb-fq8t5
see https://linkerd.io/2/checks/#l5d-viz-data-plane-prom-authz for hints
```
This check is a warning rather than fatal, because it's possible that
user-created policies may exist that authorize scrapes, which the check
is not currently aware of. We could, potentially, do more exhaustive
checking for whether _any_ policy would authorize scrapes, but that
would require reimplementing a bunch of policy logic inside the `viz`
extension CLI. For now, I settled on making the check a warning, and
having the error message say "prometheus _may_ not be authorized...".
The subsequent check that data plane metrics exist will fail if
Prometheus actually can't scrape anything.
In a subsequent branch, I'll add a `linkerd viz` subcommand for
generating policy resources to allow Prometheus to scrape the proxies in
a namespace; once this is implemented, the check will also check for the
existence of such a policy in that namespace. If the policy does not
exist, the check output will suggest using that command to generate a
policy to allow scrapes.
See #9150
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
* Allows RSA-signed trust anchors on the Linkerd CLI (#7771)
Linkerd currently forces using an ECDSA P-256 issuer certificate along
with an ECDSA trust anchor. Still, it's cryptographically valid to have
an ECDSA P-256 issuer certificate issued by an RSA-signed CA.
`CheckCertAlgoRequirements` checks whether the CA cert uses an ECDSA or
RSA 2048/4096 signing algorithm (see the sketch below).
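A sketch of the rule in Go standard-library terms (the exact implementation may differ):
```
package tls

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rsa"
	"crypto/x509"
)

// checkCertAlgo sketches the rule CheckCertAlgoRequirements enforces
// (exact logic assumed): ECDSA P-256, or RSA 2048/4096.
func checkCertAlgo(cert *x509.Certificate) bool {
	switch pub := cert.PublicKey.(type) {
	case *ecdsa.PublicKey:
		return pub.Curve == elliptic.P256()
	case *rsa.PublicKey:
		bits := pub.N.BitLen()
		return bits == 2048 || bits == 4096
	default:
		return false
	}
}
```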
Fixes #7771
Signed-off-by: Baeyens, Daniel <daniel.baeyens@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>