* viz: move some components into linkerd-viz
This branch moves the grafana, prometheus, web, and tap components
into a new viz chart, following the same extension model that
multicluster and jaeger use.
The viz components are no longer injected at install time; they go
through the proxy injector instead. `viz install` does not expose any
CLI flags to customize the install directly, but instead follows the Helm
way of customization through flags such as
`set`, `set-string`, `values`, and `set-files`.
**Changes Include**
- Move `grafana`, `prometheus`, `web`, `tap` templates into viz extension.
- Remove all add-on related charts, logic and tests w.r.t CLI & Helm.
- Clean up `linkerd2/values.go` & `linkerd2/values.yaml` to not contain
fields related to viz components.
- Update `linkerd check` Healthchecks to not check for viz components.
- Create a new top level `viz` directory with CLI logic and Helm charts.
- Restructure fields in `viz/Values.yaml` to follow the `<component>.<property>`
model, e.g. `prometheus.resources`, `dashboard.image.tag`, so that naming is
consistent everywhere.
**Testing**
```bash
# Install core Linkerd
./bin/linkerd install | k apply -f -
# Wait for the proxy-injector to be ready
# Install the Viz Extension
./bin/linkerd viz install | k apply -f -
# Customized install
./bin/linkerd viz install --set prometheus.enabled=false | k apply -f -
```
What is not included in this PR:
- Moving the controller from the core install into the viz extension.
- Simplification and refactoring of the core chart, i.e. removing `.global`, etc.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
## What
This change moves the `linkerd check --multicluster` functionality under its
own multicluster subcommand: `linkerd multicluster check`.
There should be no functional changes as a result of this change. `linkerd
check` no longer checks for anything multicluster related and the
`--multicluster` flag has been removed.
## Why
Closes#5208
The bulk of these changes are moving all the multicluster checks from
`pkg/healthcheck` into the multicluster package.
Doing this completely separates it from core Linkerd. It still uses
`pkg/healthcheck` when possible, but anything that is used only by `multicluster
check` has been moved.
**Note that the `kubernetes-api` and `linkerd-existence` checks are still run.**
These checks are required for setting up the Linkerd health checker. They set
the health checker's `kubeAPI`, `linkerdConfig`, and `apiClient` fields.
These could be set manually so that the only check the user sees is
`linkerd-multicluster`, but I chose not to do this.
If any of those setup functions errored, it would just tell the user to run
`linkerd check` and ensure the installation is correct. I find the error
handling better when these required checks are included, since they should be
run in the first place.
## How to test
Installing Linkerd and multicluster should result in a basic check output:
```
$ bin/linkerd install |kubectl apply -f -
..
$ bin/linkerd check
..
$ bin/linkerd multicluster install |kubectl apply -f -
..
$ bin/linkerd multicluster check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-multicluster
--------------------
√ Link CRD exists
Status check results are √
```
After linking a cluster:
```
$ bin/linkerd multicluster check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-multicluster
--------------------
√ Link CRD exists
√ Link resources are valid
* k3d-y
√ remote cluster access credentials are valid
* k3d-y
√ clusters share trust anchors
* k3d-y
√ service mirror controller has required permissions
* k3d-y
√ service mirror controllers are running
* k3d-y
× all gateway mirrors are healthy
probe-gateway-k3d-y.linkerd-multicluster mirrored from cluster [k3d-y] has no endpoints
see https://linkerd.io/checks/#l5d-multicluster-gateways-endpoints for hints
Status check results are ×
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Fixes#5385 (second bug in there)
Added the missing label `linkerd.io/control-plane-ns=linkerd`, which all the
control plane resources must have and which is passed to `kubectl apply
--prune`.
## Summary
This changes the destination service to start indicating whether a profile is an
opaque protocol or not.
Currently, profiles returned by the destination service are built by chaining
together updates coming from watching Profile and Traffic Split updates.
With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.
## Details
Watching updates to Profiles and Traffic Splits is straightforward: we watch
those resources, and if an update occurs on one associated with a service we
care about, then the update is passed through.
For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.
### When host is a Pod IP
When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, we mark the profile as an
opaque protocol when the requested port is listed in the annotation.
We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
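For illustration, a minimal sketch of that lookup order, assuming a comma-separated `config.linkerd.io/opaque-ports` annotation (the annotation key and helper here are illustrative, not the exact destination-service code):
```go
package destination

import (
	"strconv"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// opaquePortsAnnotation is assumed here for illustration.
const opaquePortsAnnotation = "config.linkerd.io/opaque-ports"

// isOpaquePort checks the pod's annotation first and falls back to the
// namespace's, then reports whether the requested port is listed.
func isOpaquePort(pod *corev1.Pod, ns *corev1.Namespace, port uint32) bool {
	annotation, ok := pod.Annotations[opaquePortsAnnotation]
	if !ok {
		annotation, ok = ns.Annotations[opaquePortsAnnotation]
	}
	if !ok {
		return false
	}
	for _, p := range strings.Split(annotation, ",") {
		v, err := strconv.ParseUint(strings.TrimSpace(p), 10, 32)
		if err == nil && uint32(v) == port {
			return true
		}
	}
	return false
}
```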
### When host is a Service
When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.
Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Fixes#5385
## The problems
- `linkerd install --ha` isn't honoring flags
- `linkerd upgrade --ha` is overriding existing configs silently or failing with an error
- *Upgrading HA instances from before 2.9 to version 2.9.1 results in configs being overridden silently, or the upgrade fails with an error*
## The cause
The change in #5358 attempted to fix `linkerd install --ha` only applying some of the `values-ha.yaml` defaults, by calling `charts.NewValues(true)` and merging that with the values built from `values.yaml` and overridden by the flags. It turns out the `charts.NewValues()` implementation was itself merging against `values.yaml`, and as a result any flag was getting overridden by its default.
This also happened when doing `linkerd upgrade --ha` on an existing instance, which could silently override settings, or fail loudly, for example when upgrading a setup that has an external issuer (in this case the issuer cert can't be read during the upgrade and an error occurs, as described in #5385).
Finally, doing `linkerd upgrade` (no `--ha` flag) on an HA install from before 2.9 also results in configs getting overridden (silently or with an error), because in order to generate the `linkerd-config-overrides` secret, the original install flags are retrieved from `linkerd-config` via the `loadStoredValuesLegacy()` function, which effectively ends up performing a `linkerd upgrade` with all the flags used for `linkerd install` and falls into the same trap as above.
## The fix
In `values.go` the faulty merging logic is no longer used, so now `NewValues()` only returns the default values from `values.yaml` and doesn't require an argument anymore. It calls `readDefaults()`, which now only returns the appropriate values depending on whether we're on HA or not.
There's a new function `MergeHAValues()` that merges `values-ha.yaml` into the current values (it doesn't look into `values.yaml` anymore), which is only used when processing the `--ha` flag in `options.go`.
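A toy sketch of the resulting precedence (the names below are simplified stand-ins for the real `NewValues()`/`MergeHAValues()`, not their actual signatures):
```go
package main

import "fmt"

// Toy stand-ins for the chart values; the real types live in the charts pkg.
type Values struct {
	ControllerReplicas int
	LogLevel           string
}

// newValues mimics NewValues(): defaults come only from values.yaml.
func newValues() *Values { return &Values{ControllerReplicas: 1, LogLevel: "info"} }

// mergeHAValues mimics MergeHAValues(): it layers values-ha.yaml on top of
// the current values and no longer re-reads values.yaml.
func mergeHAValues(v *Values) { v.ControllerReplicas = 3 }

func main() {
	v := newValues()
	mergeHAValues(v)     // applied only when --ha is set
	v.LogLevel = "debug" // flags are applied last, so they are never clobbered
	fmt.Printf("%+v\n", v)
}
```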
## How to test
To replicate the issue, set a custom value and check that it's not applied:
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=info
```
## Followup
This wasn't caught because we don't have HA integration tests. Now that our test infra is based on k3d, it should be easy to add such a test using a cluster with multiple nodes. Either that, or issue `linkerd install --ha` with additional configs and compare the output against a golden file.
* jaeger: add check sub command
This adds a new `linkerd jaeger check` command that runs checks for the
jaeger extension, similar to the `linkerd check` cmd.
As jaeger is a separate package, it was a bit complex to make this work
since not all types and fields from the healthcheck pkg are public; helper
funcs were used to mitigate this.
This has the following changes:
- Adds a new `check.go` file under the jaeger extension pkg
- Moves some commonly needed funcs and types from `cli/cmd/check.go`
and `pkg/healthcheck/health.go` into
`pkg/healthcheck/healthcheck_output.go`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Add a `linkerd jaeger uninstall` command which prints the linkerd-jaeger extension resources so that they can be deleted. This is similar to the `linkerd uninstall` command.
```
> bin/linkerd jaeger uninstall | k delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-jaeger-linkerd-jaeger-proxy-mutator" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-jaeger-linkerd-jaeger-proxy-mutator" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-proxy-mutator-webhook-config" deleted
namespace "linkerd-jaeger" deleted
```
Signed-off-by: Alex Leong <alex@buoyant.io>
* upgrades: make webhooks restart if TLS creds are updated
Fixes#5231
Currently, we do not re-use the TLS certs during upgrades, which
means that the secrets are updated while the webhooks are still
paired with the older ones, causing the webhook requests to fail.
This is solved by restarting the webhooks whenever the certs
change: we store a hash of the `*-rbac` file, which contains the
secrets, in the pod template, so any update to the certs changes
the pod template and forces a restart.
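Conceptually, the restart is driven by a hash of the rendered `*-rbac` manifest placed on the webhook pod template; a rough sketch of the idea (the annotation key and plumbing are illustrative, not the exact chart code):
```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// rbacChecksum hashes the rendered *-rbac manifest, which embeds the TLS
// secrets; placing the result in the webhook deployment's pod-template
// annotations means any cert rotation changes the template and forces a
// restart of the webhook pods.
func rbacChecksum(renderedRBAC []byte) string {
	sum := sha256.Sum256(renderedRBAC)
	return hex.EncodeToString(sum[:])
}

func main() {
	manifest := []byte("...rendered proxy-injector rbac.yaml with tls.crt/tls.key...")
	fmt.Println("checksum/rbac:", rbacChecksum(manifest))
}
```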
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When testing the `linkerd2-cni` chart with `ct`, it flags the usage
of some deprecated apiVersions.
This PR aligns the RBAC API group across all resources in the chart.
---
Signed-off-by: Simon Weald <glitchcrab-github@simonweald.com>
* `linkerd install --ha` was only partially applying HA config
Fixes#5342
`values-ha.yaml` contains the specific config for HA, but only the proxy
resources and controller replicas settings were applied. This PR adds
EnablePodAntiAffinity, WebhookFailurePolicy and all the resource settings
for the other CP pods.
Also the `--controller-replicas` flag is moved after the HA flags so it
can override the HA settings.
Finally, some comments no longer relevant were removed.
## How to test
Perform `linkerd install --ha` and make sure the values in
`values-ha.yaml` are propagated correctly into the produced YAML.
## 2.9.1
After merging to `main`, this should be cherry-picked into the
`release/stable-2.9` branch.
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:
* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests
We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension. To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.
Note that for now, only the control plane components send traces; the proxies in the control plane do not. This is because the linkerd-jaeger injector is not yet available. However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.
I tested this by doing the following:
1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes#5257
This branch moves the mc charts and CLI-level code to a new
top-level directory. None of the logic is changed.
Also, moves some common types into `/pkg` so that they
are accessible to both the main CLI and extensions.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Add automatic readme generation for charts
The current readme for each chart is generated
manually and doesn't contain all the information available.
Utilize helm-docs to automatically fill out the README.md
for each Helm chart by pulling metadata from values.yaml.
Fixes#4156
Co-authored-by: GMarkfjard <gabma047@student.liu.se>
* extension: Add new jaeger binary
This branch adds a new jaeger binary project in the jaeger directory.
This follows the same logic as `linkerd install`. But as the
`linkerd install` VFS logic expects charts to be present in the `/charts`
directory, this command gets its own static pkg to generate its own
VFS for its chart.
This covers only the install part of the command.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes#4874
This branch upgrades the Helm SDK from v2 to v3 *without any functional
changes*, just replacing types with the newer APIs.
This should not affect our current support for Helm v2, as we did not
change any of the underlying templates (which work with Helm v2). This
works because we did not use any of the APIs that read the chart
metadata (which are the only ones changed from v2 to v3) and currently
manually load files and pass them into the SDK.
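For reference, a minimal sketch of loading chart files manually and handing them to the Helm v3 SDK (the file paths are illustrative):
```go
package main

import (
	"fmt"
	"os"

	"helm.sh/helm/v3/pkg/chart/loader"
)

func main() {
	// Load chart files ourselves and hand them to the v3 loader, mirroring
	// the "manually load files and pass into the SDK" approach above.
	var files []*loader.BufferedFile
	for _, name := range []string{"Chart.yaml", "values.yaml", "templates/identity.yaml"} {
		data, err := os.ReadFile("charts/linkerd2/" + name)
		if err != nil {
			panic(err)
		}
		files = append(files, &loader.BufferedFile{Name: name, Data: data})
	}

	chart, err := loader.LoadFiles(files)
	if err != nil {
		panic(err)
	}
	fmt.Println("loaded chart:", chart.Metadata.Name)
}
```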
This PR should provide a good starting point to adopt more of the newer Helm v3
APIs, including for the upgrade workflow, thus allowing us to make the
Linkerd CLI simpler.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
The CLI crashes if linkerd-config contains unexpected values.
Add a safe accessor that initializes an empty Global on the first
access. Refactor all accesses to use the newly introduced accessor (done
with gopls).
Add a test for linkerd-config data without Global.
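A sketch of the accessor pattern (the types and fields are simplified placeholders, not the actual config structs):
```go
package config

// Global and Values are simplified stand-ins for the linkerd-config types.
type Global struct {
	LinkerdNamespace string
}

type Values struct {
	Global *Global
}

// GetGlobal initializes an empty Global on first access, so callers can read
// fields safely even when the stored linkerd-config is missing that section.
func (v *Values) GetGlobal() *Global {
	if v.Global == nil {
		v.Global = &Global{}
	}
	return v.Global
}
```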
Fixes#5215
Co-authored-by: Itai Schwartz <yitai27@gmail.com>
Signed-off-by: Hod Bin Noon <bin.noon.hod@gmail.com>
As discussed in #5228, it is not correct for root and intermediate
certs to have a SAN. This PR updates the check so that the
intermediate issuer cert is not verified against the identity DNS name
(that verification checks the SAN rather than the CN, since the `verify`
func is used to verify leaf certs, not root and intermediate certs).
This PR also avoids setting a SAN field when generating certs in the
`install` command.
Fixes#5228
This upgrades both the proxy-init image itself, and the go dependency on
proxy-init as a library, which fixes CNI in k3s and any host using
binaries coming from BusyBox, where `nsenter` has an
issue parsing arguments (see rancher/k3s#1434).
Fixes#5191
The logs command adds an external dependency that we forked to make work,
but it does not fit within Linkerd's core set of responsibilities, so it
is being removed.
For capabilities like this, the Kubernetes plugin ecosystem has better,
well-maintained tools that can be used.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes#5190
`linkerd get` is not currently used and works only for pods, so it can be
removed, as per the issue. This branch removes the command and
also the associated unit and integration tests.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Per #5165, Kubernetes does not necessarily limit the proxy's access to
cores via `cgroups` when a CPU limit is set. As of #5168, the proxy now
supports a `LINKERD2_PROXY_CORES` environment configuration that
augments CPU detection from the host operating system.
This change modifies the proxy injector to ensure that this environment
is configured from the `Values.proxy.cores` Helm value, the
`config.linkerd.io/proxy-cpu-limit` annotation, and the `--proxy-cpu-limit`
install flag.
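As a rough illustration of the limit-to-cores conversion (whether the injector rounds fractional limits up exactly like this is an assumption of this sketch):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// coresFromLimit converts a Kubernetes CPU limit (e.g. "1500m") into a
// whole-core count suitable for LINKERD2_PROXY_CORES, rounding fractional
// limits up.
func coresFromLimit(limit string) int64 {
	q := resource.MustParse(limit)
	return (q.MilliValue() + 999) / 1000
}

func main() {
	for _, l := range []string{"250m", "1", "1500m"} {
		fmt.Printf("proxy-cpu-limit=%s -> LINKERD2_PROXY_CORES=%d\n", l, coresFromLimit(l))
	}
}
```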
As discussed in #5167 & #5169, Kubernetes CPU limits are not necessarily
discoverable from within the pod. This means that the control plane
processes may allocate far more threads than can actually be used by the
process given its process limits.
This change removes the default CPU limits for all control plane
components. CPU limits may still be set via Helm configuration.
Now that the proxy can use more than one core, this behavior should be
enabled by default, even in HA mode.
This change modifies the default HA helm values to unset the cpu limit
for proxy containers.
After the 2.9 multicluster refactoring, the only workload installed by
`linkerd mc install` is the nginx gateway, whose Docker image is
configured through the flags `--gateway-nginx-image` and
`--gateway-nginx-image-version`. Thus there's no longer a need for the
`--registry` flag there; it is, on the other hand, still used by `linkerd mc link`, which deploys the service mirror.
Currently, for legacy upgrades we fetch even external certs and
use them for upgrades, which contradicts the condition at
https://github.com/linkerd/linkerd2/blob/master/cli/cmd/options.go#L550
used with install, and thus causes errors.
Instead, we don't retrieve them during upgrades, and hence they don't get
stored into the config and secrets. This seems correct, as we do not want
to store certs in the config and use them with upgrades when they are
created externally.
This touches only the upgrade path, i.e. `fetchIssuers`, and does not
affect the retrieval of external certs for checks, etc.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* charts: Do not store .component in linkerd-config
This removes the `.component` fields from `Values.go` and also prevents them from being emitted into `linkerd-config` by assigning them to a temporary variable during injection.
This also simplifies the inbound and outbound skip-ports Helm logic and adds quotes to those values.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* cli: add `--ingress` flag to inject cmd
This PR adds a new inject flag called `--ingress` which, when enabled,
adds a new annotation, i.e. `linkerd.io/inject: ingress`.
In the `--manual` case the annotation is not applied; the env
variable is set directly instead.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
`linkerd mc link` wasn't properly setting the `gatewayAddresses` field
when the address had a `Hostname` field instead of `Ip`, as is the
case with EKS services of type LoadBalancer.
* Use errors.Is instead of checking underlying err messages
Fixes#5132
This PR replaces the usage of `strings.HasSuffix` with `errors.Is`
wherever error messages are being checked, so that the code is not
affected by changes in the underlying message. It also adds a string
const for the http2 response-body-closed error.
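For illustration, the general before/after shape of such a check (the sentinel error here is just an example, not the http2 error mentioned above):
```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

func main() {
	err := fmt.Errorf("reading response body: %w", io.ErrUnexpectedEOF)

	// Brittle: matching on message text breaks if the wrapped wording changes.
	fmt.Println(strings.HasSuffix(err.Error(), io.ErrUnexpectedEOF.Error()))

	// Robust: errors.Is unwraps the chain and compares against the sentinel.
	fmt.Println(errors.Is(err, io.ErrUnexpectedEOF))
}
```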
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes#5121
* cli: skip emitting warnings in Profile
Whenever the tapDuration completes, a warning occurs, which we now skip
emitting. This behavior looks like it changed in the latest
versions of the dependency.
* Use context.WithDeadline instead of client.Timeout
The usage of `client.Timeout` is not working correctly, causing `W1022
17:20:12.372780 19049 transport.go:260] Unable to cancel request for
promhttp.RoundTripperFunc` to be emitted by the Kubernetes client.
This is fixed by using `context.WithDeadline` and passing the resulting
context into the HTTP request.
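A minimal sketch of the pattern (the URL and timeout are placeholders):
```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Rather than relying on http.Client.Timeout, bound the request with a
	// deadline-carrying context so cancellation propagates correctly.
	ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(30*time.Second))
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:9090/metrics", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("read %d bytes before the deadline\n", len(body))
}
```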
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes#5118
This PR adds a new supported value for the `linkerd.io/inject` annotation. In addition to `enabled` and `disabled`, this annotation may now be set to `ingress`. This functions identically to `enabled` but it also causes the `LINKERD2_PROXY_INGRESS_MODE="true"` environment variable to be set on the proxy. This causes the proxy to operate in ingress mode as described in #5118
With this set, ingresses are able to properly load service profiles based on the l5d-dst-override header.
Signed-off-by: Alex Leong <alex@buoyant.io>
Followup to #5100
We had both the `controllerImageVersion` and `global.controllerImageVersion`
configs, but only the latter was taken into account in the chart
templates, so this change removes all references to the former.
There is no longer a proxy config `DESTINATION_GET_NETWORKS`. Instead of
reflecting this implementation detail in our values.yaml, this change renames
the variable to the more general `clusterNetworks` to emphasize its
similarity to `clusterDomain` for the purposes of discovery.
The proxy no longer honors DESTINATION_GET variables, as profile lookups
inform when endpoint resolution is performed. Also, there is no longer
a router capacity limit.
As described in #5105, it's not currently possible to set the proxy log
level to `off`. The proxy injector's template does not quote the log
level value, and so the `off` value is handled as `false`. Thanks, YAML.
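A quick demonstration of the failure mode with a YAML 1.1 parser (just an illustration of the quoting issue, not the injector's code):
```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// Unquoted "off" is a YAML 1.1 boolean, so the level is read as false.
	var unquoted map[string]interface{}
	if err := yaml.Unmarshal([]byte("logLevel: off"), &unquoted); err != nil {
		panic(err)
	}
	fmt.Printf("%T %v\n", unquoted["logLevel"], unquoted["logLevel"]) // bool false

	// Quoting preserves the literal string "off".
	var quoted map[string]interface{}
	if err := yaml.Unmarshal([]byte(`logLevel: "off"`), &quoted); err != nil {
		panic(err)
	}
	fmt.Printf("%T %v\n", quoted["logLevel"], quoted["logLevel"]) // string off
}
```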
This change updates the proxy template to use helm's `quote` function
throughout, replacing manually quoted values and fixing the quoting for
the log level value.
We also remove the default logFormat value, as the default is specified
in values.yaml.
Currently the tracing deployments do not start on clusters where
restricted PodSecurityPolicies are enforced.
This PR adds the subchart's ServiceAccounts to the `linkerd-psp`
RoleBinding, thereby allowing the deployments to start.
Signed-off-by: Simon Weald <glitchcrab-github@simonweald.com>
It appears that Amazon can use the `100.64.0.0/10` network, which is
technically private, for a cluster's Pod network.
Wikipedia describes the network as:
> Shared address space for communications between a service provider
> and its subscribers when using a carrier-grade NAT.
In order to avoid requiring additional configuration on EKS clusters, we
should permit discovery for this network by default.
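To make the range concrete, a tiny check that an example pod IP (hypothetical) falls inside this network:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// 100.64.0.0/10 is the RFC 6598 shared address space; a pod IP like the
	// example below falls inside it, so discovery must allow this network.
	_, cgnat, err := net.ParseCIDR("100.64.0.0/10")
	if err != nil {
		panic(err)
	}
	fmt.Println(cgnat.Contains(net.ParseIP("100.96.12.34"))) // true
}
```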
The proxy has a default, hardcoded set of ports on which it doesn't do
protocol detection (25, 587, 3306 -- all of which are server-first
protocols). In a recent change, this default set was removed from
the outbound proxy, since there was no way to configure it to anything
other than the default set. I had thought that there was a default set
applied to proxy-init, but this appears to not be the case.
This change adds these ports to the default Helm values to restore the
prior behavior.
I have also elected to include 443 in this set, as it is generally our
recommendation to avoid proxying HTTPS traffic, since the proxy provides
very little value on these connections today.
Additionally, the memcached port 11211 is skipped by default, as clients
do not issue any sort of preamble that is immediately detectable.
These defaults may change in the future, but seem like good choices for
the 2.9 release.
The TestUpgradeOverwriteRemoveAddonKeys test was not actually verifying that the fields which should be removed were actually removed. Thus it failed to catch an error in the test itself, where the `addon-overwrite` flag was spelled incorrectly and not properly registered.
We update the test to verify that the field is removed, and fix the test by correcting the spelling of the flag and properly registering it.
Signed-off-by: Alex Leong <alex@buoyant.io>