Pods with unusual DNS configurations may not be able to resolve the
control plane's domain names. We can avoid search path shenanigans by
adding a trailing dot to these names.
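As a rough sketch of what this looks like in the injected proxy environment (the variable names and ports here are quoted from memory and only illustrative), the control plane addresses become absolute names, so resolvers skip the search path entirely:
```yaml
# Illustrative proxy container env after this change; the trailing dot makes
# the names absolute so the resolver never applies the pod's search domains.
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
  value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
  value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
```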
Closes #5545.
This change moves all tap and tap-injector code into the viz directory.
The tap and tap-injector components now also use a new tap image, separating
these components from the controller image that they were previously part of. This
means the controller image no longer has any tap-related build dependencies.
Finally, the tap Protobuf has been separated from the metrics-api and moved into
its own `.proto` file and gen directory. This introduces a clear split between
metrics-api and tap Protobuf.
There is no change in behavior for the `viz tap` command.
### Reviewing
#### Docker images
All the bin directory scripts should be updated to build and load the tap image.
All the CI workflows should be updated to build and push the tap image.
#### Controller and pkg directories
This is primarily deletions. Most of the deleted code in this directory is now
in the tap directory of the Viz extension.
#### viz/tap
This is where all the tap-related code now lives. New files are mostly moved
from the controller and pkg directories. Imports have all been updated to point
at the new locations and Protobuf.
The Protobuf here is taken from metrics-api and contains all tap-related
Protobuf.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
(Background information)
In our company we check sops-encrypted Linkerd manifests into a GitHub repository,
and I came across the following problem.
---
Three dashes mark the start of a YAML document (or the end of its
directives).
https://yaml.org/spec/1.2/spec.html#id2800132
If there are only comments between `---`, the document is empty.
Consider a file that includes an empty document at the top:
```yaml
---
# foo
---
apiVersion: v1
kind: Namespace
metadata:
  name: foo
---
# bar
---
apiVersion: v1
kind: Namespace
metadata:
  name: bar
```
When we encrypt and decrypt it with [sops](https://github.com/mozilla/sops), the empty document will be
converted to `{}`.
```yaml
{}
---
apiVersion: v1
kind: Namespace
metadata:
  name: foo
---
apiVersion: v1
kind: Namespace
metadata:
  name: bar
```
This is invalid as a k8s manifest ([apiVersion not set, kind not set]).
```
error validating data: [apiVersion not set, kind not set]
```
---
I suspect this is (at least partly) a sops problem, but in any case I think this modification is harmless.
Thank you.
Signed-off-by: Takumi Sue <u630868b@alumni.osaka-u.ac.jp>
*Closes #5484*
### Changes
---
*Overview*:
* Update golden files and make necessary spec changes
* Update test files for viz
* Add v1 to healthcheck and uninstall
* Fix link-crd clusterDomain field validation
- To update to v1, I had to change the CRD schemas to be version-based (i.e. each version has to declare its own schema); see the sketch after this list. I noticed an error in the link-crd (`targetClusterDomain` was `targetDomainName`). additionalPrinterColumns is also a version-dependent field now.
- For `admissionregistration` resources I had to add an additional `admissionReviewVersions` field -- I included `v1` and `v1beta1`.
- In `healthcheck.go` and `resources.go` (used by `uninstall`) I had to make some changes to the client-go versions (i.e from `v1beta1` to `v1` for admissionreg and apiextension) so that we don't see any warning messages when uninstalling or when we do any install checks.
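A minimal sketch of what the version-scoped layout looks like in `apiextensions.k8s.io/v1` (abbreviated; not the full Link CRD spec):
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: links.multicluster.linkerd.io
spec:
  group: multicluster.linkerd.io
  names:
    kind: Link
    plural: links
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      # in v1, each version carries its own schema and printer columns
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                targetClusterDomain:   # was mistakenly `targetDomainName`
                  type: string
      additionalPrinterColumns:
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
```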
I tested against different CLI and k8s versions to have a bit more confidence in the changes (in addition to automated tests); hope the cases below are enough, if not let me know and I can test further.
### Tests
Linkerd local build CLI + k8s 1.19+
`install/check/mc-check/mc-install/mc-link/viz-install/viz-check/uninstall/`
```
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2+k3s1", GitCommit:"1d4adb0301b9a63ceec8cabb11b309e061f43d5f", GitTreeState:"clean", BuildDate:"2021-01-14T23:52:37Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
$ bin/linkerd version
Client version: git-b0fd2ec8
Server version: unavailable
$ bin/linkerd install | kubectl apply -f -
- no errors, no version warnings -
$ bin/linkerd check --expected-version git-b0fd2ec8
Status check results are :tick:
# MC
$ bin/linkerd mc install | k apply -f -
- no errors, no version warnings -
$ bin/linkerd mc check
Status check results are :tick:
$ bin/linkerd mc link foo | k apply -f - # test crd creation
# had a validation error here because the schema had targetDomainName instead of targetClusterDomain
# changed, rebuilt cli, re-installed mc, tried command again
secret/cluster-credentials-foo created
link.multicluster.linkerd.io/foo created
...
# VIZ
$ bin/linkerd viz install | k apply -f -
- no errors, no version warnings -
$ bin/linkerd viz check
- no errors, no version warnings -
Status check results are :tick:
$ bin/linkerd uninstall | k delete -f -
- no errors, no version warnings -
```
Linkerd local build CLI + k8s 1.17
`check-pre/install/mc-check/mc-install/mc-link/viz-install/viz-check`
```
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.17-rc1+k3s1", GitCommit:"e8c9484078bc59f2cd04f4018b095407758073f5", GitTreeState:"clean", BuildDate:"2021-01-14T06:20:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ bin/linkerd version
Client version: git-3d2d4df1 # made changes to link-crd after prev test case
Server version: unavailable
$ bin/linkerd check --pre --expected-version git-3d2d4df1
- no errors, no version warnings -
Status check results are :tick:
$ bin/linkerd install | k apply -f -
- no errors, no version warnings -
$ bin/linkerd check --expected-version git-3d2d4df1
- no errors, no version warnings -
Status check results are :tick:
$ bin/linkerd mc install | k apply -f -
- no errors, no version warnings -
$ bin/linkerd mc check
- no errors, no version warnings -
Status check results are :tick:
$ bin/linkerd mc link --cluster-name foo | k apply -f -
secret/cluster-credentials-foo created
link.multicluster.linkerd.io/foo created
# VIZ
$ bin/linkerd viz install | k apply -f -
- no errors, no version warnings -
$ bin/linkerd viz check
- no errors, no version warnings -
- hangs up indefinitely after linkerd-viz can talk to Kubernetes
```
Linkerd edge (21.1.3) CLI + k8s 1.17 (already installed)
`check`
```
$ linkerd version
Client version: edge-21.1.3
Server version: git-3d2d4df1
$ linkerd check
- no errors -
- warnings: mismatch between cli & control plane, control plane not up to date (both expected) -
Status check results are :tick:
```
Linkerd stable (2.9.2) CLI + k8s 1.17 (already installed)
`check/uninstall`
```
$ linkerd version
Client version: stable-2.9.2
Server version: git-3d2d4df1
$ linkerd check
× control plane ClusterRoles exist
missing ClusterRoles: linkerd-linkerd-tap
see https://linkerd.io/checks/#l5d-existence-cr for hints
Status check results are ×
# viz wasn't installed, hence the error, installing viz didn't help since
# the res is named `viz-tap` now
# moving to uninstall
$ linkerd uninstall | k delete -f -
- no warnings, no errors -
```
_Note_: I used `go test ./cli/cmd/... --generate` which is why there are so many changes 😨
Signed-off-by: Matei David <matei.david.35@gmail.com>
* Protobuf changes:
- Moved `healthcheck.proto` back from viz to `proto/common` as it is still used by the main `healthcheck.go` library (it was moved to viz by #5510).
- Extracted from `viz.proto` the IP-related types and put them in `/controller/gen/common/net` to be used by both the public and the viz APIs.
* Added chart templates for new viz linkerd-metrics-api pod
* Spin-off viz healthcheck:
- Created `viz/pkg/healthcheck/healthcheck.go` that wraps the original `pkg/healthcheck/healthcheck.go` while adding the `vizNamespace` and `vizAPIClient` fields which were removed from the core `healthcheck`. That way the core healthcheck doesn't have any dependencies on viz, and viz' healthcheck can now be used to retrieve viz api clients.
- The core and viz healthcheck libs are now abstracted out via the new `healthcheck.Runner` interface.
- Refactored the data plane checks so they don't rely on calling `ListPods`
- The checks in `viz/cmd/check.go` have been moved to `viz/pkg/healthcheck/healthcheck.go` as well, so `check.go`'s sole responsibility is dealing with command business. This command also now retrieves its viz api client through viz' healthcheck.
* Removed linkerd-controller dependency on Prometheus:
- Removed the `global.prometheusUrl` config in the core values.yml.
- Leave the Heartbeat's `-prometheus` flag hard-coded temporarily. TO-DO: have it automatically discover viz and pull Prometheus' endpoint (#5352).
* Moved observability gRPC from linkerd-controller to viz:
- Created a new gRPC server under `viz/metrics-api`, moving prometheus-dependent functions out of the core gRPC server and into it (same thing for the accompanying http server).
- Did the same for the `PublicAPIClient` (now called just `Client`) interface. The `VizAPIClient` interface disappears as it's enough to just rely on the viz `ApiClient` protobuf type.
- Moved the other files implementing the rest of the gRPC functions from `controller/api/public` to `viz/metrics-api` (`edge.go`, `stat_summary.go`, etc.).
- Also simplified some type names to avoid stuttering.
* Added linkerd-metrics-api bootstrap files. At the same time, we strip out of the public-api's `main.go` file the prometheus parameters and other no longer relevant bits.
* linkerd-web updates: it requires connecting with both the public-api and the viz api, so both addresses (and the viz namespace) are now provided as parameters to the container.
* CLI updates and other minor things:
- Changes to command files under `cli/cmd`:
- Updated `endpoints.go` according to new API interface name.
- Updated `version.go`, `dashboard` and `uninstall.go` to pull the viz namespace dynamically.
- Changes to command files under `viz/cmd`:
- `edges.go`, `routes.go`, `stat.go` and `top.go`: point to dependencies that were moved from public-api to viz.
- Other changes to have tests pass:
- Added `metrics-api` to list of docker images to build in actions workflows.
- In `bin/fmt` exclude protobuf generated files instead of entire directories because directories could contain both generated and non-generated code (case in point: `viz/metrics-api`).
* Add retry to 'tap API service is running' check
* mc check shouldn't err when viz is not available. Also properly set the logger in multicluster/cmd/root.go so that it displays messages when --verbose is used
* Introduce OpenAPIV3 validation for CRDs
* Add validation to link crd
* Add validation to sp using kube-gen
* Add openapiv3 under schema fields in specific versions
* Modify fields to rid spec of yaml errors
* Add top level validation for all three CRDs
Signed-off-by: Matei David <matei.david.35@gmail.com>
## What this changes
This adds a tap-injector component to the `linkerd-viz` extension which is
responsible for adding the tap service name environment variable to the Linkerd
proxy container.
If a pod does not have a Linkerd proxy, no action is taken. If tap is disabled
via annotation on the pod or the namespace, no action is taken.
This also removes the environment variable for explicitly disabling tap. Tap
status for a proxy is now determined only by the presence or absence of the tap
service name environment variable.
Closes #5326
## How it changes
### tap-injector
The tap-injector component determines if `LINKERD2_PROXY_TAP_SVC_NAME` should be
added to a pod's Linkerd proxy container environment. If the pod satisfies the
following, it is added:
- The pod has a Linkerd proxy container
- The pod has not already been mutated
- Tap is not disabled via annotation on the pod or the pod's namespace
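When those conditions hold, the mutation is roughly the following (the service-name value is illustrative):
```yaml
# Sketch of the patched proxy container in the pod spec
containers:
  - name: linkerd-proxy
    env:
      - name: LINKERD2_PROXY_TAP_SVC_NAME
        value: tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
```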
### LINKERD2_PROXY_TAP_DISABLED
Now that tap is an extension of Linkerd and not a core component, it no longer
made sense to explicitly enable or disable tap through this Linkerd proxy
environment variable. The status of tap is now determined only by whether the
tap-injector adds the `LINKERD2_PROXY_TAP_SVC_NAME` environment variable.
### controller image
The tap-injector has been added to the controller image's set of startup
commands, which determine what the container does in the cluster.
As a follow-up, I think splitting out the `tap` and `tap-injector` commands from
the controller image into a linkerd-viz image (or something like that) makes
sense.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The last viz refactoring removed support for modifying the k8s resources
used by the proxies injected into the control plane components (values
like `tapProxyResources`, `prometheus.proxy.resources`, etc).
This adds them back, using a consistent naming: `tap.proxy.resources`,
`dashboard.proxy.resources`, etc.
Also fixes the tap helm template that was making reference to
`.Values.tapResources` instead of `.Values.tap.resources`.
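A sketch of the restored values under the consistent naming (the numbers are illustrative):
```yaml
# viz chart values (excerpt): per-component proxy resources
tap:
  proxy:
    resources:
      cpu:
        limit: 100m
      memory:
        limit: 250Mi
dashboard:
  proxy:
    resources:
      cpu:
        limit: 100m
```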
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The container-image `ghcr.io/linkerd/cni-plugin:stable-2.9.1` does not contain the `kill` command as an executable. Instead, it is available as a shell built-in. In its current state, Kubernetes emits error events whenever linkerd2-cni pods are terminated because the `kill` command can not be found.
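As a generic illustration (not the exact chart change): an exec-form command fails because there is no `kill` binary on the image, while wrapping it in a shell uses the built-in:
```yaml
# Hypothetical lifecycle hook sketch
lifecycle:
  preStop:
    exec:
      # command: ["kill", "-s", "TERM", "1"]          # fails: no kill executable in the image
      command: ["/bin/sh", "-c", "kill -s TERM 1"]     # works: kill is a shell built-in
```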
Signed-off-by: Mitch Hulscher <mitch.hulscher@lib.io>
* viz: move some components into linkerd-viz
This branch moves the grafana, prometheus, web, and tap components
into a new viz chart, following the same extension model that
multi-cluster and jaeger follow.
The components in viz are not injected during install time, and
will go through the injector. The `viz install` does not have any
CLI flags to customize the install directly but instead follows the Helm
way of customization by using flags such as
`set`, `set-string`, `values`, `set-files`.
**Changes Include**
- Move `grafana`, `prometheus`, `web`, `tap` templates into viz extension.
- Remove all add-on related charts, logic and tests w.r.t CLI & Helm.
- Clean up `linkerd2/values.go` & `linkerd2/values.yaml` to not contain
fields related to viz components.
- Update `linkerd check` Healthchecks to not check for viz components.
- Create a new top level `viz` directory with CLI logic and Helm charts.
- Clean fields in the `viz/Values.yaml` to be in the `<component>.<property>`
model. Ex: `prometheus.resources`, `dashboard.image.tag`, etc so that it is
consistent everywhere.
**Testing**
```bash
# Install the Core Linkerd Installation
./bin/linkerd install | k apply -f -
# Wait for the proxy-injector to be ready
# Install the Viz Extension
./bin/linkerd cli viz install | k apply -f -
# Customized Install
./bin/linkerd cli viz install --set prometheus.enabled=false | k apply -f -
```
What is not included in this PR:
- Move of Controller from core install into the viz extension.
- Simplification and refactoring of the core chart i.e removing `.global`, etc.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
## Summary
This changes the destination service to start indicating whether a profile is an
opaque protocol or not.
Currently, profiles returned by the destination service are built by chaining
together updates from watching Profile and Traffic Split resources.
With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.
## Details
Watching updates to Profiles and Traffic Splits is straightforward: we watch
those resources, and if an update occurs on one associated with a service we
care about, the update is passed through.
For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.
### When host is a Pod IP
When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, we indicate that the
profile is an opaque protocol when the requested port is in the annotation.
We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
### When host is a Service
When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.
Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.
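For reference, the annotation the destination service now watches looks like this (the port numbers are illustrative):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
  annotations:
    config.linkerd.io/opaque-ports: "3306,11211"
```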
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The linkerd-cni helm chart is missing tolerations on the daemonset. This
prevents the linkerd-cni daemonset from being installed on all intended
nodes.
We use the same template partial as used in the main linkerd helm chart
to add tolerations if specified to the linkerd-cni daemonset spec.
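A minimal sketch of the values this enables, assuming the chart exposes a top-level `tolerations` list like the main chart:
```yaml
# linkerd2-cni values (sketch)
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```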
Fixes #5368
Signed-off-by: Rishabh Jain <rishabh@onesignal.com>
* upgrades: make webhooks restart if TLS creds are updated
Fixes#5231
Currently, we do not re-use the TLS certs during upgrades, which
means that the secrets are updated while the webhooks are still
paired with the older ones, causing the webhook requests to fail.
This can be solved by restarting the webhooks whenever there
is a change in the certs. This is done by storing the hash
of the `*-rbac` file, which contains the secrets, so that the
pod templates change whenever the certs are updated, forcing
a restart.
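The underlying Helm pattern is a checksum annotation on the pod template, roughly as follows (the template file name is illustrative):
```yaml
# Sketch: hashing the rendered *-rbac template into a pod annotation means any
# cert change alters the pod template, which forces a rollout of the webhook.
template:
  metadata:
    annotations:
      checksum/rbac: {{ include (print $.Template.BasePath "/proxy-injector-rbac.yaml") . | sha256sum }}
```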
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When testing the `linkerd2-cni` chart with `ct`, it flags up usage
of some deprecated apiVersions.
This PR aligns the RBAC API group across all resources in the chart.
---
Signed-off-by: Simon Weald <glitchcrab-github@simonweald.com>
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:
* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests
We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension. To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.
Note that for now, only the control plane components send traces; the proxies in the control plane do not. This is because the linkerd-jaeger injector is not yet available. However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.
I tested this by doing the following:
1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #5257
This branch moves mc charts and CLI-level code to a new
top-level directory. None of the logic is changed.
Also, moves some common types into `/pkg` so that they
are accessible both to the main cli and extensions.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Add automatic readme generation for charts
The current readme for each chart is generated
manually and doesn't contain all the information available.
Utilize helm-docs to automatically fill out the README.md
for each helm chart by pulling metadata from values.yml.
Fixes #4156
Co-authored-by: GMarkfjard <gabma047@student.liu.se>
This upgrades both the proxy-init image itself, and the go dependency on
proxy-init as a library, which fixes CNI in k3s and any host using
binaries coming from BusyBox, where `nsenter` has an
issue parsing arguments (see rancher/k3s#1434).
Per #5165, Kubernetes does not necessarily limit the proxy's access to
cores via `cgroups` when a CPU limit is set. As of #5168, the proxy now
supports a `LINKERD2_PROXY_CORES` environment configuration that
augments CPU detection from the host operating system.
This change modifies the proxy injector to ensure that this environment
is configured from the `Values.proxy.cores` Helm value, the
`config.linkerd.io/proxy-cpu-limit` annotation, and the `--proxy-cpu-limit`
install flag.
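For example (a sketch, assuming fractional limits are rounded up to a whole core count):
```yaml
# Workload annotation (input)
metadata:
  annotations:
    config.linkerd.io/proxy-cpu-limit: "1500m"
---
# Injected proxy container env (result)
env:
  - name: LINKERD2_PROXY_CORES
    value: "2"
```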
As discussed in #5167 & #5169, Kubernetes CPU limits are not necessarily
discoverable from within the pod. This means that the control plane
processes may allocate far more threads than can actually be used by the
process given its process limits.
This change removes the default CPU limits for all control plane
components. CPU limits may still be set via Helm configuration.
Now that the proxy can use more than one core, this behavior should be
enabled by default, even in HA mode.
This change modifies the default HA helm values to unset the cpu limit
for proxy containers.
* charts: Do not store .component in linkerd-config
This removes the `.component` fields from `Values.go` and also prevents them from being emitted into `linkerd-config` by attaching them to a temporary variable during injection.
This also simplifies the inbound and outbound skip ports Helm logic and adds quotes to them.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* docs: Update external prom and grafana readme
Update `Values.yaml` to be clearer about reverse proxy
configuration with external grafana instances.
Also, add `global.prometheusUrl` and `global.grafanaUrl` to the charts'
`README`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Restrict controlPlaneTracing field only to control plane components
Previously, `global.controlPlaneTracing` was not available during
injection and thus did not affect it.
This commit creates a new method which checks whether controlPlaneTracing is
enabled and sets the defaults if it is. This is done on the
duplicate values, preventing the field from being propagated into
`linkerd-config`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Fixes #5118
This PR adds a new supported value for the `linkerd.io/inject` annotation. In addition to `enabled` and `disabled`, this annotation may now be set to `ingress`. This functions identically to `enabled` but it also causes the `LINKERD2_PROXY_INGRESS_MODE="true"` environment variable to be set on the proxy. This causes the proxy to operate in ingress mode as described in #5118
With this set, ingresses are able to properly load service profiles based on the l5d-dst-override header.
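For example:
```yaml
# Sketch: opting a workload into ingress mode via the annotation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress-controller   # illustrative name
spec:
  template:
    metadata:
      annotations:
        # injects the proxy with LINKERD2_PROXY_INGRESS_MODE="true"
        linkerd.io/inject: ingress
```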
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #5098
When setting up multicluster, a target cluster may wish to create multiple service accounts to be used by source clusters' service mirrors. This allows the target cluster to individually revoke access to each of the source clusters. When using the Linkerd CLI, this can be accomplished by running the `linkerd multicluster allow` command multiple times to create multiple service accounts. However, there is no analogous workflow when installing with Helm.
We update the Helm templates to support interpreting the `remoteMirrorServiceAccountName` value as either a single string or a list of strings. In the case where it is a list, we create a service account and associated RBAC for each entry in the list.
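So both of these forms are now accepted (the account names are illustrative):
```yaml
# a single service account, as before
remoteMirrorServiceAccountName: linkerd-service-mirror-remote-access
---
# or a list, one account per source cluster
remoteMirrorServiceAccountName:
  - cluster-a-mirror
  - cluster-b-mirror
```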
Signed-off-by: Alex Leong <alex@buoyant.io>
There is no longer a proxy config `DESTINATION_GET_NETWORKS`. Instead of
reflecting this implementation in our values.yaml, this changes this
variable to the more general `clusterNetworks` to emphasize its
similarity to `clusterDomain` for the purposes of discovery.
The proxy no longer honors DESTINATION_GET variables, as profile lookups
inform when endpoint resolution is performed. Also, there is no longer
a router capacity limit.
As described in #5105, it's not currently possible to set the proxy log
level to `off`. The proxy injector's template does not quote the log
level value, and so the `off` value is handled as `false`. Thanks, YAML.
This change updates the proxy template to use helm's `quote` function
throughout, replacing manually quoted values and fixing the quoting for
the log level value.
We also remove the default logFormat value, as the default is specified
in values.yaml.
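The fix follows this pattern in the proxy partial (a simplified sketch; the exact values path may differ):
```yaml
# Before: unquoted, so `off` is parsed as the YAML boolean false
#   value: {{ .Values.global.proxy.logLevel }}
# After: always render the value as a string
- name: LINKERD2_PROXY_LOG
  value: {{ .Values.global.proxy.logLevel | quote }}
```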
Currently the tracing deployments do not start on clusters where
restricted PodSecurityPolicies are enforced.
This PR adds the subchart's ServiceAccounts to the `linkerd-psp`
RoleBinding, thereby allowing the deployments to be satisfied.
Signed-off-by: Simon Weald <glitchcrab-github@simonweald.com>
It appears that Amazon can use the `100.64.0.0/10` network, which is
technically private, for a cluster's Pod network.
Wikipedia describes the network as:
> Shared address space for communications between a service provider
> and its subscribers when using a carrier-grade NAT.
In order to avoid requiring additional configuration on EKS clusters, we
should permit discovery for this network by default.
The proxy has a default, hardcoded set of ports on which it doesn't do
protocol detection (25, 587, 3306 -- all of which are server-first
protocols). In a recent change, this default set was removed from
the outbound proxy, since there was no way to configure it to anything
other than the default set. I had thought that there was a default set
applied to proxy-init, but this appears to not be the case.
This change adds these ports to the default Helm values to restore the
prior behavior.
I have also elected to include 443 in this set, as it is generally our
recommendation to avoid proxying HTTPS traffic, since the proxy provides
very little value on these connections today.
Additionally, the memcached port 11211 is skipped by default, as clients
do not issue any sort of preamble that is immediately detectable.
These defaults may change in the future, but seem like good choices for
the 2.9 release.
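A sketch of the resulting defaults, assuming they land under the `proxyInit` ignore-port values (the key names are an assumption):
```yaml
# values.yaml excerpt (sketch)
proxyInit:
  ignoreInboundPorts: "25,443,587,3306,11211"
  ignoreOutboundPorts: "25,443,587,3306,11211"
```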
adding loadBalancerIP to linkerd2-multicluster chart
Sometimes you need to tell the gateway service to pick up / request a specific IP from the LB,
e.g. when you talk to another cluster that has a firewall in front of it and does not permit access from random IPs.
**Solution**
Minor change in the chart for Multicluster.
**Validation**
Example in GKE: register a static IP and note it. Then run
`helm install linkerd-mc linkerd2/linkerd2-multicluster --set loadBalancerIP="<IP>"`.
Your gateway service will come up with the IP you have given it.
If you don't set the parameter, the LB will give out a random IP.
If you don't have a cluster, look at the YAML produced by `helm template ...`
and check whether `loadBalancerIP: <IP>` is there:
```yaml
apiVersion: v1
kind: Service
# ...
spec:
  selector:
    app: linkerd-gateway
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
```
Signed-off-by: Markus Bettsteller <markus@bettsteller.de>
This adds the `podAnnotations` and `podLabels` values in `values.yml` for adding custom annotations/labels to all the control plane pods.
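For example:
```yaml
# values.yml sketch: applied to every control plane pod
podAnnotations:
  co.elastic.logs/enabled: "true"   # illustrative annotation
podLabels:
  team: platform                    # illustrative label
```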
Closes #5025
Signed-off-by: Raphael Taylor-Davies <r.taylordavies@googlemail.com>
This PR updates the injection logic (both CLI and proxy-injector)
to use the `Values` struct instead of the protobuf Config, as part of our move
toward removing the protobuf.
This does not touch any of the flags or install-related code.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Alex Leong <alex@buoyant.io>
* Remove dependency of linkerd-config for most control plane components
This PR removes the control plane components' dependency on `linkerd-config`
by passing all of that information through CLI
flags. As most of these components require only a couple of flags, passing
them as flags is more helpful, since flag updates trigger a
rollout, unlike a ConfigMap update.
This does not update the proxy-injector as it needs a lot more data
and mounting `linkerd-config` is better.