Commit Graph

679 Commits

Author SHA1 Message Date
Alex Leong 948f9a4ece
Update protoc (#6333)
Update protoc from 3.6.0 to 3.15.7

Signed-off-by: Alex Leong <alex@buoyant.io>
2021-06-21 16:37:57 -07:00
dependabot[bot] f4dacaf27f
Bump google.golang.org/protobuf from 1.24.0 to 1.26.0 (#6304)
* Bump google.golang.org/protobuf from 1.24.0 to 1.26.0

Bumps [google.golang.org/protobuf](https://github.com/protocolbuffers/protobuf-go) from 1.24.0 to 1.26.0.
- [Release notes](https://github.com/protocolbuffers/protobuf-go/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf-go/blob/master/release.bash)
- [Commits](https://github.com/protocolbuffers/protobuf-go/compare/v1.24.0...v1.26.0)

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update protobuf

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>

* Update go.sum

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-06-21 10:24:47 -07:00
dependabot[bot] 3bb1b6397d
Bump helm.sh/helm/v3 from 3.4.1 to 3.6.1 (#6286)
* Bump helm.sh/helm/v3 from 3.4.1 to 3.6.1

Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.4.1 to 3.6.1.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.4.1...v3.6.1)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
2021-06-18 09:34:29 -06:00
Alejandro Pedraza 705f4d9391
Add LINKERD2_PROXY_INBOUND_IPS env var to proxy container (#6270)
This is readily available through the downwardAPI via status.podIPs
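For illustration, a rough sketch of how such an env var can be built from the downward API using k8s.io/api types (a sketch only, not the injector's actual code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// inboundIPsEnv builds the env var, letting the downward API fill in the
// value from the pod's status.podIPs field at runtime.
func inboundIPsEnv() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "LINKERD2_PROXY_INBOUND_IPS",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				FieldPath: "status.podIPs",
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", inboundIPsEnv())
}
```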
2021-06-18 09:33:19 -05:00
Tarun Pothulapati 395cc3677e
sp: prevent `sp.Spec.Routes` from being null'ed (#6271)
## Context

Currently, whenever an `SP` is created with the `Spec.Routes` field not set from the [golang types](https://github.com/linkerd/linkerd2/blob/main/controller/gen/apis/serviceprofile/v1alpha2/types.go#L13), the k8s API rejects it with the following error

```bash
ServiceProfile.linkerd.io \"backend-svc.linkerd-smi-app.svc.cluster.local\" is invalid: spec.routes: Invalid value: \"null\": spec.routes in body must be of type array: \"null\"
```

This happens because Go automatically renders the field as `Routes: null` whenever it is marshaled into JSON. The k8s API server rejects this, as it expects that field to be an array.

[This is fixed in k8s >= 1.20](https://github.com/kubernetes/kubernetes/pull/95423) as non-nullable nulls are defaulted, and hence this error happens only in `<=1.19`.

## Problem

As `1.19` is a pretty recent version of k8s, many clusters still run older versions, and things like [smi-adaptor](https://github.com/linkerd/linkerd-smi/pull) should not have to manage and make sure `Spec.Routes` is not null all the time.

## Fix

This can easily be fixed by marking `Spec.Routes` as `omitempty` in its JSON tags, which means the field is omitted from the marshaled output whenever it is not set.

This means that the k8s API won't error out, as that field isn't set to anything invalid.
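For illustration, a minimal sketch of the effect of the tag change (simplified types, not the exact generated code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RouteSpec and ServiceProfileSpec are simplified stand-ins for the
// generated ServiceProfile types.
type RouteSpec struct {
	Name string `json:"name"`
}

type ServiceProfileSpec struct {
	// Without omitempty a nil slice marshals as "routes": null, which the
	// k8s API server (<= 1.19) rejects; with omitempty the field is dropped.
	Routes []*RouteSpec `json:"routes,omitempty"`
}

func main() {
	out, _ := json.Marshal(ServiceProfileSpec{})
	fmt.Println(string(out)) // prints {} rather than {"routes":null}
}
```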

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-06-17 12:08:32 +05:30
Josh Soref 0be792fadc
Spelling (#6215)
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).

The misspellings have been reported at 0d56327e6f (commitcomment-51603624)

The action reports that the changes in this PR would make it happy: 03a9c310aa

Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2021-06-07 15:16:59 -06:00
wangchenglong01 9ea66c8f73
Condition is always 'false' because 'err' is always 'nil' (#6218)
Remove unnecessary err check

Signed-off-by: Cookie Wang <wangchl01@inspur.com>
2021-06-04 14:57:00 +05:30
Tarun Pothulapati 432f90c4cf
destination: prefer `sp.dstOverrides` over `trafficsplit` (#6156)
This updates the destination to prefer `serviceprofiles.dstOverrides`
over `trafficsplits`. This is important so that a ServiceProfile takes
precedence over TrafficSplits when both are
present.

This also makes integration testing the `smi-adaptor` easier.

This also adds unit tests in the `traffic_split_adaptor` to check
for the same.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-05-27 17:34:54 +05:30
Kevin Leimkuhler a12e2226b4
Separate protocol hint setting from H2 upgrades (#6150)
While uncommon, if H2 upgrades are disabled it's possible for an opaque workload
to not have its hint.OpaqueTransport field set in its destination profile
response.

This changes the H2 upgrade enabled check to be specific for setting the
hint.Protocol while allowing hint.OpaqueTransport to be set independent of
that value.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Co-authored-by: Oliver Gould <ver@buoyant.io>
2021-05-24 13:15:54 -07:00
Oliver Gould da6d8e5272
Update Go to 1.16.4 (#6170)
Go 1.16.4 includes a fix for a denial-of-service in net/http: golang/go#45710

Go's error file-line formatting changed in 1.16.3, so this change
updates tests to only do suffix matching on these error strings.
2021-05-24 11:57:46 -07:00
Oliver Gould f2eb3162d1
destination: Check port bounds (#6143)
CodeQL analysis flags that our use of `strconv.Atoi` is potentially
incorrect. More information [here][1].

This change addresses this by explicitly checking the bounds of the port
integer before casting it from `int` to `uint32`.

[1]: https://codeql.github.com/codeql-query-help/go/go-incorrect-integer-conversion/
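A minimal sketch of this kind of bounds check (illustrative only, not the exact code in the destination controller):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// parsePort validates that the string is an integer in the valid port range
// before casting it to uint32, avoiding the incorrect integer conversion
// pattern flagged by CodeQL.
func parsePort(s string) (uint32, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, err
	}
	if p < 0 || p > math.MaxUint16 {
		return 0, fmt.Errorf("port %d out of range [0, %d]", p, math.MaxUint16)
	}
	return uint32(p), nil
}

func main() {
	fmt.Println(parsePort("8080"))  // 8080 <nil>
	fmt.Println(parsePort("70000")) // 0 port 70000 out of range [0, 65535]
}
```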
2021-05-24 10:20:47 -07:00
Dennis Adjei-Baah 9dd6524467
Bump proxy init version to v1.3.12 (#6141)
* Skip configuring firewall if rules exist

This change fixes an issue where `proxy-init` will fail if
`PROXY_INIT_*` chains already exist in the pod's iptables. This then
causes the pod to never start, because proxy-init keeps exiting
with a non-zero exit code.

In this change, we capture the output of the `iptables-save` command and
then check to see if the output contains the `PROXY_INIT_*` chains. If
they do, we skip the firewall configuration and log a warning stating that
the chains already exist.
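A rough Go sketch of the check (illustrative only, not the actual proxy-init code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rulesExist runs iptables-save and reports whether any PROXY_INIT_* chains
// are already present in the pod's iptables.
func rulesExist() (bool, error) {
	out, err := exec.Command("iptables-save").CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "PROXY_INIT_"), nil
}

func main() {
	exists, err := rulesExist()
	if err != nil {
		fmt.Println("failed to read iptables state:", err)
		return
	}
	if exists {
		fmt.Println("skipping firewall configuration: PROXY_INIT_* chains already exist")
	}
}
```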

Fixes #5786

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-05-19 17:20:48 -07:00
Tarun Pothulapati 5f2d969cfa
Support Traffic Splitting through `ServiceProfile.dstOverrides` (#6060)
* Support Traffic Splitting through `ServiceProfile.dstOverrides`

This commit
- updates the destination logic to prevent `ServiceProfile.dstOverrides` from being overridden when a `TS` is absent and no `dstOverrides` are set. This means that any `serviceprofiles.dstOverrides` set by the user are correctly propagated to the proxy, thus making traffic splitting through service profiles work.
- This also updates the `profile_translator.toDstOverrides` to use the port from `GetProfiles` when there is no port in `dstOverrides.authority`


After these changes, the following `ServiceProfile`, which performs
traffic splitting, should work.

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: backend-svc.linkerd-trafficsplit-test-sp.svc.cluster.local
spec:
  dstOverrides:
  - authority: "backend-svc.linkerd-trafficsplit-test-sp.svc.cluster.local.:8080"
    weight: 500m
  - authority: "failing-svc.linkerd-trafficsplit-test-sp.svc.cluster.local.:8080"
    weight: 500m
```


This PR also adds an integration test, `TestTrafficSplitCliWithSP`, which checks
traffic splitting through Service Profiles. This integration test follows the same pattern
as the other traffic split integration tests, but instead we perform `linkerd viz stat` on the
client and check that the expected server objects are present, as `linkerd viz stat ts`
does not _yet_ work with Service Profiles.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-05-03 23:39:56 +05:30
wangchenglong01 89c4126d90
spSharedInformers, tsSharedInformers may be nil; calling the method will cause a panic (#6030)
* spSharedInformers, tsSharedInformers may be nil; calling the method will cause a panic

Signed-off-by: Cookie Wang <wangchl01@inspur.com>

* spSharedInformers, tsSharedInformers may be nil; calling the method will cause a panic

Signed-off-by: Cookie Wang <wangchl01@inspur.com>
2021-04-22 08:33:03 -04:00
Kevin Leimkuhler 1071ec2e77
Add support for awaiting proxy readiness (#5967)
### What

This change adds the `config.linkerd.io/proxy-await` annotation which when set will delay application container start until the proxy is ready. This allows users to force application containers to wait for the proxy container to be ready without modifying the application's Docker image. This is different from the current use-case of [linkerd-await](https://github.com/olix0r/linkerd-await) which does require modifying the image.

---

To support this, Linkerd is using the fact that containers are started in the order that they appear in `spec.containers`. If `linkerd-proxy` is the first container, then it will be started first.

Kubernetes will start each container without waiting on the result of the previous container. However, if a container has a hook that is executed immediately after container creation, then Kubernetes will wait on the result of that hook before creating the next container. Using a `PostStart` hook in the `linkerd-proxy` container, the `linkerd-await` binary can be run and force Kubernetes to pause container creation until the proxy is ready. Once `linkerd-await` completes, the container hook completes and the application container is created.

Adding the `config.linkerd.io/proxy-await` annotation to a pod's metadata results in the `linkerd-proxy` container being the first container, as well as having the container hook:

```yaml
postStart:
  exec:
    command:
    - /usr/lib/linkerd/linkerd-await
```

---

### Update after draft

There has been some additional discussion both off GitHub as well as on this PR (specifically with @electrical).

First, we decided that this feature should be enabled by default. The reason for this is that more often than not, this feature will prevent start-up ordering issues from occurring without having any negative effects on the application. Additionally, this will be a part of edge releases up until 2.11 (the next stable release), and having it enabled by default will allow us to check that it does not often conflict with applications. Once we are closer to 2.11, we'll be able to determine if this should be disabled by default because it causes more issues than it prevents.

Second, this feature will remain configurable; if disabled, then upon injection the proxy container will not be made the first container in the pod manifest. This is important for the reasons discussed with @electrical about tools that make assumptions about app containers being the first container. For example, Rancher defaults to showing overview pages for the `0` index container, and if the proxy container was always `0` then this would defeat the purpose of the overview page.

### Testing

To test this I used the `sleep.sh` script and changed `Dockerfile-proxy` to use it as its `ENTRYPOINT`. This forces the container to sleep for 20 seconds before starting the proxy.

---

`sleep.sh`:

```bash
#!/bin/bash
echo "sleeping..."
sleep 20
/usr/bin/linkerd2-proxy-run
```

`Dockerfile-proxy`:

```dockerfile
...
COPY sleep.sh /sleep.sh
RUN ["chmod", "+x", "/sleep.sh"]
ENTRYPOINT ["/sleep.sh"]
```

---

```bash
# Build and install with the above changes
$ bin/docker-build
...
$ bin/image-load --k3d
...
$ bin/linkerd install |kubectl apply -f -
```

Annotate the `emoji` deployment so that it's the only workload that should wait for its proxy to be ready, and inject it:

```bash
cat emojivoto.yaml |bin/linkerd inject - |kubectl apply -f -
```

You can then see that the `emoji` deployment is not starting its application container until the proxy is ready:

```bash
$ kubectl get -n emojivoto pods
NAME                        READY   STATUS            RESTARTS   AGE
voting-ff4c54b8d-sjlnz      1/2     Running           0          9s
emoji-f985459b4-7mkzt       0/2     PodInitializing   0          9s
web-5f86686c4d-djzrz        1/2     Running           0          9s
vote-bot-6d7677bb68-mv452   1/2     Running           0          9s
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-04-21 17:43:23 -04:00
Tarun Pothulapati fac28ff8a7
destination: Remove support for IP Queries in `Get` API (#6018)
* destination: Remove support for IP Queries in `Get` API

Fixes #5246

This PR updates the destination to report an error when `Get`
is called for IP queries. As the issue mentions, the proxies
are not using this API anymore and it helps to simplify and
remove unnecessary logic.

This removes the relevant `IPWatcher` logic, along with
unit tests

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-04-21 12:40:40 +05:30
Matei David 99d15f8877
Add ns annotation inheritance to pods (#6002)
Closes #5977  

## What

This change adds support for namespace configuration annotation inheritance for pods. Any annotations (e.g. `config.linkerd.io/skip-outbound-ports` or `config.linkerd.io/proxy-await`) that are applied against a namespace will now also be applied to pods running in that namespace by the _proxy-injector_. 

* Pods do not inherit annotations from their namespaces; the exception to this is `opaque-ports` introduced in #5941. This expands on the work by allowing all config annotations to be inherited.
* Main advantage here is that instead of applying annotations on a workload-by-workload basis we can just apply them against the namespace and it will be mirrored on all pods within the namespace.
* Through this change the controller can also check the proxy's configuration directly from the pod's meta rather than from env variables.

## How

Change is pretty straightforward. We want to make sure that before we apply a JSON patch we first copy all of the namespace annotations to the pod. The logic that was in place takes care of applying the patch.
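A rough Go sketch of the idea (the helper and key list here are hypothetical, not the injector's actual identifiers):

```go
package main

import "fmt"

// proxyConfigKeys stands in for the injector's list of supported
// config.linkerd.io annotation keys (hypothetical subset).
var proxyConfigKeys = []string{
	"config.linkerd.io/proxy-log-level",
	"config.linkerd.io/skip-outbound-ports",
	"config.linkerd.io/proxy-await",
}

// inheritNsAnnotations copies any supported config annotation that is set on
// the namespace but missing from the pod onto the pod's annotations, before
// the JSON patch is generated.
func inheritNsAnnotations(nsAnnotations, podAnnotations map[string]string) {
	for _, k := range proxyConfigKeys {
		v, ok := nsAnnotations[k]
		if !ok {
			continue // namespace doesn't set this option
		}
		if _, set := podAnnotations[k]; !set {
			podAnnotations[k] = v // pod inherits the namespace value
		}
	}
}

func main() {
	ns := map[string]string{"config.linkerd.io/proxy-log-level": "debug"}
	pod := map[string]string{}
	inheritNsAnnotations(ns, pod)
	fmt.Println(pod)
}
```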

* One obvious constraint is that we only want valid configuration annotations to be applied. To be a "valid" configuration it has to exist and it has to be prefixed with `config.linkerd.io` -- the easiest way to do this is to go through all of the available proxy configuration options and check whether any of the options are included in the namespace's annotations (done in `GetNsConfigKeys()` where we fetch all annotation keys from the namespace).
* A consideration I had with this change is whether to add `opaque-ports` as part of all of the config keys; opaque ports is a bit different though since it can be applied on a pod as well as a service -- through this change we only want to apply config annotations to pods. I chose to keep the two separate.
* Added a unit test that checks if a pod inherits config annotations from its namespace; this also includes an invalid annotation which doesn't show up in the "expected" patch to test we validate configuration correctly.

### Tests
---

I injected emojivoto and added an annotation to its namespace:

```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/inject: enabled
```

The deployment specs do not have any additional annotations as part of the pod template metadata. I first tested if the above annotations would be inherited with the current edge release (I expected opaque ports to be).

**Before changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    linkerd.io/created-by: linkerd/proxy-injector edge-21.4.1
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: edge-21.4.1
  creationTimestamp: "2021-04-08T14:33:10Z"
  generateName: emoji-696d9d8f95-
  labels:
    app: emoji-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: emoji
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 696d9d8f95
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is in there, skip outbound isn't -- although the initContainer gets the right argument since this is already applied from the namespace by the proxy injector).

**After the changes**:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "34567"
    config.linkerd.io/proxy-log-level: debug
    config.linkerd.io/skip-outbound-ports: "44556"
    linkerd.io/created-by: linkerd/proxy-injector dev-a7bb62fd-matei
    linkerd.io/identity-mode: default
    linkerd.io/inject: enabled
    linkerd.io/proxy-version: dev-a7bb62fd-matei
  creationTimestamp: "2021-04-08T14:42:06Z"
  generateName: web-5f86686c4d-
  labels:
    app: web-svc
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: web
    linkerd.io/workload-ns: emojivoto
    pod-template-hash: 5f86686c4d
    version: v11
spec:
  initContainers:
  - args:
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191
    - --outbound-ports-to-ignore
    - "44556"
    image: cr.l5d.io/linkerd/proxy-init:v1.3.9
    imagePullPolicy: IfNotPresent
    name: linkerd-init
```
(opaque ports is there and so is skip outbound and the proxy log level, correct options still passed to the initContainers).

*Edit*: made a small change, had a look at `GetNsConfigKeys()` and thought it'd be better to keep the slice of keys as a fixed length array since we know there will be at most `len(ProxyAnnotations)` at any point. Not sure such a big size is warranted but we can avoid calling append for every element.

Signed-off-by: Matei David <matei@buoyant.io>
2021-04-20 22:25:02 -04:00
Alejandro Pedraza 6980e45e1d
Remove the `linkerd-controller` pod (#6039)
* Remove the `linkerd-controller` pod

Now that we got rid of the `Version` API (#6000) and the destination API forwarding business in `linkerd-controller` (#5993), we can get rid of the `linkerd-controller` pod.

## Removals

- Deleted everything under `/controller/api/public` and `/controller/cmd/public-api`.
- Moved `/controller/api/public/test_helper.go` to `/controller/api/destination/test_helper.go` because those are really utils for destination testing. I also extracted from there the prometheus mock structs and put them under `/pkg/prometheus/test_helper.go`, which is now used by both the `linkerd diagnostics endpoints` and the `metrics-api` tests, removing some duplication.
- Deleted the `controller.yaml` and `controller-rbac.yaml` helm templates along with the `publicAPIResources` and `publicAPIProxyResources` helm values.

## Health checks

- Removed the `can initialize the client` check given such client is no longer needed. The `linkerd-api` section was left with only the check `control pods are ready`, so I moved that under the `linkerd-existence` section and got rid of the `linkerd-api` section altogether.
- In that same `linkerd-existence` section, got rid of the `controller pod is running` check.

## Other changes

- Fixed the Control Plane section of the dashboard, taking into account the disappearance of `linkerd-controller` and, previously, of `linkerd-sp-validator`.
2021-04-19 09:57:45 -05:00
Dennis Adjei-Baah 78363ca894
Disable protocol and TLS hints on skipped ports (#6022)
When a pod is configured with the `skip-inbound-ports` annotation, a client
proxy trying to connect to that pod attempts to connect to it via H2 and
also tries to initiate a TLS connection. This issue is caused by the
destination controller when it sends protocol and TLS hints to the
client proxy for that skipped port.

This change fixes the destination controller so that it no longer
sends protocol and TLS identity hints to outbound proxies resolving a
`podIP:port` that is on a skipped inbound port.
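A rough Go sketch of the check involved (illustrative only; the real controller reads the annotation from the pod and also handles port ranges):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isSkippedInboundPort reports whether the resolved port appears in the
// pod's config.linkerd.io/skip-inbound-ports annotation; if it does, the
// destination controller should omit protocol and TLS identity hints.
func isSkippedInboundPort(annotation string, port uint32) bool {
	for _, p := range strings.Split(annotation, ",") {
		n, err := strconv.ParseUint(strings.TrimSpace(p), 10, 32)
		if err != nil {
			continue
		}
		if uint32(n) == port {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isSkippedInboundPort("4222,8080", 8080)) // true: no hints sent
	fmt.Println(isSkippedInboundPort("4222", 8080))      // false: hints may be set
}
```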

I've included a test that exhibits this error prior to this fix but you
can also test the prior behavior by:

```bash
curl https://run.linkerd.io/booksapp.yml > booksapp.yaml

# edit either the books or authors service to:
#   1. configure a failure rate of 0.0
#   2. add the `skip-inbound-ports` config annotation

bin/linkerd viz stat pods webapp

# There should be no successful requests on the webapp deployment
```
Fixes #5995

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-04-16 12:44:17 -04:00
Alejandro Pedraza c24585e6ea
Removed `Version` API from the public-api (#6000)
* Removed `Version` API from the public-api

This is a sibling PR to #5993, and it's the second step towards removing the `linkerd-controller` pod.

This one deals with a replacement for the `Version` API, fetching instead the `linkerd-config` CM and retrieving the `LinkerdVersion` value.

## Changes to the public-api

- Removal of the `publicPb.ApiClient` entry from the `Client` interface
- Removal of the `publicPb.ApiServer` entry from the `Server` interface
- Removal of the `Version` and related methods from `client.go`, `grpc_server.go` and `http_server.go`

## Changes to `linkerd version`

- Removal of all references to the public API.
- Call `healthcheck.GetServerVersion` to retrieve the version

## Changes to `linkerd check`

- Removal of the "can query the control API" check from the "linkerd-api" section
- Addition of a new "can retrieve the control plane version" check under the "control-plane-version" section

## Changes to `linkerd-web`

- The version is now retrieved from the `linkerd-config` CM instead of a public-API call.
- Removal of all references to the public API.
- Removal of the `data-go-version` global attribute on the dashboard, which wasn't being used.

## Other changes

- Added `ValuesFromConfigMap` function in `values.go` to convert the `linkerd-config` CM into a `*Values` struct instance
- Removal of the `public` protobuf
- Refactor 'linkerd repair' to use the refactored 'healthcheck.GetServerVersion()' function
2021-04-16 11:23:55 -05:00
Alejandro Pedraza b22b5849e3
Removed Destination's `Get` API from the public-api (#5993)
* Removed Destination's `Get` API from the public-api

This is the first step towards removing the `linkerd-controller` pod. It deals with removing the Destination `Get` HTTP and gRPC endpoints it exposes, which only the `linkerd diagnostics endpoints` command consumes.

Removed all references to Destination in the public-api, including all the gRPC-to-http-to-gRPC forwardings:
- Removed the `Get` method from the public-api gRPC server that forwarded the connection from the controller pod to the destination pod. Clients should now connect directly to the destination service via gRPC.
- Likewise, removed the destination boilerplate in the public-api http server (and its `main.go`) that served the `Get` destination endpoint and forwarded it into the gRPC server.
- Finally, removed the destination boilerplate from the public-api's `client.go` that created a client connecting to the http API.
2021-04-16 09:04:17 -05:00
Dennis Adjei-Baah 19b31ee894
Update proxy-init version to v1.3.11 (#6013)
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-04-09 18:30:05 -04:00
Alejandro Pedraza 61f443ad05
Schedule heartbeat 10 mins after install (#5973)
* Schedule heartbeat 10 mins after install

... for the Helm installation method, thus aligning it with the CLI
installation method, to reduce the midnight peak on the receiving end.

The logic added into the chart is now reused by the CLI as well.

Also, set `concurrencyPolicy=Replace` so that when a job fails and it's
retried, the retries get canceled when the next scheduled job is triggered.

Finally, the Go client only failed when the connection failed;
successful connections with a non-200 response status were considered
successful and thus the job wasn't retried. Fixed that as well.
2021-03-31 07:49:36 -05:00
Kevin Leimkuhler a11012819c
Add opaque ports namespace inheritance to pods (#5941)
### What

When a namespace has the opaque ports annotation, pods and services should
inherit it if they do not have one themselves. Currently, services do this but
pods do not. This can lead to surprising behavior where services are correctly
marked as opaque, but pods are not.

This changes the proxy-injector so that it now passes down the opaque ports
annotation to pods from their namespace if they do not have their own annotation
set. Closes #5736.

### How

The proxy-injector webhook receives admission requests for pods and services.
Regardless of the resource kind, it now checks if the resource should inherit
the opaque ports annotation from its namespace. It should inherit it if the
namespace has the annotation but the resource does not.

If the resource should inherit the annotation, the webhook creates an annotation
patch which is only responsible for adding the opaque ports annotation.

After generating the annotation patch, it checks if the resource is injectable.
From here there are a few scenarios:

1. If no annotation patch was created and the resource is not injectable, then
   admit the request with no changes. Examples of this are services with no OP
   annotation and inject-disabled pods with no OP annotation.
2. If the resource is a pod and it is injectable, create a patch that includes
   the proxy and proxy-init containers—as well as any other annotations and
   labels.
3. The above two scenarios lead to a patch being generated at this point, so no
   matter the resource the patch is returned.

### UI changes

Resources are now reported to either be "injected", "skipped", or "annotated".

The first pass at this PR worked around the fact that injection reports consider
services and namespaces injectable. This is not accurate because they don't have
pod templates that could be injected; they can however be annotated.

To fix this, an injection report now considers resources "annotatable" and uses
this to clean up some logic in the `inject` command, as well as avoid a more
complex proxy-injector webhook.

What's cool about this is it fixes some `inject` command output that would label
resources as "injected" when they were not even mutated. For example, namespaces
were always reported as being injected even if annotations were not added. Now,
it will properly report that a namespace has been "annotated" or "skipped".

### Tests

For testing, unit tests and integration tests have been added. Manual testing
can be done by installing linkerd with `debug` controller log levels, and
tailing the proxy-injector's app container when creating pods or services.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-29 19:41:15 -04:00
Bruce Chen Wenliang b84b2077d3
Ignore pods in "Terminating" when watching IP addresses. (#5940)
Fixes #5939

Some CNIs reassign the IP of a terminating pod to a new pod, which
leads to duplicate IPs in the cluster.

It eventually triggers #5939.

This commit will make the IPWatcher, when given an IP, filter out the terminating pods
(when a pod is given a deletionTimestamp).

The issue is hard to reproduce because we are not able to assign a
particular IP to a pod manually.

Signed-off-by: Bruce <wenliang.chen@personio.de>

Co-authored-by: Bruce <wenliang.chen@personio.de>
2021-03-24 18:21:42 +05:30
Kevin Leimkuhler 3f72c998b3
Handle pod lookups for pods that map to a host IP and host port (#5904)
This fixes an issue where pod lookups by host IP and host port fail even though
the cluster has a matching pod.

Usually these manifested as `FailedPrecondition` errors, but the messages were
too long and resulted in http/2 errors. This change depends on #5893 which fixes
that separate issue.

This changes how often those `FailedPrecondition` errors actually occur. The
destination service now considers pod host IPs and should reduce the frequency
of those errors.

Closes #5881 

---

Lookups like this happen when a pod is created with a host IP and host port set
in its spec. It still has a pod IP when running, but requests to
`hostIP:hostPort` will also be redirected to the pod. Combinations of host IP
and host Port are unique to the cluster and enforced by Kubernetes.

Currently, the destination service fails to find pods in this scenario because
we only keep an index of pods and their pod IPs, not pods and their host IPs.
To fix this, we now also keep an index of pods and their host IPs—if and only if
they have the host IP set.

Now when doing a pod lookup, we consider both the IP and the port. We perform
the following steps (a rough Go sketch follows the list):

1. Do a lookup by IP in the pod podIP index
  - If only one pod is found then return it
2. 0 or more than 1 pods have the same pod IP
3. Do a lookup by IP in the pod hostIP index
  - If any number of pods were found, we know that IP maps to a node IP.
    Therefore, we search for a pod with a matching host Port. If one exists then
    return it; if not then there is no pod that matches `hostIP:port`
4. The IP does not map to a host IP
5. If multiple pods were found in `1`, then we know there are pods with
   conflicting podIPs and an error is returned
6. If no pods were found in `1` then there is no pod that matches `IP:port`
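The sketch below mirrors that lookup order (hypothetical types and indices, not the actual watcher code):

```go
package main

import "fmt"

// Pod is a minimal stand-in for the indexed pod metadata.
type Pod struct {
	Name     string
	HostPort uint32
}

// lookupPod prefers a unique match in the podIP index, then falls back to
// the hostIP index matched on host port, returning errors for conflicts.
func lookupPod(byPodIP, byHostIP map[string][]Pod, ip string, port uint32) (*Pod, error) {
	podMatches := byPodIP[ip]
	if len(podMatches) == 1 {
		return &podMatches[0], nil
	}
	if hostMatches := byHostIP[ip]; len(hostMatches) > 0 {
		// The IP is a node IP; match on the host port.
		for i := range hostMatches {
			if hostMatches[i].HostPort == port {
				return &hostMatches[i], nil
			}
		}
		return nil, fmt.Errorf("no pod bound to %s:%d", ip, port)
	}
	if len(podMatches) > 1 {
		return nil, fmt.Errorf("conflicting pod IPs for %s", ip)
	}
	return nil, fmt.Errorf("no pod found for %s:%d", ip, port)
}

func main() {
	byPodIP := map[string][]Pod{"10.0.0.5": {{Name: "web"}}}
	byHostIP := map[string][]Pod{"172.18.0.2": {{Name: "tcp-echo", HostPort: 4445}}}
	fmt.Println(lookupPod(byPodIP, byHostIP, "172.18.0.2", 4445))
}
```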

---

Aside from the additional IP watcher test being added, this can be tested with
the following steps:

1. Create a kind cluster. kind is required because its pods in `kube-system`
   have the same pod IPs; this is not the case with k3d: `bin/kind create cluster`
2. Install Linkerd with `4445` marked as opaque: `linkerd install --set
   proxy.opaquePorts="4445" |kubectl apply -f -`
3. Get the node IP: `kubectl get -o wide nodes`
4. Pull my fork of `tcp-echo`:

```
$ git clone https://github.com/kleimkuhler/tcp-echo
...
$ git checkout --track kleimkuhler/host-pod-repro
```

5. `helm package .`
6. Install `tcp-echo` with the server not injected and correct host IP: `helm
   install tcp-echo tcp-echo-0.1.0.tgz --set server.linkerdInject="false" --set
   hostIP="..."`
7. Looking at the client's proxy logs, you should not observe any errors or
   protocol detection timeouts.
8. Looking at the server logs, you should see all the requests coming through
   correctly.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-18 13:29:43 -04:00
Riccardo Freixo 66b89f55e7
Fix named port resolution mid roll out (#5912) (#5911)
# Problem

While rolling out, often not all pods will be ready on the same set of
ports, leading the Kubernetes Endpoints API to return multiple subsets,
each covering a different set of ports, with the end result that the
same address gets repeated across subsets.

The old code for endpointsToAddresses would loop through all subsets, and the
later occurrences of an address would overwrite previous ones, with the
last one prevailing.

If the last subset happened to be for an irrelevant port, and the port to
be resolved is named, resolveTargetPort would resolve to port 0, which would
return port 0 to clients, ultimately leading linkerd-proxy to forward
connections to port 0.

This only happens if the pods selected by a service expose > 1 port, the
service maps to > 1 of these ports, and at least one of these ports is named.

# Solution

Never write an address to the set of addresses if the resolved port is 0, which
indicates that named port resolution failed.
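A rough Go sketch of the guard (hypothetical types; the real code works on Kubernetes Endpoints subsets):

```go
package main

import "fmt"

// subset is a minimal stand-in for an Endpoints subset after named-port
// resolution; resolvedPort of 0 means the named port did not resolve here.
type subset struct {
	addresses    []string
	resolvedPort uint32
}

// collectAddresses keeps the last good port per address but never records a
// resolved port of 0, so an irrelevant trailing subset cannot clobber a
// valid resolution.
func collectAddresses(subsets []subset) map[string]uint32 {
	out := map[string]uint32{}
	for _, s := range subsets {
		if s.resolvedPort == 0 {
			continue
		}
		for _, a := range s.addresses {
			out[a] = s.resolvedPort
		}
	}
	return out
}

func main() {
	subsets := []subset{
		{addresses: []string{"10.0.0.7"}, resolvedPort: 8080},
		{addresses: []string{"10.0.0.7"}, resolvedPort: 0}, // mid-rollout subset for another port
	}
	fmt.Println(collectAddresses(subsets)) // map[10.0.0.7:8080]
}
```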

# Validation

Added a test case.

Signed-off-by: Riccardo Freixo <riccardofreixo@gmail.com>
2021-03-17 17:40:11 -04:00
Kevin Leimkuhler 1544d90150
dest: Reduce possible response size in destination service errors (#5893)
This reduces the possible HTTP response size from the destination service when
it encounters an error during a profile lookup.

If multiple objects on a cluster share the same IP (such as pods in
`kube-system`), the destination service will return an error with the two
conflicting pod yamls.

In certain cases, these pod yamls can be too large for the HTTP response and the
destination pod's proxy will indicate that with the following error:

```
hyper::proto::h2::server: send response error: user error: header too big
```

From the app pod's proxy, this results in the following error:

```
poll_profile: linkerd_service_profiles::client: Could not fetch profile error=status: Unknown, message: "http2 error: protocol error: unexpected internal error encountered"
```

We now only return the names of the conflicting pods (or services). This reduces the
size of the returned error and prevents these warnings from occurring.

Example response error:

```
poll_profile: linkerd_service_profiles::client: Could not fetch profile error=status: FailedPrecondition, message: "Pod IP address conflict: kube-system/kindnet-wsflq, kube-system/kube-scheduler-kind-control-plane", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Fri, 12 Mar 2021 19:54:09 GMT"} }
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-16 13:28:05 -04:00
Dennis Adjei-Baah 7f0529ed7c
update go.mod and docker images to go 1.16.2 (#5890)
* update go.mod and docker images to go 1.16.1

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>

* update test error messages for ParseDuration

* update go version to 1.16.2
2021-03-15 11:20:16 -05:00
Dennis Adjei-Baah 96df251477
Fix sp-validator not setting requestUID (#5866)
This change fixes an issue where the `linkerd-sp-validator` does not set
the `request.UID` for an `admissionResponse`. This causes an issue that
prevents service profiles from being added or updated.

Fixes #5862

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-03-04 09:57:57 -08:00
Tarun Pothulapati 5c1a375a51
destination: pass opaque-ports through cmd flag (#5829)
* destination: pass opaque-ports through cmd flag

Fixes #5817

Currently, default opaque ports are stored in two places, i.e.
`Values.yaml` and `opaqueports/defaults.go`. As these
ports are used only in destination, we can instead pass these
values as a cmd flag for the destination component from Values.yaml
and remove defaultPorts in `defaults.go`.

This means that if users override `Values.yaml`'s `opaquePorts`
field, that change is propagated both for injection and also
discovery, as expected.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-03-01 16:00:20 +05:30
Oliver Gould ab2a809e1b
docker: Avoid specifying TARGETARCH for await (#5835)
When introducing the `linkerd-await` helper, we provided a default value
for `TARGETARCH`. This appears to interfere with multi-arch image
builds, causing ARM builds to fetch amd64 binaries.

Unsetting this default appears to fix this issue.
2021-02-26 07:30:14 -05:00
Oliver Gould 9e9b40d5a2
Add the linkerd-await helper to all Linkerd containers (#5821)
When a container starts up, we generally want to wait for the proxy to
initialize before starting the controller (which may initiate outbound
connections, especially to the Kubernetes API). This is true for all
pods except the identity controller, which must start before its proxy.

This change adds the linkerd-await helper to all of our container
images. Its use is explicitly disabled in the identity controller, due
to startup ordering constraints, and the heartbeat controller, because
it does not run a proxy currently.

Fixes #5819
2021-02-25 10:35:04 -08:00
Dennis Adjei-Baah 15d1809bd0
Remove linkerd prefix from extension resources (#5803)
* Remove linkerd prefix from extension resources

This change removes the `linkerd-` prefix on all non-cluster resources
in the jaeger and viz linkerd extensions. Removing the prefix makes all
linkerd extensions consistent in their naming.

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-02-25 11:01:31 -05:00
Kevin Leimkuhler 07d5071cc4
Remove default skip ports and add to opaque ports (#5810)
This change removes the default ignored inbound and outbound ports from the
proxy init configuration.

These ports have been moved to the `proxy.opaquePorts` configuration so that
by default, installations will proxy all traffic on these ports opaquely.

Closes #5571 
Closes #5595 

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-24 16:22:09 -05:00
Kevin Leimkuhler 51a965e228
Return default opaque ports in the destination service (#5814)
This changes the destination service to always use a default set of opaque ports
for pods and services. This is so that after Linkerd is installed onto a
cluster, users can benefit from common opaque ports without having to annotate
the workloads that serve the applications.

After #5810 merges, the proxy containers will have the default opaque ports
`25,443,587,3306,5432,11211`. This value on the proxy container does not affect
traffic though; it only configures the proxy.

In order for clients and servers to detect opaque protocols and determine opaque
transports, the pods and services need to have these annotations.

The ports `25,443,587,3306,5432,11211` are now handled opaquely when a pod or
service does not have the opaque ports annotation. If the annotation is present
with a different value, this is used instead of the default. If the annotation
is present but is an empty string, there are no opaque ports for the workload.
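For illustration, a rough Go sketch of that precedence, assuming the `config.linkerd.io/opaque-ports` annotation (the real code parses the value into ports rather than returning the raw string):

```go
package main

import "fmt"

const defaultOpaquePorts = "25,443,587,3306,5432,11211"

// opaquePortsFor picks the opaque ports for a workload: fall back to the
// defaults when the annotation is absent, use the annotation value when
// present, and treat an explicit empty string as "no opaque ports".
func opaquePortsFor(annotations map[string]string) string {
	v, ok := annotations["config.linkerd.io/opaque-ports"]
	if !ok {
		return defaultOpaquePorts
	}
	return v // may be "", meaning no opaque ports
}

func main() {
	fmt.Println(opaquePortsFor(map[string]string{}))                                         // defaults
	fmt.Println(opaquePortsFor(map[string]string{"config.linkerd.io/opaque-ports": "9000"})) // "9000"
	fmt.Println(opaquePortsFor(map[string]string{"config.linkerd.io/opaque-ports": ""}))     // "" => none
}
```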

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-24 14:55:31 -05:00
Kevin Leimkuhler 5bd5db6524
Revert "Rename multicluster annotation prefix and move when possible (#5771)" (#5813)
This reverts commit f9ab867cbc which renamed the
multicluster label name from `mirror.linkerd.io` to `multicluster.linkerd.io`.

While this change was made to follow similar namings in other extensions, it
complicates the multicluster upgrade process due to the secret creation.

`mirror.linkerd.io` is not that important of a label to change and this will
allow a smoother upgrade process for `stable-2.10.x`

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-24 12:54:52 -05:00
Kevin Leimkuhler ff93d2d317
Mirror opaque port annotations on services (#5770)
This change introduces an opaque ports annotation watcher that will send
destination profile updates when a service has its opaque ports annotation
change.

The user facing change introduced by this is that the opaque ports annotation is
now required on services when using the multicluster extension. This is because
the service mirror will create mirrored services in the source cluster, and
destination lookups in the source cluster need to discover that the workloads in
the target cluster are opaque protocols.

### Why

Closes #5650

### How

The destination server now has a new opaque ports annotation watcher. When a
client subscribes to updates for a service name or cluster IP, the `GetProfile`
method creates a profile translator stack that passes updates through resource
adaptors such as: traffic split adaptor, service profile adaptor, and now opaque
ports adaptor.

When the annotation on a service changes, the update is passed through to the
client where the `opaque_protocol` field will either be set to true or false.

A few scenarios to consider are:

  - If the annotation is removed from the service, the client should receive
    an update with no opaque ports set.
  - If the service is deleted, the stream stays open so the client should
    receive an update with no opaque ports set.
  - If the service has the annotation added, the client should receive that
    update.

### Testing

Unit test have been added to the watcher as well as the destination server.

An integration test has been added that tests the opaque port annotation on a
service.

For manual testing, using the destination server scripts is easiest:

```
# install Linkerd

# start the destination server
$ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config

# Create a service or namespace with the annotation and inject it

# get the destination profile for that service and observe the opaque protocol field
$ go run controller/script/destination-client/main.go -method getProfile -path test-svc.default.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]                                              
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-23 13:36:17 -05:00
Max Goltzsche a8e5bff21f
Provide CA cert as env var to identity controller. (#5690)
Currently the identity controller is the only component that receives the CA certificate / trust anchors as the option `-identity-trust-anchors-pem` instead of an env var.
This prevents users from having it read the trust anchors from a Secret that is managed by e.g. cert-manager.

This PR uses an env var instead of the option to provide the trust anchors. For most helm chart users this doesn't change anything. However using kustomize the helm output manifest can now be adjusted (again) so that the certificate is loaded from a ConfigMap or Secret like in [this example](https://github.com/mgoltzsche/khelm/tree/master/example/kpt/linkerd) which aims to produce a static manifest to make the installation/update more declarative and support GitOps workflows.

This PR does not provide chart options/values to specify Secrets upfront - it would introduce dependencies to other operators.

Relates to #3843, see https://github.com/linkerd/linkerd2/issues/3843#issuecomment-775516217

Fixes #3321

Signed-off-by: Max Goltzsche <max.goltzsche@gmail.com>
2021-02-22 10:30:43 -05:00
Dennis Adjei-Baah 57db567204
send serviceprofile counts in heartbeat URL (#5775)
This change counts the number of service profiles installed in a cluster
and adds that info to the heartbeat HTTP request.

Fixes #5474

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-02-19 10:11:26 -05:00
Kevin Leimkuhler f9ab867cbc
Rename multicluster annotation prefix and move when possible (#5771)
This renames the multicluster annotation prefix from `mirror.linkerd.io` to
`multicluster.linkerd.io` in order to reflect other extension naming patterns.

Additionally, it moves labels only used in the Multicluster extension into their
own labels file—again to reflect other extensions.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-18 17:10:33 -05:00
Dennis Adjei-Baah 89ec6056c0
Report extension usage (#5761)
This change modifies `linkerd-heartbeat` to report on all linkerd
extensions being used in a cluster.

The heartbeat URL now includes extension names and a value of 
`"1"` if an extension is installed. 
```
https://versioncheck.linkerd.io/version.json?install-time=1613518301&k8s-version=v1.20.2&linkerd-jaeger=1&linkerd-multicluster=1&linkerd-viz=1
```

Fixes #5516

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-02-18 07:33:35 -08:00
Kevin Leimkuhler edd3812f30
Add services to proxy injector for opaque ports annotation (#5766)
This adds namespace inheritance of the opaque ports annotation to services. 

This means that the proxy injector now watches service creation in a cluster.
When a new service is created, the webhook receives an admission request for
that service and determines whether a patch needs to be created.

A patch is created if the service does not have the annotation, but the
namespace does. This means the service inherits the annotation from the
namespace.

A patch is not created if the service and the namespace do not have the
annotation, or the service has the annotation. In the case of the service having
the annotation, we don't even need to check the namespace since it would not
inherit it anyways.

If a namespace has the annotation value changed, this will not be reflected on
the service. The service would need to be recreated so that it goes through
another admission request.

None of this applies to the `inject` command which still skips service
injection. We rely on being able to check the namespace annotations, and this is
only possible in the proxy injector webhook when we can query the k8s API.

Closes #5737

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-17 20:58:18 -05:00
Oliver Gould 6dc7efd704
docker: Access container images via cr.l5d.io (#5756)
We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.

This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.
2021-02-17 14:31:54 -08:00
Alejandro Pedraza 826d579924
Upgrade proxy-init to v1.3.9 (#5759)
Fixes #5755; follow-up to #5750 and #5751

- Unifies the Go version across Docker and CI to be 1.14.15;
- Updates the GitHub Actions base image from ubuntu-18.04 to ubuntu-20.04; and
- Updates the runtime base image from debian:buster-20201117-slim to debian:buster-20210208-slim.
2021-02-16 17:59:28 -05:00
Oliver Gould 2774780fb8
Update Go to 1.14.15 (#5751)
The Go-1.14 release branch includes a number of important updates. This
change updates our containers' base image to the latest release, 1.14.15

See linkerd/linkerd2-proxy-init#32
Fixes #5655
2021-02-16 08:40:06 -08:00
Kevin Leimkuhler 5dc662ae97
Remove namespace inheritance of opaque ports annotation (#5739)
This change removes the namespace inheritance of the opaque ports annotation.
Now when setting opaque port related fields in destination profile responses, we
only look at the pod annotations.

This prepares for #5736 where the proxy-injector will add the annotation from
the namespace if the pod does not have it already.

Closes #5735

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-15 10:21:20 -05:00
Filip Petkovski 73f9fb3518
Use a shared informer when getting node topology (#5722)
Getting information about node topology queries the k8s api directly.
In an environment with high traffic and high number of pods, the
k8s api server can become overwhelmed or start throttling requests.

This MR introduces a node informer to resolve the bottleneck and
fetch node information asynchronously.

Fixes #5684

Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
2021-02-12 11:05:38 -05:00
Tarun Pothulapati a393c42536
values: removal of .global field (#5699)
* values: removal of .global field

Fixes #5425

With the new extension model, we no longer need the `Global` field
as we don't rely on chart dependencies anymore. This helps us
further clean up Values, and makes configuration simpler.

To make upgrades and the usage of the new CLI with older config work,
we add a new method called `config.RemoveGlobalFieldIfPresent` that
is used in the upgrade and `FetchCurrentConfiguration` paths to remove
the global field and attach its child nodes if global is present. This is verified
by `TestFetchCurrentConfiguration`'s older test that has the global
field.

We also don't yet remove `.global` in some Helm stable-upgrade tests so that
the initial install still works.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-02-11 23:38:34 +05:30
Oliver Gould 8f2d01c5c0
Use fully-qualified DNS names in proxy configuration (#5707)
Pods with unusual DNS configurations may not be able to resolve the
control plane's domain names. We can avoid search path shenanigans by
adding a trailing dot to these names.
2021-02-10 08:27:35 -08:00
Kevin Leimkuhler 75fcc9d623
Move tap from core into Viz extension (#5651)
Closes #5545.

This change moves all tap and tap-injector code into the viz directory. 

The tap and tap-injector components now also use a new tap image—separating
these components from the controller image that they are currently part of. This
means the controller image has removed all its build dependencies related to
tap.

Finally, the tap Protobuf has been separated from the metrics-api and moved into
its own `.proto` file and gen directory. This introduces a clear split between
the metrics-api and tap Protobuf.

There is no change in behavior for the `viz tap` command.

### Reviewing

#### Docker images

All the bin directory scripts should be updated to build and load the tap image.
All the CI workflows should be updated to build and push the tap image.

#### Controller and pkg directories

This is primarily deletions. Most of the deleted code in this directory is now
in the tap directory of the Viz extension.

#### viz/tap

This is the location that all the tap related code now lives in. New files are
mostly moved from the controller and pkg directories. Imports have all been
updated to point at the right locations and Protobuf.

The Protobuf here is taken from metrics-api and contains all tap-related
Protobuf.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-09 12:43:21 -05:00
Alejandro Pedraza a04b30d2ab
Simplify SelfCheck API (#5665)
Fixes #5575

Now that only viz makes use of the `SelfCheck` api, merged the `healthcheck.proto` into `viz.proto`.

Also removed the "checkRPC" functionality that was used for handling multiple API responses and was only used by `SelfCheck`, because the extra complexity was not granted. Revert to use the plain vanilla "check" by just concatenating error responses.

## Success Output

```bash
$ bin/linkerd viz check
...
linkerd-viz
-----------
...
√ viz extension self-check
```

## Failure Examples

Failure when viz fails to connect to the k8s api:
```bash
$ bin/linkerd viz check
...
linkerd-viz
-----------
...
× viz extension self-check
    Error calling the Kubernetes API: someerror
    see https://linkerd.io/checks/#l5d-api-control-api for hints

Status check results are ×
```

Failure when viz fails to connect to Prometheus:
```bash
$ bin/linkerd viz check
...
linkerd-viz
-----------
...
× viz extension self-check
    Error calling Prometheus from the control plane: someerror
    see https://linkerd.io/checks/#l5d-api-control-api for hints

Status check results are ×
```

Failure when viz fails to connect to both the k8s api and Prometheus:
```bash
$ bin/linkerd viz check
...
linkerd-viz
-----------
...
× viz extension self-check
    Error calling the Kubernetes API: someerror
    Error calling Prometheus from the control plane: someerror
    see https://linkerd.io/checks/#l5d-api-control-api for hints

Status check results are ×
```
2021-02-05 10:13:45 -05:00
Kevin Leimkuhler 228d8e9e95
Add tracing enabled annotation (#5643)
This change adds the `jaeger.linkerd.io/tracing-enabled` annotation which is
automatically added by the Jaeger extension's `jaeger-injector`.

All pods that receive this annotation have also had the required environment
variables and volume/volume mounts added by the injector.

The purpose of this annotation is that it will allow `jaeger check` to check for
the presence of this annotation instead of needing to look at the proxy
containers directly. If this annotation is not present on pods, `jaeger check`
can warn users that tracing is not configured for those pods. This is similar to
`viz check` warning users that tap is not configured—recently added in #5602.

Closes #5632

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-03 14:05:15 -05:00
Tarun Pothulapati cd2e911be3
viz: add data-plane and prometheus healthchecks (#5602)
* viz: add data-plane and prometheus healthchecks

Fixes #5325

This branch adds the remaining healthchecks for the viz extension
i.e

- Data-plane metrics check in Prometheus
- `--proxy` mode which also checks for tap injections based
  on annotations.

For this, The following changes were needed
- Category.ID is made public so that the --proxy toggle can be
supported
- Made the tap env key a field so that it can be re-used for
checks

Simplified viz.NewHealthChecker by removing the need to
pass categoryIDs, instead using
hc.appendCategories directly at the caller to add the
required categories. This is made possible by dividing the vizCategories
into separate functions.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-02-01 23:01:13 +05:30
Kevin Leimkuhler 964a190069
viz: only tap pods that have tap explicitly enabled (#5608)
## What this changes

This allows the tap controller to inform `tap` users when pods either have tap
disabled or tap is not enabled yet.

## Why

When a user taps a resource that has not been admitted by the Viz extension's
`tap-injector`, tap is not explicitly disabled but it is also not enabled.
Therefore, the `tap` command hangs and provides no feedback to the user.

Closes #5544

## How

A new `viz.linkerd.io/tap-enabled` annotation is introduced which is
automatically added by the Viz extension's `tap-injector`. This annotation is
added to a pod when it is able to be tapped; this means that the pod and the
pod's namespace do not have the `config.linkerd.io/disable-tap` annotation
added.

When a user attempts to tap a resource, the tap controller now looks for this
new annotation; if the annotation is present on the pod then that pod is
tappable.

If the annotation is not present or tap is explicitly disabled, an error is
returned.

## UI changes

Multiple errors can now occur when trying to tap a resource:

1. There are no pods for the resource.
2. There are pods for the resource, but tap is disabled via pod or namespace
   annotation.
3. There are pods for the resource, but tap is not yet enabled because the
   `tap-injector` did not admit the resource.

Errors are now handled as shown below:

Tap is disabled:

```
❯ bin/linkerd viz tap deploy/test
Error: no pods to tap for deployment/test
pods found with tap disabled via the config.linkerd.io/disable-tap annotation
```

Tap is not enabled:

```
❯ bin/linkerd viz tap deploy/test
Error: no pods to tap for deployment/test
pods found with tap not enabled; try restarting resource so that it can be injected
```

There are a mix of pods with tap disabled or tap not enabled:

```
❯ bin/linkerd viz tap deploy/test
Error: no pods to tap for deployment/test
pods found with tap disabled via the config.linkerd.io/disable-tap annotation
pods found with tap not enabled; try restarting resource so that it can be injected
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-01-28 17:37:45 -05:00
Alex Leong 964ce11559
Update generated serviceprofile code (#5580)
I ran `bin/update-codegen.sh` to update the generated code to include the opaque ports in the generated deepcopy function for service profiles.

Signed-off-by: Alex Leong <alex@buoyant.io>
2021-01-22 14:34:49 -08:00
Alejandro Pedraza 8ac5360041
Extract from public-api all the Prometheus dependencies, and moves things into a new viz component 'linkerd-metrics-api' (#5560)
* Protobuf changes:
- Moved `healthcheck.proto` back from viz to `proto/common` as it remains being used by the main `healthcheck.go` library (it was moved to viz by #5510).
- Extracted from `viz.proto` the IP-related types and put them in `/controller/gen/common/net` to be used by both the public and the viz APIs.

* Added chart templates for new viz linkerd-metrics-api pod

* Spin-off viz healthcheck:
- Created `viz/pkg/healthcheck/healthcheck.go` that wraps the original `pkg/healthcheck/healthcheck.go` while adding the `vizNamespace` and `vizAPIClient` fields which were removed from the core `healthcheck`. That way the core healthcheck doesn't have any dependencies on viz, and viz' healthcheck can now be used to retrieve viz api clients.
- The core and viz healthcheck libs are now abstracted out via the new `healthcheck.Runner` interface.
- Refactored the data plane checks so they don't rely on calling `ListPods`
- The checks in `viz/cmd/check.go` have been moved to `viz/pkg/healthcheck/healthcheck.go` as well, so `check.go`'s sole responsibility is dealing with command business. This command also now retrieves its viz api client through viz' healthcheck.

* Removed linkerd-controller dependency on Prometheus:
- Removed the `global.prometheusUrl` config in the core values.yml.
- Leave the Heartbeat's `-prometheus` flag hard-coded temporarily. TO-DO: have it automatically discover viz and pull Prometheus' endpoint (#5352).

* Moved observability gRPC from linkerd-controller to viz:
- Created a new gRPC server under `viz/metrics-api` moving prometheus-dependent functions out of the core gRPC server and into it (same thing for the accompanying http server).
- Did the same for the `PublicAPIClient` (now called just `Client`) interface. The `VizAPIClient` interface disappears as it's enough to just rely on the viz `ApiClient` protobuf type.
- Moved the other files implementing the rest of the gRPC functions from `controller/api/public` to `viz/metrics-api` (`edge.go`, `stat_summary.go`, etc.).
- Also simplified some type names to avoid stuttering.

* Added linkerd-metrics-api bootstrap files. At the same time, we strip the prometheus parameters and other no-longer-relevant bits out of the public-api's `main.go` file.

* linkerd-web updates: it requires connecting with both the public-api and the viz api, so both addresses (and the viz namespace) are now provided as parameters to the container.

* CLI updates and other minor things:
- Changes to command files under `cli/cmd`:
  - Updated `endpoints.go` according to new API interface name.
  - Updated `version.go`, `dashboard` and `uninstall.go` to pull the viz namespace dynamically.
- Changes to command files under `viz/cmd`:
  - `edges.go`, `routes.go`, `stat.go` and `top.go`: point to dependencies that were moved from public-api to viz.
- Other changes to have tests pass:
  - Added `metrics-api` to list of docker images to build in actions workflows.
  - In `bin/fmt` exclude protobuf generated files instead of entire directories because directories could contain both generated and non-generated code (case in point: `viz/metrics-api`).

* Add retry to 'tap API service is running' check

* mc check shouldn't err when viz is not available. Also properly set up the logger in multicluster/cmd/root.go so that it displays messages when --verbose is used
2021-01-21 18:26:38 -05:00
Kevin Leimkuhler e7f2a3fba3
viz: add tap-injector (#5540)
## What this changes

This adds a tap-injector component to the `linkerd-viz` extension which is
responsible for adding the tap service name environment variable to the Linkerd
proxy container.

If a pod does not have a Linkerd proxy, no action is taken. If tap is disabled
via annotation on the pod or the namespace, no action is taken.

This also removes the environment variable that was used to explicitly disable
tap. Tap status for a proxy is now determined only by the presence or absence
of the tap service name environment variable.

Closes #5326

## How it changes

### tap-injector

The tap-injector component determines if `LINKERD2_PROXY_TAP_SVC_NAME` should be
added to a pod's Linkerd proxy container environment. If the pod satisfies the
following, it is added:

- The pod has a Linkerd proxy container
- The pod has not already been mutated
- Tap is not disabled via annotation on the pod or the pod's namespace
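
Based on the conditions above, a rough sketch of the injection decision might look like the following; the annotation key and env var name come from this description, and everything else (types, field names) is illustrative:

```go
package main

import "fmt"

const disableTapAnnotation = "config.linkerd.io/disable-tap"

// podInfo is a simplified view of the fields the injector cares about.
type podInfo struct {
	hasProxyContainer bool // does the pod have a Linkerd proxy container?
	alreadyMutated    bool // has LINKERD2_PROXY_TAP_SVC_NAME already been added?
	podAnnotations    map[string]string
	nsAnnotations     map[string]string
}

// shouldInjectTapSvcName reports whether the LINKERD2_PROXY_TAP_SVC_NAME
// environment variable should be added to the pod's proxy container.
func shouldInjectTapSvcName(p podInfo) bool {
	if !p.hasProxyContainer || p.alreadyMutated {
		return false
	}
	if p.podAnnotations[disableTapAnnotation] == "true" ||
		p.nsAnnotations[disableTapAnnotation] == "true" {
		return false
	}
	return true
}

func main() {
	p := podInfo{
		hasProxyContainer: true,
		podAnnotations:    map[string]string{},
		nsAnnotations:     map[string]string{},
	}
	fmt.Println(shouldInjectTapSvcName(p)) // true: proxy present, not yet mutated, tap not disabled
}
```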

### LINKERD2_PROXY_TAP_DISABLED

Now that tap is an extension of Linkerd and not a core component, it no longer
made sense to explicitly enable or disable tap through this Linkerd proxy
environment variable. The status of tap is now determined only be if the
tap-injector adds or does not add the `LINKERD2_PROXY_TAP_SVC_NAME` environment
variable.

### controller image

The tap-injector has been added as one of the controller image's several
startup commands, which determine what the controller does in the cluster.

As a follow-up, I think splitting out the `tap` and `tap-injector` commands from
the controller image into a linkerd-viz image (or something like that) makes
sense.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-01-21 11:24:08 -05:00
Oleh Ozimok c416e78261
destination: Fix crash when EndpointSlices are enabled (#5543)
The Destination controller can panic due to a nil-deref when
the EndpointSlices API is enabled.

This change updates the controller to properly initialize values
to avoid this segmentation fault.

Fixes #5521

Signed-off-by: Oleg Ozimok <oleg.ozimok@corp.kismia.com>
2021-01-15 12:52:11 -08:00
Alejandro Pedraza f3b1ebfa99
Separate observability API (#5510)
* Separate observability API

Closes #5312

This is a preliminary step towards moving all the observability API into `/viz`, by first moving its protobuf into `viz/metrics-api`. This should facilitate review as the go files are not moved yet, which will happen in a followup PR. There are no user-facing changes here.

- Moved `proto/common/healthcheck.proto` to `viz/metrics-api/proto/healthcheck.proto`
- Moved the contents of `proto/public.proto` to `viz/metrics-api/proto/viz.proto` except for the `Version` Stuff.
- Merged `proto/controller/tap.proto` into `viz/metrics-api/proto/viz.proto`
- `grpc_server.go` now temporarily exposes `PublicAPIServer` and `VizAPIServer` interfaces to separate both APIs. This will get properly split in a followup.
- The web server provides handlers for both interfaces.
- `cli/cmd/public_api.go` and `pkg/healthcheck/healthcheck.go` temporarily now have methods to access both APIs.
- Most of the CLI commands will use the Viz API, except for `version`.

The other changes in the go files are just changes in the imports to point to the new protobufs.

Other minor changes:
- Removed `git add controller/gen` from `bin/protoc-go.sh`
2021-01-13 14:34:54 -05:00
Filip Petkovski 40192e258a
Ignore pods with status.phase=Succeeded when watching IP addresses (#5412)
Ignore pods with status.phase=Succeeded when watching IP addresses

When a pod terminates successfully, some CNIs will assign its IP address
to newly created pods. This can lead to duplicate pod IPs in the same
Kubernetes cluster.

Filter out pods which are in a Succeeded phase since they are not 
routable anymore.
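
A minimal sketch of the filter, using the standard `k8s.io/api/core/v1` types; the surrounding watcher wiring is omitted and the function name is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// skipTerminated filters out pods whose IPs may be reused by the CNI,
// i.e. pods that have run to completion (status.phase == Succeeded).
func skipTerminated(pods []*corev1.Pod) []*corev1.Pod {
	var routable []*corev1.Pod
	for _, p := range pods {
		if p.Status.Phase == corev1.PodSucceeded {
			continue // finished pods are no longer routable
		}
		routable = append(routable, p)
	}
	return routable
}

func main() {
	pods := []*corev1.Pod{
		{Status: corev1.PodStatus{Phase: corev1.PodRunning, PodIP: "10.42.0.20"}},
		{Status: corev1.PodStatus{Phase: corev1.PodSucceeded, PodIP: "10.42.0.20"}}, // IP may be reused
	}
	for _, p := range skipTerminated(pods) {
		fmt.Println(p.Status.PodIP)
	}
}
```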

Fixes #5394

Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
2021-01-12 12:25:37 -05:00
Tarun Pothulapati e04647fb8d
remove prom check for public-api self-check (#5436)
Currently, public-api is part of the core control plane, where
the prom check fails when run before the viz extension is installed.
This change comments out that check. Once the metrics API is moved into
viz, this check could be made part of it instead, or directly part of
`linkerd viz check`.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-01-05 17:22:39 -05:00
Alejandro Pedraza d3d7f4e2e2
Destination should return `OpaqueTransport` hint when annotation matches resolved target port (#5458)
The destination service now returns the `OpaqueTransport` hint when the
annotation matches the resolved target port. This differs from the previous
behavior, which always set the hint when a proxy was present.

Closes #5421

This happens by changing the endpoint watcher to set a pod's opaque port
annotation in certain cases. If the pod already has an annotation, then its
value is used. If the pod has no annotation, then it checks the namespace that
the endpoint belongs to; if it finds an annotation on the namespace then it
overrides the pod's annotation value with that.
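
As a rough sketch of that precedence (the annotation key shown, `config.linkerd.io/opaque-ports`, is Linkerd's documented opaque-ports annotation; the helper itself is illustrative, not the endpoint watcher's actual code):

```go
package main

import "fmt"

const opaquePortsAnnotation = "config.linkerd.io/opaque-ports"

// effectiveOpaquePorts returns the opaque-ports value to use for a pod:
// the pod's own annotation wins, otherwise the namespace's value is used.
func effectiveOpaquePorts(podAnnotations, nsAnnotations map[string]string) string {
	if v, ok := podAnnotations[opaquePortsAnnotation]; ok {
		return v
	}
	return nsAnnotations[opaquePortsAnnotation]
}

func main() {
	pod := map[string]string{}                                  // no annotation on the pod
	ns := map[string]string{opaquePortsAnnotation: "4222,8086"} // namespace sets one
	fmt.Println(effectiveOpaquePorts(pod, ns))                  // "4222,8086"
}
```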

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-01-05 14:54:55 -05:00
Kevin Leimkuhler b830efdad7
Add OpaqueTransport field to destination protocol hints (#5421)
## What

When the destination service returns a destination profile for an endpoint,
indicate if the endpoint can receive opaque traffic.

## Why

Closes #5400

## How

When translating a pod address to a destination profile, the destination service
checks if the pod is controlled by any linkerd control plane. If it is, it can
set a protocol hint where we indicate that it supports H2 and opaque traffic.

If the pod supports opaque traffic, we need to get the port that it expects
inbound traffic on. We do this by getting the proxy container and reading its
`LINKERD2_PROXY_INBOUND_LISTEN_ADDR` environment variable. If we successfully
parse that into a port, we can set the opaque transport field in the destination
profile.
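
A small sketch of that parsing step; the environment variable name comes from this description, while the container lookup is omitted and the `4143` port in the example is only illustrative:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// inboundListenPort extracts the port from a proxy's
// LINKERD2_PROXY_INBOUND_LISTEN_ADDR value, e.g. "0.0.0.0:4143" -> 4143.
func inboundListenPort(addr string) (uint32, error) {
	_, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return 0, fmt.Errorf("invalid inbound listen addr %q: %w", addr, err)
	}
	port, err := strconv.ParseUint(portStr, 10, 32)
	if err != nil {
		return 0, fmt.Errorf("invalid inbound listen port %q: %w", portStr, err)
	}
	return uint32(port), nil
}

func main() {
	port, err := inboundListenPort("0.0.0.0:4143")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // 4143
}
```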

## Testing

A test has been added to the destination server where a pod has a
`linkerd-proxy` container. We can expect the `OpaqueTransport` field to be set
in the returned destination profile's protocol hint.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-12-23 11:06:39 -05:00
Kevin Leimkuhler 7c0843a823
Add opaque ports to destination service updates (#5294)
## Summary

This changes the destination service to start indicating whether a profile is an
opaque protocol or not.

Currently, profiles returned by the destination service are built by chaining
together updates from watching Profile and Traffic Split resources.

With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.

## Details

Watching updates to Profiles and Traffic Splits is straightforward--we watch
those resources, and if an update occurs on one associated with a service we
care about, then the update is passed through.

For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.

### When host is a Pod IP

When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, the profile indicates an
opaque protocol when the requested port is listed in the annotation.

We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
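
As a tiny illustration (not the controller's actual parser), checking whether a requested port appears in a comma-separated opaque-ports annotation value might look like this:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// portInAnnotation reports whether port appears in a comma-separated
// opaque-ports annotation value such as "3306,4444,5432".
func portInAnnotation(annotation string, port int) bool {
	for _, f := range strings.Split(annotation, ",") {
		p, err := strconv.Atoi(strings.TrimSpace(f))
		if err != nil {
			continue // skip malformed entries in this sketch
		}
		if p == port {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(portInAnnotation("3306,4444,5432", 5432)) // true
	fmt.Println(portInAnnotation("3306,4444,5432", 8080)) // false
}
```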

### When host is a Service

When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.

Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-12-18 12:38:59 -05:00
Alejandro Pedraza d661054795
Fix CLI install/upgrade overriding settings in HA (#5399)
Fixes #5385

## The problems

- `linkerd install --ha` isn't honoring flags
- `linkerd upgrade --ha` is overriding existing configs silently or failing with an error
- *Upgrading HA instances from before 2.9 to version 2.9.1 results in configs being overridden silently, or the upgrade fails with an error*

## The cause

The change in #5358 attempted to fix `linkerd install --ha`, which was only applying some of the `values-ha.yaml` defaults, by calling `charts.NewValues(true)` and merging that with the values built from `values.yaml` overridden by the flags. It turns out the `charts.NewValues()` implementation was itself merging against `values.yaml`, and as a result any flag was getting overridden by its default.

This also happened when doing `linkerd upgrade --ha` on an existing instance, which could result in silently overriding settings, or it could also fail loudly, for example when upgrading a setup that has an external issuer (in this case the issuer cert can't be read during upgrade and an error occurs, as described in #5385).

Finally, doing `linkerd upgrade` (no --ha flag) on an HA install from before 2.9 also results in configs getting overridden (silently or with an error), because in order to generate the `linkerd-config-overrides` secret, the original install flags are retrieved from `linkerd-config` via the `loadStoredValuesLegacy()` function, which then effectively ends up performing a `linkerd upgrade` with all the flags used for `linkerd install` and falls into the same trap as above.

## The fix

In `values.go` the faulty merging logic is no longer used, so now `NewValues()` only returns the default values from `values.yaml` and doesn't require an argument anymore. It calls `readDefaults()`, which now only returns the appropriate values depending on whether we're on HA or not.
There's a new function `MergeHAValues()` that merges `values-ha.yaml` into the current values (it doesn't look into `values.yaml` anymore), which is only used when processing the `--ha` flag in `options.go`.
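
For context, here is a generic sketch of the kind of merge involved; it is an illustrative recursive map merge, not the actual `MergeHAValues()` implementation, and the value keys used in `main` are only examples:

```go
package main

import "fmt"

// merge copies overrides into base recursively: nested maps are merged,
// everything else in overrides replaces the value in base.
func merge(base, overrides map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overrides {
		if ov, ok := v.(map[string]interface{}); ok {
			if bv, ok := out[k].(map[string]interface{}); ok {
				out[k] = merge(bv, ov)
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	defaults := map[string]interface{}{
		"controllerReplicas": 3,      // e.g. an HA default
		"controllerLogLevel": "info", // default log level
	}
	flags := map[string]interface{}{
		"controllerLogLevel": "debug", // set explicitly by the user
	}
	// Flags are merged last so they win; merging in the other order is
	// exactly the kind of silent override described above.
	fmt.Println(merge(defaults, flags))
}
```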

## How to test

To replicate the issue, try setting a custom value and check that it's not applied:
```bash
linkerd install --ha --controller-log-level debug | grep log.level
- -log-level=info
```

## Followup

This wasn't caught because we don't have HA integration tests. Now that our test infra is based on k3d, it should be easy to make such a test using a cluster with multiple nodes. Either that, or issue `linkerd install --ha` with additional configs and compare against a golden file.
2020-12-18 12:11:52 -05:00
Alejandro Pedraza 578d4a19e9
Have the tap APIServer refresh its cert automatically (#5388)
Followup to #5282, fixes #5272 in its totality.

This follows the same pattern as the injector/sp-validator webhooks, leveraging `FsCredsWatcher` to watch for changes in the cert files.

To reuse code from the webhooks, we moved `updateCert()` to `creds_watcher.go`, and `run()` as well (which is now called `ProcessEvents()`).

The `TestNewAPIServer` test in `apiserver_test.go` was removed as it really was just testing two things: (1) that `apiServerAuth` doesn't error, which is already covered in the following test, and (2) that the golib call `net.Listen("tcp", addr)` doesn't error, which we're not interested in testing here.

## How to test

To test that the injector/sp-validator functionality is still correct, you can refer to #5282

The steps below are similar, but focused towards the tap component:

```bash
# Create some root cert
$ step certificate create linkerd-tap.linkerd.svc ca.crt ca.key   --profile root-ca --no-password --insecure

# configure tap's caBundle to be that root cert
$ cat > linkerd-overrides.yml << EOF
tap:
  externalSecret: true
  caBundle: |
    < ca.crt contents>
EOF

# Install linkerd
$ bin/linkerd install --config linkerd-overrides.yml | k apply -f -

# Generate an intermediary cert with a short lifespan
$ step certificate create linkerd-tap.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-tap.linkerd.svc

# Create the secret using that intermediate cert
$ kubectl create secret tls \
  linkerd-tap-k8s-tls \
   --cert=ca-int.crt \
   --key=ca-int.key \
   --namespace=linkerd

# Rollout the tap pod for it to pick the new secret
$ k -n linkerd rollout restart deploy/linkerd-tap

# Tap should work
$ bin/linkerd tap -n linkerd deploy/linkerd-web
req id=0:0 proxy=in  src=10.42.0.15:33040 dst=10.42.0.11:9994 tls=true :method=GET :authority=10.42.0.11:9994 :path=/metrics
rsp id=0:0 proxy=in  src=10.42.0.15:33040 dst=10.42.0.11:9994 tls=true :status=200 latency=1779µs
end id=0:0 proxy=in  src=10.42.0.15:33040 dst=10.42.0.11:9994 tls=true duration=65µs response-length=1709B

# Wait 5 minutes and rollout tap again
$ k -n linkerd rollout restart deploy/linkerd-tap

# You'll see in the logs that the cert expired:
$ k -n linkerd logs -f deploy/linkerd-tap tap
2020/12/15 16:03:41 http: TLS handshake error from 127.0.0.1:45866: remote error: tls: bad certificate
2020/12/15 16:03:41 http: TLS handshake error from 127.0.0.1:45870: remote error: tls: bad certificate

# Recreate the secret
$ step certificate create linkerd-tap.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-tap.linkerd.svc
$ k -n linkerd delete secret linkerd-tap-k8s-tls
$ kubectl create secret tls \
  linkerd-tap-k8s-tls \
   --cert=ca-int.crt \
   --key=ca-int.key \
   --namespace=linkerd

# Wait a few moments and you'll see the certs got reloaded and tap is working again
time="2020-12-15T16:03:42Z" level=info msg="Updated certificate" addr=":8089" component=apiserver
```
2020-12-16 17:46:14 -05:00
Alex Leong cdc57d1af0
Use linkerd-jaeger extension for control plane tracing (#5299)
Now that tracing has been split out of the main control plane and into the linkerd-jaeger extension, we remove references to tracing from the main control plane including:

* removing the tracing components from the main control plane chart
* removing the tracing injection logic from the main proxy injector and inject CLI (these will be added back into the new injector in the linkerd-jaeger extension)
* removing tracing related checks (these will be added back into `linkerd jaeger check`)
* removing related tests

We also update the `--control-plane-tracing` flag to configure the control plane components to send traces to the linkerd-jaeger extension.  To make sure this works even when the linkerd-jaeger extension is installed in a non-default namespace, we also add a `--control-plane-tracing-namespace` flag which can be used to change the namespace that the control plane components send traces to.

Note that for now, only the control plane components send traces; the proxies in the control plane do not.  This is because the linkerd-jaeger injector is not yet available.  However, this change adds the appropriate namespace annotations to the control plane namespace to configure the proxies to send traces to the linkerd-jaeger extension once the linkerd-jaeger injector is available.

I tested this by doing the following:

1. bin/linkerd install | kubectl apply -f -
1. bin/helm install jaeger jaeger/charts/jaeger
1. bin/linkerd upgrade --control-plane-tracing=true | kubectl apply -f -
1. kubectl -n linkerd-jaeger port-forward svc/jaeger 16686
1. open http://localhost:16686
1. see traces from the linkerd control plane

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-12-08 14:34:26 -08:00
Tarun Pothulapati 72a0ca974d
extension: Separate multicluster chart and binary (#5293)
Fixes #5257

This branch moves the mc charts and CLI-level code to a new
top-level directory. None of the logic is changed.

Also, moves some common types into `/pkg` so that they
are accessible both to the main cli and extensions.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2020-12-04 16:36:10 -08:00
Alejandro Pedraza 4c634a3816
Have webhooks refresh their certs automatically (#5282)
* Have webhooks refresh their certs automatically

Fixes partially #5272

In 2.9 we introduced the ability to provide the certs for `proxy-injector` and `sp-validator` through some external means like cert-manager, via the new helm setting `externalSecret`.
We forgot, however, to have those services watch changes in their secrets, so whenever they were rotated they would fail with a cert error, with the only workaround being to restart those pods to pick up the new secrets.

This addresses that by first abstracting out `FsCredsWatcher` from the identity controller, which now lives under `pkg/tls`.

The webhook's logic in `launcher.go` no longer reads the certs before starting the https server; that now happens in `server.go`, which, similarly to identity, receives events from `FsCredsWatcher` and updates `Server.cert`. We're leveraging `http.Server.TLSConfig.GetCertificate`, which allows us to provide a function that returns the current cert for every incoming request.
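
As a generic illustration of that `GetCertificate` pattern (not the actual `FsCredsWatcher` wiring), a server can serve whatever certificate was most recently loaded:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
)

// certStore holds the current serving certificate and lets a file watcher
// swap it without restarting the server.
type certStore struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

func (s *certStore) set(c *tls.Certificate) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.cert = c
}

// getCertificate satisfies tls.Config.GetCertificate, so every TLS
// handshake sees whatever certificate was most recently loaded.
func (s *certStore) getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if s.cert == nil {
		return nil, fmt.Errorf("certificate not loaded yet")
	}
	return s.cert, nil
}

func main() {
	store := &certStore{}

	// A creds watcher would call store.set(...) whenever the cert files change,
	// e.g. after loading them with tls.LoadX509KeyPair(certPath, keyPath).

	srv := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{GetCertificate: store.getCertificate},
	}
	// With GetCertificate set, the cert/key file arguments can be empty.
	_ = srv.ListenAndServeTLS("", "")
}
```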

### How to test

```bash
# Create some root cert
$ step certificate create linkerd-proxy-injector.linkerd.svc ca.crt ca.key \
  --profile root-ca --no-password --insecure --san linkerd-proxy-injector.linkerd.svc

# configure injector's caBundle to be that root cert
$ cat > linkerd-overrides.yaml << EOF
proxyInjector:
  externalSecret: true
  caBundle: |
    < ca.crt contents>
EOF

# Install linkerd. The injector won't start until we create the secret below
$ bin/linkerd install --controller-log-level debug --config linkerd-overrides.yaml | k apply -f -

# Generate an intermediary cert with a short lifespan
step certificate create linkerd-proxy-injector.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-proxy-injector.linkerd.svc

# Create the secret using that intermediate cert
$ kubectl create secret tls \
  linkerd-proxy-injector-k8s-tls \
   --cert=ca-int.crt \
   --key=ca-int.key \
   --namespace=linkerd

# start following the injector log
$ k -n linkerd logs -f -l linkerd.io/control-plane-component=proxy-injector -c proxy-injector

# Inject emojivoto. The pods should be injected normally
$ bin/linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

# Wait about 5 minutes and delete a pod
$ k -n emojivoto delete po -l app=emoji-svc

# You'll see it won't be injected, and something like "remote error: tls: bad certificate" will appear in the injector logs.

# Regenerate the intermediate cert
$ step certificate create linkerd-proxy-injector.linkerd.svc ca-int.crt ca-int.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 4m --no-password --insecure --san linkerd-proxy-injector.linkerd.svc

# Delete the secret and recreate it
$ k -n linkerd delete secret linkerd-proxy-injector-k8s-tls
$ kubectl create secret tls \
  linkerd-proxy-injector-k8s-tls \
   --cert=ca-int.crt \
   --key=ca-int.key \
   --namespace=linkerd

# Wait a couple of minutes and you'll see some filesystem events in the injector log along with a "Certificate has been updated" entry
# Then delete the pod again and you'll see it gets injected this time
$ k -n emojivoto delete po -l app=emoji-svc

```
2020-12-04 16:25:59 -05:00
Alejandro Pedraza 9cbfb08a38
Bump proxy-init to v1.3.8 (#5283) 2020-11-27 09:07:34 -05:00
hodbn 92eb174e06
Add safe accessor for Global in linkerd-config (#5269)
CLI crashes if linkerd-config contains unexpected values.

Add a safe accessor that initializes an empty Global on the first
access. Refactor all accesses to use the newly introduced accessor using
gopls.

Add test for linkerd-config data without Global.
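
A hedged sketch of the accessor pattern; the `Config`/`Global` types and field here are illustrative stand-ins for the generated config types:

```go
package main

import "fmt"

// Global is a stand-in for the nested section that may be missing from an
// unexpected linkerd-config.
type Global struct {
	ClusterDomain string
}

// Config is a stand-in for the parsed linkerd-config values.
type Config struct {
	global *Global
}

// GetGlobal returns the Global section, initializing an empty one on first
// access so callers never dereference nil.
func (c *Config) GetGlobal() *Global {
	if c == nil {
		return &Global{}
	}
	if c.global == nil {
		c.global = &Global{}
	}
	return c.global
}

func main() {
	var cfg Config // a config without a Global section
	// Safe: no nil dereference even though Global was never set.
	fmt.Printf("cluster domain: %q\n", cfg.GetGlobal().ClusterDomain)
}
```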

Fixes #5215

Co-authored-by: Itai Schwartz <yitai27@gmail.com>
Signed-off-by: Hod Bin Noon <bin.noon.hod@gmail.com>
2020-11-23 12:45:58 -08:00
Kevin Leimkuhler ca86a31816
Add destination service tests for the IP path (#5266)
This adds additional tests for the destination service that assert `GetProfile`
behavior when the path is an IP address.

1. Assert that when the path is a cluster IP, the configured service profile is
   returned.
2. Assert that when the path a pod IP, the endpoint field is populated in the
   service profile returned.
3. Assert that when the path is not a cluster or pod IP, the default service
   profile is returned.
4. Assert that when path is a pod IP with or without the controller annotation,
the endpoint has or does not have a protocol hint

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-11-23 13:17:05 -05:00
Alejandro Pedraza 4687dc52aa
Refactor webhook framework to allow webhooks to define their flags (#5256)
* Refactor webhook framework to allow webhooks to define their flags

Pulled the flag parsing logic out of `launcher.go` and moved it into the `Main` methods of the webhooks (under `controller/cmd/proxy-injector/main.go` and `controller/cmd/sp-validator/main.go`), so that individual webhooks themselves can define the flags they want to use.

Also no longer require that webhooks have cluster-wide access.

Finally, renamed the type `webhook.handlerFunc` to `webhook.Handler` so it can be exported. This will be used in the upcoming jaeger webhook.
2020-11-23 10:40:30 -05:00
Kevin Leimkuhler 92f9387997
Check correct label value when setting protocol hint (#5267)
This fixes an issue where the protocol hint was always set on endpoint responses.
We now check the correct value, which determines whether the pod has the required label.

A test for this has been added to #5266.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-11-20 13:32:50 -08:00
Oliver Gould 375ffd782f
proxy: v2.121.0 (#5253)
This release changes error handling to teardown the server-side
connection when an unexpected error is encountered.

Additionally, the outbound TCP routing stack can now skip redundant
service discovery lookups when profile responses include endpoint
information.

Finally, the cache implementation has been updated to reduce latency by
removing unnecessary buffers.

---

* h2: enable HTTP/2 keepalive PING frames (linkerd/linkerd2-proxy#737)
* actions: Add timeouts to GitHub actions (linkerd/linkerd2-proxy#738)
* outbound: Skip endpoint resolution on profile hint (linkerd/linkerd2-proxy#736)
* Add a FromStr for dns::Name (linkerd/linkerd2-proxy#746)
* outbound: Avoid redundant TCP endpoint resolution (linkerd/linkerd2-proxy#742)
* cache: Make the cache cloneable with RwLock (linkerd/linkerd2-proxy#743)
* http: Teardown serverside connections on error (linkerd/linkerd2-proxy#747)
2020-11-18 16:55:53 -08:00
Kevin Leimkuhler e65f216d52
Add endpoint to GetProfile response (#5227)
Context: #5209

This updates the destination service to set the `Endpoint` field in `GetProfile`
responses.

The `Endpoint` field is only set if the IP maps to a Pod--not a Service.

Additionally in this scenario, the default Service Profile is used as the base
profile so no other significant fields are set.

### Examples

```
# GetProfile for an IP that maps to a Service
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.43.222.0:9090
INFO[0000] fully_qualified_name:"linkerd-prometheus.linkerd.svc.cluster.local"  retry_budget:{retry_ratio:0.2  min_retries_per_second:10  ttl:{seconds:10}}  dst_overrides:{authority:"linkerd-prometheus.linkerd.svc.cluster.local.:9090"  weight:10000}
```

Before:

```
# GetProfile for an IP that maps to a Pod
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.20
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}}
```


After:

```
# GetProfile for an IP that maps to a Pod
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.20
INFO[0000] retry_budget:{retry_ratio:0.2  min_retries_per_second:10  ttl:{seconds:10}}  endpoint:{addr:{ip:{ipv4:170524692}}  weight:10000  metric_labels:{key:"control_plane_ns"  value:"linkerd"}  metric_labels:{key:"deployment"  value:"fast-1"}  metric_labels:{key:"pod"  value:"fast-1-5cc87f64bc-9hx7h"}  metric_labels:{key:"pod_template_hash"  value:"5cc87f64bc"}  metric_labels:{key:"serviceaccount"  value:"default"}  tls_identity:{dns_like_identity:{name:"default.default.serviceaccount.identity.linkerd.cluster.local"}}  protocol_hint:{h2:{}}}
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-11-18 15:41:25 -05:00
Alejandro Pedraza 5a707323e6
Update proxy-init to v1.3.7 (#5221)
This upgrades both the proxy-init image itself, and the go dependency on
proxy-init as a library, which fixes CNI in k3s and any host using
binaries coming from BusyBox, where `nsenter` has an
issue parsing arguments (see rancher/k3s#1434).
2020-11-13 15:59:14 -05:00
Tarun Pothulapati 4c106e9c08
cli: make check return SkipError when there is no prometheus configured (#5150)
Fixes #5143

Some public-api calls that the check uses rely on Prometheus being
available. This change updates ListPods in public-api
to still return the pods even when Prometheus is not configured.

For a test that exclusively checks Prometheus metrics, we have a gate
which checks whether Prometheus is configured and skips the test otherwise.

Signed-off-by: Tarun Pothulapati tarunpothulapati@outlook.com
2020-10-29 19:57:11 +05:30
Alex Leong 4f34ce8e2f
Empty the stored addresses when the endpoint translator gets a NoEndpoints message (#5126)
Signed-off-by: Alex Leong <alex@buoyant.io>
2020-10-22 17:01:03 -07:00
Oliver Gould d0bce594ea
Remove defunct proxy config variables (#5109)
The proxy no longer honors DESTINATION_GET variables, as profile lookups
inform when endpoint resolution is performed.  Also, there is no longer
a router capacity limit.
2020-10-20 16:13:53 -07:00
Oliver Gould c5d3b281be
Add 100.64.0.0/10 to the set of discoverable networks (#5099)
It appears that Amazon can use the `100.64.0.0/10` network, which is
technically private, for a cluster's Pod network.

Wikipedia describes the network as:

> Shared address space for communications between a service provider
> and its subscribers when using a carrier-grade NAT.

In order to avoid requiring additional configuration on EKS clusters, we
should permit discovery for this network by default.
2020-10-19 12:59:44 -07:00
Kevin Leimkuhler eff50936bf
Fix --all-namespaces flag handling (#5085)
## Motivations

Closes #5080

## Solution

When the `--all-namespaces` (`-A`) flag is set for the `linkerd edges` command,
ignore the `namespace` value set by default or `-n`.

This is similar to the behavior for `kubectl`. `kubectl get -A -n linkerd pods`
showing pods in all namespaces.

### Behavior changes

With linkerd and emojivoto installed, this results in:

Before:

```
❯ linkerd edges -A pods
No edges found.
```

After:

```
❯ linkerd edges -A pods
SRC                                   DST                                       SRC_NS      DST_NS      SECURED       
vote-bot-6cb9cb9569-wl6w5             web-5d69bcfdb7-mxf8f                      emojivoto   emojivoto   √  
web-5d69bcfdb7-mxf8f                  emoji-7dc976587b-rb9c5                    emojivoto   emojivoto   √  
web-5d69bcfdb7-mxf8f                  voting-bdf4f778c-pjkjg                    emojivoto   emojivoto   √  
linkerd-prometheus-68d6897d75-ghmgm   emoji-7dc976587b-rb9c5                    linkerd     emojivoto   √  
linkerd-prometheus-68d6897d75-ghmgm   vote-bot-6cb9cb9569-wl6w5                 linkerd     emojivoto   √  
linkerd-prometheus-68d6897d75-ghmgm   voting-bdf4f778c-pjkjg                    linkerd     emojivoto   √  
linkerd-prometheus-68d6897d75-ghmgm   web-5d69bcfdb7-mxf8f                      linkerd     emojivoto   √  
linkerd-controller-7d965cf78d-qw6xj   linkerd-prometheus-68d6897d75-ghmgm       linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-controller-7d965cf78d-qw6xj       linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-destination-74dbb9c46b-nkxgh      linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-grafana-5d9fb67dc6-sn2l8          linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-identity-c875b5d58-b756v          linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-proxy-injector-767b55988d-n9r6f   linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-sp-validator-6c8df84fb9-4w8kc     linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-tap-777fbf7656-p87dm              linkerd     linkerd     √  
linkerd-prometheus-68d6897d75-ghmgm   linkerd-web-546c9444b5-68xpx              linkerd     linkerd     √
```

`linkerd edges -A -n linkerd pods` results in all edges as well (the result
above).

The behavior of `linkerd edges pods` does not change and shows edges in the
`default` namespace.

```
❯ linkerd edges pods
No edges found.
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-10-16 16:49:10 -04:00
Oliver Gould 4f16a234aa
Add a default set of ports to bypass the proxy (#5093)
The proxy has a default, hardcoded set of ports on which it doesn't do
protocol detection (25, 587, 3306 -- all of which are server-first
protocols). In a recent change, this default set was removed from
the outbound proxy, since there was no way to configure it to anything
other than the default set. I had thought that there was a default set
applied to proxy-init, but this appears to not be the case.

This change adds these ports to the default Helm values to restore the
prior behavior.

I have also elected to include 443 in this set, as it is generally our
recommendation to avoid proxying HTTPS traffic, since the proxy provides
very little value on these connections today.

Additionally, the memcached port 11211 is skipped by default, as clients
do not issue any sort of preamble that is immediately detectable.

These defaults may change in the future, but seem like good choices for
the 2.9 release.
2020-10-16 11:53:41 -07:00
Alejandro Pedraza 777b06ac55
Expand 'linkerd edges' to work with TCP connections (#5040)
* Expand 'linkerd edges' to work with TCP connections

Fixes #4999

Before:
```
$ bin/linkerd edges po -owide
SRC                                   DST                                    SRC_NS    DST_NS    CLIENT_ID   SERVER_ID   SECURED
linkerd-prometheus-764ddd4f88-t6c2j   rabbitmq-controller-5c6cf7cc6d-8lxp2   linkerd   default                           √
linkerd-prometheus-764ddd4f88-t6c2j   temp                                   linkerd   default                           √

```

After:
```
$ bin/linkerd edges po -owide
SRC                                   DST                                    SRC_NS    DST_NS    CLIENT_ID         SERVER_ID         SECURED
temp                                  rabbitmq-controller-5c6cf7cc6d-5fpsc   default   default   default.default   default.default   √
linkerd-prometheus-66fb97b7fc-vpnxf   rabbitmq-controller-5c6cf7cc6d-5fpsc   linkerd   default                                       √
linkerd-prometheus-66fb97b7fc-vpnxf   temp                                   linkerd   default                                       √
```

With the latest proxy upgrade to v2.113.0 (#5037), the `tcp_open_total` metric now contains the `client_id` label so that we can replace the http-only metric `response_total` with this one to determine edges for TCP-only connections.

This change basically performs the same query as before, but two times, one for `response_total` and another for `tcp_open_total`. For each resulting entry, the latter is kept if `client_id` is present, otherwise the former is used (if present at all). That way things keep on working for older proxies.
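
A rough sketch of the selection rule (prefer a `tcp_open_total` row when it carries `client_id`, otherwise keep the `response_total` row); the types are simplified stand-ins for the Prometheus query results:

```go
package main

import "fmt"

// edgeSample is a simplified stand-in for one Prometheus result row.
type edgeSample struct {
	Src, Dst, ClientID string
}

// pickEdges prefers tcp_open_total rows that carry a client_id and falls
// back to response_total rows otherwise, keyed by (src, dst).
func pickEdges(httpRows, tcpRows []edgeSample) map[[2]string]edgeSample {
	out := map[[2]string]edgeSample{}
	for _, r := range httpRows {
		out[[2]string{r.Src, r.Dst}] = r
	}
	for _, r := range tcpRows {
		if r.ClientID != "" {
			out[[2]string{r.Src, r.Dst}] = r // newer proxies: identified TCP edge wins
		}
	}
	return out
}

func main() {
	http := []edgeSample{{Src: "web", Dst: "emoji"}}
	tcp := []edgeSample{{Src: "temp", Dst: "rabbitmq", ClientID: "default.default"}}
	fmt.Println(pickEdges(http, tcp))
}
```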

Disclaimers:
- This doesn't fix #3706: if two sources connect to the same destination there's no way to tell them apart from the metrics perspective and their edges can get mangled. To fix that, the proxy would have to expose `src_resource` labels in the inbound `tcp_open_total` metric.
- Note that connections coming from prometheus are still unidentified. The reason is that those hit the proxy's admin server (instead of the main container), which doesn't expose metrics.
2020-10-12 09:14:39 -05:00
Alejandro Pedraza 11a5d1d427
Fix Heartbeat mem and cpu stats (#5042)
Since k8s 1.16, cadvisor uses the `container` label instead of
`container_name` in the prometheus metrics it exposes.
The heartbeat queries were using the latter, so they were broken
for k8s versions since 1.16.

Note that the `p99-handle-us` value is still missing because the
`request_handle_us` metrics is always zero.
2020-10-08 16:31:16 -05:00
Tarun Pothulapati 1e7bb1217d
Update Injection to use new linkerd-config.values (#5036)
This PR Updates the Injection Logic (both CLI and proxy-injector)
to use `Values` struct instead of protobuf Config, part of our move
in removing the protobuf.

This does not touch any of the flags, install related code.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

Co-authored-by: Alex Leong <alex@buoyant.io>
2020-10-07 09:54:34 -07:00
Tarun Pothulapati 5e774aaf05
Remove dependency of linkerd-config for control plane components (#4915)
* Remove dependency of linkerd-config for most control plane components

This PR removes the dependency on `linkerd-config` from most control
plane components by passing all that information through CLI
flags. As most of these components require only a couple of flags, passing
them as flags can be more helpful, as updates to the flags trigger a
rollout, unlike a ConfigMap update.

This does not update the proxy-injector as it needs a lot more data
and mounting `linkerd-config` is better.
2020-10-06 22:19:18 +05:30
Kevin Leimkuhler 6b7a39c9fa
Set FQN in profile resolutions (#5019)
## Motivation

Closes #5016

Depends on linkerd/linkerd2-proxy-api#44

## Solution

A `profileTranslator` exists for each service and now has a new
`fullyQualifiedName` field.

This field is used to set the `FullyQualifiedName` field of
`DestinationProfile`s each time an update is sent.

In the case that no service profile exists for a service, a default
`DestinationProfile` is created and we can use the field to set the correct
name.

In the case that a service profile does exist for a service, we still use this
field to set the name to keep it consistent.
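
A minimal sketch of carrying the name and stamping it on every update; the types here are illustrative, not the actual `DestinationProfile` protobuf or the real `profileTranslator` wiring:

```go
package main

import "fmt"

// DestinationProfile is a simplified stand-in for the profile message sent
// to clients.
type DestinationProfile struct {
	FullyQualifiedName string
	Routes             []string
}

// profileTranslator keeps the service's FQN so every update it emits,
// default or not, carries the same name.
type profileTranslator struct {
	fullyQualifiedName string
	send               func(DestinationProfile)
}

func (pt *profileTranslator) update(routes []string) {
	pt.send(DestinationProfile{
		FullyQualifiedName: pt.fullyQualifiedName, // set on every update
		Routes:             routes,
	})
}

func main() {
	pt := &profileTranslator{
		fullyQualifiedName: "linkerd-identity.linkerd.svc.cluster.local",
		send:               func(p DestinationProfile) { fmt.Printf("%+v\n", p) },
	}
	pt.update(nil)                  // default profile: name still set
	pt.update([]string{"GET /api"}) // profile with routes: same name
}
```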

### Example

Install linkerd on a cluster and run the destination server:

```
go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```

Get the IP of a service. Here, we'll get the ip for `linkerd-identity`:

```
> kubectl get -n linkerd svc/linkerd-identity
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
linkerd-identity   ClusterIP   10.43.161.68   <none>        8080/TCP   4h25m
```

Get the profile of `linkerd-identity` from service name or IP and note the
`FullyQualifiedName` field:

```
> go run controller/script/destination-client/main.go -method getProfile -path 10.43.161.68:8080
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```

```
> go run controller/script/destination-client/main.go -method getProfile -path linkerd-identity.linkerd.svc.cluster.local
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-10-01 11:06:00 -04:00
Tarun Pothulapati d0caaa86c4
Bump k8s client-go to v0.19.2 (#5002)
Fixes #4191 #4993

This bumps Kubernetes client-go to the latest v0.19.2 (We had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5

This consists of the following changes:

- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use context.Context as the first argument, in all code paths that touch the k8s client-go interface

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2020-09-28 12:45:18 -05:00
Alejandro Pedraza b30d35f46a
Reset service-mirror component when target's k8s API is unreachable (#4996)
When the service-mirror component can't reach the target's k8s API, the goroutine blocks and it can't be unblocked.

This was happening specifically in the case of the multicluster integration test (still to be pushed), where the source and target clusters are created in quick succession and the target's API service doesn't always have time to be exposed before being requested by the service mirror.

The fix consists of no longer having restartClusterWatcher be side-effecting and instead returning an error. If that error is not nil, then the link watcher is stopped and reset after 10 seconds.
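
A hedged sketch of that retry pattern; the function name and the simulated failure are illustrative, not the actual service-mirror code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const retryAfter = 10 * time.Second

	attempts := 0
	// restartClusterWatcher stands in for the now error-returning call that
	// (re)connects to the target cluster's API; here it succeeds on the third try.
	restartClusterWatcher := func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("target API not reachable yet (attempt %d)", attempts)
		}
		return nil
	}

	for {
		err := restartClusterWatcher()
		if err == nil {
			fmt.Println("cluster watcher running")
			break
		}
		// Instead of blocking forever, stop the link watcher, log, and retry
		// after a delay so the component can recover once the API is exposed.
		fmt.Printf("failed to restart cluster watcher: %v; retrying in %s\n", err, retryAfter)
		time.Sleep(retryAfter)
	}
}
```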
2020-09-25 11:00:28 -05:00
Zahari Dichev 0b649e3ed7
Remove double slash (#4985)
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2020-09-21 12:15:54 -07:00
Oliver Gould 6d67b84447
profiles: Eliminate default timeout (#4958)
* profiles: Eliminate default timeout
2020-09-10 14:00:18 -07:00
Alejandro Pedraza ccf027c051
Push docker images to ghcr.io instead of gcr.io (#4953)
* Push docker images to ghcr.io instead of gcr.io

The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.

Note that besides the changes to cloud_integration.yml and release.yml, there was a change to the upgrade-stable integration test so that we do linkerd upgrade --addon-overwrite to reset the addons settings because in stable-2.8.1 the Grafana image was pegged to gcr.io/linkerd-io/grafana in linkerd-config-addons. This will need to be mentioned in the 2.9 upgrade notes.

Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.

Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
2020-09-10 15:16:24 -05:00
Oliver Gould 7ee638bb0c
inject: Configure the proxy to discover profiles for unnamed services (#4960)
The proxy performs endpoint discovery for unnamed services, but not
service profiles.

The destination controller and proxy have been updated to support
lookups for unnamed services in linkerd/linkerd2#4727 and
linkerd/linkerd2-proxy#626, respectively.

This change modifies the injection template so that the
`proxy.destinationGetNetworks` configuration enables profile
discovery for all networks on which endpoint discovery is permitted.
2020-09-10 12:44:00 -07:00
Zahari Dichev 77c88419b8
Make destination and identity services headless (#4923)
* Make destination and identity svcs headless

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2020-09-02 14:53:38 -05:00
Ali Ariff 5186383c81
Add ARM64 Integration Test (#4897)
* Add ARM64 Integration Test

Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>
2020-08-28 10:38:40 -07:00
Alex Leong 9d3cf6ee4d
Move most service-mirror code out of cmd package (#4901)
All of the code for the service mirror controller lives in the `linkerd/linkerd2/controller/cmd` package.  It is typical for control plane components to only have a `main.go` entrypoint in the cmd package.  This can sometimes make it hard to find the service mirror code since I wouldn't expect it to be in the cmd package.

We move the majority of the code to a dedicated controller package, leaving only main.go in the cmd package.  This is purely organizational; no behavior change is expected.

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-08-27 14:17:18 -07:00
Matei David 7ed904f31d
Enable endpoint slices when upgrading through CLI (#4864)
## What/How
@adleong  pointed out in #4780 that when enabling slices during an upgrade, the new value does not persist in the `linkerd-config` ConfigMap. I took a closer look and it seems that we were never overwriting the values in case they were different.

* To fix this, I added an if block when validating and building the upgrade options -- if the current flag value differs from what we have in the ConfigMap, then change the ConfigMap value.
* When doing so, I made sure to check that if the cluster does not support `EndpointSlices` but the flag is set to true, we will error out. This is done similarly (essentially copy & paste) to what's in the install part.
* Additionally, I have noticed that the helm ConfigMap template stored the flag value under the `enableEndpointSlices` field name. I assume this was not changed in the initial PR to reflect the changes made in the protocol buffer. The API (and thus the CLI) uses the field name `endpointSliceEnabled` instead. I have changed the config template so that helm installations will use the same field, which can then be used in the destination service or other components that may implement slice support in the future.

Signed-off-by: Matei David <matei.david.35@gmail.com>
2020-08-24 14:34:50 -07:00
Kevin Leimkuhler c2301749ef
Always return destination overrides for services (#4890)
## Motivation

#4879

## Solution

When no traffic split exists for services, return a single destination override
with a weight of 100%.
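
As a sketch of that fallback (the `100000 == 100%` weight convention is taken from the example output below; the types are simplified stand-ins):

```go
package main

import "fmt"

// WeightedDst is a simplified stand-in for a destination override entry.
type WeightedDst struct {
	Authority string
	Weight    uint32
}

// dstOverrides returns the traffic-split overrides for an authority, or a
// single 100% override when no TrafficSplit exists for the service.
func dstOverrides(authority string, splits []WeightedDst) []WeightedDst {
	if len(splits) > 0 {
		return splits
	}
	return []WeightedDst{{Authority: authority, Weight: 100000}} // 100%
}

func main() {
	fmt.Println(dstOverrides("linkerd-identity.linkerd.svc.cluster.local.:8080", nil))
}
```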

Using the destination client on a new linkerd installation, this results in the
following output for `linkerd-identity` service:

```
❯ go run controller/script/destination-client/main.go -method getProfile -path linkerd-identity.linkerd.svc.cluster.local:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"linkerd-identity.linkerd.svc.cluster.local.:8080" weight:100000} 
INFO[0000]
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-08-19 12:25:58 -07:00