Commit Graph

12 Commits

Author SHA1 Message Date
Krzysztof Dryś f92e77f7f0
Remove legacy upgrade and its references (#7309)
With [linkerd2#5008](https://github.com/linkerd/linkerd2/issues/5008) and associated PRs, we changed the way configuration is handled by storing a Helm values struct inside the configmap.

Now that we have had one stable release with the new configuration, we no longer use or need to maintain the legacy config. This commit removes all the associated logic, protobuf files, and references.

Changes Include:

- Removed [`proto/config/config.proto`](https://github.com/linkerd/linkerd2/blob/main/proto/config/config.proto)
- Changed [`bin/protoc-go.sh`](https://github.com/linkerd/linkerd2/blob/main/bin/protoc-go.sh) to not include `config.proto`
- Changed [`FetchLinkerdConfigMap()`](741fde679b/pkg/healthcheck/healthcheck.go (L1768)) in `healthcheck.go` to return only the configmap, without the pb type.
- Changed [`FetchCurrentConfiguration()`](741fde679b/pkg/healthcheck/healthcheck.go (L1647)) to only unmarshal and use the Helm values struct from the configmap (the function already carried a TODO to refactor it once the values struct became the default, which has now happened); see the sketch after this list.
- Removed [`upgrade_legacy.go`](https://github.com/linkerd/linkerd2/blob/main/cli/cmd/upgrade_legacy.go)
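
For illustration, here is a minimal sketch of what reading the configuration looks like once only the Helm values struct is stored, assuming the ConfigMap is named `linkerd-config` and keeps the values under a `values` key; the `Values` type below is a simplified stand-in for the real chart values struct, not the actual linkerd2 code:

```go
package healthcheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/yaml"
)

// Values is a simplified stand-in for the Helm values struct stored in the
// linkerd-config ConfigMap.
type Values struct {
	LinkerdVersion     string `json:"linkerdVersion"`
	ControllerReplicas uint   `json:"controllerReplicas"`
}

// fetchCurrentConfiguration returns the linkerd-config ConfigMap and the Helm
// values struct unmarshalled from it; there is no legacy protobuf fallback.
func fetchCurrentConfiguration(ctx context.Context, k kubernetes.Interface, ns string) (*corev1.ConfigMap, *Values, error) {
	cm, err := k.CoreV1().ConfigMaps(ns).Get(ctx, "linkerd-config", metav1.GetOptions{})
	if err != nil {
		return nil, nil, err
	}
	values := &Values{}
	if err := yaml.Unmarshal([]byte(cm.Data["values"]), values); err != nil {
		return nil, nil, err
	}
	return cm, values, nil
}
```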

Signed-off-by: Krzysztof Dryś <krzysztofdrys@gmail.com>
2021-11-29 20:08:58 +05:30
Kevin Leimkuhler 01cbe616f1
Honor Server `proxyProtocol` in destination service `Get` with policy CRD APIs (#7184)
This change ensures that if a Server exists with `proxyProtocol: opaque` that selects an endpoint backed by a pod, destination requests for that pod reflect the fact that it handles opaque traffic.

Currently, the only way that opaque traffic is honored in the destination service is if the pod has the `config.linkerd.io/opaque-ports` annotation. With the introduction of Servers though, users can set `server.Spec.ProxyProtocol: opaque` to indicate that if a Server selects a pod, then traffic to that pod's `server.Spec.Port` should be opaque. Currently, the destination service does not take this into account.

There is an existing change open that _also_ adds this functionality; it takes a different approach by creating a policy server client for each endpoint that a destination has. For `Get` requests on a service, the number of clients scales with the number of endpoints that back that service.

This change fixes that issue by instead creating a Server watch in the endpoint watcher and sending updates through to the endpoint translator.

The two primary scenarios to consider are:

### A `Get` request for some service is streaming when a Server is created/updated/deleted
When a Server is created or updated, the endpoint watcher iterates through its endpoint watches (`servicePublisher` -> `portPublisher`) and if it selects any of those endpoints, the port publisher sends an update if the Server has marked that port as opaque.

When a Server is deleted, the endpoint watcher once again iterates through its endpoint watches and deletes the address set's `OpaquePodPorts` field—ensuring that updates have been cleared of Server overrides.

### A `Get` request for some service happens after a Server is created
When a `Get` request occurs (or new endpoints are added—they both take the same path), we must check if any of those endpoints are selected by some existing Server. If so, we have to take that into account when creating the address set.

This part of the change gives me a little concern as we first must get all the Servers on the cluster and then create a set of _all_ the pod-backed endpoints that they select in order to determine if any of these _new_ endpoints are selected.
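
As a rough illustration of that selection check in both scenarios (simplified types, not the actual watcher code), deciding whether a pod-backed endpoint should be marked opaque amounts to matching each Server's `podSelector` against the pod's labels:

```go
package watcher

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// server is a simplified view of the policy.linkerd.io Server resource; the
// real CRD also allows a named port and is namespaced.
type server struct {
	podSelector   *metav1.LabelSelector
	port          int32
	proxyProtocol string
}

// opaquePortsForPod returns the set of ports on a pod that existing Servers
// mark as opaque, which the endpoint translator can fold into the address set
// it builds for new endpoints.
func opaquePortsForPod(pod *corev1.Pod, servers []server) map[int32]struct{} {
	opaque := map[int32]struct{}{}
	for _, srv := range servers {
		if srv.proxyProtocol != "opaque" {
			continue
		}
		sel, err := metav1.LabelSelectorAsSelector(srv.podSelector)
		if err != nil {
			continue // malformed selector; skip rather than fail the Get
		}
		if sel.Matches(labels.Set(pod.Labels)) {
			opaque[srv.port] = struct{}{}
		}
	}
	return opaque
}
```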

## Testing
Right now this can be tested by starting up the destination service locally and running `Get` requests on a service that has endpoints selected by a Server:

**app.yaml**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  labels:
    app: pod
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  selector:
    app: pod
  ports:
  - name: http
    port: 80
---
apiVersion: policy.linkerd.io/v1alpha1
kind: Server
metadata:
  name: srv
  labels:
    policy: srv
spec:
  podSelector:
    matchLabels:
      app: pod
  port: 80
  proxyProtocol: HTTP/1
```

```bash
$ go run controller/script/destination-client/main.go -path svc.default.svc.cluster.local:80
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-11-23 20:35:53 -07:00
Alejandro Pedraza 61f443ad05
Schedule heartbeat 10 mins after install (#5973)
* Schedule heartbeat 10 mins after install

... for the Helm installation method, thus aligning it with the CLI
installation method, to reduce the midnight peak on the receiving end.

The logic added into the chart is now reused by the CLI as well.
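
A sketch of the scheduling idea (not the exact chart/CLI code): derive a daily cron expression that fires 10 minutes after the moment of install, so clusters no longer all report at midnight.

```go
package main

import (
	"fmt"
	"time"
)

// heartbeatSchedule returns a daily cron schedule offset 10 minutes from the
// install time, e.g. "25 14 * * *" for an install at 14:15 UTC.
func heartbeatSchedule(installTime time.Time) string {
	t := installTime.Add(10 * time.Minute).UTC()
	return fmt.Sprintf("%d %d * * *", t.Minute(), t.Hour())
}

func main() {
	fmt.Println(heartbeatSchedule(time.Now()))
}
```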

Also, set `concurrencyPolicy=Replace` so that when a job fails and it's
retried, the retries get canceled when the next scheduled job is triggered.

Finally, the Go client only failed when the connection itself failed;
successful connections with a non-200 response status were still considered
successful, and thus the job wasn't retried. Fixed that as well.
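
A minimal sketch of the client-side fix (not the exact heartbeat code): a non-200 status is turned into an error so the run fails and the CronJob retry policy kicks in.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func checkVersion(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // connection-level failure, as before
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Previously a successful connection was enough; now a non-200
		// status also fails the job so that it gets retried.
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
		return fmt.Errorf("version check returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkVersion("https://versioncheck.linkerd.io/version.json?source=heartbeat"); err != nil {
		fmt.Fprintln(os.Stderr, "heartbeat failed:", err)
		os.Exit(1) // non-zero exit marks the job run as failed
	}
}
```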
2021-03-31 07:49:36 -05:00
Dennis Adjei-Baah 57db567204
send serviceprofile counts in heartbeat URL (#5775)
This change counts the number of service profiles installed in a cluster
and adds that info to the heartbeat HTTP request.
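
One way such a count can be gathered (a sketch using the dynamic client; the exact linkerd2 code may differ) is to list the `serviceprofiles.linkerd.io` custom resources across all namespaces:

```go
package heartbeat

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// serviceProfileCount lists ServiceProfiles in every namespace and returns
// how many are installed, ready to be added as a heartbeat query parameter.
func serviceProfileCount(ctx context.Context, client dynamic.Interface) (int, error) {
	gvr := schema.GroupVersionResource{
		Group:    "linkerd.io",
		Version:  "v1alpha2",
		Resource: "serviceprofiles",
	}
	list, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(list.Items), nil
}
```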

Fixes #5474

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-02-19 10:11:26 -05:00
Dennis Adjei-Baah 89ec6056c0
Report extension usage (#5761)
This change modifies `linkerd-heartbeat` to report on all linkerd
extensions being used in a cluster.

The heartbeat URL now includes extension names and a value of 
`"1"` if an extension is installed. 
```
https://versioncheck.linkerd.io/version.json?install-time=1613518301&k8s-version=v1.20.2&linkerd-jaeger=1&linkerd-multicluster=1&linkerd-viz=1
```
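
A rough sketch of how those parameters could be assembled, assuming extensions are discovered via the `linkerd.io/extension` namespace label (the label key and parameter naming here are assumptions, not necessarily the exact heartbeat code):

```go
package heartbeat

import (
	"context"
	"net/url"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// extensionParams adds a "<extension>=1" query parameter for every namespace
// carrying the linkerd.io/extension label, e.g. linkerd-viz=1.
func extensionParams(ctx context.Context, k kubernetes.Interface, v url.Values) error {
	nsList, err := k.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		LabelSelector: "linkerd.io/extension",
	})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		// e.g. namespace "linkerd-viz" labelled linkerd.io/extension=viz
		v.Set("linkerd-"+ns.Labels["linkerd.io/extension"], "1")
	}
	return nil
}
```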

Fixes #5516

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-02-18 07:33:35 -08:00
Alejandro Pedraza 11a5d1d427
Fix Heartbeat mem and cpu stats (#5042)
Since k8s 1.16, cadvisor uses the `container` label instead of
`container_name` in the Prometheus metrics it exposes.
The heartbeat queries were using the latter, so they were broken
for k8s versions 1.16 and later.
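
For illustration, the kind of query change involved (the exact PromQL used by the heartbeat may differ), selecting on `container` rather than `container_name`:

```go
package heartbeat

// Illustrative Prometheus queries (not the exact heartbeat code) showing the
// cadvisor label rename in k8s 1.16: the old query matched container_name,
// the fixed one matches container.
const (
	oldMaxMemQuery = `max(container_memory_working_set_bytes{container_name="linkerd-proxy"})`
	newMaxMemQuery = `max(container_memory_working_set_bytes{container="linkerd-proxy"})`
)
```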

Note that the `p99-handle-us` value is still missing because the
`request_handle_us` metric is always zero.
2020-10-08 16:31:16 -05:00
Tarun Pothulapati d0caaa86c4
Bump k8s client-go to v0.19.2 (#5002)
Fixes #4191 #4993

This bumps Kubernetes client-go to the latest v0.19.2 (we had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5.

This consists of the following changes:

- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use `context.Context` as the first argument in all code paths that touch the k8s client-go interface (see the sketch after this list)
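
To illustrate that last point, a minimal example of the calling convention client-go v0.19 expects (a sketch, not code from this PR): every API call now takes a `context.Context` and an options struct.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// client-go v0.19: context.Context is the first argument on every call,
	// and list/get options are passed by value.
	pods, err := client.CoreV1().Pods("linkerd").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods in the linkerd namespace\n", len(pods.Items))
}
```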

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2020-09-28 12:45:18 -05:00
Dax McDonald 3088f404ce
Upgrade prometheus to v1.2.1 (#3541)
Signed-off-by: Dax McDonald <dax@rancher.com>
2019-12-11 15:26:16 -08:00
Alejandro Pedraza 4b6254b52e
Replaced `uuid` with `uid` from linkerd-config resource (#3694)
* Replaced `uuid` with `uid` from linkerd-config resource

Fixes #3621

Removed the old `uuid` for identifying linkerd installations, and
replaced it with the `uid` property from the `linkerd-config` ConfigMap.
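
A minimal sketch of where the new identifier comes from (not the exact linkerd2 code): it is the Kubernetes object UID of the `linkerd-config` ConfigMap, which Kubernetes keeps stable for the lifetime of the resource.

```go
package heartbeat

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// installationID returns the reported uid: the object UID of the
// linkerd-config ConfigMap, which stays the same across config updates and
// upgrades (unlike the previously generated uuid).
func installationID(ctx context.Context, k kubernetes.Interface, controlPlaneNS string) (string, error) {
	cm, err := k.CoreV1().ConfigMaps(controlPlaneNS).Get(ctx, "linkerd-config", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return string(cm.GetUID()), nil
}
```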

I tested that this `uid` remains the same by updating the config and
also upgrading linkerd, using both the CLI and Helm.

Note that this required granting `linkerd-web` RBAC access to the
`linkerd-config` ConfigMap.

I also added an integration test to verify the stability of the uid.
2019-11-13 13:56:01 -05:00
Alejandro Pedraza 8cf4494e78
Add proxy-injector-injections count to heartbeat (#3655)
Fixes #3059
2019-10-31 11:09:00 -05:00
Andrew Seigner 3b55e2e87d
Add container cpu and mem to heartbeat requests (#3238)
PR #3217 re-introduced container metrics collection to
linkerd-prometheus. This enabled linkerd-heartbeat to collect mem and
cpu metrics at the container level.

Add container cpu and mem metrics to heartbeat requests. For each of
(destination, prometheus, linkerd-proxy), collect maximum memory and p95
cpu.

Concretely, this introduces 7 new query params to heartbeat requests:
- p99-handle-us
- max-mem-linkerd-proxy
- max-mem-destination
- max-mem-prometheus
- p95-cpu-linkerd-proxy
- p95-cpu-destination
- p95-cpu-prometheus
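
For example, a sketch (values are illustrative placeholders, not real measurements) of how such parameters could be appended to the check URL with `url.Values`:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	v := url.Values{}
	v.Set("source", "heartbeat")
	// Placeholder values; the real ones come from Prometheus queries.
	v.Set("p99-handle-us", "85")
	v.Set("max-mem-linkerd-proxy", "52428800") // bytes
	v.Set("p95-cpu-linkerd-proxy", "0.02")     // cores
	fmt.Println("https://versioncheck.linkerd.io/version.json?" + v.Encode())
}
```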

Part of #2961

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-08-14 12:04:08 -07:00
Andrew Seigner 64ed8e4a74
Introduce Cluster Heartbeat cronjob (#3056)
`linkerd check`, the web dashboard, and Grafana all perform version
checks to validate Linkerd is up to date. It's common for users to
seldom execute these codepaths, which makes it difficult to identify
which versions of Linkerd are currently in use and which environments
it is being run in; that information helps prioritize testing and backports.

Introduce a `heartbeat` CronJob to the default Linkerd install. The
cronjob executes every 24 hours, starting from 5 minutes after
`linkerd install` is run.

Example check URL:
https://versioncheck.linkerd.io/version.json?
  install-time=1562761177&
  k8s-version=v1.15.0&
  meshed-pods=8&
  rps=3&
  source=heartbeat&
  uuid=cc4bb700-3314-426a-9f0f-ec588b9df020&
  version=git-b97ee9f7

Fixes #2961

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-07-23 17:12:30 -07:00