Commit Graph

43 Commits

Alejandro Pedraza 38ec59128e
refactor(multicluster): revert Link permissions back to Role (#13848)
#13783 moved the service mirror permissions on Links from a Role to a ClusterRole as a side-effect, and this change reverts that by refactoring the Links API to allow consuming a namespace-scoped API more easily.

- We introduce in our `k8s.API` type a field `L5dClient` alongside the broad `Client` one, which is constructed via the new function `NewL5dNamespacedAPI()`.
- In the service-mirror `main.go` we use that constructor to acquire `linksAPI`, which is used to configure the informer for handling Link events in this file.
- `linksAPI` is also passed down to instantiations of `RemoteClusterServiceWatcher`, where it's used for the direct kube-apiserver calls and for retrieving a Lister.
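
For illustration, a minimal client-go sketch of a namespace-scoped informer factory, which only needs a Role rather than a ClusterRole. The function name and wiring are ours, not Linkerd's; Links are a CRD, so the real `NewL5dNamespacedAPI()` goes through the generated Link client rather than the core clientset used here:

```go
package mirror

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newNamespacedFactory builds an informer factory whose list/watch calls
// are restricted to a single namespace, so namespace-scoped RBAC suffices.
func newNamespacedFactory(kubeconfig, ns string) (informers.SharedInformerFactory, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	return informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace(ns)), nil
}
```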
2025-03-22 05:44:35 -05:00
Alejandro Pedraza a726757fb1
fix(service-mirror): don't restart cluster watch upon Link status updates (#13579)
* fix(service-mirror): don't restart cluster watch upon Link status updates

Every time there's an update to a Link resource, the service mirror restarts the cluster watch after cleaning up any existing workers. We recently introduced a status stanza in Link that gets updated upon every mirroring of a service, which was unnecessarily triggering cluster watcher restarts. For a sufficiently high number of services getting mirrored at once, this caused severe contention on the controller, slowing mirroring down to a halt.

This change fixes the situation by only considering changes in the Link Spec for restarting the cluster watch.

* Lower log level

* Extract the resource event handler functions into a separate file, and add unit test making sure the add/update/delete functions are called, and that in particular the update function is _not_ called when updating a Link status.
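
A hedged sketch of the spec-only gate; the `Link` type and the `restart` callback are stand-ins, not the actual handler code in the new file:

```go
package mirror

import (
	"reflect"

	"k8s.io/client-go/tools/cache"
)

// Link is a pared-down stand-in for Linkerd's Link CRD; only the
// Spec/Status split matters for this sketch.
type Link struct {
	Spec   map[string]string
	Status map[string]string
}

// linkHandlers returns event handlers that trigger a cluster-watch
// restart only when the spec changes, so the status writes performed on
// every mirrored service no longer churn the watcher.
func linkHandlers(restart func(*Link)) cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if l, ok := obj.(*Link); ok {
				restart(l)
			}
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldL, ok1 := oldObj.(*Link)
			newL, ok2 := newObj.(*Link)
			if !ok1 || !ok2 {
				return
			}
			// Status-only update: nothing to do.
			if reflect.DeepEqual(oldL.Spec, newL.Spec) {
				return
			}
			restart(newL)
		},
	}
}
```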
2025-01-22 12:35:15 -05:00
Alex Leong 50b6a17e68
Add support for federated services to the service mirror controller (#13269)
When the service mirror controller detects a service in the remote cluster which matches the federated service selector (`mirror.linkerd.io/federated=member` by default), it will add that service to the federated service in the local cluster named `<svc>-federated`, creating this service if it does not already exist.  To join a service to a federated service, it is added to the `multicluster.linkerd.io/remote-discovery` annotation on the federated service, which contains a comma-separated list of values in the form `<svc>@<cluster>`.  When a remote service no longer exists or no longer matches the federated service selector, it is removed from the federated service by removing it from the `multicluster.linkerd.io/remote-discovery` annotation.

We also add a new `local-service-mirror` deployment to the Linkerd-multicluster extension which watches the local cluster for any services which match the federated service selector.  Any services in the local cluster which match will be added to the federated service by setting the `multicluster.linkerd.io/local-discovery` annotation on the federated service to the local service name.
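
A sketch of the annotation bookkeeping described above, assuming plain string manipulation; the helper names are ours, not the controller's:

```go
package federated

import (
	"slices"
	"strings"
)

// Annotation key, as described in the commit text above.
const remoteDiscovery = "multicluster.linkerd.io/remote-discovery"

// joinMember adds a "<svc>@<cluster>" entry to the comma-separated
// annotation value; leaveMember removes it.
func joinMember(ann map[string]string, svc, cluster string) {
	m := svc + "@" + cluster
	members := split(ann[remoteDiscovery])
	if !slices.Contains(members, m) {
		members = append(members, m)
	}
	ann[remoteDiscovery] = strings.Join(members, ",")
}

func leaveMember(ann map[string]string, svc, cluster string) {
	m := svc + "@" + cluster
	members := slices.DeleteFunc(split(ann[remoteDiscovery]), func(s string) bool { return s == m })
	ann[remoteDiscovery] = strings.Join(members, ",")
}

func split(v string) []string {
	if v == "" {
		return nil
	}
	return strings.Split(v, ",")
}
```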

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-11-08 09:34:29 -08:00
Alex Leong 53287d40a8
Reflect changes to server selector in opaqueness (#12031)
Fixes #11995

If a Server is marking a Pod's port as opaque and then the Server's podSelector is updated to no longer select that Pod, then the Pod's port should no longer be marked as opaque. However, this update does not result in any updates from the destination API's Get stream and the port remains marked as opaque.

We fix this by updating the endpoint watcher's handling of Server updates to consider both the old and the new Server.
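
A sketch of the old-plus-new check using the apimachinery selector helpers; the function name and shape are illustrative, not the endpoint watcher's actual code:

```go
package watcher

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// serverUpdateAffectsPod reports whether a pod needs its opaqueness
// recomputed after a Server update: any pod selected by either the old
// or the new podSelector is affected (newly deselected pods lose the
// override; selected pods take the Server's proxyProtocol).
func serverUpdateAffectsPod(podLabels map[string]string, oldSel, newSel *metav1.LabelSelector) (bool, error) {
	oldS, err := metav1.LabelSelectorAsSelector(oldSel)
	if err != nil {
		return false, err
	}
	newS, err := metav1.LabelSelectorAsSelector(newSel)
	if err != nil {
		return false, err
	}
	set := labels.Set(podLabels)
	return oldS.Matches(set) || newS.Matches(set), nil
}
```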

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-02-05 12:25:16 -08:00
Matei David 9c902dc6b4
Add an endpoints reconciler component for external workloads (#11948)
We introduced an endpoints controller that is responsible for
managing EndpointSlices for services that select external workloads. As
a follow-up, we introduce the reconciler component of the controller,
which is responsible for the diffing and the writes.

Additionally, the controller is wired up in the destination service's
main routine and will start if endpoint slice support is enabled.
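
A minimal sketch of the diffing at the heart of such a reconciler; a simplified view, since the real reconciler also diffs endpoints and ports within each slice:

```go
package externalworkload

import discoveryv1 "k8s.io/api/discovery/v1"

// diffSlices buckets EndpointSlices into creates, updates, and deletes
// by name, comparing desired state against what exists in the cluster.
func diffSlices(desired, existing []*discoveryv1.EndpointSlice) (creates, updates, deletes []*discoveryv1.EndpointSlice) {
	byName := make(map[string]*discoveryv1.EndpointSlice, len(existing))
	for _, es := range existing {
		byName[es.Name] = es
	}
	for _, es := range desired {
		if _, ok := byName[es.Name]; ok {
			updates = append(updates, es)
			delete(byName, es.Name)
		} else {
			creates = append(creates, es)
		}
	}
	for _, es := range byName {
		deletes = append(deletes, es)
	}
	return creates, updates, deletes
}
```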

---------

Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Co-authored-by: Zahari Dichev <zaharidichev@gmail.com>
2024-01-24 16:55:16 +00:00
Zahari Dichev 027d49a9a6
discovery: handle endpoint slices from ExternalWorkload (#11939)
This alters the endpoint slices watcher to handle slices that reference ExternalWorkloads.
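
A sketch of the dispatch the watcher needs, assuming each target kind is resolved against a different informer; return values are illustrative:

```go
package watcher

import discoveryv1 "k8s.io/api/discovery/v1"

// resolveTarget classifies an endpoint by what its targetRef points at:
// a Pod, or (after this change) an ExternalWorkload.
func resolveTarget(ep discoveryv1.Endpoint) string {
	if ep.TargetRef == nil {
		return "address" // no backing object; use the raw address
	}
	switch ep.TargetRef.Kind {
	case "Pod":
		return "pod" // look up workload metadata via the Pod informer
	case "ExternalWorkload":
		return "externalworkload" // look up via the ExternalWorkload informer
	default:
		return "unknown"
	}
}
```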

Testing
Add the following resources: 

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
addressType: IPv4
metadata:
  name: my-external-workload
  namespace: mixed-env
  labels:
    kubernetes.io/service-name: test-1
endpoints:
- addresses:
  - 172.21.0.5
  conditions:
    ready: true
    serving: true
    terminating: false
  targetRef:
    kind: ExternalWorkload
    name: my-external-workload
ports:
- port: 8080
  name: http
---
apiVersion: workload.linkerd.io/v1alpha1
kind: ExternalWorkload
metadata:
  name: my-external-workload
  namespace: mixed-env
  labels:
    app: test
spec:
  meshTls:
    identity: "test"
    serverName: "test"
  workloadIPs:
  - ip: 172.21.0.5
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: Service
metadata:
  name: test-1
  namespace: mixed-env
spec:
  selector:
    app: test
  type: ClusterIP
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP

```

Observe endpoints:
```
linkerd dg endpoints test-1.mixed-env.svc.cluster.local:8080
```

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2024-01-17 15:43:20 -08:00
Matei David 983fc55abc
Introduce new external endpoints controller (#11905)
For mesh expansion, we need to register an ExternalWorkload's service
membership. Service memberships describe which Service objects an
ExternalWorkload is part of (i.e. which service can be used to route
traffic to an external endpoint).

Service membership will allow the control plane to discover
configuration associated with an external endpoint when performing
discovery on a service target.

To build these memberships, we introduce a new controller to the
destination service, responsible for watching Service and
ExternalWorkload objects, and for writing out EndpointSlice objects for
each Service that selects one or more external endpoints.

As a first step, we add a new externalworkload module and a new
controller in it that watches Services and ExternalWorkloads. In a
follow-up change, the ExternalEndpointManager will additionally perform
the necessary reconciliation by writing EndpointSlice objects.

Since Linkerd's control plane may run in HA, we also add a lease object
that will be used by the manager. When a lease is claimed, a flag is
turned on in the manager to let it know it may perform writes.
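
A hedged sketch of that gating pattern with client-go's leaderelection package; the lease name, timings, and the atomic flag are illustrative, not Linkerd's exact wiring:

```go
package externalworkload

import (
	"context"
	"sync/atomic"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease gates EndpointSlice writes behind a Lease so that only
// one destination replica writes at a time when running in HA.
func runWithLease(ctx context.Context, client kubernetes.Interface, ns, id string, mayWrite *atomic.Bool) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "endpoints-controller-write", Namespace: ns},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 30 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(context.Context) { mayWrite.Store(true) },
			OnStoppedLeading: func() { mayWrite.Store(false) },
		},
	})
}
```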

A more compact list of changes:
* Add a new externalworkload module
* Add an EndpointsController in the module along with necessary mechanisms to watch resources.
* Add RBAC rules to the destination service:
  * Allow policy and destination to read ExternalWorkload objects
  * Allow destination to create / update / read Lease objects

---------

Signed-off-by: Matei David <matei@buoyant.io>
2024-01-17 12:15:28 +00:00
Alejandro Pedraza c67985def0
Extend unit test for HostPort subscriptions (#11439)
Followup to https://github.com/linkerd/linkerd2/pull/11334#issuecomment-1736093592

This extends the test introduced in #11334 to excercise upgrading a
Server associated to a pod's HostPort, and observing how the stream
updates the OpaqueProtocol field.

Helper functions were refactored a bit to allow retrieving the
l5dCRDClientSet used when building the fake API.
2023-10-02 13:51:19 -05:00
Alex Leong 368b63866d
Add support for remote discovery (#11224)
Adds support for remote discovery to the destination controller.

When the destination controller gets a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` label, this is an indication that the destination controller should discover the endpoints for this service from a remote cluster.  The destination controller will look for a remote cluster which has been linked to it (using the `linkerd multicluster link` command) with that name.  It will look at the `multicluster.linkerd.io/remote-discovery` label for the service name to look up in that cluster.  It then streams back the endpoint data for that remote service.

Since we now have multiple client-go informers for the same resource types (one for the local cluster and one for each linked remote cluster) we add a `cluster` label onto the prometheus metrics for the informers and EndpointWatchers to ensure that each of these components' metrics are correctly tracked and don't overwrite each other.
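
A sketch of one way to attach such a label with the Prometheus client library; illustrative of the approach, not the PR's exact plumbing:

```go
package watcher

import "github.com/prometheus/client_golang/prometheus"

// withCluster wraps a registerer so every metric registered through it
// carries a "cluster" label, keeping the local informers' series apart
// from each linked remote cluster's.
func withCluster(reg prometheus.Registerer, cluster string) prometheus.Registerer {
	return prometheus.WrapRegistererWith(prometheus.Labels{"cluster": cluster}, reg)
}
```

Each remote cluster's informer metrics would then be registered through something like `withCluster(prometheus.DefaultRegisterer, clusterName)`.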

---------

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-08-11 09:31:45 -07:00
Matei David 62a8407420
Introduce small improvements to ClusterStore (#11220)
There were a few improvements we could have made to a recent change that
added a ClusterStore concept to the destination service. This PR
introduces those small improvements:

* Sync dynamically created clients in separate goroutines
* Refactor metadata API creation
* Remove redundant check in cluster_store_test
* Create a new runtime schema every time a fake metadata API client is
  created to avoid racy behaviour.

Signed-off-by: Matei David <matei@buoyant.io>
2023-08-08 17:58:49 -07:00
Matei David c2780adde1
Introduce an EndpointsWatcher cache structure (#11190)
Currently, the control plane does not support indexing and discovering resources across cluster boundaries. In a multicluster set-up, it is desirable to have access to endpoint data (and by extension, any configuration associated with that endpoint) to support pod-to-pod communication. Linkerd's destination service should be extended to support reading discovery information from multiple sources (i.e. API Servers).

This change introduces an `EndpointsWatcher` cache. On creation, the cache registers a pair of event handlers:
* One handler to read `multicluster` secrets that embed kubeconfig files. For each such secret, the cache creates a corresponding `EndpointsWatcher` to read remote discovery information.
* Another handler to evict entries from the cache when a `multicluster` secret is deleted.

Additionally, a set of tests has been added that asserts the behaviour of the cache when secrets are created and/or deleted. The cache will be used by the destination service to do look-ups for services that belong to another cluster and that are running in a "remote discovery" mode.
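
A sketch of that handler pair; `addCluster` and `removeCluster` are hypothetical callbacks standing in for creating and evicting the per-cluster watcher, and the `"kubeconfig"` data key is an assumption for this sketch:

```go
package clusterstore

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// secretHandlers wires the two handlers described above.
func secretHandlers(addCluster func(name string, kubeconfig []byte), removeCluster func(name string)) cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s, ok := obj.(*corev1.Secret)
			if !ok {
				return
			}
			if kc, found := s.Data["kubeconfig"]; found {
				addCluster(s.Name, kc)
			}
		},
		DeleteFunc: func(obj interface{}) {
			// A production handler must also unwrap
			// cache.DeletedFinalStateUnknown tombstones.
			if s, ok := obj.(*corev1.Secret); ok {
				removeCluster(s.Name)
			}
		},
	}
}
```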

---------

Signed-off-by: Matei David <matei@buoyant.io>
2023-08-04 13:59:21 -07:00
Alejandro Pedraza 4a84f2cb32
Implement the k8s metadata API in the Destination controller (#10326)
Fixes #9986

After reviewing the k8s API calls in Destination, it was concluded we
could only swap out the calls to the Node and RS resources to use the
metadata API, as all the other resources (Endpoints, EndpointSlices,
Services, Pod, ServiceProfiles, Server) required fields other than those
found in their metadata section.

This also required completing the `NewFakeAPI` implementation by adding
the missing annotations and labels entries.

## Testing Memory Consumption

The gains here aren't as big as in #9650. In order to test this we need
to push hard and create 4000 RS:

``` bash
for i in {0..4000}; do kubectl create deployment test-pod-$i --image=nginx; done
```

In edge-23.2.1 the destination pod's memory consumption goes from 40Mi
to 160Mi after all the RS were created. With this change, it went from
37Mi to 140Mi.
2023-02-13 17:30:07 -05:00
Alejandro Pedraza 4dbb027f48
Use metadata API in the proxy and tap injectors (#9650)
* Use metadata API in the proxy and tap injectors

Part of #9485

This adds a new `MetadataAPI` similar to the current `k8s.API` hosting informers, but backed by k8s' `metadatainformer` shared informers, which retrieve only the objects' metadata, resulting in less memory consumption by its clients. Currently this is only implemented for the proxy and tap injectors. Usage by the destination controller will be implemented as a follow-up.
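
A minimal sketch of how such a metadata-only factory is built with client-go; the function name and resync period are illustrative, not the `MetadataAPI`'s actual constructor:

```go
package k8s

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
)

// newMetadataFactory builds a metadata-only informer factory: its
// listers yield *metav1.PartialObjectMetadata, so only labels,
// annotations, and owner references are cached, which is what cuts
// memory use compared to full-object informers.
func newMetadataFactory(cfg *rest.Config) (metadatainformer.SharedInformerFactory, error) {
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
	// e.g. watch ReplicaSets by metadata only:
	factory.ForResource(schema.GroupVersionResource{
		Group: "apps", Version: "v1", Resource: "replicasets",
	})
	return factory, nil
}
```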

## Existing API enhancements

Shared objects and logic required by API and MetadataAPI have been moved to the new `k8s.go`, `api_resource.go` and `prometheus.go` files. That includes the `isValidRSParent()` function whose arg is now more generic.

## Unit tests

`/controller/k8s/api_test.go` now also instantiates a MetadataAPI, used in the augmented `TestGetObjects()` and `TestGetOwnerKindAndName()` tests. The `resources` struct was introduced to capture the common fields among tests and simplify `newMockAPI()`'s signature.

## Other Changes

The injector no longer watches Pods. It only requires watching workloads that own resources (and also namespaces), so the Pod informer is not required.

## Testing Memory Consumption

Install linkerd, inject emojivoto and check the injector memory consumption with `kubectl -n linkerd top pod linkerd-proxy-injector-xxx`. It'll start consuming about 16Mi. Then ramp up emojivoto's `voting` deployment replicas to 2000. After 5 minutes memory will stabilize around 32Mi using the current branch. Using the latest edge, it'll stabilize around 110Mi.
2022-11-16 09:21:39 -05:00
Alejandro Pedraza e80a791777
Allow initializing a k8s namespace-scoped API (#8751)
* Allow initializing a k8s namespace-scoped API

This allows other projects that don't necessarily have cluster-wide
permissions to reuse the `k8s.API` informers.
2022-08-04 09:14:26 -05:00
Tarun Pothulapati 4170b49b33
smi: remove default functionality in linkerd (#7334)
Now that SMI functionality is being fully moved into the
[linkerd-smi](https://github.com/linkerd/linkerd-smi) extension, we can
stop supporting it by default.

This means that the `destination` component will stop reacting
to `TrafficSplit` objects. When `linkerd-smi` is installed,
it converts `TrafficSplit` objects into `ServiceProfiles`
that the destination component can understand and react to.

Also, whenever a `ServiceProfile` with traffic splitting is associated
with a service, the same information (i.e. splits and weights) is also
surfaced through the UI (in the new `services` tab) and the viz CLI,
so we are not really losing any UI functionality here.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-12-03 12:07:30 +05:30
Kevin Leimkuhler 01cbe616f1
Honor Server `proxyProtocol` in destination service `Get` with policy CRD APIs (#7184)
This change ensures that if a Server exists with `proxyProtocol: opaque` that selects an endpoint backed by a pod, that destination requests for that pod reflect the fact that it handles opaque traffic.

Currently, the only way that opaque traffic is honored in the destination service is if the pod has the `config.linkerd.io/opaque-ports` annotation. With the introduction of Servers though, users can set `server.Spec.ProxyProtocol: opaque` to indicate that if a Server selects a pod, then traffic to that pod's `server.Spec.Port` should be opaque. Currently, the destination service does not take this into account.

There is an existing change in review that _also_ adds this functionality; it takes a different approach by creating a policy server client for each endpoint that a destination has. For `Get` requests on a service, the number of clients scales with the number of endpoints that back that service.

This change fixes that issue by instead creating a Server watch in the endpoint watcher and sending updates through to the endpoint translator.

The two primary scenarios to consider are

### A `Get` request for some service is streaming when a Server is created/updated/deleted
When a Server is created or updated, the endpoint watcher iterates through its endpoint watches (`servicePublisher` -> `portPublisher`) and if it selects any of those endpoints, the port publisher sends an update if the Server has marked that port as opaque.

When a Server is deleted, the endpoint watcher once again iterates through its endpoint watches and deletes the address set's `OpaquePodPorts` field—ensuring that updates have been cleared of Server overrides.

### A `Get` request for some service happens after a Server is created
When a `Get` request occurs (or new endpoints are added—they both take the same path), we must check if any of those endpoints are selected by some existing Server. If so, we have to take that into account when creating the address set.

This part of the change gives me a little concern as we first must get all the Servers on the cluster and then create a set of _all_ the pod-backed endpoints that they select in order to determine if any of these _new_ endpoints are selected.
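
A sketch of that check under stated assumptions: a pared-down `Server` stand-in rather than the policy CRD type, and `"opaque"` as the proxyProtocol value per the description above:

```go
package watcher

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// Server is a pared-down stand-in for the policy CRD: a podSelector,
// a port, and a proxy protocol.
type Server struct {
	PodSelector   *metav1.LabelSelector
	Port          int32
	ProxyProtocol string
}

// portIsOpaque scans the cluster's Servers and reports whether any of
// them selects this pod and marks this port opaque. Illustrative, not
// the actual address-set construction code.
func portIsOpaque(podLabels map[string]string, port int32, servers []Server) (bool, error) {
	for _, srv := range servers {
		sel, err := metav1.LabelSelectorAsSelector(srv.PodSelector)
		if err != nil {
			return false, err
		}
		if srv.Port == port && srv.ProxyProtocol == "opaque" && sel.Matches(labels.Set(podLabels)) {
			return true, nil
		}
	}
	return false, nil
}
```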

## Testing
Right now this can be tested by starting up the destination service locally and running `Get` requests on a service that has endpoints selected by a Server

**app.yaml**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  labels:
    app: pod
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  selector:
    app: pod
  ports:
  - name: http
    port: 80
---
apiVersion: policy.linkerd.io/v1alpha1
kind: Server
metadata:
  name: srv
  labels:
    policy: srv
spec:
  podSelector:
    matchLabels:
      app: pod
  port: 80
  proxyProtocol: HTTP/1
```

```bash
$ go run controller/script/destination-client/main.go -path svc.default.svc.cluster.local:80
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-11-23 20:35:53 -07:00
Tarun Pothulapati 45478b6db8
viz: support `stat` on new policy resources (#6785)
Fixes #6733

As policy resources provide a grouping, statistics summaries should
also be allowed on these groupings, which are useful to the user. Their
being port-specific provides a great way to break these metrics down
further.

This PR adds support for policy resources i.e `server` and `serverauthorization`
 on the `stat` command.

## Changes

This adds a new path in the `stat_summary.go` file to handle policy
objects. I tried to see if we could re-use some of the other paths,
but some of the labels seem to differ, hence a separate path had to
be created. We can try to refactor and merge them later, though.

We support both request and TCP metrics for the `server` resource,
but only the former for `serverauthorization` resources, as that is
how the metrics are generated.

This also adds these policy objects to the `k8s` package to register
them as known resources.

For both policy resources, `--from` doesn't work, as these
metrics are not exposed on the outbound side, and there is no way to
query for the client workload from the inbound metrics. `--to`
is supported to get metrics specifically for a destination workload
(just like for a service).

## Testing

```bash
> curl -sL https://run.linkerd.io/emojivoto.yml | linkerd inject --proxy-log-level debug - | kubectl apply -f -

> kubectl apply -f 897de1a8d5/emojivoto-policy.yml


# Initial values
> ./bin/go-run cli viz stat srv -A -owide
NAMESPACE   NAME          UNAUTHORIZED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
emojivoto   emoji-grpc          0.0rps   100.00%   1.8rps           1ms           1ms           3ms          1         188.6B/s         2072.9B/s
emojivoto   prom                0.0rps         -        -             -             -             -          -                -                 -
emojivoto   voting-grpc         0.0rps    80.70%   0.9rps           1ms           2ms           3ms          1          91.4B/s           52.7B/s
emojivoto   web-http            0.0rps    90.68%   2.0rps           2ms          10ms          28ms          1         153.7B/s         4509.4B/s

# After changing the `emoji-grpc` authz
> ./bin/go-run cli viz stat srv -A -owide
NAMESPACE   NAME          UNAUTHORIZED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
emojivoto   emoji-grpc          0.3rps   100.00%   1.1rps           0ms           0ms           0ms          1         156.5B/s         1282.4B/s
emojivoto   prom                0.0rps         -        -             -             -             -          -                -                 -
emojivoto   voting-grpc         0.0rps    87.88%   0.6rps           0ms           0ms           0ms          1          53.5B/s           31.5B/s
emojivoto   web-http            0.0rps    61.18%   1.4rps           1ms           2ms           2ms          1         110.2B/s         2195.7B/s

# after changing the `web-http` authz

> ./bin/go-run cli viz stat srv -A -owide
NAMESPACE   NAME          UNAUTHORIZED   SUCCESS   RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
emojivoto   emoji-grpc          0.0rps         -     -             -             -             -          -                -                 -
emojivoto   prom                0.0rps         -     -             -             -             -          -                -                 -
emojivoto   voting-grpc         0.0rps         -     -             -             -             -          -                -                 -
emojivoto   web-http            1.0rps         -     -             -             -             -          -                -                 -

> linkerd  viz stat srv/emoji-grpc -n emojivoto -owide
NAME         SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
emoji-grpc        100.00%   2.0rps           1ms           1ms           1ms          1         199.9B/s         2208.0B/s

> linkerd  viz stat srv/web-http -n emojivoto -owide
NAME      SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
web-http         94.02%   1.9rps           4ms           9ms          10ms          1         152.7B/s         4505.9B/s

> linkerd  viz stat srv -n emojivoto -o wide                                                      
NAME          MESHED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN   READ_BYTES/SEC   WRITE_BYTES/SEC
emoji-grpc         -   100.00%   2.0rps           1ms           1ms           1ms          1         201.6B/s         2209.8B/s
prom               -         -        -             -             -             -          -                -                 -
voting-grpc        -    86.21%   1.0rps           1ms           1ms           1ms          1          98.3B/s           55.9B/s
web-http           -    91.67%   2.0rps           3ms           8ms          10ms          1         157.7B/s         4600.3B/s


> linkerd  viz stat serverauthorization/web-public -n emojivoto
NAME       MESHED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99  
web-http        -    89.83%   2.0rps           3ms           9ms          10ms

> linkerd viz stat saz -n emojivoto
NAME          AUTHORIZATION     MESHED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
emoji-grpc    emoji-grpc             -   100.00%   2.0rps           1ms           1ms           1ms
prom          prom-prometheus        -         -        -             -             -             -
voting-grpc   voting-grpc            -    89.83%   1.0rps           1ms           1ms           1ms
web-http      web-public             -    94.96%   2.0rps           1ms           5ms           9ms

> linkerd viz stat saz/web-public -n emojivoto                                                 
NAME       AUTHORIZATION   MESHED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
web-http   web-public           -    90.00%   2.0rps           1ms           5ms           9ms
```

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-09-15 10:59:36 +05:30
Alejandro Pedraza ae62d92f7d
Fixes unit tests indeterminism (#6496)
We were getting sporadic coverage differences on `controller/k8s/test_helper.go` and `pkg/healthcheck/healthcheck_test.go` on pushes unrelated to those files.

For the former, the problem was in tests in `controller/k8s/api_test.go` that compared slices of pods and services by sorting them first. The `Sort` interface was implemented through the methods in `test_helper.go`. There is apparently some indeterminism in that sorting at the Go library level, in that the `Swap` method is not always called, which impacted the coverage report. The fix consists of comparing those slices item by item, without needing to sort beforehand.

As for `healthcheck_test.go`, `validateControlPlanePods()` in `healthcheck.go` short-circuits on the first pod having all its containers ready. The unit tests iterate over maps, an iteration we know is not deterministic, so sometimes the short-circuiting avoided ever covering the `!container.Ready` block, thus affecting the coverage report. This is fixed by adding a small new test that makes sure that block is covered.
2021-07-19 12:42:45 +05:30
Matei David a2bd230cd6
service topologies: add Kubernetes/API EndpointSlice support (#4696)
Based on the [EndpointSlice PR](https://github.com/linkerd/linkerd2/pull/4663), this is just the k8s/api support for endpointslices to shorten the first PR.

* Adds CRD
* Adds functions that check whether the cluster has EndpointSlice access
* Adds discovery & endpointslice informers to api.

Signed-off-by: Matei David <matei.david.35@gmail.com>
2020-07-06 15:28:48 -07:00
Sergio C. Arteaga cee8e3d0ae Add CronJobs and ReplicaSets to dashboard and CLI (#3687)
This PR adds support for CronJobs and ReplicaSets to `linkerd inject`, the web
dashboard and CLI. It adds a new Grafana dashboard for each kind of resource. 

Closes #3614 
Closes #3630 
Closes #3584 
Closes #3585

Signed-off-by: Sergio Castaño Arteaga tegioz@icloud.com
Signed-off-by: Cintia Sanchez Garcia cynthiasg@icloud.com
2019-12-11 10:02:37 -08:00
Alejandro Pedraza d3d8266c63
If tap source IP matches many running pods then only show the IP (#3513)
* If tap source IP matches many running pods then only show the IP

When an unmeshed source IP matched more than one running pod, tap was
showing the names of all those pods, even though they didn't necessarily
originate the connection. This could be reproduced when using a pod
network add-on such as Calico.

With this change, if a node matches, we return it; otherwise we proceed to look for a matching pod. If exactly one running pod matches, we return it. Otherwise we return just the IP.
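
A sketch of that resolution order; the signature and the IP-indexed maps are illustrative, not tap's actual lookup code:

```go
package tap

import corev1 "k8s.io/api/core/v1"

// nameForIP applies the order described above: a matching node wins,
// then a single Running pod, otherwise fall back to the raw IP so we
// never attribute traffic to pods that merely share an address.
func nameForIP(ip string, nodeByIP map[string]string, podsByIP map[string][]*corev1.Pod) string {
	if node, ok := nodeByIP[ip]; ok {
		return node
	}
	var running []*corev1.Pod
	for _, p := range podsByIP[ip] {
		if p.Status.Phase == corev1.PodRunning {
			running = append(running, p)
		}
	}
	if len(running) == 1 {
		return running[0].Namespace + "/" + running[0].Name
	}
	return ip
}
```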

Fixes #3103
2019-10-25 12:38:11 -05:00
Andrew Seigner 0f9ea553d2 Add APIService fake clientset support (#3569)
The `linkerd upgrade --from-manifests` command supports reading the
manifests output by `linkerd install`. PR #3167 introduced a tap
APIService object into `linkerd install`, but the manifest-reading code
in fake.go was never updated to support this new object kind.

Update the fake clientset code to support APIService objects.

Fixes #3559

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-10-21 12:12:19 -07:00
Alex Leong dc2b96f903
Sort slices to avoid flakiness in tests (#3064)
The `TestGetServicesFor` test is flaky because it compares two slices of services which are in a non-deterministic order.

To make this deterministic, we first sort the slices by name.
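
The shape of the fix, as a sketch rather than the test's exact code:

```go
package k8s

import (
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// sortServices gives the slice a deterministic order before comparison.
func sortServices(svcs []*corev1.Service) {
	sort.Slice(svcs, func(i, j int) bool { return svcs[i].Name < svcs[j].Name })
}
```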

Signed-off-by: Alex Leong <alex@buoyant.io>
2019-07-16 16:03:21 -07:00
Jonathan Juares Beber e2211f5f77 Introduces owner references verification for pods (#3027)
When getting pods for specific Kubernetes resources, using just
labels as a selector produces wrong output. Since two resources can use
the same label selector yet manage distinct pods, a new mechanism to
check pods for a given resource is needed. More details in #2932.

This commit introduces a verification through the pod owner references'
`UID`s, comparing them with the given resource's. Additional logic is
needed when handling `Deployments`, since a Deployment creates a
`ReplicaSet` and that is the actual owner of the pods. No verification
is done in the case of `Services`.
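
A sketch of the UID check itself; the function and the precomputed UID set are ours, and for `Deployments` the caller would pass the intermediate `ReplicaSet` UIDs:

```go
package k8s

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy reports whether a pod belongs to a resource by matching an
// ownerReference UID, rather than trusting label selectors alone.
func ownedBy(pod metav1.ObjectMeta, ownerUIDs map[types.UID]struct{}) bool {
	for _, ref := range pod.OwnerReferences {
		if _, ok := ownerUIDs[ref.UID]; ok {
			return true
		}
	}
	return false
}
```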

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2019-07-10 12:44:24 -07:00
Alex Leong c698d6bca1
Add support for TrafficSplits (#2897)
Add support for querying TrafficSplit resources through the common API layer. This is done by depending on the TrafficSplit client bindings from smi-sdk-go.

Signed-off-by: Alex Leong <alex@buoyant.io>
2019-06-11 10:04:42 -07:00
Andrew Seigner 0cfc8c6f1c
Introduce k8s apiextensions support (#2759)
CustomResourceDefinition parsing and retrieval is not available via
client-go's `kubernetes.Interface`, but rather via a separate
`k8s.io/apiextensions-apiserver` package.

Introduce support for CustomResourceDefinition object parsing and
retrieval. This change facilitates retrieval of CRDs from the k8s API
server, and also provides CRD resources as mock objects.
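
A sketch of retrieval through the separate apiextensions clientset; the modern v1 API is shown, while this commit predates it and would have used v1beta1:

```go
package k8s

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// getCRD fetches a CustomResourceDefinition, which is not reachable
// through the core kubernetes.Interface clientset.
func getCRD(ctx context.Context, cfg *rest.Config, name string) error {
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	return err
}
```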

Also introduce a `NewFakeAPI` constructor, deprecating
`NewFakeClientSets`. Callers need no longer be concerned with discreet
clientsets (for k8s resources vs. CRDs vs. (eventually)
ServiceProfiles), and can instead use the unified `KubernetesAPI`.

Part of #2337, in service to multi-stage check.

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-04-28 18:55:22 -07:00
Andrew Seigner 8da2cd3fd4
Require cluster-wide k8s API access (#2428)
linkerd/linkerd2#2349 removed the `--single-namespace` flag, in favor of
runtime detection of cluster vs. namespace access, and also
ServiceProfile availability. This maintained control-plane support for
running in these two states.

This change requires that control-plane components have cluster-wide
Kubernetes API access and ServiceProfile availability, and will error
out if not. Once #2349 merges, stage 1 install will be a requirement for
a successful stage 2 install.

Part of #2337

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-03-07 10:23:18 -08:00
Alejandro Pedraza f155fb9a8f
Have `NewFakeClientSets()` not swallow errors when parsing YAML (#2454)
This helps catch bad YAMLs in test resources

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2019-03-06 13:53:04 -05:00
Tarun Pothulapati 2184928813 Wire up stats for Jobs (#2416)
Support for Jobs in stat/tap/top cli commands

Part of #2007

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2019-03-01 17:16:54 -08:00
Andrew Seigner ec5a0ca8d9
Authorization-aware control-plane components (#2349)
The control-plane components relied on a `--single-namespace` param,
passed from `linkerd install` into each individual component, to
determine which namespaces they were authorized to access, and whether
to support ServiceProfiles. This command-line flag was redundant given
the authorization rules encoded in the parent `linkerd install` output,
via [Cluster]Role[Binding]s.

Modify the control-plane components to query Kubernetes at startup to
determine which namespaces they are authorized to access, and whether
ServiceProfile support is available. This allows removal of the
`--single-namespace` flag on the components.
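
A sketch of such a startup probe using a SelfSubjectAccessReview; illustrative of the pattern only, and written against the current client-go signature rather than the one of that era:

```go
package k8s

import (
	"context"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canListClusterWide asks the API server whether this component may
// list a resource across all namespaces, instead of relying on a flag.
func canListClusterWide(ctx context.Context, client kubernetes.Interface, resource string) (bool, error) {
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: resource,
				// An empty Namespace means "in all namespaces".
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```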

Also update `bin/test-cleanup` to cleanup the ServiceProfile CRD.

TODO:
- Remove `--single-namespace` flag on `linkerd install`, part of #2164

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-02-26 11:54:52 -08:00
Ivan Sim f6e75ec83a
Add statefulsets to the dashboard and CLI (#2234)
Fixes #1983

Signed-off-by: Ivan Sim <ivan@buoyant.io>
2019-02-08 15:37:44 -08:00
zak 8c413ca38b Wire up stats commands for daemonsets (#2006) (#2086)
DaemonSet stats are not currently shown in the cli stat command, web ui
or grafana dashboard. This commit adds daemonset support for stat.

Update stat command's help message to reference daemonsets.
Update the public-api to support stats for daemonsets.
Add tests for stat summary and api.

Add daemonset get/list/watch permissions to the linkerd-controller
cluster role that's created using the install command.
Update golden expectation test files for install command
yaml manifest output.

Update web UI with daemonsets
Update navigation, overview and pages to list daemonsets and the pods
associated to them.
Add daemonset paths to server, and ui apps.

Add grafana dashboard for daemonsets; a clone of the deployment
dashboard.

Update dependencies and dockerfile hashes

Add DaemonSet support to tap and top commands

Fixes #2006

Signed-off-by: Zak Knill <zrjknill@gmail.com>
2019-01-24 14:34:13 -08:00
Andrew Seigner 1c302182ef
Enable lint check for comments (#2023)
Commit 1: Enable lint check for comments

Part of #217. Follow up from #1982 and #2018.

A subsequent commit will fix the ci failure.

Commit 2: Address all comment-related linter errors.

This change addresses all comment-related linter errors by doing the
following:
- Add comments to exported symbols
- Make some exported symbols private
- Recommend via TODOs that some exported symbols should move or
  be removed

This PR does not:
- Modify, move, or remove any code
- Modify existing comments

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2019-01-02 14:03:59 -08:00
Alex Leong 1fe19bf3ce
Add ServiceProfile support to k8s utilities (#1758)
Updates to the Kubernetes utility code in `/controller/k8s` to support interacting with ServiceProfiles.

This makes use of the code generated client added in #1752 

Signed-off-by: Alex Leong <alex@buoyant.io>
2018-10-12 09:35:11 -07:00
Kevin Lingerfelt 46c887ca00
Add --single-namespace install flag for restricted permissions (#1721)
* Add --single-namespace install flag for restricted permissions
* Better formatting in install template
* Mark --single-namespace and --proxy-auto-inject as experimental
* Fix wording of --single-namespace check flag
* Small healthcheck refactor

Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-10-11 10:55:57 -07:00
Ivan Sim 4fba6aca0a Proxy init and sidecar containers auto-injection (#1714)
* Support auto sidecar-injection

1. Add proxy-injector deployment spec to cli/install/template.go
2. Inject the Linkerd CA bundle into the MutatingWebhookConfiguration
during the webhook's start-up process.
3. Add a new handler to the CA controller to create a new secret for the
webhook when a new MutatingWebhookConfiguration is created.
4. Declare a config map to store the proxy and proxy-init container
specs used during the auto-inject process.
5. Ignore namespaces and pods that are labeled with
linkerd.io/auto-inject: disabled or linkerd.io/auto-inject: completed
6. Add new flag to `linkerd install` to enable/disable proxy
auto-injection

Proposed implementation for #561.

* Resolve missing packages errors
* Move the auto-inject label to the pod level
* PR review items
* Move proxy-injector to its own deployment
* Ignore pods that already have a proxy injected

This ensures the webhook doesn't error out due to proxies that were injected using the command

* PR review items on creating/updating the MWC on-start
* Replace API calls to ConfigMap with file reads
* Fixed post-rebase broken tests
* Don't mutate the auto-inject label

Since we started using healthcheck.HasExistingSidecars() to ensure pods with
existing proxies aren't mutated, we don't need to use the auto-inject label as
an indicator.

This resolves a bug which happens with the kubectl run command where the deployment
is also assigned the auto-inject label. The mutation causes the pod auto-inject
label to not match the deployment label, causing kubectl run to fail.

* Tidy up unit tests
* Include proxy resource requests in sidecar config map
* Fixes to broken YAML in CLI install config

The ignored inbound and outbound ports are changed to string type to
avoid broken YAML caused by the string conversion of the uint slice.

Also, parameterized the proxy bind timeout option in template.go.

Renamed the sidecar config map to
'linkerd-proxy-injector-webhook-config'.

Signed-off-by: ihcsim <ihcsim@gmail.com>
2018-10-10 12:09:22 -07:00
Kevin Lingerfelt 13aaa82c95
Allow k8s API clients to watch a subset of resources (#1118)
* Allow k8s API clients to watch a subset of resources
* Sort resources

Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-06-14 11:09:01 -07:00
Kevin Lingerfelt b6d429e80d
dst svc: use shared informer instead of custom endpoints informer (#1079)
* Update destination service to use shared informer instead of custom endpoints informer
* Add additional tests for dst svc endpoints watcher
* Remove service ports when all listeners unsubscribed
* Update go deps

Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-06-13 11:11:57 -07:00
Kevin Lingerfelt bd1d1af38b
dst svc: use shared informer instead of pod watcher (#1073)
* Update destination service to use shared informer instead of pod watcher
* Add const for pod IP index name

Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-06-12 18:09:47 -07:00
Kevin Lingerfelt 6e66f6d662
Rename Lister to API and expose informers as well as listers (#1072)
Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-06-12 10:27:55 -07:00
Andrew Seigner 9e8cce0838
Destination service returns "Running" pod labels (#781)
When the Destination service sees an IP address, it looks up Pods by that IP
and associates Pod label data with it. If the lookup by IP returns more
than one Pod, it simply picks the first one. This is not correct,
specifically in cases where one pod is in a Running state and others
are not.

Modify the Destination service to only return label data for Pods in the
Running state.
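
A sketch of the filtering described above; the signature is illustrative, not the service's actual code:

```go
package destination

import corev1 "k8s.io/api/core/v1"

// labelsForIP takes metric labels only from a pod in the Running phase
// instead of blindly picking the first match for a shared IP.
func labelsForIP(pods []*corev1.Pod) map[string]string {
	for _, p := range pods {
		if p.Status.Phase == corev1.PodRunning {
			return p.Labels
		}
	}
	return nil
}
```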

Fixes #773

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
2018-04-17 14:42:54 -07:00
Phil Calçado 19001f8d38 Add pod-based metric_labels to destinations response (#429) (#654)
* Extracted logic from destination server
* Make tests follow style used elsewhere in the code
* Extract single interface for resolvers
* Add tests for k8s and ipv4 resolvers
* Fix small usability issues
* Update dep
* Act on feedback
* Add pod-based metric_labels to destinations response
* Add documentation on running control plane to BUILD.md

Signed-off-by: Phil Calcado <phil@buoyant.io>

* Fix mock controller in proxy tests (#656)

Signed-off-by: Eliza Weisman <eliza@buoyant.io>

* Address review feedback
* Rename files in the destination package

Signed-off-by: Kevin Lingerfelt <kl@buoyant.io>
2018-04-02 18:36:57 -07:00
Phil Calçado bbed49c5bd Refactor destination service and add tests in preparation to add information about labels (#645)
* Extracted logic from destination server

* Make tests follow style used elsewhere in the code

* Extract single interface for resolvers

* Add tests for k8s and ipv4 resolvers

* Fix small usability issues

* Update dep

* Act on feedback

Signed-off-by: Phil Calcado <phil@buoyant.io>
2018-03-30 11:36:48 -07:00