The destination controller's cluster store registers a gauge in its constructor. When this constructor is called multiple times (e.g. in tests), this can lead to a panic.
To avoid this panic, this change updates NewClusterStoreWithDecoder to accept a prometheus registry. The NewClusterStore constructor (used by the application's main) continues to use the default registry, but tests now construct their own temporary registries to avoid duplicate registration errors.
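A minimal sketch of the registry-injection pattern, assuming a hypothetical helper (the gauge name is borrowed from the cluster store's `cluster_store_size` metric; the wiring is illustrative, not the actual constructor signature):

```go
package clusterstore

import "github.com/prometheus/client_golang/prometheus"

// newClusterStoreGauge registers the cluster store gauge against whichever
// registry the caller provides. The production constructor can pass
// prometheus.DefaultRegisterer, while each test passes a fresh
// prometheus.NewRegistry(), so constructing the store repeatedly never hits
// a duplicate-registration panic.
func newClusterStoreGauge(reg prometheus.Registerer) prometheus.Gauge {
	g := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "cluster_store_size",
		Help: "The number of linked clusters in the remote discovery cluster store",
	})
	reg.MustRegister(g) // panics only if the same registry sees this gauge twice
	return g
}
```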
Traffic that is meant for the destination workload can be sent over the opaque transport without issue. However, traffic intended for the proxy itself (metrics scraping, tap) needs to be sent directly to the corresponding proxy port so that it is not forwarded to the workload.
This adds special cases for the admin and control ports, read directly from the environment variables on the pods, that exclude them from being sent over the opaque transport.
Signed-off-by: Scott Fleener <scott@buoyant.io>
Followup to #12844
This new field defines the default policy for Servers, i.e. if a request doesn't match the policy associated with a Server then this policy applies. The values are the same as for `proxy.defaultInboundPolicy` and the `config.linkerd.io/default-inbound-policy` annotation (all-unauthenticated, all-authenticated, cluster-authenticated, cluster-unauthenticated, deny), plus a new value "audit". The default is "deny", so the change remains backwards-compatible.
This field is also exposed as an additional printer column.
Services in dual-stack mode result in the creation of two EndpointSlices, one for each IP family. Before this change, the Get Destination API would nondeterministically return the address for either of those EndpointSlices, depending on which one was processed last by the controller, because they would overwrite each other.
As part of the ongoing effort to support IPv6/dual-stack networks, this change fixes that behavior by giving preference to IPv6 addresses whenever a service exposes both families.
There is a new set of unit tests in server_ipv6_test.go, and the TestEndpointTranslatorForPods tests have a couple of new cases covering the interaction with zone filtering.
The server unit tests were also updated to segregate the tests and resources dealing with the IPv4/IPv6/dual-stack cases.
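A rough sketch of the preference rule, assuming the addresses from both EndpointSlices have already been collected (the helper name is illustrative):

```go
package addrutil

import "net/netip"

// preferIPv6 returns the IPv6 address when a dual-stack service exposes both
// families, falling back to the first IPv4 address otherwise.
func preferIPv6(addrs []netip.Addr) (netip.Addr, bool) {
	var fallback netip.Addr
	var ok bool
	for _, a := range addrs {
		if a.Is6() && !a.Is4In6() {
			return a, true // IPv6 wins whenever both families are present
		}
		if !ok {
			fallback, ok = a, true
		}
	}
	return fallback, ok
}
```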
This commit adds destination controller configuration that enables default
keep-alives for meshed HTTP/2 clients.
This is accomplished by encoding the raw protobuf message structure into the
helm values, and then encoding that as JSON in the destination controller's
command-line options. This allows operators to set any supported HTTP/2 client
configuration without having to modify the destination controller.
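A simplified sketch of how such a JSON-encoded flag value could be decoded; the field names and example value below are assumptions for illustration, not the controller's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// http2ClientParams loosely mirrors a protobuf-style HTTP/2 client config.
type http2ClientParams struct {
	KeepAlive struct {
		Interval  string `json:"interval"`  // e.g. "10s"
		Timeout   string `json:"timeout"`   // e.g. "3s"
		WhileIdle bool   `json:"whileIdle"`
	} `json:"keepAlive"`
}

func main() {
	// Example of the JSON an operator-facing flag might carry.
	raw := `{"keepAlive":{"interval":"10s","timeout":"3s","whileIdle":true}}`
	var params http2ClientParams
	if err := json.Unmarshal([]byte(raw), &params); err != nil {
		panic(fmt.Errorf("invalid HTTP/2 client config: %w", err))
	}
	fmt.Printf("keep-alive every %s (timeout %s, while idle: %v)\n",
		params.KeepAlive.Interval, params.KeepAlive.Timeout, params.KeepAlive.WhileIdle)
}
```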
Closes #12395
Failing to iterate over init containers as well as regular containers when looking for the proxy in various parts of the code, when the proxy is injected as a native sidecar, resulted in:
- The `Get` Destination API failing in the presence of opaque ports
- The injector failing to detect already-injected pods
- Various CLI issues
This PR is split into the following commits addressing each issue separately:
a8ebe76e3 - Fix injection check for existing sidecars
44e9625e0 - Fix 'linkerd uninject'
62694965d - Fix 'linkerd version --proxy'
42dbdaddf - Fix 'linkerd identity'
39db823fe - Fix 'linkerd check'
7359f371d - Fix 'linkerd dg proxy-metrics'
f8f73c47c - Fix destination controller
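The common fix across these commits is to scan init containers (native sidecars) in addition to regular containers when looking for the proxy. A minimal sketch, with an assumed helper name (`linkerd-proxy` is the conventional container name):

```go
package k8sutil

import corev1 "k8s.io/api/core/v1"

// findProxyContainer looks for the proxy in both init containers (native
// sidecars) and regular containers.
func findProxyContainer(pod *corev1.Pod) (*corev1.Container, bool) {
	containers := make([]corev1.Container, 0, len(pod.Spec.InitContainers)+len(pod.Spec.Containers))
	containers = append(containers, pod.Spec.InitContainers...)
	containers = append(containers, pod.Spec.Containers...)
	for i := range containers {
		if containers[i].Name == "linkerd-proxy" {
			return &containers[i], true
		}
	}
	return nil, false
}
```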
The main change here is the refactoring of the address functions in `addr.go` that support the Destination controller and Viz's Tap controller. Some of those functions only worked for IPv4, so this change refactored them to make them IP family agnostic.
This enabled adding (and fixing) IPv6 unit tests as detailed in the following sections.
Other changes:
- The `ProxyAddressesToString()` function was no longer used, so it got removed.
- The `ProxyIPToString()` function was only used by the destination-client script, so that got stripped out.
## `addr_test.go`
We added IPv6 cases to each test; these cases would have failed before this change.
## `endpoint_translator_test.go`
One of the test pods (pod3) was changed to have an IPv6 address. Without the other changes in this PR those tests would still have passed, but only because the comparison between actual and expected IPs did not catch the case where both were zero, so we added checks against that.
## `server_test.go`
As above, we added checks against empty IPs. And in the mocked resources in `test_util.go` we added an IPv6 EndpointSlice.
Fixes #11995
If a Server is marking a Pod's port as opaque and then the Server's podSelector is updated to no longer select that Pod, then the Pod's port should no longer be marked as opaque. However, this update does not result in any updates from the destination API's Get stream and the port remains marked as opaque.
We fix this by updating the endpoint watcher's handling of Server updates to consider both the old and the new Server.
Signed-off-by: Alex Leong <alex@buoyant.io>
When the destination controller receives a GetProfile request for an ExternalName service, it should return gRPC status code INVALID_ARGUMENT to signal to the proxy that ExternalName services do not have endpoints and service discovery should not be used. However, the destination controller is returning gRPC status code UNKNOWN instead. This causes the proxy to retry these requests, resulting in a flurry of warning logs in the destination controller.
We fix the destination controller to properly return INVALID_ARGUMENT instead of UNKNOWN for ExternalName services.
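A sketch of the corrected handling, assuming a hypothetical check in the GetProfile path (the status construction uses the standard `google.golang.org/grpc/status` package; the message wording is illustrative):

```go
package destination

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	corev1 "k8s.io/api/core/v1"
)

// checkExternalName returns INVALID_ARGUMENT for ExternalName services so the
// proxy knows there are no endpoints to discover and does not retry.
func checkExternalName(svc *corev1.Service) error {
	if svc.Spec.Type == corev1.ServiceTypeExternalName {
		return status.Errorf(codes.InvalidArgument,
			"ExternalName service %s/%s has no endpoints", svc.Namespace, svc.Name)
	}
	return nil
}
```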
Signed-off-by: Alex Leong <alex@buoyant.io>
When a meshed client attempts to establish a connection directly to the workload IP of an ExternalWorkload, the destination controller should return an endpoint profile for that ExternalWorkload with a single endpoint and the metadata associated with that ExternalWorkload including:
* mesh TLS identity
* workload metric labels
* opaque / protocol hints
Signed-off-by: Alex Leong <alex@buoyant.io>
When we do a `GetProfile` lookup for an unmeshed pod, we set the `weightedAddr.ProtocolHint` to an empty value `&pb.ProtocolHint{}` to indicate that the address is unmeshed and has no protocol hint. However, when the looked up port is in the default opaque list, we erroneously check if `weightedAddr.ProtocolHint != nil` to determine if we should attempt to get the inbound listen port for that pod. Since `&pb.ProtocolHint{} != nil`, we attempt to get the inbound listen port for the unmeshed pod. This results in an error, preventing any valid `GetProfile` responses from being returned.
We update the initialization logic for `weightedAddr.ProtocolHint` to only create a struct when a protocol hint is present and to leave it as `nil` if the pod is unmeshed.
We add a simple unit test for this behavior as well.
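A sketch of the corrected initialization (the helper name is assumed; `pb` stands for the destination proxy-api package, and the real code fills in the hint's details for meshed pods):

```go
package destination

import pb "github.com/linkerd/linkerd2-proxy-api/go/destination"

// protocolHintFor allocates a hint only for meshed pods; unmeshed pods get
// nil, so the later `ProtocolHint != nil` check no longer triggers an inbound
// listen-port lookup against a pod that has no proxy.
func protocolHintFor(meshed bool) *pb.ProtocolHint {
	if !meshed {
		return nil
	}
	hint := &pb.ProtocolHint{}
	// The real code populates the H2 / opaque-transport details here (elided).
	return hint
}
```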
Signed-off-by: Alex Leong <alex@buoyant.io>
When we do a GetProfile lookup for an opaque port on an unmeshed pod,
we attempt to look up the inbound listen port of that pod's proxy. Since
that pod has no proxy, this fails and we return an error to the GetProfile
API call. This causes the proxy to fail to be able to resolve the profile and
be unable to route the traffic.
We revert to the previous behavior of only logging when we cannot look
up the inbound listen port instead of returning an error.
Signed-off-by: Alex Leong <alex@buoyant.io>
The destination controller indexes Pods by their primary PodIP and their
primary HostIP (when applicable); and it indexes Services by their
primary ClusterIP.
In preparation for dual-stack (IPv6) support, this change updates the
destination controller indexers to consume all IPs for these resources.
Tests are updated to demonstrate a dual-stack setup (where these
resources have a primary IPv4 address and a secondary IPv6 address).
While exercising these tests, I had to replace the brittle host:port
parsing logic we use in the server, in favor of Go's standard
`net.SplitHostPort` utility.
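For illustration, a family-agnostic parse using the standard library (the wrapper itself is hypothetical):

```go
package addrutil

import (
	"net"
	"net/netip"
	"strconv"
)

// parseHostPort handles both "10.0.0.1:8080" and "[2001:db8::1]:8080".
func parseHostPort(target string) (netip.Addr, uint16, error) {
	host, portStr, err := net.SplitHostPort(target)
	if err != nil {
		return netip.Addr{}, 0, err
	}
	addr, err := netip.ParseAddr(host)
	if err != nil {
		return netip.Addr{}, 0, err
	}
	port, err := strconv.ParseUint(portStr, 10, 16)
	if err != nil {
		return netip.Addr{}, 0, err
	}
	return addr, uint16(port), nil
}
```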
When a gRPC client of the destination.Get API initiates a request but then doesn't read off of that stream, the HTTP/2 stream flow control window will fill up and eventually exert backpressure on the destination controller. This manifests as calls to `Send` on the stream blocking. Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing. This causes the destination controller to stop sending updates to any of its clients.
We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the informer callback. Each endpoint translator spawns a goroutine to process this queue and call `Send`. If there is no capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure), we terminate the stream.
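A simplified sketch of that queueing scheme (type and field names are assumptions; `pb` stands for the destination proxy-api package):

```go
package destination

import pb "github.com/linkerd/linkerd2-proxy-api/go/destination"

// updateQueue decouples informer callbacks from the gRPC stream. Callbacks
// enqueue without blocking; a per-translator goroutine drains the queue and
// calls Send. A full queue means the client stopped reading, so the stream is
// torn down rather than blocking the shared informers.
type updateQueue struct {
	updates chan *pb.Update
	cancel  func() // ends the stream for this subscriber
}

func (q *updateQueue) enqueue(u *pb.Update) {
	select {
	case q.updates <- u:
	default:
		q.cancel() // backpressure: drop the slow client, keep informers moving
	}
}

func (q *updateQueue) drain(stream pb.Destination_GetServer) {
	for u := range q.updates {
		if err := stream.Send(u); err != nil {
			q.cancel()
			return
		}
	}
}
```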
Signed-off-by: Alex Leong <alex@buoyant.io>
Followup to https://github.com/linkerd/linkerd2/pull/11334#issuecomment-1736093592
This extends the test introduced in #11334 to exercise upgrading a Server associated with a pod's HostPort, and observing how the stream updates the OpaqueProtocol field.
Helper functions were refactored a bit to allow retrieving the
l5dCRDClientSet used when building the fake API.
Followup to #11328
Implements a new pod watcher, instantiated alongside the other watchers in the Destination server. It also watches Servers and carries all the logic from ServerWatcher, which has now been decommissioned.
The `CreateAddress()` function has been moved into a method of the PodWatcher, because it is now called on every update, given that the pod associated with an ip:port might change and we need to regenerate the Address object. That function also takes care of capturing opaque protocol info from associated Servers, which is not new but had some logic duplicated in the now-defunct ServerWatcher. `getAnnotatedOpaquePorts()` was also moved for similar reasons.
Other things to note about PodWatcher:
- It publishes a new pair of metrics, `ip_port_subscribers` and `ip_port_updates`, leveraging the framework in `prometheus.go`.
- The complexity in `updatePod()` comes from only sending stream updates when the pod's readiness changes, to avoid sending duplicate messages on every pod lifecycle event.
Finally, since endpointProfileTranslator's `endpoint` (*pb.WeightedAddr) is no longer a static object, the `Update()` function now receives an Address that allows it to rebuild the endpoint on the fly (and so `createEndpoint()` was converted into a method of endpointProfileTranslator).
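A sketch of the readiness check in `updatePod()` (the handler shape and callback are assumed):

```go
package watcher

import corev1 "k8s.io/api/core/v1"

// podWatcher is a reduced stand-in for the real watcher; it only carries the
// callback needed for this sketch.
type podWatcher struct {
	notifySubscribers func(*corev1.Pod)
}

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// updatePod only notifies subscribers when readiness actually flips, rather
// than on every pod lifecycle event.
func (pw *podWatcher) updatePod(oldObj, newObj interface{}) {
	oldPod, newPod := oldObj.(*corev1.Pod), newObj.(*corev1.Pod)
	if podReady(oldPod) == podReady(newPod) {
		return
	}
	pw.notifySubscribers(newPod)
}
```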
* stopgap fix for hostport staleness
## Problem
When there's a pod with a `hostPort` entry, `GetProfile` requests targeting the host's IP and that `hostPort` return an endpoint profile with that pod's IP and `containerPort`. If that pod vanishes and another one on the same host with the same `hostPort` comes up, the existing `GetProfile` streams won't get updated with the new pod information (metadata, identity, protocol).
That breaks the connectivity of the client proxy relying on that stream.
## Partial Solution
It should be less surprising for those `GetProfile` requests to return an endpoint profile with the same host IP and port that were requested, and leave it to the cluster's CNI to perform the translation to the corresponding pod IP and `containerPort`.
This PR makes that change, while continuing to return the corresponding pod's information alongside.
If the pod associated with that host IP and port changes, the client proxy won't lose connectivity, but the pod's information won't get updated (that'll be fixed in a separate PR).
A new unit test validating this has been added, which will be expanded
to validate the changed pod information when that gets implemented.
## Details of Change
- We no longer do the HostPort->ContainerPort conversion, so the
`getPortForPod` function was dropped.
- The `getPodByIp` function is now split into two: `getPodByPodIP` and `getPodByHostIP`, the latter being called only if the former doesn't return anything.
- The `createAddress` function is now simplified in that it just uses the passed IP to build the address. The passed IP will depend on which of the two functions just mentioned returned the pod (host IP or pod IP).
We add a `cluster_store_size` gauge to the destination controller to track the number of entries in the remote discovery cluster store. If this is ever different from the number of cluster credentials secrets in the linkerd namespace, this indicates that there is a problem with a link that needs to be investigated further.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds support for remote discovery to the destination controller.
When the destination controller gets a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` label, this is an indication that the destination controller should discover the endpoints for this service from a remote cluster. The destination controller will look for a remote cluster which has been linked to it (using the `linkerd multicluster link` command) with that name. It will look at the `multicluster.linkerd.io/remote-discovery` label for the service name to look up in that cluster. It then streams back the endpoint data for that remote service.
Since we now have multiple client-go informers for the same resource types (one for the local cluster and one for each linked remote cluster) we add a `cluster` label onto the prometheus metrics for the informers and EndpointWatchers to ensure that each of these components' metrics are correctly tracked and don't overwrite each other.
---------
Signed-off-by: Alex Leong <alex@buoyant.io>
The GetProfile API endpoint does not behave as expected: when a profile
watch is established, the API server starts two separate profile
watches--a primary watch with the client's namespace and a fallback watch
ignoring the client's namespace. These watches race to send data back to
the client. If the backup watch updates first, it may be sent to clients
before being corrected by a subsequent update. If the primary watch
updates with an empty value, the default profile may be served before
being corrected by an update to the backup watch.
From the proxy's perspective, we'd much prefer that the API provide a
single authoritative response when possible. It avoids needless corrective work being distributed across the system on every watch initiation.
To fix this, we modify the fallbackProfileListener to behave
predictably: it only emits updates once both its primary and fallback
listeners have been updated. This avoids emitting updates based on a
partial understanding of the cluster state.
Furthermore, the opaquePortsAdaptor is updated to avoid synthesizing a
default serviceprofile (surprising behavior) and, instead, this
defaulting logic is moved into a dedicated defaultProfileListener
helper. A dedupProfileListener is added to squelch obviously redundant
updates.
Finally, this newfound predictability allows us to simplify the API's
tests. Many of the API tests are not clear in what they test and
sometimes make assertions about the "incorrect" profile updates.
Maps the request port to the container's port if the request comes in from the node network and has a hostPort mapping.
Problem:
When a request for a container comes in from the node network, the node port is used, ignoring the hostPort mapping.
Solution:
When a request is seen coming from the node network, get the container Port from the Spec.
Validation:
Fixed an existing unit test and wrote a new one driving GetProfile specifically.
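A sketch of the mapping (the helper is assumed; the real lookup reads the pod spec supplied by the informer):

```go
package destination

import corev1 "k8s.io/api/core/v1"

// containerPortFor maps a hostPort on a node-network request back to the
// containerPort declared in the pod spec, falling back to the requested port
// when no mapping exists.
func containerPortFor(pod *corev1.Pod, hostPort uint32) uint32 {
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if uint32(p.HostPort) == hostPort {
				return uint32(p.ContainerPort)
			}
		}
	}
	return hostPort
}
```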
Fixes #9677
Signed-off-by: Steve Jenson <stevej@buoyant.io>
While looking into #8273, I wanted to confirm that the destination
controller uses the default opaque ports configuration for arbitrary
(unmeshed) IPs.
This change adds a test that exercises resolution behavior for external
IPs.
Signed-off-by: Oliver Gould <ver@buoyant.io>
### What
`GetProfile` clients do not receive destination profiles that consider Server protocol fields the way that `Get` clients do. If a Server exists for a `GetProfile` destination that specifies the protocol for that destination is `opaque`, this information is not passed back to the client.
#7184 added this for `Get` by subscribing clients to Endpoint/EndpointSlice updates. When there is an update, or there is a Server update, the endpoints watcher passes this information back to the endpoint translator which handles sending the update back to the client.
For `GetProfile` the situation is different. As with `Get`, we only consider Servers when dealing with Pod IPs, but this only occurs in two situations for `GetProfile`.
1. The destination is a Pod IP and port
2. The destination is an Instance ID and port
In both of these cases, we need to check if a Server already selects the endpoint, and we need to subscribe for Server updates in case one is added or deleted which selects the endpoint.
### How
First we check if there is already a Server which selects the endpoint. This is so that when the first destination profile is returned, the client knows if the destination is `opaque` or not.
After sending that first update, we then subscribe the client for any future updates which will come from a Server being added or deleted.
This is handled by the new `ServerWatcher` which watches for Server updates on the cluster; when an update occurs it sends that to the `endpointProfileTranslator` which translates the protocol update into a DestinationProfile.
By introducing the `endpointProfileTranslator`, which only handles protocol updates, we're able to decouple the endpoint logic from `profileTranslator`; its `endpoint` field has been removed now that it only handles updates for ServiceProfiles for Services.
### Testing
A unit test has been added and below are some manual testing instructions to see how it interacts with Server updates:
<details>
<summary>app.yaml</summary>
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  labels:
    app: pod
spec:
  containers:
    - name: app
      image: nginx
      ports:
        - name: http
          containerPort: 80
---
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: srv
  labels:
    policy: srv
spec:
  podSelector:
    matchLabels:
      app: pod
  port: 80
  proxyProtocol: opaque
```
</details>
```shell
$ go run ./controller/cmd/main.go destination
```
```shell
$ linkerd inject app.yaml | kubectl apply -f -
...
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod 2/2 Running 0 53m 10.42.0.34 k3d-k3s-default-server-0 <none> <none>
$ go run ./controller/script/destination-client/main.go -method getProfile -path 10.42.0.34:80
...
```
You can add/delete `srv` as well as edit its `proxyProtocol` field to observe the correct DestinationProfile updates.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Now that SMI functionality is fully being moved into the
[linkerd-smi](www.github.com/linkerd/linkerd-smi) extension, we can
stop supporting it by default.
This means that the `destination` component will stop reacting
to `TrafficSplit` objects. When `linkerd-smi` is installed,
it converts `TrafficSplit` objects into `ServiceProfiles`
that the destination component can understand and react to.
Also, whenever a `ServiceProfile` with traffic splitting is associated
with a service, the same information (i.e. splits and weights) is also
surfaced through the UI (in the new `services` tab) and the `viz` cmd.
So we are not really losing any UI functionality here.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Closes #6253
### What
---
When we send a profile request with a pod IP, we get back an endpoint as part of the response. This has two advantages: we avoid building a load balancer and we can treat endpoint failure differently (with more of a fail-fast approach). At the moment, when we use a pod DNS name as the target of the profile lookup, we don't have an endpoint returned in the response.
Through this change, the behaviour will be consistent. Whenever we look up a pod (either through IP or DNS name) we will get an endpoint back. The change also attempts to simplify some of the logic in GetProfile.
### How
---
We already have a way to build an endpoint and return it back to the client; I sought to re-use most of the code in an effort to also simplify `GetProfile()`. I extracted most of the code that would have been duplicated into a separate method that is responsible for building the address, looking at annotations for opaque ports and for sending the response back.
In addition, to support a pod DNS FQDN I've expanded on the `else` branch of the topmost if statement -- if our host is not an IP, we parse the host to get the k8s FQDN. If the parsing function returns an instance ID along with the ServiceID, then we know we are dealing directly with a pod -- if so, we fetch the pod using the core informer and then return an endpoint for it.
### Tests
---
I've tested this mostly with the destination client script. For the tests, I used the following pods:
```
❯ kgp -n emojivoto -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
voting-ff4c54b8d-zbqc4 2/2 Running 0 3m58s 10.42.0.53 k3d-west-server-0 <none> <none>
web-0 2/2 Running 0 3m58s 10.42.0.55 k3d-west-server-0 <none> <none>
vote-bot-7d89964475-tfq7j 2/2 Running 0 3m58s 10.42.0.54 k3d-west-server-0 <none> <none>
emoji-79cc56f589-57tsh 2/2 Running 0 3m58s 10.42.0.52 k3d-west-server-0 <none> <none>
# emoji pod has an opaque port set to 8080.
# web-svc is a headless service and it backs a statefulset (which is why we have web-0).
# without a headless service we can't lookup based on pod DNS.
```
**`Responses before the change`**:
```
# request on IP, this is how things work at the moment. I included this because there shouldn't be
# any diff between the response given here and the response we get with the change.
# note: this corresponds to the emoji pod which has opaque ports set to 8080.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.52:8080
INFO[0000] opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524724} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"deployment" value:"emoji"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"emoji-79cc56f589-57tsh"} metric_labels:{key:"pod_template_hash" value:"79cc56f589"} metric_labels:{key:"serviceaccount" value:"emoji"} tls_identity:{dns_like_identity:{name:"emoji.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{} opaque_transport:{inbound_port:4143}}}
INFO[0000]
# request web-0 by IP
# there shouldn't be any diff with the response we get after the change
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.55:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]
# request web-0 by DNS name -- will not work.
❯ go run controller/script/destination-client/main.go -method getProfile -path web-0.web-svc.emojivoto.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"web-0.web-svc.emojivoto.svc.cluster.local" retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"web-svc.emojivoto.svc.cluster.local.:8080" weight:10000}
INFO[0000]
INFO[0000] fully_qualified_name:"web-0.web-svc.emojivoto.svc.cluster.local" retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"web-svc.emojivoto.svc.cluster.local.:8080" weight:10000}
INFO[0000]
# ^
# |
# --> no endpoint in the response
```
**`Responses after the change`**:
```
# request profile for emoji, we see opaque transport being set on the endpoint.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.52:8080
INFO[0000] opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524724} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"deployment" value:"emoji"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"emoji-79cc56f589-57tsh"} metric_labels:{key:"pod_template_hash" value:"79cc56f589"} metric_labels:{key:"serviceaccount" value:"emoji"} tls_identity:{dns_like_identity:{name:"emoji.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{} opaque_transport:{inbound_port:4143}}}
INFO[0000]
# request profile for web-0 with IP.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.55:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]
# request profile for web-0 with pod DNS, resp contains endpoint.
❯ go run controller/script/destination-client/main.go -method getProfile -path web-0.web-svc.emojivoto.svc.cluster.local:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]
```
Signed-off-by: Matei David <matei@buoyant.io>
While uncommon, if H2 upgrades are disabled it's possible for an opaque workload to not have its hint.OpaqueTransport field set in its destination profile response.
This changes the H2 upgrade enabled check to be specific for setting the
hint.Protocol while allowing hint.OpaqueTransport to be set independent of
that value.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Co-authored-by: Oliver Gould <ver@buoyant.io>
* destination: Remove support for IP Queries in `Get` API
Fixes #5246
This PR updates the destination to report an error when `Get` is called for IP queries. As the issue mentions, the proxies are not using this API anymore, and removing it helps to simplify the code and remove unnecessary logic.
This removes the relevant `IPWatcher` logic, along with its unit tests.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
When a pod is configured with `skip-inbound-ports` annotation, a client
proxy trying to connect to that pod tries to connect to it via H2 and
also tries to initiate a TLS connection. This issue is caused by the
destination controller when it sends protocol and TLS hints to the
client proxy for that skipped port.
This change fixes the destination controller so that it no longer
sends protocol and TLS identity hints to outbound proxies resolving a
`podIP:port` that is on a skipped inbound port.
I've included a test that exhibits this error prior to this fix but you
can also test the prior behavior by:
```bash
curl https://run.linkerd.io/booksapp.yml > booksapp.yaml
# edit either the books or authors service to:
#   1. Configure a failure rate of 0.0
#   2. Add the `skip-inbound-ports` config annotation
bin/linkerd viz stat pods webapp
# There should be no successful requests on the webapp deployment
```
Fixes #5995
Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
Fixes #5939
Some CNIs reassign the IP of a terminating pod to a new pod, which
leads to duplicate IPs in the cluster.
It eventually triggers #5939.
This commit will make the IPWatcher, when given an IP, filter out the terminating pods
(when a pod is given a deletionTimestamp).
The issue is hard to reproduce because we are not able to assign a
particular IP to a pod manually.
Signed-off-by: Bruce <wenliang.chen@personio.de>
Co-authored-by: Bruce <wenliang.chen@personio.de>
* destination: pass opaque-ports through cmd flag
Fixes #5817
Currently, default opaque ports are stored in two places, i.e. `Values.yaml` and `opaqueports/defaults.go`. As these ports are used only in destination, we can instead pass these values as a cmd flag for the destination component from `Values.yaml` and remove defaultPorts in `defaults.go`.
This means that if users override `Values.yaml`'s opaquePorts field, that change is propagated both for injection and for discovery, as expected.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This change introduces an opaque ports annotation watcher that will send
destination profile updates when a service has its opaque ports annotation
change.
The user facing change introduced by this is that the opaque ports annotation is
now required on services when using the multicluster extension. This is because
the service mirror will create mirrored services in the source cluster, and
destination lookups in the source cluster need to discover that the workloads in
the target cluster are opaque protocols.
### Why
Closes #5650
### How
The destination server now has a new opaque ports annotation watcher. When a
client subscribes to updates for a service name or cluster IP, the `GetProfile`
method creates a profile translator stack that passes updates through resource
adaptors such as: traffic split adaptor, service profile adaptor, and now opaque
ports adaptor.
When the annotation on a service changes, the update is passed through to the
client where the `opaque_protocol` field will either be set to true or false.
A few scenarios to consider are:
- If the annotation is removed from the service, the client should receive
an update with no opaque ports set.
- If the service is deleted, the stream stays open so the client should
receive an update with no opaque ports set.
- If the service has the annotation added, the client should receive that
update.
### Testing
Unit test have been added to the watcher as well as the destination server.
An integration test has been added that tests the opaque port annotation on a
service.
For manual testing, using the destination server scripts is easiest:
```
# install Linkerd
# start the destination server
$ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
# Create a service or namespace with the annotation and inject it
# get the destination profile for that service and observe the opaque protocol field
$ go run controller/script/destination-client/main.go -method getProfile -path test-svc.default.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000}
INFO[0000]
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000}
INFO[0000]
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Getting information about node topology queries the k8s api directly.
In an environment with high traffic and a high number of pods, the
k8s api server can become overwhelmed or start throttling requests.
This MR introduces a node informer to resolve the bottleneck and
fetch node information asynchronously.
Fixes #5684
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
Ignore pods with status.phase=Succeeded when watching IP addresses
When a pod terminates successfully, some CNIs will assign its IP address
to newly created pods. This can lead to duplicate pod IPs in the same
Kubernetes cluster.
Filter out pods which are in a Succeeded phase since they are not
routable anymore.
Fixes #5394
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
The destination service now returns `OpaqueTransport` hint when the annotation
matches the resolve target port. This is different from the current behavior
which always sets the hint when a proxy is present.
Closes #5421
This happens by changing the endpoint watcher to set a pod's opaque port
annotation in certain cases. If the pod already has an annotation, then its
value is used. If the pod has no annotation, then it checks the namespace that
the endpoint belongs to; if it finds an annotation on the namespace then it
overrides the pod's annotation value with that.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## What
When the destination service returns a destination profile for an endpoint,
indicate if the endpoint can receive opaque traffic.
## Why
Closes #5400
## How
When translating a pod address to a destination profile, the destination service
checks if the pod is controlled by any linkerd control plane. If it is, it can
set a protocol hint where we indicate that it supports H2 and opaque traffic.
If the pod supports opaque traffic, we need to get the port that it expects
inbound traffic on. We do this by getting the proxy container and reading its
`LINKERD2_PROXY_INBOUND_LISTEN_ADDR` environment variable. If we successfully
parse that into a port, we can set the opaque transport field in the destination
profile.
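A rough sketch of that lookup (the helper name is assumed; the env var name comes from the description above):

```go
package destination

import (
	"fmt"
	"net"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// inboundListenPort extracts the proxy's inbound port from its
// LINKERD2_PROXY_INBOUND_LISTEN_ADDR env var (e.g. "0.0.0.0:4143").
func inboundListenPort(proxy corev1.Container) (uint32, error) {
	for _, env := range proxy.Env {
		if env.Name != "LINKERD2_PROXY_INBOUND_LISTEN_ADDR" {
			continue
		}
		_, portStr, err := net.SplitHostPort(env.Value)
		if err != nil {
			return 0, err
		}
		port, err := strconv.ParseUint(portStr, 10, 32)
		if err != nil {
			return 0, err
		}
		return uint32(port), nil
	}
	return 0, fmt.Errorf("LINKERD2_PROXY_INBOUND_LISTEN_ADDR not found on proxy container")
}
```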
## Testing
A test has been added to the destination server where a pod has a
`linkerd-proxy` container. We can expect the `OpaqueTransport` field to be set
in the returned destination profile's protocol hint.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## Summary
This changes the destination service to start indicating whether a profile is an
opaque protocol or not.
Currently, profiles returned by the destination service are built by chaining
together updates coming from watching Profile and Traffic Split updates.
With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.
## Details
Watching updates to Profiles and Traffic Splits is straightforward--we watch
those resources and if an update occurs on one associated to a service we care
about then the update is passed through.
For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.
### When host is a Pod IP
When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, we'll indicate if the
profile is an opaque protocol if the requested port is in the annotation.
We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
### When host is a Service
When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.
Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This adds additional tests for the destination service that assert `GetProfile`
behavior when the path is an IP address.
1. Assert that when the path is a cluster IP, the configured service profile is
returned.
2. Assert that when the path a pod IP, the endpoint field is populated in the
service profile returned.
3. Assert that when the path is not a cluster or pod IP, the default service
profile is returned.
4. Assert that when the path is a pod IP with or without the controller annotation, the endpoint has or does not have a protocol hint.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## Motivation
Closes #5016
Depends on linkerd/linkerd2-proxy-api#44
## Solution
A `profileTranslator` exists for each service and now has a new
`fullyQualifiedName` field.
This field is used to set the `FullyQualifiedName` field of
`DestinationProfile`s each time an update is sent.
In the case that no service profile exists for a service, a default
`DestinationProfile` is created and we can use the field to set the correct
name.
In the case that a service profile does exist for a service, we still use this
field to set the name to keep it consistent.
### Example
Install linkerd on a cluster and run the destination server:
```
go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```
Get the IP of a service. Here, we'll get the ip for `linkerd-identity`:
```
> kubectl get -n linkerd svc/linkerd-identity
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
linkerd-identity ClusterIP 10.43.161.68 <none> 8080/TCP 4h25m
```
Get the profile of `linkerd-identity` from service name or IP and note the
`FullyQualifiedName` field:
```
> go run controller/script/destination-client/main.go -method getProfile -path 10.43.161.68:8080
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```
```
> go run controller/script/destination-client/main.go -method getProfile -path linkerd-identity.linkerd.svc.cluster.local
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
[Link to RFC](https://github.com/linkerd/rfc/pull/23)
### What
---
* PR that puts together all past pieces of the puzzle to deliver topology-aware service routing, as specified in the [Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/service-topology/) but with a much better load balancing algorithm and all the coolness of linkerd :)
* The first piece of this PR is focused on adding topology metadata: topology preference for services and topology `<k,v>` pairs for endpoints.
* The second piece of this PR puts together the new context format and fetching the source node topology metadata in order to allow for endpoints filtering.
* The final part is doing the filtering -- passing all of the metadata to the listener and on every `Add` filtering endpoints based on the topology preference of the service, topology `<k,v>` pairs of endpoints and topology of the source (again `<k,v>` pairs).
### How
---
* **Collecting metadata**:
- Services do not have values for topology keys -- the topological keys defined in a service's spec are only there to dictate locality preference for routing. As such, I decided to store them in an array; they will be taken exactly as they are found in the service spec, which ensures we respect the preference order.
- For EndpointSlices, we are using a map -- an EndpointSlice has locality information in the form of a `<k,v>` pair, where the key is a topological key (similar to what's listed in the service) and the value is the locality information -- e.g. `hostname: minikube`. For each address we now have a map of topology values which gets populated when we translate the endpoints to an address set. Because normal Endpoints do not have any topology information, we create each address with an empty map which is subsequently populated ONLY for slices in the `endpointSliceToAddressSet` function.
* **Filtering endpoints**:
- This was a tricky part and filled me with doubts. I think there are a few ways to do this, but this is how I "envisioned" it. First, the `endpoint_translator.go` should be the one to do the filtering; this means that on subscription, we need to feed all of the relevant metadata to the listener. To do this, I created a new function `AddTopologyFilter` as part of the listener interface.
- To complement the `AddTopologyFilter` function, I created a new `TopologyFilter` struct in `endpoints_watcher.go`. I then embedded this structure in all listeners that implement the interface. The structure holds the source topology (source node), a boolean to tell if slices are activated in case we need to double check (or write tests for the function) and the service preference. We create the filter on Subscription -- we have access to the k8s client here as well as the service, so it's the best point to collect all of this data together. Addresses all have their own topology added to them so they do not have to be collected by the filter.
- When we add a new set of addresses, we check to see if slices are enabled -- chances are if slices are enabled, service topology might be too. This lets us skip this step if the latest version is not adopted. Prior to sending an `Add` we filter the endpoints -- if the preference is registered by the filter we strictly enforce it, otherwise nothing changes.
And that's pretty much it.
Signed-off-by: Matei David <matei.david.35@gmail.com>
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
The misspellings have been reported at aaf440489e (commitcomment-41423663)
The action reports that the changes in this PR would make it happy: 5b82c6c5ca
Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Add a new structure on the destination controller side to keep track of contextual information.
The token format has been changed from ns:<namespace> to a JSON format so that more variables can be encoded in the token. As part of this PR, a new field 'nodeName' has been added to help with service topologies.
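A sketch of parsing the new token format (the `ns` and `nodeName` fields follow the description; the legacy fallback handling is an assumption):

```go
package destination

import (
	"encoding/json"
	"strings"
)

// contextToken carries contextual information from the proxy; it used to be
// the plain string "ns:<namespace>".
type contextToken struct {
	Ns       string `json:"ns"`
	NodeName string `json:"nodeName"`
}

// parseContextToken accepts the JSON format and falls back to the legacy
// "ns:<namespace>" form.
func parseContextToken(token string) contextToken {
	var t contextToken
	if err := json.Unmarshal([]byte(token), &t); err != nil {
		if strings.HasPrefix(token, "ns:") {
			t.Ns = strings.TrimPrefix(token, "ns:")
		}
	}
	return t
}
```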
Fixes #4498
Signed-off-by: Matei David <matei.david.35@gmail.com>
EndpointSlices have been made opt-in due to their experimental nature. This PR
introduces a new install flag 'enableEndpointSlices' that will allow adopters to
specify in their cli install or helm install step whether they would like to
use endpointslices as a resource in the destination service, instead of the
endpoints k8s resource.
Signed-off-by: Matei David <matei.david.35@gmail.com>
This PR introduces a service mirroring component that is responsible for watching remote clusters and mirroring their services locally.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Fixes #3332
Fixes the very rare test failure
```
--- FAIL: TestGetProfiles (0.33s)
--- FAIL: TestGetProfiles/Returns_server_profile (0.11s)
server_test.go:228: Expected 1 or 2 updates but got 3:
[retry_budget:<retry_ratio:0.2 min_retries_per_second:10
ttl:<seconds:10 > > routes:<condition:<path:<regex:"/a/b/c"
> > metrics_labels:<key:"route" value:"route1" >
timeout:<seconds:10 > > retry_budget:<retry_ratio:0.2
min_retries_per_second:10 ttl:<seconds:10 > >
routes:<condition:<path:<regex:"/a/b/c" > >
metrics_labels:<key:"route" value:"route1" >
timeout:<seconds:10 > > retry_budget:<retry_ratio:0.2
min_retries_per_second:10 ttl:<seconds:10 > > ]
FAIL
FAIL github.com/linkerd/linkerd2/controller/api/destination
0.624s
```
that occurs when a third, unexpected stream update arrives because the fake API takes more time to notify its listeners about the resources created.
For all the nasty details check #3332
Fixes #3444. Fixes #3443.
## Background and Behavior
This change adds support for the destination service to resolve Get requests which contain a service clusterIP or pod IP as the `Path` parameter. It returns the stream of endpoints, just as if `Get` had been called with the service's authority. This lays the groundwork for allowing the proxy to TLS TCP connections, by letting it do destination lookups for the SO_ORIG_DST of TCP connections. When that IP address corresponds to a service cluster IP or pod IP, the destination service will return the endpoints stream, including the pod metadata required to establish identity.
Prior to this change, attempting to look up an ip address in the destination service would result in a `InvalidArgument` error.
Updating the `GetProfile` method to support ip address lookups is out of scope and attempts to look up an ip address with the `GetProfile` method will result in `InvalidArgument`.
## Implementation
We do this by creating an `IPWatcher` which wraps the `EndpointsWatcher` and supports lookups by IP. `IPWatcher` maintains a mapping of clusterIPs to service ids and translates subscriptions to an IP address into a subscription to the service id using the underlying `EndpointsWatcher`.
Since the service name is no longer always infer-able directly from the input parameters, we restructure `EndpointTranslator` and `PodSet` so that we propagate the service name from the endpoints API response.
## Testing
This can be tested by running the destination service locally, using the current kube context to connect to a Kubernetes cluster:
```
go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```
Then lookups can be issued using the destination client:
```
go run controller/script/destination-client/main.go -path 192.168.54.78:80 -method get -addr localhost:8086
```
Service cluster ips and pod ips can be used as the `path` argument.
Signed-off-by: Alex Leong <alex@buoyant.io>
- Added cleanup step at the end of all integration tests.
- Disable external_issuer_integration_tests in cloud_tests due to
namespace issue. Running this via `kind` tests is sufficient for now.
- Set a flaky test to `Skip`; relates to #3332.
Signed-off-by: Andrew Seigner <siggy@buoyant.io>