The ExternalWorkload resource we introduced has a minor naming
inconsistency: `Tls` in `meshTls` is not fully capitalised. Our other
resources (e.g. the authentication resources) capitalise TLS, and so does
Go's naming convention.
We fix this in the workload resource by changing the field's name and
bumping the version to `v1beta1`.
Upgrading the control plane version will continue to work without
downtime. However, if an existing resource exists, the policy controller
will not completely initialise. It will not enter a crashloop backoff,
but it will also not become ready until the resource is edited or
deleted.
Signed-off-by: Matei David <matei@buoyant.io>
Adds a metric that measures the number of items that have been discarded from the work queue in the external workloads controller due to the retry limit being exceeded.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Fixes #12032
The Destination controller server tests test the destination server with `enableEndpointSlices=false`. The default for this value is true, meaning that these tests do not test the default configuration.
We update the tests to test with `enableEndpointSlices=true` and update the corresponding mock kubernetes Endpoints resources to be EndpointSlices instead. We also fix an instance where the workload watcher was using Endpoints even when in EndpointSlices mode.
Signed-off-by: Alex Leong <alex@buoyant.io>
A Server may only select workloads in its own namespace. Therefore, when the
destination controller receives an update for a Server, it only needs to
potentially send updates to watches on workloads in that same namespace. Taking
this into account allows us to avoid all opaqueness computations for workloads in
other namespaces.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #12010
## Problem
We're observing crashes in the destination controller in some scenarios, due to data race as described in #12010.
## Cause
The problem is that the same instance of the `AddressSet.Addresses` map is mutated in the endpoints watcher's Server [informer handler](https://github.com/linkerd/linkerd2/blob/edge-24.1.3/controller/api/destination/watcher/endpoints_watcher.go#L1309) and iterated over in the endpoint translator's [queue loop](https://github.com/linkerd/linkerd2/blob/edge-24.1.3/controller/api/destination/endpoint_translator.go#L197-L211); these run in different goroutines and the map is not guarded. I believe this doesn't result in Destination returning stale data; it's more of a correctness issue.
## Solution
Make a shallow copy of `pp.addresses` in the endpoints watcher and only pass that to the listeners. It's a shallow copy because we avoid making copies of the pod reference in there, knowing it won't get mutated.
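For illustration, a minimal sketch of the kind of shallow copy involved (using a stand-in `Address` type rather than the real `watcher.Address`): only the map header is duplicated, so the listener iterates over its own map instance while the values it contains are still shared.

```go
// Stand-in for watcher.Address; the real struct holds a pod reference that we
// deliberately do not deep-copy.
type Address struct {
	IP  string
	Pod *struct{ Name string }
}

// shallowCopy duplicates only the map: concurrent mutation of the original map
// and iteration over the copy no longer race on the same map header.
func shallowCopy(in map[string]Address) map[string]Address {
	out := make(map[string]Address, len(in))
	for id, addr := range in {
		out[id] = addr // struct is copied by value; the Pod pointer is shared
	}
	return out
}
```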
## Repro
Install linkerd core and injected emojivoto, and patch the endpoint translator to include a sleep call that will help surface the race (don't install the patch in the cluster; we'll only use it locally below):
<details>
<summary>endpoint_translator.go diff</summary>
```diff
diff --git a/controller/api/destination/endpoint_translator.go b/controller/api/destination/endpoint_translator.go
index d1018d5f9..7d5abd638 100644
--- a/controller/api/destination/endpoint_translator.go
+++ b/controller/api/destination/endpoint_translator.go
@@ -5,6 +5,7 @@ import (
 	"reflect"
 	"strconv"
 	"strings"
+	"time"
 	pb "github.com/linkerd/linkerd2-proxy-api/go/destination"
 	"github.com/linkerd/linkerd2-proxy-api/go/net"
@@ -195,7 +196,9 @@ func (et *endpointTranslator) processUpdate(update interface{}) {
 }
 func (et *endpointTranslator) add(set watcher.AddressSet) {
 	for id, address := range set.Addresses {
+		time.Sleep(1 * time.Second)
 		et.availableEndpoints.Addresses[id] = address
 	}
```
</details>
Then create these two Server manifests:
<details>
<summary>emoji-web-server.yml</summary>
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: Server
metadata:
  namespace: emojivoto
  name: web-http
  labels:
    app.kubernetes.io/part-of: emojivoto
    app.kubernetes.io/name: web
    app.kubernetes.io/version: v11
spec:
  podSelector:
    matchLabels:
      app: web-svc
  port: http
  proxyProtocol: HTTP/1
```
</details>
<details>
<summary>emoji-web-server-opaque.yml</summary>
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: Server
metadata:
  namespace: emojivoto
  name: web-http
  labels:
    app.kubernetes.io/part-of: emojivoto
    app.kubernetes.io/name: web
    app.kubernetes.io/version: v11
spec:
  podSelector:
    matchLabels:
      app: web-svc
  port: http
  proxyProtocol: opaque
```
</details>
In separate consoles run the patched destination service and a destination client:
```bash
HOSTNAME=foobar go run -race ./controller/cmd/main.go destination -enable-h2-upgrade=true -enable-endpoint-slices=true -cluster-domain=cluster.local -identity-trust-domain=cluster.local -default-opaque-ports=25,587,3306,4444,5432,6379,9300,11211
```
```bash
go run ./controller/script/destination-client -path web-svc.emojivoto.svc.cluster.local:80
```
And run this to continuously switch the `proxyProtocol` field:
```bash
while true; do kubectl apply -f ~/src/k8s/sample_yamls/emoji-web-server.yml; kubectl apply -f ~/src/k8s/sample_yamls/emoji-web-server-opaque.yml ; done
```
You'll see the following data race report in the Destination controller logs:
<details>
<summary>destination logs</summary>
```console
==================
WARNING: DATA RACE
Write at 0x00c0006d30e0 by goroutine 178:
github.com/linkerd/linkerd2/controller/api/destination/watcher.(*portPublisher).updateServer()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/watcher/endpoints_watcher.go:1310 +0x772
github.com/linkerd/linkerd2/controller/api/destination/watcher.(*servicePublisher).updateServer()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/watcher/endpoints_watcher.go:711 +0x150
github.com/linkerd/linkerd2/controller/api/destination/watcher.(*EndpointsWatcher).addServer()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/watcher/endpoints_watcher.go:514 +0x173
github.com/linkerd/linkerd2/controller/api/destination/watcher.(*EndpointsWatcher).updateServer()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/watcher/endpoints_watcher.go:528 +0x26f
github.com/linkerd/linkerd2/controller/api/destination/watcher.(*EndpointsWatcher).updateServer-fm()
<autogenerated>:1 +0x64
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate()
/home/alpeb/go/pkg/mod/k8s.io/client-go@v0.29.1/tools/cache/controller.go:246 +0x81
k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnUpdate()
<autogenerated>:1 +0x1f
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/home/alpeb/go/pkg/mod/k8s.io/client-go@v0.29.1/tools/cache/shared_informer.go:970 +0x1f4
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
/home/alpeb/go/pkg/mod/k8s.io/apimachinery@v0.29.1/pkg/util/wait/backoff.go:226 +0x41
k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
/home/alpeb/go/pkg/mod/k8s.io/apimachinery@v0.29.1/pkg/util/wait/backoff.go:227 +0xbe
k8s.io/apimachinery/pkg/util/wait.JitterUntil()
/home/alpeb/go/pkg/mod/k8s.io/apimachinery@v0.29.1/pkg/util/wait/backoff.go:204 +0x10a
k8s.io/apimachinery/pkg/util/wait.Until()
/home/alpeb/go/pkg/mod/k8s.io/apimachinery@v0.29.1/pkg/util/wait/backoff.go:161 +0x9b
k8s.io/client-go/tools/cache.(*processorListener).run()
/home/alpeb/go/pkg/mod/k8s.io/client-go@v0.29.1/tools/cache/shared_informer.go:966 +0x38
k8s.io/client-go/tools/cache.(*processorListener).run-fm()
<autogenerated>:1 +0x33
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/home/alpeb/go/pkg/mod/k8s.io/apimachinery@v0.29.1/pkg/util/wait/wait.go:72 +0x86
Previous read at 0x00c0006d30e0 by goroutine 360:
github.com/linkerd/linkerd2/controller/api/destination.(*endpointTranslator).add()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/endpoint_translator.go:200 +0x1ab
github.com/linkerd/linkerd2/controller/api/destination.(*endpointTranslator).processUpdate()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/endpoint_translator.go:190 +0x166
github.com/linkerd/linkerd2/controller/api/destination.(*endpointTranslator).Start.func1()
/home/alpeb/pr/destination-race/linkerd2/controller/api/destination/endpoint_translator.go:174 +0x45
```
</details>
## Extras
This also removes the unused method `func (as *AddressSet) WithPort(port Port) AddressSet` in endpoints_watcher.go
Fixes #11995
If a Server is marking a Pod's port as opaque and then the Server's podSelector is updated to no longer select that Pod, then the Pod's port should no longer be marked as opaque. However, this update does not result in any updates from the destination API's Get stream and the port remains marked as opaque.
We fix this by updating the endpoint watcher's handling of Server updates to consider both the old and the new Server.
Signed-off-by: Alex Leong <alex@buoyant.io>
Revise leader election logic for endpoints controller
Our leader election logic can result in updates being missed under certain
conditions. Leases expire after their duration is up, even if their current
holder has been terminated. During this dead time, any changes in the
system will be observed by other controllers, but will not be written to
the API Server.
For example, during a rollout, a controller that has come up will not be
able to acquire the lease for a maximum time of 30 seconds (lease
duration). Within this time frame, any changes to the system (e.g. modified
workloads, services, deleted endpointslices) will be observed but not acted
on by the newly created controller. Once the controller gets into a bad
state, it can only recover after 10 minutes (via service resyncs) or if any
resources are modified.
To address this, we change our leader election mechanism. Instead of
pushing leader election to the edge (i.e. when performing writes) we only
allow events to be observed when a controller is leading (i.e. by
registering callbacks). When a controller stops leading, all of its
callbacks will be de-registered.
NOTE:
* controllers will have a grace period during which they can renew their
lease. Their callbacks will be de-registered only if this fails. We will
not register and de-register callbacks that often for a single
controller.
* we do not lose out on any state. Other informers will continue to run
(e.g. destination readers). When callbacks are registered, we pass all of
the cached objects through them. In other words, we do not issue API
requests on registration, we process the state of the cluster as observed
from the cache.
* we make another change that's slightly orthogonal: before we shut down,
we drain the queue. This should not be racy since we first block until the
queue is drained, then signal to the leader elector loop that we are done.
This gives us some confidence that all events have been processed as soon
as they were observed.
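As a rough sketch of the mechanism (helper and field names below are illustrative, not the controller's actual code), handler registration is tied to the lease via client-go's leader election callbacks:

```go
import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease registers informer callbacks only while the lease is held and
// de-registers them when leadership is lost (after the renewal grace period).
func runWithLease(ctx context.Context, lock *resourcelock.LeaseLock, register func(context.Context), unregister func()) {
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   30 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			// The shared informers keep running regardless; on registration the
			// cached objects are replayed through the handlers, so no extra API
			// requests are needed.
			OnStartedLeading: register,
			OnStoppedLeading: unregister,
		},
	})
}
```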
Signed-off-by: Matei David <matei@buoyant.io>
When the destination controller's workload watcher receives an update for any Server resource, it recomputes opaqueness for every workload. This is because the Server update may have changed opaqueness for that workload. However, this is very CPU intensive for the destination controller, especially during resyncs when we get Server updates for every Server resource in the cluster.
Instead, we only need to recompute opaqueness for workloads that are selected by the old version of the Server or by the new version of the Server. If a workload is not selected by either the new or old version of the Server, then the Server update cannot have changed the workload's opaqueness.
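A hedged sketch of the check (the helper below is illustrative and assumes the policy Server type exposes its pod selector as a `*metav1.LabelSelector`):

```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// serverUpdateAffectsPod reports whether an old/new Server selector pair can
// change the opaqueness of a pod; if neither matches, recomputation is skipped.
func serverUpdateAffectsPod(oldSel, newSel *metav1.LabelSelector, pod *corev1.Pod) bool {
	podLabels := labels.Set(pod.Labels)
	for _, ls := range []*metav1.LabelSelector{oldSel, newSel} {
		sel, err := metav1.LabelSelectorAsSelector(ls)
		if err != nil {
			continue // malformed selector: treat as non-matching in this sketch
		}
		if sel.Matches(podLabels) {
			return true
		}
	}
	return false
}
```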
Signed-off-by: Alex Leong <alex@buoyant.io>
Any slices generated for a group of external workloads follow a similar
convention: `linkerd-external-<svc-name>-<hash>`. Currently the hash is
appended directly to the service name, making it less readable. We add a
`-` to the `generateName` value so that the random hash is not run together
with the service name. This is similar to the upstream implementation.
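For reference, a small sketch of what the slice metadata looks like with a trailing `-` in the prefix (field values here are illustrative); the API server appends the random suffix after the dash, e.g. `linkerd-external-web-x7k2q`:

```go
import (
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func newSliceForService(svcNamespace, svcName string) *discoveryv1.EndpointSlice {
	return &discoveryv1.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			// Trailing "-" keeps the generated hash separated from the name.
			GenerateName: "linkerd-external-" + svcName + "-",
			Namespace:    svcNamespace,
		},
		AddressType: discoveryv1.AddressTypeIPv4,
	}
}
```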
Signed-off-by: Matei David <matei@buoyant.io>
When the destination controller receives a GetProfile request for an ExternalName service, it should return gRPC status code INVALID_ARGUMENT to signal to the proxy that ExternalName services do not have endpoints and service discovery should not be used. However, the destination controller is returning gRPC status code UNKNOWN instead. This causes the proxy to retry these requests, resulting in a flurry of warning logs in the destination controller.
We fix the destination controller to properly return INVALID_ARGUMENT instead of UNKNOWN for ExternalName services.
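A minimal sketch of the intended response (the message text is illustrative), using grpc-go's status package:

```go
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// externalNameErr signals that ExternalName services have no endpoints and
// that service discovery should not be used for them.
func externalNameErr(namespace, name string) error {
	return status.Errorf(codes.InvalidArgument,
		"ExternalName service %s/%s has no endpoints", namespace, name)
}
```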
Signed-off-by: Alex Leong <alex@buoyant.io>
If the readiness of an external workload endpoint changes while traffic
is being sent to it, the update will not be propagated to clients. This
can lead to issues where an endpoint that is marked as `notReady`
continues to be reported as `ready` by the endpoints watcher.
The issue stems from how endpoint slices are diffed. A utility function
responsible for processing addresses does not consider endpoints whose
targetRef is an external workload. We fix the problem and add two module
tests to validate readiness is propagated to clients correctly.
---------
Signed-off-by: Matei David <matei@buoyant.io>
We previously introduced an endpoints controller that is responsible for
managing EndpointSlices for services that select external workloads. As a
follow-up, we introduce the reconciler component of the controller, which
is responsible for the diffing and the writes.
Additionally, the controller is wired-up in the destination service's
main routine and will start if endpoint slice support is enabled.
---------
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Co-authored-by: Zahari Dichev <zaharidichev@gmail.com>
When a meshed client attempts to establish a connection directly to the workload IP of an ExternalWorkload, the destination controller should return an endpoint profile for that ExternalWorkload with a single endpoint and the metadata associated with that ExternalWorkload including:
* mesh TLS identity
* workload metric labels
* opaque / protocol hints
Signed-off-by: Alex Leong <alex@buoyant.io>
This PR adds metrics to the work queue that is used in the external workload endpoints controller.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
For mesh expansion, we need to register an ExternalWorkload's service
membership. Service memberships describe which Service objects an
ExternalWorkload is part of (i.e. which service can be used to route
traffic to an external endpoint).
Service membership will allow the control plane to discover
configuration associated with an external endpoint when performing
discovery on a service target.
To build these memberships, we introduce a new controller to the
destination service, responsible for watching Service and
ExternalWorkload objects, and for writing out EndpointSlice objects for
each Service that selects one or more external endpoints.
As a first step, we add a new externalworkload module and a new controller
in the module that watches Services and ExternalWorkloads. In a follow-up
change, the ExternalEndpointManager will additionally perform the necessary
reconciliation by writing EndpointSlice objects.
Since Linkerd's control plane may run in HA, we also add a lease object
that will be used by the manager. When a lease is claimed, a flag is
turned on in the manager to let it know it may perform writes.
A more compact list of changes:
* Add a new externalworkload module
* Add an EndpointsController in the module along with necessary mechanisms to watch resources.
* Add RBAC rules to the destination service:
  * Allow policy and destination to read ExternalWorkload objects
  * Allow destination to create / update / read Lease objects
---------
Signed-off-by: Matei David <matei@buoyant.io>
Whenever the destination controller's informer receives an update of a Server resource, it checks every portPublisher in the endpointsWatcher to see if the Server selects any pods in that servicePort and updates those pods' opaque protocol field. Regardless of whether any pods were matched or the opaque protocol changed, an update is sent to each listener. This results in an update to every endpointTranslator each time a Server is updated. During a resync, we get an update for every Server in the cluster, which results in N updates to each endpointTranslator where N is the number of Servers in the cluster.
If N is greater than 100, it becomes possible that these N updates could overflow the endpointTranslator update queue if the queue is not being drained fast enough.
We change this to only send the update for a Server if at least one of the servicePort addresses was selected by that Server AND its opaque protocol field changed.
Signed-off-by: Alex Leong <alex@buoyant.io>
In 71635cb and 357a1d3 we updated the endpoint and profile translators
to prevent backpressure from stalling the informer tasks. This change
updates the endpoint profile translator with the same fix, so that
updates are buffered and we can detect when a gRPC stream is stalled.
Furthermore, the update method is updated to use a protobuf-aware
comparison method instead of using serialization and string comparison.
A test is added for the endpoint profile translator, since none existed
previously.
When we do a `GetProfile` lookup for an unmeshed pod, we set the `weightedAddr.ProtocolHint` to an empty value `&pb.ProtocolHint{}` to indicate that the address is unmeshed and has no protocol hint. However, when the looked up port is in the default opaque list, we erroneously check if `weightedAddr.ProtocolHint != nil` to determine if we should attempt to get the inbound listen port for that pod. Since `&pb.ProtocolHint{} != nil`, we attempt to get the inbound listen port for the unmeshed pod. This results in an error, preventing any valid `GetProfile` responses from being returned.
We update the initialization logic for `weightedAddr.ProtocolHint` to only create a struct when a protocol hint is present and to leave it as `nil` if the pod is unmeshed.
We add a simple unit test for this behavior as well.
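A condensed sketch of the corrected logic, assuming the generated `pb.ProtocolHint` types from linkerd2-proxy-api (`pb "github.com/linkerd/linkerd2-proxy-api/go/destination"`); the `meshed` flag is illustrative:

```go
// protocolHintFor returns nil for unmeshed pods so that later
// `weightedAddr.ProtocolHint != nil` checks (and the inbound listen port
// lookup they guard) are skipped entirely.
func protocolHintFor(meshed bool) *pb.ProtocolHint {
	if !meshed {
		return nil
	}
	return &pb.ProtocolHint{Protocol: &pb.ProtocolHint_H2_{H2: &pb.ProtocolHint_H2{}}}
}
```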
Signed-off-by: Alex Leong <alex@buoyant.io>
This change adds a runtime flag to the destination controller,
--experimental-endpoint-zone-weights=true, that causes endpoints in the
local zone to receive higher weights. This feature is disabled by
default, since the weight value is not honored by proxies. No helm
configuration is exposed yet, either.
This weighting is instrumented in the endpoint translator. Tests are
added to confirm that the behavior is feature-gated.
Additionally, this PR adds the "zone" metric label to endpoint metadata
responses.
This removes `endpointProfileTranslator`'s dependency on the k8sAPI and
metadataAPI, which are not used. This was introduced in one of #11334's
refactorings, but ended up not being required. No functional changes
here.
https://github.com/linkerd/linkerd2/pull/11491 changed the EndpointTranslator to use a queue to avoid calling `Send` on a gRPC stream directly from an informer callback goroutine. This change updates the ProfileTranslator in the same way, adding a queue to ensure we do not block the informer thread.
Signed-off-by: Alex Leong <alex@buoyant.io>
In order to detect if the destination controller's k8s informers have fallen behind, we add a histogram for each resource type. These histograms track the delta between when an update to a resource occurs and when the destination controller processes that update. We do this by looking at the timestamps on the managed fields of the resource and looking for the most recent update and comparing that to the current time.
The histogram metrics are of the form `{kind}_informer_lag_ms_bucket_*`.
* We record a value only for updates, not for adds or deletes. This is because when the controller starts up, it will populate its cache with an add for each resource in the cluster and the delta between the last updated time of that resource and the current time may be large. This does not represent informer lag and should not be counted as such.
* When the informer performs resyncs, we get updates where the updated time of the old version is equal to the updated time of the new version. This does not represent an actual update of the resource itself and so we do not record a value.
* Since we are comparing timestamps set on the managed fields of resources to the current time from the destination controller's system clock, the accuracy of these metrics depends on clock drift being minimal across the cluster.
* We use histogram buckets which range from 500ms to about 17 minutes. In my testing, an informer lag of 500ms-1000ms is typical. However, we wish to have enough buckets to identify cases where the informer is lagged significantly behind.
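A sketch of the measurement (the metric and helper names are assumptions, not the exact ones added): we take the newest `managedFields` timestamp on the incoming update and record its distance from the current time, skipping adds and resync no-ops.

```go
import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var serviceInformerLag = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "service_informer_lag_ms",
	Help:    "Delta between a Service's last update and the informer processing it.",
	Buckets: prometheus.ExponentialBuckets(500, 2, 12), // ~500ms up to ~17m
})

// latestUpdate returns the most recent managedFields timestamp on a resource.
func latestUpdate(obj metav1.Object) time.Time {
	var latest time.Time
	for _, mf := range obj.GetManagedFields() {
		if mf.Time != nil && mf.Time.Time.After(latest) {
			latest = mf.Time.Time
		}
	}
	return latest
}

func recordServiceLag(oldObj, newObj metav1.Object) {
	oldT, newT := latestUpdate(oldObj), latestUpdate(newObj)
	if newT.IsZero() || !newT.After(oldT) {
		return // add or resync: equal timestamps don't represent informer lag
	}
	serviceInformerLag.Observe(float64(time.Since(newT).Milliseconds()))
}
```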
Signed-off-by: Alex Leong <alex@buoyant.io>
When we do a GetProfile lookup for an opaque port on an unmeshed pod,
we attempt to look up the inbound listen port of that pod's proxy. Since
that pod has no proxy, this fails and we return an error to the GetProfile
API call. This causes the proxy to fail to be able to resolve the profile and
be unable to route the traffic.
We revert to the previous behavior of only logging when we cannot look
up the inbound listen port instead of returning an error.
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #11065
When an inbound proxy receives a request with a canonical name of the form `hostname.service.namespace.svc.cluster.domain`, we assume that `hostname` is the hostname of the pod as described in https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields. However, pods are also addressable with `pod-ip.service.namespace.svc.cluster.domain`. When the destination controller gets a profile request of this form, we attempt to find a pod with hostname of `pod-ip` and return an error with gRPC status `Unknown` since this will not exist.
It is expected that this profile lookup will fail since we cannot have service profiles for individual pods. However, returning a gRPC status `Unknown` for these requests brings the reported success rate of the destination controller down. Instead we should return these as gRPC status `NotFound` so that these responses don't get reported as server errors.
Signed-off-by: Alex Leong <alex@buoyant.io>
When the destination controller logs about receiving or sending messages to a data plane proxy, there is no information in the log about which data plane pod it is communicating with. This can make it difficult to diagnose issues which span the data plane and control plane.
We add a `pod` field to the context token that proxies include in requests to the destination controller. We add this pod name to the logging context so that it shows up in log messages. In order to accomplish this, we had to plumb through logging context in a few places where it previously had not been. This gives us a more complete logging context and more information in each log message.
An example log message with this fuller logging context is:
```
time="2023-10-24T00:14:09Z" level=debug msg="Sending destination add: add:{addrs:{addr:{ip:{ipv4:183762990} port:8080} weight:10000 metric_labels:{key:\"control_plane_ns\" value:\"linkerd\"} metric_labels:{key:\"deployment\" value:\"voting\"} metric_labels:{key:\"pod\" value:\"voting-7475cb974c-2crt5\"} metric_labels:{key:\"pod_template_hash\" value:\"7475cb974c\"} metric_labels:{key:\"serviceaccount\" value:\"voting\"} tls_identity:{dns_like_identity:{name:\"voting.emojivoto.serviceaccount.identity.linkerd.cluster.local\"}} protocol_hint:{h2:{}}} metric_labels:{key:\"namespace\" value:\"emojivoto\"} metric_labels:{key:\"service\" value:\"voting-svc\"}}" addr=":8086" component=endpoint-translator context-ns=emojivoto context-pod=web-767f4484fd-wmpvf remote="10.244.0.65:52786" service="voting-svc.emojivoto.svc.cluster.local:8080"
```
Note the `context-pod` field.
Additionally, we have tested this when no pod field is included in the context token (e.g. when handling requests from a pod which does not yet add this field) and confirmed that the `context-pod` log field is empty, but no errors occur.
Signed-off-by: Alex Leong <alex@buoyant.io>
The destination controller indexes Pods by their primary PodIP and their
primary HostIP (when applicable); and it indexes Services by their
primary ClusterIP.
In preparation for dual-stack (IPv6) support, this change updates the
destination controller indexers to consume all IPs for these resources.
Tests are updated to demonstrate a dual-stack setup (where these
resources have a primary IPv4 address and a secondary IPv6 address).
While exercising these tests, I had to replace the brittle host:port
parsing logic we use in the server, in favor of Go's standard
`net.SplitHostPort` utility.
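For reference, the standard-library call from package `net` that replaced the hand-rolled parsing handles bracketed IPv6 authorities as well as IPv4 ones:

```go
host, port, err := net.SplitHostPort("[2001:db8::1]:8080")
// host == "2001:db8::1", port == "8080", err == nil; the same call accepts
// "10.42.0.1:8080" without any special-casing.
```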
When a grpc client of the destination.Get API initiates a request but then doesn't read off of that stream, the HTTP2 stream flow control window will fill up and eventually exert backpressure on the destination controller. This manifests as calls to `Send` on the stream blocking. Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing. This causes the destination controller to stop sending updates to any of its clients.
We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the informer callback. Each endpoint translator spawns a goroutine to process this queue and call `Send`. If there is not capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure) then we terminate the stream.
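A condensed sketch of the pattern (the type, channel capacity, and overflow handling below are illustrative, not the exact implementation):

```go
type update interface{} // stand-in for the watcher update types

type translator struct {
	updates chan update   // buffered queue filled from informer callbacks
	stop    chan struct{} // closed when the queue overflows
}

// enqueue never blocks the informer callback: if the buffer is full (a client
// has stopped reading and backpressure has built up), the stream is ended instead.
func (t *translator) enqueue(u update) {
	select {
	case t.updates <- u:
	default:
		close(t.stop) // a real implementation guards against closing twice
	}
}

// drain runs in a per-translator goroutine and is the only caller of Send.
func (t *translator) drain(send func(update) error) {
	for {
		select {
		case u := <-t.updates:
			if err := send(u); err != nil {
				return
			}
		case <-t.stop:
			return
		}
	}
}
```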
Signed-off-by: Alex Leong <alex@buoyant.io>
Followup to https://github.com/linkerd/linkerd2/pull/11334#issuecomment-1736093592
This extends the test introduced in #11334 to exercise upgrading a
Server associated with a pod's HostPort, and observing how the stream
updates the OpaqueProtocol field.
Helper functions were refactored a bit to allow retrieving the
l5dCRDClientSet used when building the fake API.
Followup to #11328
Implements a new pod watcher, instantiated alongside the other ones in the Destination server. It also watches Servers and carries all the logic from ServerWatcher, which has now been decommissioned.
The `CreateAddress()` function has been moved into a method of the PodWatcher, because we now call it on every update, given that the pod associated with an ip:port might change and we need to regenerate the Address object. That function also takes care of capturing opaque protocol info from associated Servers, which is not new and had some logic that was duplicated in the now defunct ServerWatcher. `getAnnotatedOpaquePorts()` was also moved for similar reasons.
Other things to note about PodWatcher:
- It publishes a new pair of metrics `ip_port_subscribers` and `ip_port_updates` leveraging the framework in `prometheus.go`.
- The complexity in `updatePod()` comes from only sending stream updates when the pod's readiness changes, to avoid sending duplicate messages on every pod lifecycle event.
Finally, since endpointProfileTranslator's `endpoint` (*pb.WeightedAddr) is no longer a static object, the `Update()` function now receives an Address that allows it to rebuild the endpoint on the fly (and so `createEndpoint()` was converted into a method of endpointProfileTranslator).
* stopgap fix for hostport staleness
## Problem
When there's a pod with a `hostPort` entry, `GetProfile` requests
targeting the host's IP and that `hostPort` return an endpoint profile
with that pod's IP and `containerPort`. If that pod vanishes and another
one in that same host with that same `hostPort` comes up, the existing
`GetProfile` streams won't get updated with the new pod information
(metadata, identity, protocol).
That breaks the connectivity of the client proxy relying on that stream.
## Partial Solution
It should be less surprising for those `GetProfile` requests to return
an endpoint profile with the same host IP and port requested, and leave
it to the cluster's CNI to perform the translation to the corresponding
pod IP and `containerPort`.
This PR performs that change, but continues returning the corresponding
pod's information alongside.
If the pod associated to that host IP and port changes, the client proxy
won't lose connectivity, but the pod's information won't get updated
(that'll be fixed in a separate PR).
A new unit test validating this has been added, which will be expanded
to validate the changed pod information when that gets implemented.
## Details of Change
- We no longer do the HostPort->ContainerPort conversion, so the
`getPortForPod` function was dropped.
- The `getPodByIp` function will now be split in two: `getPodByPodIP`
and `getPodByHostIP`, the latter being called only if the former
doesn't return anything.
- The `createAddress` function is now simplified in that it just uses
the passed IP to build the address. The passed IP will depend on which
of the two functions just mentioned returned the pod (host IP or pod
IP)
When a service has its internal traffic policy set to "local", we will perform filtering to only return local endpoints, as per the ForZone hints in the endpoints. However, ForZone calculations do not take resources from remote clusters into account, therefore this type of filtering is not appropriate for remote discovery services.
We explicitly ignore any internal traffic policy when doing remote discovery.
Signed-off-by: Alex Leong <alex@buoyant.io>
Similar to #11246, the destination controller was complaining about
trying to register duplicate metrics for the API cache gauges when a given
target cluster got re-linked. This change unregisters the gauges for the
target cluster when said cluster is removed.
Supersedes #11252
We add a `cluster_store_size` gauge to the destination controller to track the number of entries in the remote discovery cluster store. If this is ever different from the number of cluster credentials secrets in the linkerd namespace, this indicates that there is a problem with a link that needs to be investigated further.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds support for remote discovery to the destination controller.
When the destination controller gets a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` label, this is an indication that the destination controller should discover the endpoints for this service from a remote cluster. The destination controller will look for a remote cluster which has been linked to it (using the `linkerd multicluster link` command) with that name. It will look at the `multicluster.linkerd.io/remote-discovery` label for the service name to look up in that cluster. It then streams back the endpoint data for that remote service.
Since we now have multiple client-go informers for the same resource types (one for the local cluster and one for each linked remote cluster) we add a `cluster` label onto the prometheus metrics for the informers and EndpointWatchers to ensure that each of these components' metrics are correctly tracked and don't overwrite each other.
---------
Signed-off-by: Alex Leong <alex@buoyant.io>
There were a few improvements we could have made to a recent change that
added a ClusterStore concept to the destination service. This PR
introduces the small improvements:
* Sync dynamically created clients in separate go routines
* Refactor metadata API creation
* Remove redundant check in cluster_store_test
* Create a new runtime schema every time a fake metadata API client is
created to avoid racy behaviour.
Signed-off-by: Matei David <matei@buoyant.io>
Fixes #10764
Removed the `server_port_subscribers` gauge, as it wasn't distinguishing
amongst different pods, and the number of subscribers for each pod were
conflicting with one another when updating the metric (see more details
[here](https://github.com/linkerd/linkerd2/issues/10764#issuecomment-1650835823)).
Besides carrying an invalid value, this was generating the warning
`unable to delete server_port_subscribers metric with labels`
The metric was replaced with the `server_port_subscribes` and
`server_port_unsubscribes` counters, which track the overall number of
subscribes and unsubscribes to the particular pod port.
🌮 to @adleong for the diagnosis and the fix!
Currently, the control plane does not support indexing and discovering resources across cluster boundaries. In a multicluster set-up, it is desirable to have access to endpoint data (and by extension, any configuration associated with that endpoint) to support pod-to-pod communication. Linkerd's destination service should be extended to support reading discovery information from multiple sources (i.e. API Servers).
This change introduces an `EndpointsWatcher` cache. On creation, the cache registers a pair of event handlers:
* One handler to read `multicluster` secrets that embed kubeconfig files. For each such secret, the cache creates a corresponding `EndpointsWatcher` to read remote discovery information.
* Another handler to evict entries from the cache when a `multicluster` secret is deleted.
Additionally, a set of tests have been added that assert the behaviour of the cache when secrets are created and/or deleted. The cache will be used by the destination service to do look-ups for services that belong to another cluster, and that are running in a "remote discovery" mode.
---------
Signed-off-by: Matei David <matei@buoyant.io>
Fixes #11163
The `servicePublisher.updateServer` function will iterate through all registered listeners and update them. However, a nil listener may temporarily be in the list of listeners if an unsubscribe is in progress. This results in a nil pointer dereference.
All functions which result in updating the listeners must therefore be protected by the mutex so that we don't try to act on the list of listeners while it is being modified.
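A simplified sketch of the invariant (types and method bodies are illustrative, not the actual watcher code): subscribe, unsubscribe, and update paths all take the same lock, so the listener slice is never observed mid-modification. Requires the `sync` package.

```go
type Listener interface{ Update(interface{}) }

type servicePublisher struct {
	mu        sync.Mutex
	listeners []Listener
}

func (sp *servicePublisher) unsubscribe(l Listener) {
	sp.mu.Lock()
	defer sp.mu.Unlock()
	for i, existing := range sp.listeners {
		if existing == l {
			sp.listeners = append(sp.listeners[:i], sp.listeners[i+1:]...)
			return
		}
	}
}

func (sp *servicePublisher) updateServer(srv interface{}) {
	sp.mu.Lock()
	defer sp.mu.Unlock()
	for _, l := range sp.listeners {
		l.Update(srv) // never nil: removal completes atomically under the lock
	}
}
```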
Signed-off-by: Alex Leong <alex@buoyant.io>
Add support for enabling and disabling topology aware routing when hints are added/removed.
The testing setup is very involved because there are so many moving parts:
1) Set up a service which is layered over several availability zones.
   1a) The best way to do this is one Service object, with 3 ReplicaSets explicitly forced to use a specific AZ each.
2) Add the `service.kubernetes.io/topology-aware-hints: Auto` annotation to the Service object.
3) Use a load tester like k6 to send meaningful traffic to your service, but only in one AZ.
4) Scale up your ReplicaSets until k8s adds Hints to your EndpointSlices.
5) Observe that traffic shifts to only hit pods in one AZ.
6) Turn down the ReplicaSet count until K8s removes the hints from your EndpointSlices.
7) Observe traffic shift back to all pods across all AZs.
Currently, the proxy injector will expand lists of opaque port ranges
into lists of individual port numbers. This is because the proxy has
historically not accepted port ranges in the
`LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION` environment
variable. However, when very large ranges are used, the size of the
injected manifest can be quite large, since each individual port number
in a range must be listed separately.
Proxy PR linkerd/linkerd2-proxy#2395 changed the proxy to accept ranges
as well as individual port numbers in the opaque ports environment
variable, and this change was included in the latest proxy release
(v2.200.0). This means that the proxy-injector no longer needs to expand
large port ranges into individual port numbers, and can now simply
forward the list of ranges to the proxy. This branch changes the proxy
injector to do this, resolving issues with manifest size due to large
port ranges.
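As a rough sketch of the difference, with a hypothetical range type (the real injector works with its own config structures): ranges are now rendered verbatim rather than enumerated.

```go
import (
	"fmt"
	"strconv"
	"strings"
)

type portRange struct{ lower, upper uint16 }

// formatOpaquePorts renders ranges as the proxy now accepts them, e.g.
// "25,587,8000-9000", instead of expanding "8000-9000" into 1001 entries.
func formatOpaquePorts(ranges []portRange) string {
	parts := make([]string, 0, len(ranges))
	for _, r := range ranges {
		if r.lower == r.upper {
			parts = append(parts, strconv.Itoa(int(r.lower)))
		} else {
			parts = append(parts, fmt.Sprintf("%d-%d", r.lower, r.upper))
		}
	}
	// Value for LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION.
	return strings.Join(parts, ",")
}
```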
Closes #9803
Fixes #10764
`GetProfile` streams create a `server_port_subscribers` gauge that tracks the number of listeners interested in a given Server. Because of an oversight, the gauge was not registered until a second listener was added. For just one listener the gauge was absent. But whenever the `GetProfile` stream ended, the gauge was deleted, which resulted in this error if it wasn't registered to begin with:
```
level=warning msg="unable to delete server_port_subscribers metric with labels map[name:voting namespace:emojivoto port:4191]" addr=":8086" component=server
```
One can check that the gauge wasn't being created by installing viz and emojivoto, and checking the following returns empty:
```bash
$ linkerd diagnostics controller-metrics | grep server_port_subscribers
```
After this fix, one can see the metric getting populated:
```bash
$ linkerd diagnostics controller-metrics | grep server_port_subscribers
# HELP server_port_subscribers Number of subscribers to Server changes associated with a pod's port.
# TYPE server_port_subscribers gauge
server_port_subscribers{name="emoji",namespace="emojivoto",port="4191"} 1
server_port_subscribers{name="linkerd",namespace="linkerd",port="4191"} 1
server_port_subscribers{name="linkerd",namespace="linkerd",port="9990"} 1
server_port_subscribers{name="linkerd",namespace="linkerd",port="9995"} 1
server_port_subscribers{name="linkerd",namespace="linkerd",port="9996"} 1
server_port_subscribers{name="linkerd",namespace="linkerd",port="9997"} 1
server_port_subscribers{name="metrics",namespace="linkerd-viz",port="4191"} 1
server_port_subscribers{name="metrics",namespace="linkerd-viz",port="9995"} 1
server_port_subscribers{name="tap",namespace="linkerd-viz",port="4191"} 1
server_port_subscribers{name="tap",namespace="linkerd-viz",port="9995"} 1
server_port_subscribers{name="tap",namespace="linkerd-viz",port="9998"} 1
server_port_subscribers{name="vote",namespace="emojivoto",port="4191"} 1
server_port_subscribers{name="voting",namespace="emojivoto",port="4191"} 1
server_port_subscribers{name="web",namespace="emojivoto",port="4191"} 1
server_port_subscribers{name="web",namespace="linkerd-viz",port="4191"} 1
server_port_subscribers{name="web",namespace="linkerd-viz",port="9994"} 1
```
And when scaling down the voting deployment, one can see how the metric with `name="voting"` is removed.
The outbound proxy handles endpoints with the `opaque_transport` flag by
opening a direct connection to the inbound proxy's inbound listener
port, and sending a ProtoBuf `TransportHeader` including the target port
of the originating outbound connection and an (optional)
`SessionProtocol` describing the protocol used on that connection.
Currently, outbound proxies initiating direct connections will *always*
send `SessionProtocol` values communicating the protocol as understood
by the outbound proxy. However, this is not always the desired behavior.
Direct connections with `TransportHeader`s are used in two cases: for
gateway connections, and for ports which are marked as opaque. When the
inbound port is marked as opaque, the presence of a `SessionProtocol`
tells the inbound proxy to handle that connection as the indicated
protocol, which results in incorrect behavior when the inbound proxy's
ServerPolicy configures the target port as opaque (see #9888).
Therefore, the `Destination` proxy API has been updated to add a new
`ProtocolHint`, `Opaque`, which indicates that an outbound proxy should
_not_ send a `SessionProtocol` when initiating a direct connection, even
if the outbound proxy handled the connection as HTTP. This hint was
added to the proxy API in linkerd/linkerd2-proxy-api#197, and released
in `linkerd2-proxy-api` v0.8.0.
This branch updates the Destination controller's dependency on
`linkerd2-proxy-api` to v0.8.0, and changes the controller to send an
`Opaque` protocol hint when the target port is marked as opaque on the
destination pod. This should override the `H2` protocol hint that is
added when the destination is meshed. I've also added a new test for
this behavior.
Fixes #9888 (along with linkerd/linkerd2-proxy#2209, which changes the
proxy to actually handle the `Opaque` protocol hint).
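Roughly, the hint selection now looks like the sketch below (assuming the v0.8.0 generated Go names for the new `Opaque` variant; the boolean parameters are illustrative):

```go
func protocolHint(meshed, opaquePort bool) *pb.ProtocolHint {
	switch {
	case !meshed:
		return nil // unmeshed destinations get no hint at all
	case opaquePort:
		// Tells the outbound proxy to omit SessionProtocol on the direct connection.
		return &pb.ProtocolHint{Protocol: &pb.ProtocolHint_Opaque_{Opaque: &pb.ProtocolHint_Opaque{}}}
	default:
		return &pb.ProtocolHint{Protocol: &pb.ProtocolHint_H2_{H2: &pb.ProtocolHint_H2{}}}
	}
}
```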
The GetProfile API endpoint does not behave as expected: when a profile
watch is established, the API server starts two separate profile
watches--a primary watch with the client's namespace and a fallback watch
ignoring the client's namespace. These watches race to send data back to
the client. If the backup watch updates first, it may be sent to clients
before being corrected by a subsequent update. If the primary watch
updates with an empty value, the default profile may be served before
being corrected by an update to the backup watch.
From the proxy's perspective, we'd much prefer that the API provide a
single authoritative response when possible. It avoids needless
corrective work being distributed across the system on every watch
initiation.
To fix this, we modify the fallbackProfileListener to behave
predictably: it only emits updates once both its primary and fallback
listeners have been updated. This avoids emitting updates based on a
partial understanding of the cluster state.
Furthermore, the opaquePortsAdaptor is updated to avoid synthesizing a
default serviceprofile (surprising behavior) and, instead, this
defaulting logic is moved into a dedicated defaultProfileListener
helper. A dedupProfileListener is added to squelch obviously redundant
updates.
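A simplified sketch of the gating behaviour (types and names are illustrative): nothing is emitted until both watches have reported at least once, and the primary result wins whenever it exists. Requires the `sync` package.

```go
type Profile struct{ /* elided */ }

type fallbackProfileListener struct {
	mu         sync.Mutex
	primary    *Profile
	backup     *Profile
	primarySet bool
	backupSet  bool
	publish    func(*Profile)
}

func (f *fallbackProfileListener) update(p *Profile, fromPrimary bool) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if fromPrimary {
		f.primary, f.primarySet = p, true
	} else {
		f.backup, f.backupSet = p, true
	}
	if !f.primarySet || !f.backupSet {
		return // partial view of cluster state: stay quiet
	}
	if f.primary != nil {
		f.publish(f.primary)
		return
	}
	// May publish nil; a downstream defaultProfileListener supplies the default.
	f.publish(f.backup)
}
```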
Finally, this newfound predictability allows us to simplify the API's
tests. Many of the API tests are not clear in what they test and
sometimes make assertions about the "incorrect" profile updates.
Before changing any GetProfile behavior, this change splits the API
handler into some smaller scopes. This helps to clarify control flow and
reduce nested contexts. This change also adds relevant fields to log
contexts to improve diagnostics.
When processing a `delete` event for an EndpointSlice, regardless of the outcome (whether there are still addresses/endpoints alive or whether we have no endpoints left) we make a call to `noEndpoints` on the subscriber. This will send a `Destination.Get` update to the listeners to advertise a `NoEndpoints` event.
If there are still addresses left, NoEndpoints { exists: true } will be sent (to signal the service still exists). Until an update is registered (i.e. a new add) there will be no available endpoints -- this is incorrect, since other EndpointSlices may exist. This change fixes the problem by handling `noEndpoints` in a more specialized way for EndpointSlices.
Signed-off-by: Yannick Utard <yannickutard@gmail.com>
Fixes #9986
After reviewing the k8s API calls in Destination, it was concluded we
could only swap out the calls to the Node and RS resources to use the
metadata API, as all the other resources (Endpoints, EndpointSlices,
Services, Pod, ServiceProfiles, Server) required fields other than those
found in their metadata section.
This also required completing the `NewFakeAPI` implementation by adding
the missing annotations and labels entries.
## Testing Memory Consumption
The gains here aren't as big as in #9650. In order to test this we need
to push hard and create 4000 RS:
``` bash
for i in {0..4000}; do kubectl create deployment test-pod-$i --image=nginx; done
```
In edge-23.2.1 the destination pod's memory consumption goes from 40Mi
to 160Mi after all the RS were created. With this change, it went from
37Mi to 140Mi.