Commit Graph

76 Commits

Author SHA1 Message Date
Scott Fleener 156bf60ad7
feat(destination): introduce transport-protocol outbound TLS mode (#13699)
Non-opaque meshed traffic currently flows over the original destination port, which requires the inbound proxy to do protocol detection.

This adds an option to the destination controller that configures all meshed traffic to flow to the inbound proxy's inbound port. This will allow us to include more session protocol information in the future, obviating the need for inbound protocol detection.

This doesn't do much in the way of testing, since the default behavior should be unchanged. When this default changes, more validation will be done on the behavior here.

Signed-off-by: Scott Fleener <scott@buoyant.io>
2025-03-05 13:51:21 -08:00
Oliver Gould 5cbe45c86e
feat(destination): set parent and profile references (#13292)
In order for proxies to properly reflect the resources used to drive policy
decisions, the proxy API has been updated to include resource metadata on
ServiceProfile responses.

This change updates the profile translator to include ParentRef and ProfileRef
metadata when it is configured.

This change does not set backend or endpoint references.
2024-11-09 00:11:40 +00:00
Alex Leong c66f83e1f1
Add federated service watcher (#13267)
We add support for federated services to the destination controller by adding a new FederatedServiceWatcher.  When the destination controller receives a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` and/or the `multicluster.linkerd.io/local-discovery` annotations, it subscribes to the FederatedServiceWatcher instead of subscribing to the EndpointsWatcher directly.  The FederatedServiceWatcher watches the federated service for any changes to these annotations, and maintains the appropriate watches on the local EndpointWatcher and/or remote EndpointWatchers fetched through the ClusterStore.

This means that we will often have multiple EndpointTranslators writing to the same `Get` response stream.  In order for a `NoEndpoints` message sent to one EndpointTranslator to not clobber the whole stream, we make a change where `NoEndpoints` messages are no longer sent to the response stream, but are replaced by a `Remove` message containing all of the addresses from that EndpointTranslator.  This allows multiple EndpointTranslators to coexist on the same stream.

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-11-08 09:34:01 -08:00
Matei David 407df01ec3
chore(controller): Remove stream concurrency limits (#12598)
Our gRPC servers use the default gRPC server configuration, which
limits the number of concurrent streams to 100. Since the controllers
run with proxies, this provides a hard scaling limit for the number of
watches an application can have.

This change updates our gRPC server configuration to clear the default
concurrency limit, allowing the server to handle as many streams as
possible.
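
As a rough sketch (not the actual controller wiring), lifting the limit amounts to passing a large `MaxConcurrentStreams` option when constructing the gRPC server:

```go
package server

import (
	"math"

	"google.golang.org/grpc"
)

// newGRPCServer sketches how the default per-connection limit of 100
// concurrent streams can be effectively removed by raising it to the maximum.
func newGRPCServer() *grpc.Server {
	return grpc.NewServer(
		grpc.MaxConcurrentStreams(math.MaxUint32),
	)
}
```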

Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
2024-05-15 18:07:15 +01:00
Alejandro Pedraza 137eac9df3
Add IPv6 support for the destination controller (#12428)
Services in dual-stack mode result in the creation of two EndpointSlices, one for each IP family. Before this change, the Get Destination API would nondeterministically return the address for any of those ES, depending on which one was processed last by the controller because they would overwrite each other.

As part of the ongoing effort to support IPv6/dual-stack networks, this change fixes that behavior giving preference to IPv6 addresses whenever a service exposes both families.

There is a new set of unit tests in server_ipv6_test.go, and the TestEndpointTranslatorForPods tests gain a couple of new cases covering the interaction with zone filtering.
The server unit tests were also updated to segregate the tests and resources dealing with the IPv4/IPv6/dual-stack cases.
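
A minimal sketch of the address-family preference, assuming the addresses from both EndpointSlices have already been collected (helper name is illustrative, not the actual code):

```go
package destination

import "net"

// preferIPv6 returns only the IPv6 addresses when a dual-stack service
// exposes both families; otherwise it falls back to the IPv4 addresses.
func preferIPv6(addrs []string) []string {
	var v4, v6 []string
	for _, a := range addrs {
		ip := net.ParseIP(a)
		if ip == nil {
			continue // skip anything that isn't a plain IP
		}
		if ip.To4() == nil {
			v6 = append(v6, a)
		} else {
			v4 = append(v4, a)
		}
	}
	if len(v6) > 0 {
		return v6
	}
	return v4
}
```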
2024-05-02 14:39:05 -05:00
Oliver Gould aef8a02426
feat(destination): Add meshed HTTP/2 keep-alive settings (#12504)
This commit adds destination controller configuration that enables default
keep-alives for meshed HTTP/2 clients.

This is accomplished by encoding the raw protobuf message structure into the
helm values, and then encoding that as JSON in the destination controller's
command-line options. This allows operators to set any supported HTTP/2 client
configuration without having to modify the destination controller.
2024-04-30 19:35:30 +00:00
hanghuge 78d42b230d
chore: fix function name in comment (#12396)
Fixed comments for `subscribeToServicesWithContext` and `reconcileByAddressType`. Previously,
the comments contained incorrect function names.

Signed-off-by: hanghuge <cmoman@outlook.com>
2024-04-10 15:46:45 +01:00
Oliver Gould 2ab76b64c6
destination: Rename zone weighting flag to ext-endpoint-zone-weights (#12090) 2024-02-16 09:06:56 -05:00
Alex Leong 5aef29ace1
Update workload watcher and server tests to use EndpointSlices (#12054)
Fixes #12032

The Destination controller server tests test the destination server with `enableEndpointSlices=false`.  The default for this value is true, meaning that these tests do not test the default configuration.

We update the tests to test with `enableEndpointSlices=true` and update the corresponding mock kubernetes Endpoints resources to be EndpointSlices instead.  We also fix an instance where the workload watcher was using Endpoints even when in EndpointSlices mode.

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-02-09 11:33:06 -08:00
Alex Leong 0261e4f086
Return INVALID_ARGUMENT status codes properly from GetProfile (#11980)
When the destination controller receives a GetProfile request for an ExternalName service, it should return gRPC status code INVALID_ARGUMENT to signal to the proxy that ExternalName services do not have endpoints and service discovery should not be used.  However, the destination controller is returning gRPC status code UNKNOWN instead.  This causes the proxy to retry these requests, resulting in a flurry of warning logs in the destination controller.

We fix the destination controller to properly return INVALID_ARGUMENT instead of UNKNOWN for ExternalName services.
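
A sketch of the returned status using the standard gRPC status package (the function name and message are illustrative, not the exact code):

```go
package destination

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// externalNameError builds the INVALID_ARGUMENT status returned for
// ExternalName services, which the proxy treats as non-retryable.
func externalNameError(fqdn string) error {
	return status.Errorf(codes.InvalidArgument,
		"ExternalName service %s has no endpoints; service discovery is not supported", fqdn)
}
```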

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-01-26 09:11:00 -08:00
Alex Leong 65f13de2ce
Add support for ExternalWorkloads in endpoint profiles (#11952)
When a meshed client attempts to establish a connection directly to the workload IP of an ExternalWorkload, the destination controller should return an endpoint profile for that ExternalWorkload with a single endpoint and the metadata associated with that ExternalWorkload including:
* mesh TLS identity
* workload metric labels
* opaque / protocol hints

Signed-off-by: Alex Leong <alex@buoyant.io>
2024-01-23 09:43:12 -08:00
Oliver Gould b0fee16fb4
destination: Fix GetProfile endpoint buffering (#11815)
In 71635cb and 357a1d3 we updated the endpoint and profile translators
to prevent backpressure from stalling the informer tasks. This change
updates the endpoint profile translator with the same fix, so that
updates are buffered and we can detect when a gRPC stream is stalled.

Furthermore, the update method is updated to use a protobuf-aware
comparison method instead of using serialization and string comparison.

A test is added for the endpoint profile translator, since none existed
previously.
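
The protobuf-aware comparison is essentially `proto.Equal`; a minimal sketch (names are illustrative):

```go
package destination

import "google.golang.org/protobuf/proto"

// profileChanged reports whether an update differs from the last message
// sent, without serializing both messages to strings and comparing those.
func profileChanged(prev, next proto.Message) bool {
	return !proto.Equal(prev, next)
}
```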
2023-12-22 09:25:12 -08:00
Alex Leong a92d17dbe9
Fix error for profile lookups on unmeshed pods with port in default opaque list (#11550)
When we do a `GetProfile` lookup for an unmeshed pod, we set the `weightedAddr.ProtocolHint` to an empty value `&pb.ProtocolHint{}` to indicate that the address is unmeshed and has no protocol hint.  However, when the looked up port is in the default opaque list, we erroneously check if `weightedAddr.ProtocolHint != nil` to determine if we should attempt to get the inbound listen port for that pod.  Since `&pb.ProtocolHint{} != nil`, we attempt to get the inbound listen port for the unmeshed pod.  This results in an error, preventing any valid `GetProfile` responses from being returned.

We update the initialization logic for `weightedAddr.ProtocolHint` to only create a struct when a protocol hint is present and to leave it as `nil` if the pod is unmeshed.

We add a simple unit test for this behavior as well.
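
A sketch of the corrected initialization, assuming the surrounding code checks `weightedAddr.ProtocolHint != nil` before looking up the inbound listen port:

```go
package destination

import pb "github.com/linkerd/linkerd2-proxy-api/go/destination"

// protocolHintFor leaves the hint nil for unmeshed pods so that downstream
// checks like `weightedAddr.ProtocolHint != nil` behave as intended.
func protocolHintFor(meshed bool) *pb.ProtocolHint {
	if !meshed {
		return nil // previously this was &pb.ProtocolHint{}, which is never nil
	}
	return &pb.ProtocolHint{} // populated further by the caller
}
```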

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-12-20 13:56:49 -08:00
Oliver Gould 5e558ae3e5
destination: Add optional experimental endpoint weighting (#11795)
This change adds a runtime flag to the destination controller,
--experimental-endpoint-zone-weights=true, that causes endpoints in the
local zone to receive higher weights. This feature is disabled by
default, since the weight value is not honored by proxies. No helm
configuration is exposed yet, either.

This weighting is instrumented in the endpoint translator. Tests are
added to confirm that the behavior is feature-gated.

Additionally, this PR adds the "zone" metric label to endpoint metadata
responses.
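
A rough sketch of the feature-gated weighting (the weight values and flag plumbing are assumptions for illustration, not taken from the change):

```go
package destination

const defaultWeight uint32 = 10000

// zoneWeight boosts endpoints in the client's zone only when the
// --experimental-endpoint-zone-weights flag is enabled.
func zoneWeight(flagEnabled bool, endpointZone, localZone string) uint32 {
	if flagEnabled && endpointZone != "" && endpointZone == localZone {
		return defaultWeight * 10 // assumed multiplier for illustration
	}
	return defaultWeight
}
```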
2023-12-20 13:11:30 -08:00
Alejandro Pedraza c7db3b3b80
Remove k8sAPI/metadataAPI dependency from endpointProfileTranslator (#11587)
This removes `endpointProfileTranslator`'s dependency on the k8sAPI and
metadataAPI, which are not used. This was introduced in one of #11334's
refactorings, but ended up not being required. No functional changes
here.
2023-11-14 09:04:36 -05:00
Alex Leong 71635cbf3d
Add queuing to profile translator (#11546)
https://github.com/linkerd/linkerd2/pull/11491 changed the EndpointTranslator to use a queue to avoid calling `Send` on a gRPC stream directly from an informer callback goroutine.  This change updates the ProfileTranslator in the same way, adding a queue to ensure we do not block the informer thread.

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-11-09 07:32:30 -05:00
Alex Leong 4e7a588a2c
Add pod name to context token and logging (#11532)
When the destination controller logs about receiving or sending messages to a data plane proxy, there is no information in the log about which data plane pod it is communicating with.  This can make it difficult to diagnose issues which span the data plane and control plane.

We add a `pod` field to the context token that proxies include in requests to the destination controller.  We add this pod name to the logging context so that it shows up in log messages.  In order to accomplish this, we had to plumb through logging context in a few places where it previously had not been.  This gives us a more complete logging context and more information in each log message.

An example log message with this fuller logging context is:

```
time="2023-10-24T00:14:09Z" level=debug msg="Sending destination add: add:{addrs:{addr:{ip:{ipv4:183762990}  port:8080}  weight:10000  metric_labels:{key:\"control_plane_ns\"  value:\"linkerd\"}  metric_labels:{key:\"deployment\"  value:\"voting\"}  metric_labels:{key:\"pod\"  value:\"voting-7475cb974c-2crt5\"}  metric_labels:{key:\"pod_template_hash\"  value:\"7475cb974c\"}  metric_labels:{key:\"serviceaccount\"  value:\"voting\"}  tls_identity:{dns_like_identity:{name:\"voting.emojivoto.serviceaccount.identity.linkerd.cluster.local\"}}  protocol_hint:{h2:{}}}  metric_labels:{key:\"namespace\"  value:\"emojivoto\"}  metric_labels:{key:\"service\"  value:\"voting-svc\"}}" addr=":8086" component=endpoint-translator context-ns=emojivoto context-pod=web-767f4484fd-wmpvf remote="10.244.0.65:52786" service="voting-svc.emojivoto.svc.cluster.local:8080"
```

Note the `context-pod` field.

Additionally, we have tested this when no pod field is included in the context token (e.g. when handling requests from a pod which does not yet add this field) and confirmed that the `context-pod` log field is empty, but no errors occur.
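
A sketch of how the context token might be decoded (the field set is an assumption based on this description; older proxies may omit the pod field or send a non-JSON token, in which case the fields stay empty):

```go
package destination

import "encoding/json"

type contextToken struct {
	Ns       string `json:"ns"`
	NodeName string `json:"nodeName"`
	Pod      string `json:"pod"`
}

// parseContextToken tolerates missing fields and non-JSON tokens; the
// resulting Pod value feeds the context-pod logging field.
func parseContextToken(token string) contextToken {
	var ct contextToken
	_ = json.Unmarshal([]byte(token), &ct) // best-effort: empty struct on error
	return ct
}
```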

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-10-25 13:48:42 -07:00
Oliver Gould 8cd165a1e4
destination: Index pods and service with multiple IPs (#11515)
The destination controller indexes Pods by their primary PodIP and their
primary HostIP (when applicable); and it indexes Services by their
primary ClusterIP.

In preparation for dual-stack (IPv6) support, this change updates the
destination controller indexers to consume all IPs for these resources.
Tests are updated to demonstrate a dual-stack setup (where these
resources have a primary IPv4 address and a secondary IPv6 address).

While exercising these tests, I had to replace the brittle host:port
parsing logic we use in the server with Go's standard
`net.SplitHostPort` utility.
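
For reference, a sketch of the standard-library parsing that replaced the hand-rolled splitting (it handles bracketed IPv6 literals such as `[2001:db8::1]:8080`):

```go
package destination

import "net"

// splitTarget parses "host:port" targets, including IPv6 literals.
func splitTarget(target string) (host, port string, err error) {
	return net.SplitHostPort(target)
}
```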
2023-10-23 14:55:33 -07:00
Alex Leong 357a1d32b2
Add update queue to endpoint translator (#11491)
When a grpc client of the destination.Get API initiates a request but then doesn't read from that stream, the HTTP2 stream flow control window will fill up and eventually exert backpressure on the destination controller. This manifests as calls to `Send` on the stream blocking. Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing. This causes the destination controller to stop sending updates to any of its clients.

We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the informer callback.  Each endpoint translator spawns a goroutine to process this queue and call `Send`.  If there is not capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure) then we terminate the stream.
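
The queueing pattern, sketched with illustrative names (not the actual endpoint translator types): informer callbacks enqueue without blocking, a per-translator goroutine drains the queue and calls `Send`, and overflow terminates the stream instead of propagating backpressure.

```go
package destination

type update struct{} // stand-in for an add/remove payload

type endpointTranslator struct {
	updates   chan update // bounded queue filled from informer callbacks
	send      func(update) error
	endStream func() // cancels the gRPC stream
}

// enqueue never blocks: if the queue is full, the client has stopped
// reading, so the stream is terminated rather than stalling the informer.
func (t *endpointTranslator) enqueue(u update) {
	select {
	case t.updates <- u:
	default:
		t.endStream()
	}
}

// drain runs in its own goroutine and is the only caller of Send.
func (t *endpointTranslator) drain() {
	for u := range t.updates {
		if err := t.send(u); err != nil {
			return
		}
	}
}
```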

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-10-18 12:34:38 -07:00
Alejandro Pedraza 65ddba4e5d
dst: Update `GetProfile`'s stream when pod associated to HostPort lookup changes (#11334)
Followup to #11328

Implements a new pod watcher, instantiated alongside the other watchers in the Destination server. It also watches Servers and carries over all the logic from ServerWatcher, which has now been decommissioned.

The `CreateAddress()` function has been moved into a method of the PodWatcher, because we now call it on every update, given that the pod associated with an ip:port might change and we need to regenerate the Address object. That function also takes care of capturing opaque protocol info from associated Servers, which is not new but had some logic that was duplicated in the now defunct ServerWatcher. `getAnnotatedOpaquePorts()` was also moved for similar reasons.

Other things to note about PodWatcher:

- It publishes a new pair of metrics, `ip_port_subscribers` and `ip_port_updates`, leveraging the framework in `prometheus.go`.
- The complexity in `updatePod()` comes from only sending stream updates when there are changes in the pod's readiness, to avoid sending duplicate messages on every pod lifecycle event.

Finally, since endpointProfileTranslator's `endpoint` (*pb.WeightedAddr) is no longer a static object, the `Update()` function now receives an Address that allows it to rebuild the endpoint on the fly (and so `createEndpoint()` was converted into a method of endpointProfileTranslator).
2023-09-28 08:57:52 -05:00
Alejandro Pedraza 3d1a3e018c
dst: Stop overriding Host IP with Pod IP on HostPort lookup (#11328)
* stopgap fix for hostport staleness

## Problem

When there's a pod with a `hostPort` entry, `GetProfile` requests
targeting the host's IP and that `hostPort` return an endpoint profile
with that pod's IP and `containerPort`. If that pod vanishes and another
one on that same host with that same `hostPort` comes up, the existing
`GetProfile` streams won't get updated with the new pod information
(metadata, identity, protocol).

That breaks the connectivity of the client proxy relying on that stream.

## Partial Solution

It should be less surprising for those `GetProfile` requests to return
an endpoint profile with the same host IP and port requested, and leave
it to the cluster's CNI to perform the translation to the corresponding pod
IP and `containerPort`.

This PR performs that change, while continuing to return the corresponding
pod's information alongside.

If the pod associated with that host IP and port changes, the client proxy
won't lose connectivity, but the pod's information won't get updated
(that'll be fixed in a separate PR).

A new unit test validating this has been added, which will be expanded
to validate the changed pod information when that gets implemented.

## Details of Change

- We no longer do the HostPort->ContainerPort conversion, so the
  `getPortForPod` function was dropped.
- The `getPodByIp` function is now split in two: `getPodByPodIP`
  and `getPodByHostIP`, the latter being called only if the former
  doesn't return anything.
- The `createAddress` function is now simplified in that it just uses
  the passed IP to build the address. The passed IP depends on which
  of the two functions just mentioned returned the pod (host IP or pod
  IP).
2023-09-06 10:35:14 -05:00
Alex Leong db2e543b0c
Disable local traffic policy for remote discovery (#11257)
When a service has its internal traffic policy set to "local", we will perform filtering to only return local endpoints, as per the ForZone hints in the endpoints. However, ForZone calculations do not take resources from remote clusters into account, therefore this type of filtering is not appropriate for remote discovery services.

We explicitly ignore any internal traffic policy when doing remote discovery.

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-08-16 15:27:58 -07:00
Alex Leong 368b63866d
Add support for remote discovery (#11224)
Adds support for remote discovery to the destination controller.

When the destination controller gets a `Get` request for a Service with the `multicluster.linkerd.io/remote-discovery` label, this is an indication that the destination controller should discover the endpoints for this service from a remote cluster.  The destination controller will look for a remote cluster which has been linked to it (using the `linkerd multicluster link` command) with that name.  It will look at the `multicluster.linkerd.io/remote-discovery` label for the service name to look up in that cluster.  It then streams back the endpoint data for that remote service.

Since we now have multiple client-go informers for the same resource types (one for the local cluster and one for each linked remote cluster) we add a `cluster` label onto the prometheus metrics for the informers and EndpointWatchers to ensure that each of these components' metrics are correctly tracked and don't overwrite each other.

---------

Signed-off-by: Alex Leong <alex@buoyant.io>
2023-08-11 09:31:45 -07:00
Eliza Weisman 34df5aa606
inject: don't expand opaque port ranges (#10827)
Currently, the proxy injector will expand lists of opaque port ranges
into lists of individual port numbers. This is because the proxy has
historically not accepted port ranges in the
`LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION` environment
variable. However, when very large ranges are used, the size of the
injected manifest can be quite large, since each individual port number
in a range must be listed separately.

Proxy PR linkerd/linkerd2-proxy#2395 changed the proxy to accept ranges
as well as individual port numbers in the opaque ports environment
variable, and this change was included in the latest proxy release
(v2.200.0). This means that the proxy-injector no longer needs to expand
large port ranges into individual port numbers, and can now simply
forward the list of ranges to the proxy. This branch changes the proxy
injector to do this, resolving issues with manifest size due to large
port ranges.

Closes #9803
2023-04-27 11:27:40 -07:00
Oliver Gould 08f97cc26f
destination: Avoid sending spurious profile updates (#10517)
The GetProfile API endpoint does not behave as expected: when a profile
watch is established, the API server starts two separate profile
watches: a primary watch in the client's namespace and a fallback watch
ignoring the client's namespace. These watches race to send data back to
the client. If the backup watch updates first, it may be sent to clients
before being corrected by a subsequent update. If the primary watch
updates with an empty value, the default profile may be served before
being corrected by an update to the backup watch.

From the proxy's perspective, we'd much prefer that the API provide a
single authoritative response when possible. It avoids needless
corrective work being distributed across the system on every watch
initiation.

To fix this, we modify the fallbackProfileListener to behave
predictably: it only emits updates once both its primary and fallback
listeners have been updated. This avoids emitting updates based on a
partial understanding of the cluster state.

Furthermore, the opaquePortsAdaptor is updated to avoid synthesizing a
default serviceprofile (surprising behavior) and, instead, this
defaulting logic is moved into a dedicated defaultProfileListener
helper. A dedupProfileListener is added to squelch obviously redundant
updates.
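
A sketch of the predictable fallback behavior (names and types are illustrative): no update is emitted until both watches have reported, and the primary result wins whenever it exists.

```go
package destination

type profile struct{} // stand-in for a ServiceProfile

type fallbackProfileListener struct {
	primary, backup       *profile
	primarySet, backupSet bool
	publish               func(*profile)
}

// update records a result from one of the two watches and only publishes
// once both have reported, preferring the primary when it is present.
func (l *fallbackProfileListener) update(fromPrimary bool, p *profile) {
	if fromPrimary {
		l.primary, l.primarySet = p, true
	} else {
		l.backup, l.backupSet = p, true
	}
	if !l.primarySet || !l.backupSet {
		return // avoid emitting on a partial view of cluster state
	}
	if l.primary != nil {
		l.publish(l.primary)
		return
	}
	l.publish(l.backup)
}
```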

Finally, this newfound predictability allows us to simplify the API's
tests. Many of the API tests are not clear in what they test and
sometimes make assertions about the "incorrect" profile updates.
2023-03-13 13:36:18 -07:00
Oliver Gould c657aeabe8
destination: Split GetProfile into smaller functions (#10514)
Before changing any GetProfile behavior, this change splits the API
handler into some smaller scopes. This helps to clarify control flow and
reduce nested contexts. This change also adds relevant fields to log
contexts to improve diagnostics.
2023-03-13 11:36:39 -07:00
Alejandro Pedraza 4a84f2cb32
Implement the k8s metadata API in the Destination controller (#10326)
Fixes #9986

After reviewing the k8s API calls in Destination, it was concluded we
could only swap out the calls to the Node and RS resources to use the
metadata API, as all the other resources (Endpoints, EndpointSlices,
Services, Pod, ServiceProfiles, Server) required fields other than those
found in their metadata section.

This also required completing the `NewFakeAPI` implementation by adding
the missing annotations and labels entries.

## Testing Memory Consumption

The gains here aren't as big as in #9650. In order to test this we need
to push hard and create 4000 RS:

``` bash
for i in {0..4000}; do kubectl create deployment test-pod-$i --image=nginx; done
```

In edge-23.2.1 the destination pod's memory consumption goes from 40Mi
to 160Mi after all the RS were created. With this change, it went from
37Mi to 140Mi.
2023-02-13 17:30:07 -05:00
dependabot[bot] 62d6d7cd52
build(deps): bump sigs.k8s.io/gateway-api from 0.5.1 to 0.6.0 (#10038)
* build(deps): bump sigs.k8s.io/gateway-api from 0.5.1 to 0.6.0

Bumps [sigs.k8s.io/gateway-api](https://github.com/kubernetes-sigs/gateway-api) from 0.5.1 to 0.6.0.
- [Release notes](https://github.com/kubernetes-sigs/gateway-api/releases)
- [Changelog](https://github.com/kubernetes-sigs/gateway-api/blob/main/CHANGELOG.md)
- [Commits](https://github.com/kubernetes-sigs/gateway-api/compare/v0.5.1...v0.6.0)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/gateway-api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Account for possible errors returned from `AddEventHandler`

In v0.26.0 client-go's `AddEventHandler` method for informers started
returning a registration handle (that we ignore) and an error that we
now surface up.

* client-go v0.26.0 removed the openstack plugin

* Temporary changes to trigger tests in k8s 1.21

- Adds an innocuous change to integration.yml so that all tests get
  triggered
- Hard-code k8s version in `k3d cluster create` invocation to v1.21

* Revert "Temporary changes to trigger tests in k8s 1.21"

This reverts commit 3e1fdd0e5e.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
2023-01-16 09:38:09 -05:00
Alejandro Pedraza 52ae875e9d
Fixes HostPort mapping lookup that was generating a false warning (#9918)
When performing the HostPort mapping introduced in #9819, the `containsIP` function iterates through the pod IPs searching for a match against `targetIP` using `ip.String()`, but that returns something like `&PodIP{IP: xxx}`. Fixed that to just use `ip.IP`, and also completed the test fixtures to include both `PodIP` and `PodIPs` in the pod manifests.

Note this wasn't affecting the end result, it was just producing an extra warning as shown below, that this change eliminates:

```bash
$ go test -v ./controller/api/destination/... -run TestGetProfiles
=== RUN   TestGetProfiles
...
=== RUN   TestGetProfiles/Return_profile_with_endpoint_when_using_pod_DNS
time="2022-11-29T09:38:48-05:00" level=info msg="waiting for caches to sync"
time="2022-11-29T09:38:49-05:00" level=info msg="caches synced"
time="2022-11-29T09:38:49-05:00" level=warning msg="unable to find container port as host (172.17.13.15) matches neither PodIP nor HostIP (&Pod{ObjectMeta:{pod-0  ns    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[linkerd.io/control-plane-ns:linkerd] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{},RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{},HostAliases:[]HostAlias{},PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:172.17.13.15,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},})" test=TestGetProfiles/Return_profile_with_endpoint_when_using_pod_DNS
```
2022-11-29 16:33:31 -05:00
Steve Jenson 791c6a77d7
Follows the HostPort mapping when a request for a pod comes in on node network (#9819)
Maps the request port to the container's port if the request comes in from the node network and has a hostPort mapping.

Problem:

When a request for a container comes in from the node network, the node port is used, ignoring the hostPort mapping.

Solution:

When a request is seen coming from the node network, get the containerPort from the pod Spec.
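
A minimal sketch of that lookup against the pod spec (helper name is illustrative):

```go
package destination

import corev1 "k8s.io/api/core/v1"

// containerPortForHostPort returns the containerPort whose hostPort mapping
// matches the requested port, or false if the pod exposes no such mapping.
func containerPortForHostPort(pod *corev1.Pod, hostPort int32) (int32, bool) {
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.HostPort == hostPort {
				return p.ContainerPort, true
			}
		}
	}
	return 0, false
}
```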

Validation:

Fixed an existing unit test and wrote a new one driving GetProfile specifically.

Fixes #9677 

Signed-off-by: Steve Jenson <stevej@buoyant.io>
2022-11-23 11:58:06 -05:00
Oliver Gould f5876c2a98
go: Enable `errorlint` checking (#7885)
Since Go 1.13, errors may "wrap" other errors. [`errorlint`][el] checks
that error formatting and inspection is wrapping-aware.

This change enables `errorlint` in golangci-lint and updates all error
handling code to pass the lint. Some comparisons in tests have been left
unchanged (using `//nolint:errorlint` comments).

[el]: https://github.com/polyfloyd/go-errorlint
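
For illustration, the wrapping-aware style the lint enforces looks roughly like this (names are made up):

```go
package destination

import (
	"errors"
	"fmt"
	"io/fs"
)

// loadConfig wraps the underlying error with %w so callers can inspect it.
func loadConfig(path string) error {
	if err := readFile(path); err != nil {
		return fmt.Errorf("loading config %s: %w", path, err)
	}
	return nil
}

// isNotFound uses errors.Is instead of ==, so it still matches when the
// sentinel error has been wrapped.
func isNotFound(err error) bool {
	return errors.Is(err, fs.ErrNotExist)
}

func readFile(path string) error { return fs.ErrNotExist } // stand-in for a real read
```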

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-16 18:32:19 -07:00
Oliver Gould 7776a50074
destination: Use net.JoinHostPort to format names (#7821)
We use `fmt.Sprintf` to format URIs in several places we could be using
`net.JoinHostPort`. `net.JoinHostPort` ensures that IPv6 addresses are
properly escaped in URIs.
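
A minimal sketch of the difference:

```go
package destination

import "net"

// authorityFor formats a host:port authority. Unlike fmt.Sprintf("%s:%s", ...),
// JoinHostPort brackets IPv6 literals, e.g. "[2001:db8::1]:8080".
func authorityFor(host, port string) string {
	return net.JoinHostPort(host, port)
}
```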

Signed-off-by: Oliver Gould <ver@buoyant.io>
2022-02-07 15:05:14 -08:00
Alex Leong 5f9591abdb
Support non-pod endpoints in GetProfile responses (#7459)
Fixes #6337

GetProfile can be called with a FQDN for a specific member of a service e.g.

```
web-0.foo.ns.svc.cluster.local
```

If that endpoint is not backed by a pod, `GetProfile` will not return an endpoint in the response.

We update the logic to return an endpoint in the response even when the endpoint is not backed by a pod.

Signed-off-by: Alex Leong <alex@buoyant.io>
2021-12-17 12:33:06 -08:00
Kevin Leimkuhler 147d85dc70
Update `GetProfile` clients with policy server updates (#7388)
### What

`GetProfile` clients do not receive destination profiles that consider Server protocol fields the way that `Get` clients do. If a Server exists for a `GetProfile` destination specifying that the protocol for that destination is `opaque`, this information is not passed back to the client.

#7184 added this for `Get` by subscribing clients to Endpoint/EndpointSlice updates. When there is an update, or there is a Server update, the endpoints watcher passes this information back to the endpoint translator which handles sending the update back to the client.

For `GetProfile` the situation is different. As with `Get`, we only consider Servers when dealing with Pod IPs, but this only occurs in two situations for `GetProfile`.

1. The destination is a Pod IP and port
2. The destination is an Instance ID and port

In both of these cases, we need to check whether a Server already selects the endpoint, and we need to subscribe for Server updates in case one that selects the endpoint is added or deleted.

### How

First we check if there is already a Server which selects the endpoint. This is so that when the first destination profile is returned, the client knows whether the destination is `opaque` or not.

After sending that first update, we then subscribe the client for any future updates which will come from a Server being added or deleted.

This is handled by the new `ServerWatcher` which watches for Server updates on the cluster; when an update occurs it sends that to the `endpointProfileTranslator` which translates the protocol update into a DestinationProfile.

By introducing the `endpointProfileTranslator` which only handles protocol updates, we're able to decouple the endpoint logic from `profileTranslator`—its `endpoint` field has been removed now that it only handles ServiceProfile updates for Services.

### Testing

A unit test has been added and below are some manual testing instructions to see how it interacts with Server updates:

<details>
	<summary>app.yaml</summary>

	```yaml
	apiVersion: v1
	kind: Pod
	metadata:
	  name: pod
	  labels:
		app: pod
	spec:
	  containers:
	  - name: app
		image: nginx
		ports:
		  - name: http
			containerPort: 80
	---
	apiVersion: policy.linkerd.io/v1beta1
	kind: Server
	metadata:
	  name: srv
	  labels:
		policy: srv
	spec:
	  podSelector:
		matchLabels:
		  app: pod
	  port: 80
	  proxyProtocol: opaque
	```
</details>

```shell
$ go run ./controller/cmd/main.go destination
```

```shell
$ linkerd inject app.yaml |kubectl apply -f -
...
$ kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE                       NOMINATED NODE   READINESS GATES
pod    2/2     Running   0          53m   10.42.0.34   k3d-k3s-default-server-0   <none>           <none>
$ go run ./controller/script/destination-client/main.go -method getProfile -path 10.42.0.34:80
...
```

You can add/delete `srv` as well as edit its `proxyProtocol` field to observe the correct DestinationProfile updates.

Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
2021-12-08 12:26:27 -07:00
Tarun Pothulapati 4170b49b33
smi: remove default functionality in linkerd (#7334)
Now that SMI functionality is fully being moved into the
[linkerd-smi](www.github.com/linkerd/linkerd-smi) extension, we can
stop supporting its functionality by default.

This means that the `destination` component will stop reacting
to `TrafficSplit` objects. When `linkerd-smi` is installed,
it converts `TrafficSplit` objects into `ServiceProfiles`
that the destination component can understand and react to
accordingly.

Also, whenever a `ServiceProfile` with traffic splitting is associated
with a service, the same information (i.e. splits and weights) is also
surfaced through the `UI` (in the new `services` tab) and the `viz` cmd.
So we are not really losing any UI functionality here.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-12-03 12:07:30 +05:30
Kevin Leimkuhler 01cbe616f1
Honor Server `proxyProtocol` in destination service `Get` with policy CRD APIs (#7184)
This change ensures that if a Server exists with `proxyProtocol: opaque` that selects an endpoint backed by a pod, that destination requests for that pod reflect the fact that it handles opaque traffic.

Currently, the only way that opaque traffic is honored in the destination service is if the pod has the `config.linkerd.io/opaque-ports` annotation. With the introduction of Servers though, users can set `server.Spec.ProxyProtocol: opaque` to indicate that if a Server selects a pod, then traffic to that pod's `server.Spec.Port` should be opaque. Currently, the destination service does not take this into account.

There is an existing change up that _also_ adds this functionality; it takes a different approach by creating a policy server client for each endpoint that a destination has. For `Get` requests on a service, the number of clients scales with the number of endpoints that back that service.

This change fixes that issue by instead creating a Server watch in the endpoint watcher and sending updates through to the endpoint translator.

The two primary scenarios to consider are

### A `Get` request for some service is streaming when a Server is created/updated/deleted
When a Server is created or updated, the endpoint watcher iterates through its endpoint watches (`servicePublisher` -> `portPublisher`) and if it selects any of those endpoints, the port publisher sends an update if the Server has marked that port as opaque.

When a Server is deleted, the endpoint watcher once again iterates through its endpoint watches and deletes the address set's `OpaquePodPorts` field—ensuring that updates have been cleared of Server overrides.

### A `Get` request for some service happens after a Server is created
When a `Get` request occurs (or new endpoints are added—they both take the same path), we must check if any of those endpoints are selected by some existing Server. If so, we have to take that into account when creating the address set.

This part of the change gives me a little concern as we first must get all the Servers on the cluster and then create a set of _all_ the pod-backed endpoints that they select in order to determine if any of these _new_ endpoints are selected.

## Testing
Right now this can be tested by starting up the destination service locally and running `Get` requests on a service that has endpoints selected by a Server

**app.yaml**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  labels:
    app: pod
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  selector:
    app: pod
  ports:
  - name: http
    port: 80
---
apiVersion: policy.linkerd.io/v1alpha1
kind: Server
metadata:
  name: srv
  labels:
    policy: srv
spec:
  podSelector:
    matchLabels:
      app: pod
  port: 80
  proxyProtocol: HTTP/1
```

```bash
$ go run controller/script/destination-client/main.go -path svc.default.svc.cluster.local:80
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-11-23 20:35:53 -07:00
dependabot[bot] 789aeea561
Fix gRPC servers (#6510)
Bump github.com/linkerd/linkerd2-proxy-api from 0.1.18 to 0.2.0

Bumps [github.com/linkerd/linkerd2-proxy-api](https://github.com/linkerd/linkerd2-proxy-api) from 0.1.18 to 0.2.0.
- [Release notes](https://github.com/linkerd/linkerd2-proxy-api/releases)
- [Changelog](https://github.com/linkerd/linkerd2-proxy-api/blob/main/CHANGES.md)
- [Commits](https://github.com/linkerd/linkerd2-proxy-api/compare/v0.1.18...v0.2.0)

---
updated-dependencies:
- dependency-name: github.com/linkerd/linkerd2-proxy-api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Oliver Gould <olix0r@gmail.com>

Co-authored-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Oliver Gould <olix0r@gmail.com>
2021-07-19 10:24:23 -05:00
Matei David 06ef634a9b
Add endpoint in profile response for requests on pod DNS. (#6260)
Closes #6253 


### What
---

When we send a profile request with a pod IP, we get back an endpoint as part of the response. This has two advantages: we avoid building a load balancer and we can treat endpoint failure differently (with more of a fail fast approach). At the moment, when we use a pod DNS as the target of the profile lookup, we don't have an endpoint returned in the
response.

Through this change, the behaviour will be consistent. Whenever we look up a pod (either through IP or DNS name) we will get an endpoint back. The change also attempts to simplify some of the logic in GetProfile.


### How
---

We already have a way to build an endpoint and return it back to the client; I sought to re-use most of the code in an effort to also simplify `GetProfile()`. I extracted most of the code that would have been duplicated into a separate method that is responsible for building the address, looking at annotations for opaque ports and for sending the response back.

In addition, to support a pod DNS fqn I've expanded on the `else` branch of the topmost if statement -- if our host is not an IP, we parse the host to get the k8s fqn. If the parsing function returns an instance ID along with the ServiceID, then we know we are dealing directly with a pod -- if we do, we fetch the pod using the core informer and then return an endpoint for it.

### Tests
---

I've tested this mostly with the destination client script. For the tests, I used the following pods:
```
❯ kgp -n emojivoto -o wide

NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
voting-ff4c54b8d-zbqc4      2/2     Running   0          3m58s   10.42.0.53   k3d-west-server-0   <none>           <none>
web-0                       2/2     Running   0          3m58s   10.42.0.55   k3d-west-server-0   <none>           <none>
vote-bot-7d89964475-tfq7j   2/2     Running   0          3m58s   10.42.0.54   k3d-west-server-0   <none>           <none>
emoji-79cc56f589-57tsh      2/2     Running   0          3m58s   10.42.0.52   k3d-west-server-0   <none>           <none>

# emoji pod has an opaque port set to 8080.
# web-svc is a headless service and it backs a statefulset (which is why we have web-0).
# without a headless service we can't lookup based on pod DNS.
```

**`Responses before the change`**: 
```
# request on IP, this is how things work at the moment. I included this because there shouldn't be
# any diff between the response given here and the response we get with the change.
# note: this corresponds to the emoji pod which has opaque ports set to 8080.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.52:8080
INFO[0000] opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524724} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"deployment" value:"emoji"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"emoji-79cc56f589-57tsh"} metric_labels:{key:"pod_template_hash" value:"79cc56f589"} metric_labels:{key:"serviceaccount" value:"emoji"} tls_identity:{dns_like_identity:{name:"emoji.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{} opaque_transport:{inbound_port:4143}}}
INFO[0000]

# request web-0 by IP
# there shouldn't be any diff with the response we get after the change
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.55:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]

# request web-0 by DNS name -- will not work.
❯ go run controller/script/destination-client/main.go -method getProfile -path web-0.web-svc.emojivoto.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"web-0.web-svc.emojivoto.svc.cluster.local" retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"web-svc.emojivoto.svc.cluster.local.:8080" weight:10000}
INFO[0000]
INFO[0000] fully_qualified_name:"web-0.web-svc.emojivoto.svc.cluster.local" retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"web-svc.emojivoto.svc.cluster.local.:8080" weight:10000}
INFO[0000]
# ^
# |
#  -->  no endpoint in the response
```

**`Responses after the change`**:

```
# request profile for emoji, we see opaque transport being set on the endpoint.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.52:8080
INFO[0000] opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524724} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"deployment" value:"emoji"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"emoji-79cc56f589-57tsh"} metric_labels:{key:"pod_template_hash" value:"79cc56f589"} metric_labels:{key:"serviceaccount" value:"emoji"} tls_identity:{dns_like_identity:{name:"emoji.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{} opaque_transport:{inbound_port:4143}}}
INFO[0000]

# request profile for web-0 with IP.
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.55:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]

# request profile for web-0 with pod DNS, resp contains endpoint.
❯ go run controller/script/destination-client/main.go -method getProfile -path web-0.web-svc.emojivoto.svc.cluster.local:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524727} port:8080} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"namespace" value:"emojivoto"} metric_labels:{key:"pod" value:"web-0"} metric_labels:{key:"serviceaccount" value:"web"} metric_labels:{key:"statefulset" value:"web"} tls_identity:{dns_like_identity:{name:"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
INFO[0000]
```

Signed-off-by: Matei David <matei@buoyant.io>
2021-06-22 16:51:29 -06:00
wangchenglong01 9ea66c8f73
Condition is always 'false' because 'err' is always 'nil' (#6218)
Remove unnecessary err check

Signed-off-by: Cookie Wang <wangchl01@inspur.com>
2021-06-04 14:57:00 +05:30
Oliver Gould f2eb3162d1
destination: Check port bounds (#6143)
CodeQL analysis flags that our use of `strconv.Atoi` is potentially
incorrect. More information [here][1].

This change addresses this by explicitly checking the bounds of the port
integer before casting it from `int` to `uint32`.

[1]: https://codeql.github.com/codeql-query-help/go/go-incorrect-integer-conversion/
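
A sketch of bounds-checked parsing (one of several equivalent ways to satisfy the check):

```go
package destination

import (
	"fmt"
	"strconv"
)

// parsePort rejects anything outside 0-65535 before converting to uint32,
// avoiding the unchecked int-to-uint32 conversion that CodeQL flags.
func parsePort(s string) (uint32, error) {
	p, err := strconv.ParseUint(s, 10, 32)
	if err != nil || p > 65535 {
		return 0, fmt.Errorf("invalid port %q", s)
	}
	return uint32(p), nil
}
```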
2021-05-24 10:20:47 -07:00
Tarun Pothulapati fac28ff8a7
destination: Remove support for IP Queries in `Get` API (#6018)
* destination: Remove support for IP Queries in `Get` API

Fixes #5246

This PR updates the destination to report an error when `Get`
is called for IP Queries. As the issue mentions, The proxies
are not using this API anymore and it helps to simplify and
remove unnecessary logic.

This removes the relevant `IPWatcher` logic, along with
unit tests

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-04-21 12:40:40 +05:30
Dennis Adjei-Baah 78363ca894
Disable protocol and TLS hints on skipped ports (#6022)
When a pod is configured with `skip-inbound-ports` annotation, a client
proxy trying to connect to that pod tries to connect to it via H2 and
also tries to initiate a TLS connection. This issue is caused by the
destination controller when it sends protocol and TLS hints to the
client proxy for that skipped port.

This change fixes the destination controller so that it no longer
sends protocol and TLS identity hints to outbound proxies resolving a
`podIP:port` that is on a skipped inbound port.

I've included a test that exhibits this error prior to this fix but you
can also test the prior behavior by:

```bash
curl https://run.linkerd.io/booksapp.yml > booksapp.yaml

# edit either the books or authors service to:
1: Configure a failure rate of 0.0
2: add the `skip-inbound-ports` config annotation

bin/linkerd viz stat pods webapp

There should be no successful requests on the webapp deployment
```
Fixes #5995

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
2021-04-16 12:44:17 -04:00
Kevin Leimkuhler 3f72c998b3
Handle pod lookups for pods that map to a host IP and host port (#5904)
This fixes an issue where pod lookups by host IP and host port fail even though
the cluster has a matching pod.

Usually these manifested as `FailedPrecondition` errors, but the messages were
too long and resulted in http/2 errors. This change depends on #5893 which fixes
that separate issue.

This changes how often those `FailedPrecondition` errors actually occur. The
destination service now considers pod host IPs and should reduce the frequency
of those errors.

Closes #5881 

---

Lookups like this happen when a pod is created with a host IP and host port set
in its spec. It still has a pod IP when running, but requests to
`hostIP:hostPort` will also be redirected to the pod. Combinations of host IP
and host Port are unique to the cluster and enforced by Kubernetes.

Currently, the destination service fails to find pods in this scenario because
we only keep an index of pods and their pod IPs, not pods and their host IPs.
To fix this, we now also keep an index of pods and their host IPs—if and only if
they have the host IP set.

Now when doing a pod lookup, we consider both the IP and the port. We perform
the following steps:

1. Do a lookup by IP in the pod podIP index
  - If only one pod is found then return it
2. 0 or more than 1 pods have the same pod IP
3. Do a lookup by IP in the pod hostIP index
  - If any number of pods were found, we know that IP maps to a node IP.
    Therefore, we search for a pod with a matching host Port. If one exists then
    return it; if not then there is no pod that matches `hostIP:port`
4. The IP does not map to a host IP
5. If multiple pods were found in `1`, then we know there are pods with
   conflicting podIPs and an error is returned
6. If no pods were found in `1` then there is no pod that matches `IP:port` (see the sketch below)
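
A sketch of that lookup, assuming two illustrative indices keyed by pod IP and by host IP (not the actual watcher code):

```go
package destination

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func lookupPod(byPodIP, byHostIP map[string][]*corev1.Pod, ip string, port int32) (*corev1.Pod, error) {
	podMatches := byPodIP[ip]
	if len(podMatches) == 1 {
		return podMatches[0], nil // unambiguous pod IP match
	}
	// 0 or >1 pods share this IP: check whether it is actually a host IP and,
	// if so, disambiguate by host port, which is unique per node.
	if hostMatches := byHostIP[ip]; len(hostMatches) > 0 {
		for _, p := range hostMatches {
			for _, c := range p.Spec.Containers {
				for _, cp := range c.Ports {
					if cp.HostPort == port {
						return p, nil
					}
				}
			}
		}
		return nil, nil // node IP, but nothing maps hostIP:port
	}
	if len(podMatches) > 1 {
		return nil, fmt.Errorf("conflicting pod IP %s", ip)
	}
	return nil, nil // no pod matches IP:port
}
```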

---

Aside from the additional IP watcher test being added, this can be tested with
the following steps:

1. Create a kind cluster. kind is required because its pods in `kube-system`
   have the same pod IPs; this is not the case with k3d: `bin/kind create cluster`
2. Install Linkerd with `4445` marked as opaque: `linkerd install --set
   proxy.opaquePorts="4445" |kubectl apply -f -`
3. Get the node IP: `kubectl get -o wide nodes`
4. Pull my fork of `tcp-echo`:

```
$ git clone https://github.com/kleimkuhler/tcp-echo
...
$ git checkout --track kleimkuhler/host-pod-repro
```

5. `helm package .`
6. Install `tcp-echo` with the server not injected and correct host IP: `helm
   install tcp-echo tcp-echo-0.1.0.tgz --set server.linkerdInject="false" --set
   hostIP="..."`
7. Looking at the client's proxy logs, you should not observe any errors or
   protocol detection timeouts.
8. Looking at the server logs, you should see all the requests coming through
   correctly.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-03-18 13:29:43 -04:00
Tarun Pothulapati 5c1a375a51
destination: pass opaque-ports through cmd flag (#5829)
* destination: pass opaque-ports through cmd flag

Fixes #5817

Currently, default opaque ports are stored in two places, i.e.
`Values.yaml` and `opaqueports/defaults.go`. As these
ports are used only in destination, we can instead pass these
values as a cmd flag for the destination component from Values.yaml
and remove defaultPorts in `defaults.go`.

This means that if users override the `opaquePorts` field in
`Values.yaml`, that change is propagated both for injection and for
discovery, as expected.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-03-01 16:00:20 +05:30
Kevin Leimkuhler 51a965e228
Return default opaque ports in the destination service (#5814)
This changes the destination service to always use a default set of opaque ports
for pods and services. This is so that after Linkerd is installed onto a
cluster, users can benefit from common opaque ports without having to annotate
the workloads that serve the applications.

After #5810 merges, the proxy containers will have the default opaque ports
`25,443,587,3306,5432,11211`. This value on the proxy container does not affect
traffic though; it only configures the proxy.

In order for clients and servers to detect opaque protocols and determine opaque
transports, the pods and services need to have these annotations.

The ports `25,443,587,3306,5432,11211` are now handled opaquely when a pod or
service does not have the opaque ports annotation. If the annotation is present
with a different value, this is used instead of the default. If the annotation
is present but is an empty string, there are no opaque ports for the workload.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-24 14:55:31 -05:00
Kevin Leimkuhler ff93d2d317
Mirror opaque port annotations on services (#5770)
This change introduces an opaque ports annotation watcher that will send
destination profile updates when a service has its opaque ports annotation
change.

The user-facing change introduced by this is that the opaque ports annotation is
now required on services when using the multicluster extension. This is because
the service mirror will create mirrored services in the source cluster, and
destination lookups in the source cluster need to discover that the workloads in
the target cluster use opaque protocols.

### Why

Closes #5650

### How

The destination server now has a new opaque ports annotation watcher. When a
client subscribes to updates for a service name or cluster IP, the `GetProfile`
method creates a profile translator stack that passes updates through resource
adaptors such as: traffic split adaptor, service profile adaptor, and now opaque
ports adaptor.

When the annotation on a service changes, the update is passed through to the
client where the `opaque_protocol` field will either be set to true or false.

A few scenarios to consider are:

  - If the annotation is removed from the service, the client should receive
    an update with no opaque ports set.
  - If the service is deleted, the stream stays open so the client should
    receive an update with no opaque ports set.
  - If the service has the annotation added, the client should receive that
    update.

### Testing

Unit test have been added to the watcher as well as the destination server.

An integration test has been added that tests the opaque port annotation on a
service.

For manual testing, using the destination server scripts is easiest:

```
# install Linkerd

# start the destination server
$ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config

# Create a service or namespace with the annotation and inject it

# get the destination profile for that service and observe the opaque protocol field
$ go run controller/script/destination-client/main.go -method getProfile -path test-svc.default.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]                                              
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000} 
INFO[0000]
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-23 13:36:17 -05:00
Kevin Leimkuhler 5dc662ae97
Remove namespace inheritance of opaque ports annotation (#5739)
This change removes the namespace inheritance of the opaque ports annotation.
Now when setting opaque port related fields in destination profile responses, we
only look at the pod annotations.

This prepares for #5736 where the proxy-injector will add the annotation from
the namespace if the pod does not have it already.

Closes #5735

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-02-15 10:21:20 -05:00
Filip Petkovski 73f9fb3518
Use a shared informer when getting node topology (#5722)
Getting information about node topology queries the k8s api directly.
In an environment with high traffic and a high number of pods, the
k8s api server can become overwhelmed or start throttling requests.

This MR introduces a node informer to resolve the bottleneck and
fetch node information asynchronously.
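
A sketch of the informer-backed lookup, using the standard client-go shared informer factory (the resync interval and wiring are illustrative):

```go
package destination

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// newNodeLister starts a shared node informer so topology lookups are served
// from a local cache instead of hitting the API server directly.
func newNodeLister(cs kubernetes.Interface, stop <-chan struct{}) corelisters.NodeLister {
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	lister := factory.Core().V1().Nodes().Lister()
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	return lister
}
```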

Fixes #5684

Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
2021-02-12 11:05:38 -05:00
Alejandro Pedraza d3d7f4e2e2
Destination should return `OpaqueTransport` hint when annotation matches resolved target port (#5458)
The destination service now returns the `OpaqueTransport` hint when the annotation
matches the resolved target port. This is different from the current behavior
which always sets the hint when a proxy is present.

Closes #5421

This happens by changing the endpoint watcher to set a pod's opaque port
annotation in certain cases. If the pod already has an annotation, then its
value is used. If the pod has no annotation, then it checks the namespace that
the endpoint belongs to; if it finds an annotation on the namespace then it
overrides the pod's annotation value with that.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-01-05 14:54:55 -05:00
Kevin Leimkuhler b830efdad7
Add OpaqueTransport field to destination protocol hints (#5421)
## What

When the destination service returns a destination profile for an endpoint,
indicate if the endpoint can receive opaque traffic.

## Why

Closes #5400

## How

When translating a pod address to a destination profile, the destination service
checks if the pod is controlled by any linkerd control plane. If it is, it can
set a protocol hint where we indicate that it supports H2 and opaque traffic.

If the pod supports opaque traffic, we need to get the port that it expects
inbound traffic on. We do this by getting the proxy container and reading its
`LINKERD2_PROXY_INBOUND_LISTEN_ADDR` environment variable. If we successfully
parse that into a port, we can set the opaque transport field in the destination
profile.
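
A sketch of that lookup (the container name check is an assumption; the env var name comes from the description above):

```go
package destination

import (
	"fmt"
	"net"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// inboundListenPort extracts the proxy's inbound port from its
// LINKERD2_PROXY_INBOUND_LISTEN_ADDR env var, e.g. "0.0.0.0:4143" -> 4143.
func inboundListenPort(pod *corev1.Pod) (uint32, error) {
	for _, c := range pod.Spec.Containers {
		if c.Name != "linkerd-proxy" {
			continue
		}
		for _, e := range c.Env {
			if e.Name != "LINKERD2_PROXY_INBOUND_LISTEN_ADDR" {
				continue
			}
			_, portStr, err := net.SplitHostPort(e.Value)
			if err != nil {
				return 0, err
			}
			port, err := strconv.ParseUint(portStr, 10, 32)
			if err != nil {
				return 0, err
			}
			return uint32(port), nil
		}
	}
	return 0, fmt.Errorf("no inbound listen address found on pod %s", pod.Name)
}
```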

## Testing

A test has been added to the destination server where a pod has a
`linkerd-proxy` container. We can expect the `OpaqueTransport` field to be set
in the returned destination profile's protocol hint.

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-12-23 11:06:39 -05:00