mirror of https://github.com/linkerd/linkerd2.git
This change ensures that when a Server with `proxyProtocol: opaque` selects an endpoint backed by a pod, destination requests for that pod reflect the fact that it handles opaque traffic. Currently, the only way opaque traffic is honored in the destination service is if the pod has the `config.linkerd.io/opaque-ports` annotation. With the introduction of Servers, however, users can set `server.Spec.ProxyProtocol: opaque` to indicate that if a Server selects a pod, then traffic to that pod's `server.Spec.Port` should be opaque. The destination service does not currently take this into account.

There is an existing change up that _also_ adds this functionality; it takes a different approach by creating a policy server client for each endpoint that a destination has. For `Get` requests on a service, the number of clients scales with the number of endpoints that back that service. This change avoids that issue by instead creating a Server watch in the endpoint watcher and sending updates through to the endpoint translator.

The two primary scenarios to consider are:

### A `Get` request for some service is streaming when a Server is created/updated/deleted

When a Server is created or updated, the endpoint watcher iterates through its endpoint watches (`servicePublisher` -> `portPublisher`), and if the Server selects any of those endpoints, the port publisher sends an update if the Server has marked that port as opaque.

When a Server is deleted, the endpoint watcher once again iterates through its endpoint watches and deletes the address set's `OpaquePodPorts` field, ensuring that updates have been cleared of Server overrides.

### A `Get` request for some service happens after a Server is created

When a `Get` request occurs (or new endpoints are added, since both take the same path), we must check whether any of those endpoints are selected by some existing Server. If so, we have to take that into account when creating the address set. This part of the change gives me a little concern, as we first must get all the Servers on the cluster and then build a set of _all_ the pod-backed endpoints that they select in order to determine whether any of these _new_ endpoints are selected. (A simplified sketch of this selection check appears at the end of this description.)

## Testing

Right now this can be tested by starting up the destination service locally and running `Get` requests on a service that has endpoints selected by a Server.

**app.yaml**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
  labels:
    app: pod
spec:
  containers:
    - name: app
      image: nginx
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  selector:
    app: pod
  ports:
    - name: http
      port: 80
---
apiVersion: policy.linkerd.io/v1alpha1
kind: Server
metadata:
  name: srv
  labels:
    policy: srv
spec:
  podSelector:
    matchLabels:
      app: pod
  port: 80
  proxyProtocol: HTTP/1
```

```bash
$ go run controller/script/destination-client/main.go -path svc.default.svc.cluster.local:80
```

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
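As a rough illustration of the selection check described above, here is a minimal, self-contained Go sketch. The `Server` struct and `opaquePortsForPod` function are hypothetical simplifications for this example and are not the actual linkerd2 watcher code; the sketch only shows how a pod's labels and container ports could be matched against a Server's `podSelector`, `port`, and `proxyProtocol` using standard `k8s.io/apimachinery` helpers.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Server is a trimmed-down, hypothetical stand-in for the
// policy.linkerd.io/v1alpha1 Server resource; only the fields used by this
// sketch are included.
type Server struct {
	PodSelector   *metav1.LabelSelector
	Port          intstr.IntOrString
	ProxyProtocol string
}

// opaquePortsForPod returns the set of container ports on the given pod that
// some Server marks as opaque. The real endpoint watcher would keep this up to
// date from a Server watch rather than recomputing it per pod.
func opaquePortsForPod(pod *corev1.Pod, servers []Server) (map[uint32]struct{}, error) {
	opaque := map[uint32]struct{}{}
	for _, srv := range servers {
		if srv.ProxyProtocol != "opaque" {
			continue
		}
		selector, err := metav1.LabelSelectorAsSelector(srv.PodSelector)
		if err != nil {
			return nil, err
		}
		if !selector.Matches(labels.Set(pod.Labels)) {
			continue
		}
		// Resolve the Server's port (name or number) against the pod's
		// container ports.
		for _, c := range pod.Spec.Containers {
			for _, p := range c.Ports {
				byName := srv.Port.Type == intstr.String && srv.Port.StrVal == p.Name
				byNumber := srv.Port.Type == intstr.Int && srv.Port.IntVal == p.ContainerPort
				if byName || byNumber {
					opaque[uint32(p.ContainerPort)] = struct{}{}
				}
			}
		}
	}
	return opaque, nil
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod", Labels: map[string]string{"app": "pod"}},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "app",
			Ports: []corev1.ContainerPort{{ContainerPort: 80}},
		}}},
	}
	srv := Server{
		PodSelector:   &metav1.LabelSelector{MatchLabels: map[string]string{"app": "pod"}},
		Port:          intstr.FromInt(80),
		ProxyProtocol: "opaque",
	}
	ports, _ := opaquePortsForPod(pod, []Server{srv})
	fmt.Println(ports) // map[80:{}]
}
```

In the actual change, this matching is meant to be driven by a Server watch in the endpoint watcher, so the opaque-port information flows to the endpoint translator incrementally instead of being recomputed from a full cluster listing on every `Get`.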