Fixes #5939
Some CNIs reassign the IP of a terminating pod to a new pod, which
leads to duplicate IPs in the cluster.
It eventually triggers #5939.
This commit makes the IPWatcher, when given an IP, filter out terminating pods
(i.e., pods that have been given a deletionTimestamp).
The issue is hard to reproduce because we are not able to manually assign a
particular IP to a pod.
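A minimal sketch of the filtering rule (not the actual IPWatcher code; the helper name is illustrative):
```go
package watcher

import corev1 "k8s.io/api/core/v1"

// filterTerminating sketches the new rule: a pod with a deletionTimestamp is
// terminating, so it must not be returned as the owner of a looked-up IP.
func filterTerminating(pods []*corev1.Pod) []*corev1.Pod {
	running := make([]*corev1.Pod, 0, len(pods))
	for _, pod := range pods {
		// DeletionTimestamp is set by Kubernetes when a pod starts
		// terminating; its IP may already be reassigned by the CNI.
		if pod.DeletionTimestamp == nil {
			running = append(running, pod)
		}
	}
	return running
}
```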
Signed-off-by: Bruce <wenliang.chen@personio.de>
Co-authored-by: Bruce <wenliang.chen@personio.de>
This fixes an issue where pod lookups by host IP and host port fail even though
the cluster has a matching pod.
Usually these manifested as `FailedPrecondition` errors, but the messages were
too long and resulted in http/2 errors. This change depends on #5893 which fixes
that separate issue.
This change also reduces how often those `FailedPrecondition` errors occur in
the first place, since the destination service now considers pod host IPs when
resolving lookups.
Closes #5881
---
Lookups like this happen when a pod is created with a host IP and host port set
in its spec. It still has a pod IP when running, but requests to
`hostIP:hostPort` will also be redirected to the pod. Combinations of host IP
and host Port are unique to the cluster and enforced by Kubernetes.
Currently, the destination service fails to find pods in this scenario because
we only keep an index of pods by their pod IPs, not by their host IPs.
To fix this, we now also keep an index of pods and their host IPs—if and only if
they have the host IP set.
Now when doing a pod lookup, we consider both the IP and the port. We perform
the following steps (a sketch in Go follows the list):
1. Do a lookup by IP in the pod podIP index
- If only one pod is found then return it
2. Otherwise, 0 or more than 1 pods have the same pod IP
3. Do a lookup by IP in the pod hostIP index
   - If any number of pods were found, we know that IP maps to a node IP.
     Therefore, we search for a pod with a matching host port. If one exists then
     return it; if not then there is no pod that matches `hostIP:port`
4. Otherwise, the IP does not map to a host IP
5. If multiple pods were found in `1`, then we know there are pods with
   conflicting pod IPs and an error is returned
6. If no pods were found in `1` then there is no pod that matches `IP:port`
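A hedged sketch of the lookup above; the two maps stand in for the real indexers, and all names here are illustrative rather than the actual code:
```go
package lookup

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// lookupPod walks the steps described above: pod-IP index first, then the
// host-IP index with a host-port match, then the conflict/no-match cases.
func lookupPod(ip string, port uint32, podIPIndex, hostIPIndex map[string][]*corev1.Pod) (*corev1.Pod, error) {
	byPodIP := podIPIndex[ip]
	if len(byPodIP) == 1 {
		return byPodIP[0], nil // step 1: exactly one pod owns this pod IP
	}
	if byHostIP := hostIPIndex[ip]; len(byHostIP) > 0 {
		// step 3: the IP maps to a node IP; search for a matching host port
		for _, pod := range byHostIP {
			for _, c := range pod.Spec.Containers {
				for _, p := range c.Ports {
					if uint32(p.HostPort) == port {
						return pod, nil
					}
				}
			}
		}
		return nil, nil // no pod matches hostIP:port
	}
	if len(byPodIP) > 1 {
		// step 5: conflicting pod IPs
		return nil, fmt.Errorf("conflicting pod IPs for %s", ip)
	}
	return nil, nil // step 6: no pod matches IP:port
}
```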
---
Aside from the additional IP watcher test being added, this can be tested with
the following steps:
1. Create a kind cluster. kind is required because its pods in `kube-system`
   have the same pod IPs; this is not the case with k3d: `bin/kind create cluster`
2. Install Linkerd with `4445` marked as opaque: `linkerd install --set
   proxy.opaquePorts="4445" | kubectl apply -f -`
3. Get the node IP: `kubectl get -o wide nodes`
4. Pull my fork of `tcp-echo`:
```
$ git clone https://github.com/kleimkuhler/tcp-echo
...
$ git checkout --track kleimkuhler/host-pod-repro
```
5. `helm package .`
6. Install `tcp-echo` with the server not injected and the correct host IP: `helm
   install tcp-echo tcp-echo-0.1.0.tgz --set server.linkerdInject="false" --set
   hostIP="..."`
7. Looking at the client's proxy logs, you should not observe any errors or
   protocol detection timeouts.
8. Looking at the server logs, you should see all the requests coming through
   correctly.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
# Problem
While rolling out, often not all pods will be ready on the same set of
ports, leading the Kubernetes Endpoints API to return multiple subsets,
each covering a different set of ports, with the end result that the
same address gets repeated across subsets.
The old code for endpointsToAddresses would loop through all subsets, and the
later occurrences of an address would overwrite previous ones, with the
last one prevailing.
If the last subset happened to be for an irrelevant port, and the port to
be resolved is named, resolveTargetPort would resolve to port 0, which would
return port 0 to clients, ultimately leading linkerd-proxy to forward
connections to port 0.
This only happens if the pods selected by a service expose > 1 port, the
service maps to > 1 of these ports, and at least one of these ports is named.
# Solution
Never write an address to the set of addresses if its resolved port is 0, which
indicates that named port resolution failed.
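A simplified sketch of the fix, with `resolveTargetPort` reimplemented here only so the example compiles (the real signature may differ):
```go
package endpoints

import corev1 "k8s.io/api/core/v1"

// resolveTargetPort returns 0 when the named port cannot be resolved
// within the given subset (a stand-in for the helper named in the text).
func resolveTargetPort(subset corev1.EndpointSubset, name string) uint32 {
	for _, p := range subset.Ports {
		if p.Name == name {
			return uint32(p.Port)
		}
	}
	return 0
}

// endpointsToAddresses sketches the fixed loop: subsets whose named port
// resolves to 0 no longer overwrite addresses written by earlier subsets.
func endpointsToAddresses(ep *corev1.Endpoints, portName string) map[string]uint32 {
	addresses := map[string]uint32{}
	for _, subset := range ep.Subsets {
		port := resolveTargetPort(subset, portName)
		if port == 0 {
			continue // named-port resolution failed; skip this subset
		}
		for _, addr := range subset.Addresses {
			addresses[addr.IP] = port
		}
	}
	return addresses
}
```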
# Validation
Added a test case.
Signed-off-by: Riccardo Freixo <riccardofreixo@gmail.com>
This reduces the possible HTTP response size from the destination service when
it encounters an error during a profile lookup.
If multiple objects on a cluster share the same IP (such as pods in
`kube-system`), the destination service will return an error with the two
conflicting pod yamls.
In certain cases, these pod yamls can be too large for the HTTP response and the
destination pod's proxy will indicate that with the following error:
```
hyper::proto::h2::server: send response error: user error: header too big
```
From the app pod's proxy, this results in the following error:
```
poll_profile: linkerd_service_profiles::client: Could not fetch profile error=status: Unknown, message: "http2 error: protocol error: unexpected internal error encountered"
```
We now return only the names of the conflicting pods (or services). This
reduces the size of the returned error and prevents these errors from occurring.
Example response error:
```
poll_profile: linkerd_service_profiles::client: Could not fetch profile error=status: FailedPrecondition, message: "Pod IP address conflict: kube-system/kindnet-wsflq, kube-system/kube-scheduler-kind-control-plane", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Fri, 12 Mar 2021 19:54:09 GMT"} }
```
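A minimal sketch of how such a slimmed-down error could be built with the standard gRPC status package (the helper name is illustrative):
```go
package destination

import (
	"fmt"
	"strings"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	corev1 "k8s.io/api/core/v1"
)

// podConflictErr builds the error from namespace/name pairs only, instead of
// the full pod manifests that previously overflowed HTTP/2 headers.
func podConflictErr(pods []*corev1.Pod) error {
	names := make([]string, 0, len(pods))
	for _, p := range pods {
		names = append(names, fmt.Sprintf("%s/%s", p.Namespace, p.Name))
	}
	return status.Errorf(codes.FailedPrecondition,
		"Pod IP address conflict: %s", strings.Join(names, ", "))
}
```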
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* destination: pass opaque-ports through cmd flag
Fixes #5817
Currently, default opaque ports are stored in two places, i.e.
`Values.yaml` and `opaqueports/defaults.go`. As these
ports are used only in destination, we can instead pass these
values as a cmd flag for the destination component from `Values.yaml`
and remove `defaultPorts` in `defaults.go`.
This means that if users override the `opaquePorts` field in
`Values.yaml`, that change is propagated both for injection and for
discovery, as expected.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
This changes the destination service to always use a default set of opaque ports
for pods and services. This is so that after Linkerd is installed onto a
cluster, users can benefit from common opaque ports without having to annotate
the workloads that serve the applications.
After #5810 merges, the proxy containers will have the default opaque ports
`25,443,587,3306,5432,11211`. This value on the proxy container does not affect
traffic, though; it only configures the proxy.
In order for clients and servers to detect opaque protocols and determine opaque
transports, the pods and services need to have these annotations.
The ports `25,443,587,3306,5432,11211` are now handled opaquely when a pod or
service does not have the opaque ports annotation. If the annotation is present
with a different value, this is used instead of the default. If the annotation
is present but is an empty string, there are no opaque ports for the workload.
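A hedged sketch of that precedence rule, assuming the `config.linkerd.io/opaque-ports` annotation key:
```go
package opaqueports

import (
	"strconv"
	"strings"
)

// defaultOpaquePorts mirrors the default list from the text.
const defaultOpaquePorts = "25,443,587,3306,5432,11211"

// opaquePorts applies the precedence described above: an annotation value
// wins when present (an empty value means no opaque ports); otherwise the
// defaults apply.
func opaquePorts(annotations map[string]string) map[uint32]struct{} {
	value, ok := annotations["config.linkerd.io/opaque-ports"]
	if !ok {
		value = defaultOpaquePorts // no annotation: fall back to defaults
	}
	ports := map[uint32]struct{}{}
	for _, s := range strings.Split(value, ",") {
		// an empty annotation value parses to nothing, yielding an empty set
		if p, err := strconv.ParseUint(strings.TrimSpace(s), 10, 32); err == nil {
			ports[uint32(p)] = struct{}{}
		}
	}
	return ports
}
```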
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This reverts commit f9ab867cbc which renamed the
multicluster label name from `mirror.linkerd.io` to `multicluster.linkerd.io`.
While this change was made to follow similar namings in other extensions, it
complicates the multicluster upgrade process due to the secret creation.
`mirror.linkerd.io` is not that important of a label to change and this will
allow a smoother upgrade process for `stable-2.10.x`.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This change introduces an opaque ports annotation watcher that will send
destination profile updates when a service has its opaque ports annotation
change.
The user facing change introduced by this is that the opaque ports annotation is
now required on services when using the multicluster extension. This is because
the service mirror will create mirrored services in the source cluster, and
destination lookups in the source cluster need to discover that the workloads in
the target cluster are opaque protocols.
### Why
Closes #5650
### How
The destination server now has a new opaque ports annotation watcher. When a
client subscribes to updates for a service name or cluster IP, the `GetProfile`
method creates a profile translator stack that passes updates through resource
adaptors such as: traffic split adaptor, service profile adaptor, and now opaque
ports adaptor.
When the annotation on a service changes, the update is passed through to the
client where the `opaque_protocol` field will either be set to true or false.
A few scenarios to consider are:
- If the annotation is removed from the service, the client should receive
an update with no opaque ports set.
- If the service is deleted, the stream stays open so the client should
receive an update with no opaque ports set.
- If the service has the annotation added, the client should receive that
update.
### Testing
Unit tests have been added to the watcher as well as the destination server.
An integration test has been added that tests the opaque port annotation on a
service.
For manual testing, using the destination server scripts is easiest:
```
# install Linkerd
# start the destination server
$ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
# Create a service or namespace with the annotation and inject it
# get the destination profile for that service and observe the opaque protocol field
$ go run controller/script/destination-client/main.go -method getProfile -path test-svc.default.svc.cluster.local:8080
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000}
INFO[0000]
INFO[0000] fully_qualified_name:"terminus-svc.default.svc.cluster.local" opaque_protocol:true retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"terminus-svc.default.svc.cluster.local.:8080" weight:10000}
INFO[0000]
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This renames the multicluster annotation prefix from `mirror.linkerd.io` to
`multicluster.linkerd.io` in order to reflect other extension naming patterns.
Additionally, it moves labels only used in the Multicluster extension into their
own labels file—again to reflect other extensions.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.
This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.
This change removes the namespace inheritance of the opaque ports annotation.
Now when setting opaque port related fields in destination profile responses, we
only look at the pod annotations.
This prepares for #5736 where the proxy-injector will add the annotation from
the namespace if the pod does not have it already.
Closes #5735
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Getting information about node topology queries the k8s API directly.
In an environment with high traffic and a high number of pods, the
k8s API server can become overwhelmed or start throttling requests.
This PR introduces a node informer to resolve the bottleneck and
fetch node information asynchronously.
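A minimal sketch of the approach using client-go's shared informers, so node lookups are served from a local cache instead of hitting the API server per request:
```go
package watcher

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	listers "k8s.io/client-go/listers/core/v1"
	"k8s.io/client-go/tools/cache"
)

// newNodeLister wires up a shared node informer; subsequent topology lookups
// via the returned lister read from the informer's cache.
func newNodeLister(client kubernetes.Interface, stop <-chan struct{}) listers.NodeLister {
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	nodes := factory.Core().V1().Nodes()
	factory.Start(stop)
	cache.WaitForCacheSync(stop, nodes.Informer().HasSynced)
	return nodes.Lister()
}
```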
Fixes #5684
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
Closes #5545.
This change moves all tap and tap-injector code into the viz directory.
The tap and tap-injector components now also use a new tap image—separating
these components from the controller image that they are currently part of. This
means the controller image has removed all its build dependencies related to
tap.
Finally, the tap Protobuf has been separated from the metrics-api and moved into
its own `.proto` file and gen directory. This introduces a clear split between
the metrics-api and tap Protobuf.
There is no change in behavior for the `viz tap` command.
### Reviewing
#### Docker images
All the bin directory scripts should be updated to build and load the tap image.
All the CI workflows should be updated to build and push the tap image.
#### Controller and pkg directories
This is primarily deletions. Most of the deleted code in this directory is now
in the tap directory of the Viz extension.
#### viz/tap
This is the location that all the tap related code now lives in. New files are
mostly moved from the controller and pkg directories. Imports have all been
updated to point at the right locations and Protobuf.
The Protobuf here is taken from metrics-api and contains all tap-related
Protobuf.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Protobuf changes:
- Moved `healthcheck.proto` back from viz to `proto/common` as it is still used by the main `healthcheck.go` library (it was moved to viz by #5510).
- Extracted from `viz.proto` the IP-related types and put them in `/controller/gen/common/net` to be used by both the public and the viz APIs.
* Added chart templates for new viz linkerd-metrics-api pod
* Spin-off viz healthcheck:
- Created `viz/pkg/healthcheck/healthcheck.go` that wraps the original `pkg/healthcheck/healthcheck.go` while adding the `vizNamespace` and `vizAPIClient` fields which were removed from the core `healthcheck`. That way the core healthcheck doesn't have any dependencies on viz, and viz' healthcheck can now be used to retrieve viz api clients.
- The core and viz healthcheck libs are now abstracted out via the new `healthcheck.Runner` interface.
- Refactored the data plane checks so they don't rely on calling `ListPods`
- The checks in `viz/cmd/check.go` have been moved to `viz/pkg/healthcheck/healthcheck.go` as well, so `check.go`'s sole responsibility is dealing with command business. This command also now retrieves its viz api client through viz' healthcheck.
* Removed linkerd-controller dependency on Prometheus:
- Removed the `global.prometheusUrl` config in the core values.yml.
- Leave the Heartbeat's `-prometheus` flag hard-coded temporarily. TO-DO: have it automatically discover viz and pull Prometheus' endpoint (#5352).
* Moved observability gRPC from linkerd-controller to viz:
- Created a new gRPC server under `viz/metrics-api`, moving prometheus-dependent functions out of the core gRPC server and into it (same thing for the accompanying http server).
- Did the same for the `PublicAPIClient` (now called just `Client`) interface. The `VizAPIClient` interface disappears as it's enough to just rely on the viz `ApiClient` protobuf type.
- Moved the other files implementing the rest of the gRPC functions from `controller/api/public` to `viz/metrics-api` (`edge.go`, `stat_summary.go`, etc.).
- Also simplified some type names to avoid stuttering.
* Added linkerd-metrics-api bootstrap files. At the same time, we strip out of the public-api's `main.go` file the prometheus parameters and other no longer relevant bits.
* linkerd-web updates: it requires connecting with both the public-api and the viz api, so both addresses (and the viz namespace) are now provided as parameters to the container.
* CLI updates and other minor things:
- Changes to command files under `cli/cmd`:
- Updated `endpoints.go` according to new API interface name.
- Updated `version.go`, `dashboard` and `uninstall.go` to pull the viz namespace dynamically.
- Changes to command files under `viz/cmd`:
- `edges.go`, `routes.go`, `stat.go` and `top.go`: point to dependencies that were moved from public-api to viz.
- Other changes to have tests pass:
- Added `metrics-api` to list of docker images to build in actions workflows.
- In `bin/fmt` exclude protobuf generated files instead of entire directories because directories could contain both generated and non-generated code (case in point: `viz/metrics-api`).
* Add retry to 'tap API service is running' check
* mc check shouldn't err when viz is not available. Also properly set the logger in multicluster/cmd/root.go so that it displays messages when --verbose is used
The Destination controller can panic due to a nil-deref when
the EndpointSlices API is enabled.
This change updates the controller to properly initialize values
to avoid this segmentation fault.
Fixes #5521
Signed-off-by: Oleg Ozimok <oleg.ozimok@corp.kismia.com>
* Separate observability API
Closes #5312
This is a preliminary step towards moving all the observability API into `/viz`, by first moving its protobuf into `viz/metrics-api`. This should facilitate review as the go files are not moved yet, which will happen in a followup PR. There are no user-facing changes here.
- Moved `proto/common/healthcheck.proto` to `viz/metrics-api/proto/healthcheck.proto`
- Moved the contents of `proto/public.proto` to `viz/metrics-api/proto/viz.proto` except for the `Version` stuff.
- Merged `proto/controller/tap.proto` into `viz/metrics-api/proto/viz.proto`
- `grpc_server.go` now temporarily exposes `PublicAPIServer` and `VizAPIServer` interfaces to separate both APIs. This will get properly split in a followup.
- The web server provides handlers for both interfaces.
- `cli/cmd/public_api.go` and `pkg/healthcheck/healthcheck.go` temporarily now have methods to access both APIs.
- Most of the CLI commands will use the Viz API, except for `version`.
The other changes in the go files are just changes in the imports to point to the new protobufs.
Other minor changes:
- Removed `git add controller/gen` from `bin/protoc-go.sh`
Ignore pods with status.phase=Succeeded when watching IP addresses
When a pod terminates successfully, some CNIs will assign its IP address
to newly created pods. This can lead to duplicate pod IPs in the same
Kubernetes cluster.
Filter out pods which are in a Succeeded phase since they are not
routable anymore.
Fixes #5394
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
Currently, public-api is part of the core control-plane, where
the prometheus check fails when run before the viz extension is installed.
This change comments out that check. Once the metrics API is moved into
viz, this check could become part of it instead, or directly part of
`linkerd viz check`.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
Co-authored-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
The destination service now returns the `OpaqueTransport` hint when the annotation
matches the resolved target port. This is different from the current behavior
which always sets the hint when a proxy is present.
Closes #5421
This happens by changing the endpoint watcher to set a pod's opaque port
annotation in certain cases. If the pod already has an annotation, then its
value is used. If the pod has no annotation, then it checks the namespace that
the endpoint belongs to; if it finds an annotation on the namespace then it
overrides the pod's annotation value with that.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## What
When the destination service returns a destination profile for an endpoint,
indicate if the endpoint can receive opaque traffic.
## Why
Closes #5400
## How
When translating a pod address to a destination profile, the destination service
checks if the pod is controlled by any linkerd control plane. If it is, it can
set a protocol hint where we indicate that it supports H2 and opaque traffic.
If the pod supports opaque traffic, we need to get the port that it expects
inbound traffic on. We do this by getting the proxy container and reading its
`LINKERD2_PROXY_INBOUND_LISTEN_ADDR` environment variable. If we successfully
parse that into a port, we can set the opaque transport field in the destination
profile.
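A hedged sketch of that extraction (the container name and env var come from this description; error handling is trimmed for brevity):
```go
package destination

import (
	"net"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// inboundPort finds the proxy container, reads its inbound listen address
// (e.g. "0.0.0.0:4143"), and keeps only the port.
func inboundPort(pod *corev1.Pod) (uint32, bool) {
	for _, c := range pod.Spec.Containers {
		if c.Name != "linkerd-proxy" {
			continue
		}
		for _, env := range c.Env {
			if env.Name != "LINKERD2_PROXY_INBOUND_LISTEN_ADDR" {
				continue
			}
			if _, port, err := net.SplitHostPort(env.Value); err == nil {
				if p, err := strconv.ParseUint(port, 10, 32); err == nil {
					return uint32(p), true
				}
			}
		}
	}
	return 0, false
}
```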
## Testing
A test has been added to the destination server where a pod has a
`linkerd-proxy` container. We can expect the `OpaqueTransport` field to be set
in the returned destination profile's protocol hint.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
## Summary
This changes the destination service to start indicating whether a profile is an
opaque protocol or not.
Currently, profiles returned by the destination service are built by chaining
together updates coming from watching Profile and Traffic Split updates.
With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.
## Details
Watching updates to Profiles and Traffic Splits is straightforward--we watch
those resources and if an update occurs on one associated to a service we care
about then the update is passed through.
For Opaque Ports this is a little different because it is an annotation on pods
or namespaces. To account for this, we watch the endpoints that we should care
about.
### When host is a Pod IP
When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and the pod's namespace. If one is found, we indicate that the
profile is an opaque protocol when the requested port is in the annotation.
We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
### When host is a Service
When getting the profile for a Service, we subscribe for updates to the
endpoints of that service. For any ports set in the opaque ports annotation on
any of the pods, we check if the requested port is present.
Since the endpoints for a service can be added and removed, we do subscribe for
updates to the endpoints of the service.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This adds additional tests for the destination service that assert `GetProfile`
behavior when the path is an IP address.
1. Assert that when the path is a cluster IP, the configured service profile is
returned.
2. Assert that when the path is a pod IP, the endpoint field is populated in the
service profile returned.
3. Assert that when the path is not a cluster or pod IP, the default service
profile is returned.
4. Assert that when path is a pod IP with or without the controller annotation,
the endpoint has or does not have a protocol hint.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This fixes an issue where the protocol hint is always set on endpoint responses.
We now check the right value which determines if the pod has the required label.
A test for this has been added to #5266.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
This release changes error handling to teardown the server-side
connection when an unexpected error is encountered.
Additionally, the outbound TCP routing stack can now skip redundant
service discovery lookups when profile responses include endpoint
information.
Finally, the cache implementation has been updated to reduce latency by
removing unnecessary buffers.
---
* h2: enable HTTP/2 keepalive PING frames (linkerd/linkerd2-proxy#737)
* actions: Add timeouts to GitHub actions (linkerd/linkerd2-proxy#738)
* outbound: Skip endpoint resolution on profile hint (linkerd/linkerd2-proxy#736)
* Add a FromStr for dns::Name (linkerd/linkerd2-proxy#746)
* outbound: Avoid redundant TCP endpoint resolution (linkerd/linkerd2-proxy#742)
* cache: Make the cache cloneable with RwLock (linkerd/linkerd2-proxy#743)
* http: Teardown serverside connections on error (linkerd/linkerd2-proxy#747)
Context: #5209
This updates the destination service to set the `Endpoint` field in `GetProfile`
responses.
The `Endpoint` field is only set if the IP maps to a Pod--not a Service.
Additionally in this scenario, the default Service Profile is used as the base
profile so no other significant fields are set.
### Examples
```
# GetProfile for an IP that maps to a Service
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.43.222.0:9090
INFO[0000] fully_qualified_name:"linkerd-prometheus.linkerd.svc.cluster.local" retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"linkerd-prometheus.linkerd.svc.cluster.local.:9090" weight:10000}
```
Before:
```
# GetProfile for an IP that maps to a Pod
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.20
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}}
```
After:
```
# GetProfile for an IP that maps to a Pod
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.42.0.20
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} endpoint:{addr:{ip:{ipv4:170524692}} weight:10000 metric_labels:{key:"control_plane_ns" value:"linkerd"} metric_labels:{key:"deployment" value:"fast-1"} metric_labels:{key:"pod" value:"fast-1-5cc87f64bc-9hx7h"} metric_labels:{key:"pod_template_hash" value:"5cc87f64bc"} metric_labels:{key:"serviceaccount" value:"default"} tls_identity:{dns_like_identity:{name:"default.default.serviceaccount.identity.linkerd.cluster.local"}} protocol_hint:{h2:{}}}
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Fixes #5143
Some public-api calls used by the check rely on the availability of
prometheus. This change updates ListPods in the public-api
to still return the pods even when prometheus is not configured.
A test that exclusively checks for prometheus metrics is gated on
whether a prometheus is configured, and is skipped otherwise.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
## Motivations
Closes #5080
## Solution
When the `--all-namespaces` (`-A`) flag is set for the `linkerd edges` command,
ignore the `namespace` value set by default or `-n`.
This is similar to the behavior of `kubectl`: `kubectl get -A -n linkerd pods`
shows pods in all namespaces.
### Behavior changes
With linkerd and emojivoto installed, this results in:
Before:
```
❯ linkerd edges -A pods
No edges found.
```
After:
```
❯ linkerd edges -A pods
SRC DST SRC_NS DST_NS SECURED
vote-bot-6cb9cb9569-wl6w5 web-5d69bcfdb7-mxf8f emojivoto emojivoto √
web-5d69bcfdb7-mxf8f emoji-7dc976587b-rb9c5 emojivoto emojivoto √
web-5d69bcfdb7-mxf8f voting-bdf4f778c-pjkjg emojivoto emojivoto √
linkerd-prometheus-68d6897d75-ghmgm emoji-7dc976587b-rb9c5 linkerd emojivoto √
linkerd-prometheus-68d6897d75-ghmgm vote-bot-6cb9cb9569-wl6w5 linkerd emojivoto √
linkerd-prometheus-68d6897d75-ghmgm voting-bdf4f778c-pjkjg linkerd emojivoto √
linkerd-prometheus-68d6897d75-ghmgm web-5d69bcfdb7-mxf8f linkerd emojivoto √
linkerd-controller-7d965cf78d-qw6xj linkerd-prometheus-68d6897d75-ghmgm linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-controller-7d965cf78d-qw6xj linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-destination-74dbb9c46b-nkxgh linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-grafana-5d9fb67dc6-sn2l8 linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-identity-c875b5d58-b756v linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-proxy-injector-767b55988d-n9r6f linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-sp-validator-6c8df84fb9-4w8kc linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-tap-777fbf7656-p87dm linkerd linkerd √
linkerd-prometheus-68d6897d75-ghmgm linkerd-web-546c9444b5-68xpx linkerd linkerd √
```
`linkerd edges -A -n linkerd pods` results in all edges as well (the result
above).
The behavior of `linkerd edges pods` does not change and shows edges in the
`default` namespace.
```
❯ linkerd edges pods
No edges found.
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
* Expand 'linkerd edges' to work with TCP connections
Fixes #4999
Before:
```
$ bin/linkerd edges po -owide
SRC DST SRC_NS DST_NS CLIENT_ID SERVER_ID SECURED
linkerd-prometheus-764ddd4f88-t6c2j rabbitmq-controller-5c6cf7cc6d-8lxp2 linkerd default √
linkerd-prometheus-764ddd4f88-t6c2j temp linkerd default √
```
After:
```
$ bin/linkerd edges po -owide
SRC DST SRC_NS DST_NS CLIENT_ID SERVER_ID SECURED
temp rabbitmq-controller-5c6cf7cc6d-5fpsc default default default.default default.default √
linkerd-prometheus-66fb97b7fc-vpnxf rabbitmq-controller-5c6cf7cc6d-5fpsc linkerd default √
linkerd-prometheus-66fb97b7fc-vpnxf temp linkerd default √
```
With the latest proxy upgrade to v2.113.0 (#5037), the `tcp_open_total` metric now contains the `client_id` label so that we can replace the http-only metric `response_total` with this one to determine edges for TCP-only connections.
This change basically performs the same query as before, but two times, one for `response_total` and another for `tcp_open_total`. For each resulting entry, the latter is kept if `client_id` is present, otherwise the former is used (if present at all). That way things keep on working for older proxies.
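A rough sketch of the merge rule, with illustrative types standing in for the Prometheus query results:
```go
package edges

// edgeSample is an illustrative stand-in for one query result row.
type edgeSample struct {
	Src, Dst, ClientID string
}

// mergeEdges keeps, per edge key, the tcp_open_total sample when it carries
// a client_id, falling back to the HTTP-only response_total sample so older
// proxies keep working.
func mergeEdges(responseTotal, tcpOpenTotal map[string]edgeSample) map[string]edgeSample {
	edges := make(map[string]edgeSample, len(responseTotal))
	for key, sample := range responseTotal {
		edges[key] = sample // older proxies: HTTP metric only
	}
	for key, sample := range tcpOpenTotal {
		if sample.ClientID != "" {
			edges[key] = sample // newer proxies: TCP metric with identity
		}
	}
	return edges
}
```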
Disclaimers:
- This doesn't fix #3706: if two sources connect to the same destination there's no way to tell them apart from the metrics perspective and their edges can get mangled. To fix that, the proxy would have to expose `src_resource` labels in the `tcp_open_total` inbound metric.
- Note connections coming from prometheus are still unidentified. The reason is those hit the proxy's admin server (instead of the main container) which doesn't expose metrics.
* Remove dependency of linkerd-config for most control plane components
This PR removes the dependency on `linkerd-config` from control
plane components by passing all that information through CLI
flags. As most of these components require only a couple of flags, passing
them as flags could be more helpful, as updates to the flags trigger a
rollout, unlike a configMap update.
This does not update the proxy-injector as it needs a lot more data
and mounting `linkerd-config` is better.
## Motivation
Closes #5016
Depends on linkerd/linkerd2-proxy-api#44
## Solution
A `profileTranslator` exists for each service and now has a new
`fullyQualifiedName` field.
This field is used to set the `FullyQualifiedName` field of
`DestinationProfile`s each time an update is sent.
In the case that no service profile exists for a service, a default
`DestinationProfile` is created and we can use the field to set the correct
name.
In the case that a service profile does exist for a service, we still use this
field to set the name to keep it consistent.
### Example
Install linkerd on a cluster and run the destination server:
```
go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```
Get the IP of a service. Here, we'll get the IP for `linkerd-identity`:
```
> kubectl get -n linkerd svc/linkerd-identity
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
linkerd-identity ClusterIP 10.43.161.68 <none> 8080/TCP 4h25m
```
Get the profile of `linkerd-identity` from service name or IP and note the
`FullyQualifiedName` field:
```
> go run controller/script/destination-client/main.go -method getProfile -path 10.43.161.68:8080
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```
```
> go run controller/script/destination-client/main.go -method getProfile -path linkerd-identity.linkerd.svc.cluster.local
INFO[0000] fully_qualified_name:"linkerd-identity.linkerd.svc.cluster.local" ..
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
Fixes #4191 #4993
This bumps Kubernetes client-go to the latest v0.19.2 (We had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5
This consists of the following changes:
- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use context.Context as the first argument, in all code paths that touch the k8s client-go interface
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
* Push docker images to ghcr.io instead of gcr.io
The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.
Note that besides the changes to cloud_integration.yml and release.yml, there was a change to the upgrade-stable integration test so that we do `linkerd upgrade --addon-overwrite` to reset the addons settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in linkerd-config-addons. This will need to be mentioned in the 2.9 upgrade notes.
Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.
Besides that, the other changes are just a global search and replace (s/gcr.io\/linkerd-io/ghcr.io\/linkerd/).
## Motivation
#4879
## Solution
When no traffic split exists for services, return a single destination override
with a weight of 100%.
Using the destination client on a new linkerd installation, this results in the
following output for `linkerd-identity` service:
```
❯ go run controller/script/destination-client/main.go -method getProfile -path linkerd-identity.linkerd.svc.cluster.local:8080
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}} dst_overrides:{authority:"linkerd-identity.linkerd.svc.cluster.local.:8080" weight:100000}
INFO[0000]
```
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
[Link to RFC](https://github.com/linkerd/rfc/pull/23)
### What
---
* PR that puts together all past pieces of the puzzle to deliver topology-aware service routing, as specified in the [Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/service-topology/) but with a much better load balancing algorithm and all the coolness of linkerd :)
* The first piece of this PR is focused on adding topology metadata: topology preference for services and topology `<k,v>` pairs for endpoints.
* The second piece of this PR puts together the new context format and fetching the source node topology metadata in order to allow for endpoints filtering.
* The final part is doing the filtering -- passing all of the metadata to the listener and on every `Add` filtering endpoints based on the topology preference of the service, topology `<k,v>` pairs of endpoints and topology of the source (again `<k,v>` pairs).
### How
---
* **Collecting metadata**:
- Services do not have values for topology keys -- the topological keys defined in a service's spec are only there to dictate locality preference for routing; as such, I decided to store them in an array; they will be taken exactly as they are found in the service spec, which ensures we respect the preference order.
- For EndpointSlices, we are using a map -- an EndpointSlice has locality information in the form of `<k,v>` pair, where the key is a topological key (similar to what's listed in the service) and the value is the locality information -- e.g `hostname: minikube`. For each address we now have a map of topology values which gets populated when we translate the endpoints to an address set. Because normal Endpoints do not have any topology information, we create each address with an empty map which is subsequently populated ONLY for slices in the `endpointSliceToAddressSet` function.
* **Filtering endpoints**:
- This was a tricky part and filled me with doubts. I think there are a few ways to do this, but this is how I "envisioned" it. First, the `endpoint_translator.go` should be the one to do the filtering; this means that on subscription, we need to feed all of the relevant metadata to the listener. To do this, I created a new function `AddTopologyFilter` as part of the listener interface.
- To complement the `AddTopologyFilter` function, I created a new `TopologyFilter` struct in `endpoints_watcher.go`. I then embedded this structure in all listeners that implement the interface. The structure holds the source topology (source node), a boolean to tell if slices are activated in case we need to double check (or write tests for the function) and the service preference. We create the filter on Subscription -- we have access to the k8s client here as well as the service, so it's the best point to collect all of this data together. Addresses all have their own topology added to them so they do not have to be collected by the filter.
- When we add a new set of addresses, we check to see if slices are enabled -- chances are if slices are enabled, service topology might be too. This lets us skip this step if the latest version is not adopted. Prior to sending an `Add` we filter the endpoints -- if the preference is registered by the filter we strictly enforce it, otherwise nothing changes.
And that's pretty much it.
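A condensed sketch of the filtering step, with illustrative types (the real filter also handles the slices-enabled check described above):
```go
package topology

// address carries the per-endpoint topology pairs described above,
// e.g. "kubernetes.io/hostname" -> "minikube".
type address struct {
	topology map[string]string
}

// filterByTopology walks the service's preference keys in order and keeps
// the endpoints whose locality matches the source node; the first key that
// yields any match is strictly enforced, otherwise the set is unchanged.
func filterByTopology(prefs []string, node map[string]string, addrs []address) []address {
	for _, key := range prefs {
		want, ok := node[key]
		if !ok {
			continue
		}
		var matched []address
		for _, a := range addrs {
			if a.topology[key] == want {
				matched = append(matched, a)
			}
		}
		if len(matched) > 0 {
			return matched // strictly enforce the first satisfiable preference
		}
	}
	return addrs // no preference matched: nothing changes
}
```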
Signed-off-by: Matei David <matei.david.35@gmail.com>
This PR corrects misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
The misspellings have been reported at aaf440489e (commitcomment-41423663)
The action reports that the changes in this PR would make it happy: 5b82c6c5ca
Note: this PR does not include the action. If you're interested in running a spell check on every PR and push, that can be offered separately.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
* Introduce multicluster gateway api handler in web api server
* Added MetricsUtil for Gateway metrics
* Added gateway api helper
* Added Gateway Component
Updated metricsTable component to support gateway metrics
Added handler for gateway
Fixes #4601
Signed-off-by: Tharun <rajendrantharun@live.com>
Add a new structure on the destination controller side to keep track of contextual information.
The token format has been changed from `ns:<namespace>` to a JSON format so that more variables can be
encoded in the token. As part of this PR, a new field 'nodeName' has been added to help with service
topologies.
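A minimal sketch of the new token; field names follow this description and may not match the code exactly:
```go
package destination

import "encoding/json"

// contextToken replaces the old plain string "ns:<namespace>"; extra fields
// like nodeName can now be added without another format change.
type contextToken struct {
	Ns       string `json:"ns,omitempty"`
	NodeName string `json:"nodeName,omitempty"`
}

// encodeToken serializes the token, e.g. {"ns":"default","nodeName":"node-1"}.
func encodeToken(t contextToken) (string, error) {
	b, err := json.Marshal(t)
	return string(b), err
}
```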
Fixes #4498
Signed-off-by: Matei David <matei.david.35@gmail.com>
This PR removes the service mirror controller from `linkerd mc install` to `linkerd mc link`, as described in https://github.com/linkerd/rfc/pull/31. For fuller context, please see that RFC.
Basic multicluster functionality works here including:
* `linkerd mc install` installs the Link CRD but not any service mirror controllers
* `linkerd mc link` creates a Link resource and installs a service mirror controller which uses that Link
* The service mirror controller creates and manages mirror services, a gateway mirror, and their endpoints.
* The `linkerd mc gateways` command lists all linked target clusters, their liveness, and probe latencies.
* The `linkerd check` multicluster checks have been updated for the new architecture. Several checks have been rendered obsolete by the new architecture and have been removed.
The following are known issues requiring further work:
* the service mirror controller uses the existing `mirror.linkerd.io/gateway-name` and `mirror.linkerd.io/gateway-ns` annotations to select which services to mirror. It does not yet support configuring a label selector.
* an unlink command is needed for removing multicluster links: see https://github.com/linkerd/linkerd2/issues/4707
* an mc uninstall command is needed for uninstalling the multicluster addon: see https://github.com/linkerd/linkerd2/issues/4708
Signed-off-by: Alex Leong <alex@buoyant.io>
* Small PR that uncomments the `EndpointSliceAcess` method and cleans up left over todos in the destination service.
* Based on the past three PRs related to `EndpointSlices` (#4663, #4696, #4740); they should now be functional (albeit prone to bugs) and ready to use.
Signed-off-by: Matei David <matei.david.35@gmail.com>
* Removes/Relaxes prometheus related checks
Now that prometheus is an add-on, there can be cases where prometheus is
disabled, in which case the check should show a warning but not fail. This
decouples the tight dependency.
This changes the following checks:
- Removes serviceAccount and pod checks in the CLI.
- Relaxes `linkerd-api` checks to only check for prometheus access when
the URL is not empty. This should work seamlessly with external
prometheus as that URL will be passed and it performs the same
check.
Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
EndpointSlices have been made opt-in due to their experimental nature. This PR
introduces a new install flag 'enableEndpointSlices' that will allow adopters to
specify in their cli install or helm install step whether they would like to
use endpointslices as a resource in the destination service, instead of the
endpoints k8s resource.
Signed-off-by: Matei David <matei.david.35@gmail.com>
## Motivation
Closes #3916
This adds the ability to get profiles for services by IP address.
### Change in behavior
When the destination server receives a `GetProfile` request with an IP address,
it now tries to map that IP address to a service.
If the IP address maps to an existing service, then the destination server
returns the profile stream and subscribes for updates to the _service_--this is
the existing behavior. If the IP is later reassigned to a new service, the
stream will still send updates for the first service the IP address
corresponded to, since that is what it is subscribed to.
If the IP address does not map to an existing service, then the destination
server returns the profile stream but does not subscribe for updates. The stream
will receive one update, the default profile.
### Solution
This change uses the `IPWatcher` within the destination server to check for what
services an IP address correspond to. By adding a new method `GetSvc` to
`IPWatcher`, the server now calls this method if `GetProfile` receives a request
with an IP address.
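A hedged, self-contained sketch of that dispatch; only `GetSvc` mirrors the description, the rest are simplified stand-ins:
```go
package destination

// profileStream is a simplified stand-in for the gRPC profile stream.
type profileStream interface {
	Send(profile string)
}

// watcher stands in for IPWatcher.
type watcher struct {
	svcByIP map[string]string // cluster IP -> service name
}

// GetSvc reports which service, if any, owns the given cluster IP.
func (w *watcher) GetSvc(ip string) (string, bool) {
	svc, ok := w.svcByIP[ip]
	return svc, ok
}

// getProfile subscribes when the IP maps to a service; otherwise it sends
// the default profile once and never updates again.
func (w *watcher) getProfile(ip string, stream profileStream) {
	if svc, ok := w.GetSvc(ip); ok {
		stream.Send("profile for " + svc) // real code subscribes for updates here
		return
	}
	stream.Send("default profile") // unknown IP: one update, no subscription
}
```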
## Testing
Install linkerd on a cluster and get the cluster IP of any service:
```bash
❯ kubectl get -n linkerd svc/linkerd-tap -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
linkerd-tap ClusterIP 10.104.57.90 <none> 8088/TCP,443/TCP 16h linkerd.io/control-plane-component=tap
```
Run the destination server:
```bash
❯ go run controller/cmd/main.go destination -kubeconfig ~/.kube/config
```
Get the profile for the tap service by IP address:
```bash
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.104.57.90:8088
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}}
INFO[0000]
```
Get the profile for an IP address that does not correspond to a service:
```bash
❯ go run controller/script/destination-client/main.go -method getProfile -path 10.256.0.1:8088
INFO[0000] retry_budget:{retry_ratio:0.2 min_retries_per_second:10 ttl:{seconds:10}}
INFO[0000]
```
You can add and remove settings for the service profile for tap and get updates.
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
When a k8s pod is evicted, its Phase is set to Failed and the reason is set to Evicted. Because in the ListPods method of the public API we only transmit the phase and treat it as Status, the healthchecks assume such evicted data plane pods have failed. Since this check is retryable, the result is that `linkerd check --proxy` appears to hang when there are evicted pods. As @adleong correctly pointed out here, the presence of evicted pods is not something that should make the checks fail.
This change modifies the public API to set the Pod.Status to "Evicted" for evicted pods. The healthchecks are also modified to not treat evicted pods as error cases.
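A minimal sketch of the status mapping described here:
```go
package publicapi

import corev1 "k8s.io/api/core/v1"

// podStatus surfaces "Evicted" rather than the raw Failed phase so the
// healthchecks can treat eviction as benign.
func podStatus(pod corev1.Pod) string {
	if pod.Status.Phase == corev1.PodFailed && pod.Status.Reason == "Evicted" {
		return "Evicted"
	}
	return string(pod.Status.Phase)
}
```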
Fix #4690
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Using following command the wrong spelling were found and later on
fixed:
```
codespell --skip CHANGES.md,.git,go.sum,\
controller/cmd/service-mirror/events_formatting.go,\
controller/cmd/service-mirror/cluster_watcher_test_util.go,\
SECURITY_AUDIT.pdf,.gcp.json.enc,web/app/img/favicon.png \
--ignore-words-list=aks,uint,ans,files\' --check-filenames \
--check-hidden
```
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
Introduce support for the EndpointSlice k8s resource (k8s v1.16+) in the destination service.
Through this PR, in the EndpointsWatcher, there will be a dedicated informer for EndpointSlice;
the informer cannot run at the same time as the Endpoints resource informer. The main difference
is that EndpointSlices have a one-to-many relationship with a service, they provide better performance benefits,
dual-stack addresses and more. EndpointSlice support also implies service topology and other k8s related features.
Validated and tested manually, as well as with dedicated unit tests.
Closes #4501
Signed-off-by: Matei David <matei.david.35@gmail.com>
Regenerated protobuf files using version 1.4.2, upgraded from
1.3.2 with the proxy-api update in #4614.
As of v1.4, protobuf messages may no longer be copied (because they
hold a mutex), so whenever a message is passed to or returned from a
function we need to use a pointer.
This affects _mostly_ test files.
This is required to unblock #4620 which is adding a field to the config
protobuf.