The Viz dashboard's `es` locale is missing the `Service` menu item. This change fixes that by adding the item to the `es` locale.
Signed-off-by: Marcelo Clavel <marcelo.clavel@buda.com>
Our CI infrastructure does not currently support running two k3d
clusters in a flat network for testing purposes. Bridging two k3d
clusters is quite simple (assuming one node per cluster): on each node,
a network route has to be added for the other node's cluster CIDR.
This change adds a `just` recipe to configure two running clusters to
participate in a flat network. Routes are added by executing `ip route
add` in each respective server's docker container. Additionally, to
support this, target clusters now run with a different pod CIDR (to
ensure there isn't any overlap between the two).
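For illustration, a minimal Go sketch of what the recipe boils down to; the container names, CIDRs, and node IPs below are invented for the example and are not the recipe's actual values:

```go
package main

import (
	"fmt"
	"os/exec"
)

// addRoute runs `ip route add <peerCIDR> via <peerIP>` inside a k3d server's
// docker container, so pods in this cluster can reach pods in the peer cluster.
func addRoute(container, peerCIDR, peerIP string) error {
	out, err := exec.Command("docker", "exec", container,
		"ip", "route", "add", peerCIDR, "via", peerIP).CombinedOutput()
	if err != nil {
		return fmt.Errorf("route add in %s failed: %v: %s", container, err, out)
	}
	return nil
}

func main() {
	// Each cluster must run with a distinct pod CIDR so the routes don't overlap.
	_ = addRoute("k3d-source-server-0", "10.23.0.0/16", "172.18.0.3")
	_ = addRoute("k3d-target-server-0", "10.42.0.0/16", "172.18.0.2")
}
```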
Signed-off-by: Matei David <matei@buoyant.io>
This is a follow-up to https://github.com/linkerd/linkerd2/pull/11201, addressing some feedback from that PR.
* fixes a typo in a comment
* do not copy the topology aware hints (TAH) annotation onto the mirror service, because TAH is not multicluster aware (see the sketch below).
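A minimal sketch of the annotation copy, assuming the upstream `service.kubernetes.io/topology-aware-hints` key; the helper name is illustrative, not the PR's actual code:

```go
// mirrorAnnotations copies annotations from the exported service to its
// mirror, skipping the TAH annotation since TAH is not multicluster aware.
func mirrorAnnotations(src map[string]string) map[string]string {
	const tahAnnotation = "service.kubernetes.io/topology-aware-hints" // assumed key
	dst := make(map[string]string, len(src))
	for k, v := range src {
		if k == tahAnnotation {
			continue
		}
		dst[k] = v
	}
	return dst
}
```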
Signed-off-by: Alex Leong <alex@buoyant.io>
A few improvements could be made to a recent change that added a
ClusterStore concept to the destination service. This PR introduces
those small improvements:
* Sync dynamically created clients in separate goroutines
* Refactor metadata API creation
* Remove redundant check in cluster_store_test
* Create a new runtime schema every time a fake metadata API client is created, to avoid racy behaviour (sketched below)
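For the last point, a minimal sketch of the idea (the helper name is illustrative): constructing a fresh scheme per fake client keeps parallel tests from racing on shared mutable state.

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/metadata/fake"
)

// newFakeMetadataClient builds its own runtime scheme so that concurrent
// tests never share (and race on) a single global scheme.
func newFakeMetadataClient(objects ...runtime.Object) *fake.FakeMetadataClient {
	scheme := runtime.NewScheme()
	_ = metav1.AddMetaToScheme(scheme)
	return fake.NewSimpleMetadataClient(scheme, objects...)
}
```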
Signed-off-by: Matei David <matei@buoyant.io>
We add the ability to mirror services in "remote discovery" mode. In this mode, no Endpoints are created for the mirror service in the source cluster; instead, the `multicluster.linkerd.io/remote-discovery` and `multicluster.linkerd.io/remote-service` labels are set on the mirror service to indicate that the control plane should perform remote discovery for it.
To accomplish this, we add a new field to the Link resource: `remoteDiscoverySelector`, which parallels `selector` but selects Services to export in remote discovery mode instead. Since this field is purely additive, we do not change the Link CRD version. By treating an empty selector as "Nothing", we remain backwards compatible (an unset `remoteDiscoverySelector` will not export any services in remote discovery mode).
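A minimal sketch of the backwards-compatible selector semantics (the helper is illustrative, not the PR's actual code):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// remoteDiscoverySelector treats an unset selector as "Nothing", so existing
// Links without the new field export no services in remote discovery mode.
func remoteDiscoverySelector(sel *metav1.LabelSelector) (labels.Selector, error) {
	if sel == nil {
		return labels.Nothing(), nil
	}
	return metav1.LabelSelectorAsSelector(sel)
}
```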
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #10764
Removed the `server_port_subscribers` gauge, as it wasn't distinguishing
among different pods, so the subscriber counts for different pods
conflicted with one another when updating the metric (see more details
[here](https://github.com/linkerd/linkerd2/issues/10764#issuecomment-1650835823)).
Besides carrying an invalid value, this was generating the warning
`unable to delete server_port_subscribers metric with labels`.
The metric was replaced with the `server_port_subscribes` and
`server_port_unsubscribes` counters, which track the overall numbers of
subscribes and unsubscribes for each pod port.
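A rough sketch of the replacement metrics (the label names here are illustrative): counters only ever increase, so concurrent subscribers on different pods can no longer clobber one another's value the way the shared gauge did.

```go
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Incremented on every subscribe to a pod port.
	serverPortSubscribes = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "server_port_subscribes",
		Help: "Total subscribes to a pod's server port.",
	}, []string{"namespace", "pod", "port"})
	// Incremented on every unsubscribe from a pod port.
	serverPortUnsubscribes = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "server_port_unsubscribes",
		Help: "Total unsubscribes from a pod's server port.",
	}, []string{"namespace", "pod", "port"})
)
```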
🌮 to @adleong for the diagnosis and the fix!
Currently, the control plane does not support indexing and discovering resources across cluster boundaries. In a multicluster set-up, it is desirable to have access to endpoint data (and by extension, any configuration associated with that endpoint) to support pod-to-pod communication. Linkerd's destination service should be extended to support reading discovery information from multiple sources (i.e. API Servers).
This change introduces an `EndpointsWatcher` cache. On creation, the cache registers a pair of event handlers:
* One handler to read `multicluster` secrets that embed kubeconfig files. For each such secret, the cache creates a corresponding `EndpointsWatcher` to read remote discovery information.
* Another handler to evict entries from the cache when a `multicluster` secret is deleted.
Additionally, a set of tests have been added that assert the behaviour of the cache when secrets are created and/or deleted. The cache will be used by the destination service to do look-ups for services that belong to another cluster, and that are running in a "remote discovery" mode.
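A rough Go sketch of the secret-driven lifecycle described above; `clusterStore`, its methods, and the `kubeconfig` data key are illustrative stand-ins (in the real change each entry holds an `EndpointsWatcher` rather than a bare config):

```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// clusterStore is an illustrative stand-in for the real cache.
type clusterStore struct{ entries map[string]*rest.Config }

func (s *clusterStore) add(name string, cfg *rest.Config) { s.entries[name] = cfg }
func (s *clusterStore) evict(name string)                 { delete(s.entries, name) }

func registerHandlers(secrets cache.SharedIndexInformer, store *clusterStore) {
	secrets.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			secret := obj.(*v1.Secret)
			// Decode the embedded kubeconfig; each secret yields a remote watcher.
			cfg, err := clientcmd.RESTConfigFromKubeConfig(secret.Data["kubeconfig"])
			if err != nil {
				return // malformed credentials; skip this secret
			}
			store.add(secret.Name, cfg)
		},
		DeleteFunc: func(obj interface{}) {
			// Evict the entry when the cluster's credentials are removed.
			store.evict(obj.(*v1.Secret).Name)
		},
	})
}
```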
---------
Signed-off-by: Matei David <matei@buoyant.io>
We update the `multicluster link` command to write a credentials secret into the `linkerd` core control plane namespace in addition to writing one into the `linkerd-multicluster` namespace. This is a prerequisite for the destination controller to be able to connect to linked clusters to do remote service discovery.
We also update the `multicluster unlink` command so that these credentials secrets are properly deleted when the cluster is unlinked.
Signed-off-by: Alex Leong <alex@buoyant.io>
This edge release restores a proxy setting so that the proxy sheds load less
aggressively while under high load, which should result in lower error rates
(addressing #11055). It also removes the usage of host networking in the
linkerd-cni extension.
* Changed the default HTTP request queue capacities for the inbound and outbound
proxies back to 10,000 requests (see #11055 and #11198)
* Removed the need for host networking in the linkerd-cni DaemonSet (#11141)
  (thanks @abhijeetgauravm!)
Problem: The Linkerd CNI Helm chart templates currently have `hostNetwork: true` set, which is unnecessary and less secure.
Solution: Removed `hostNetwork: true` from the linkerd-cni Helm chart templates.
Fixes #11141
---------
Signed-off-by: Abhijeet Gaurav <abhijeetdav24aug@gmail.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>
To make it easier to identify the particular releases where publicly reported issues are remedied, we'll incorporate links to specific issues into the release notes.
This edge release improves Linkerd's support for HttpRoute by allowing
`parent_ref` ports to be optional, allowing HttpRoutes to be defined in a
consumer's namespace, and adding support for the `ResponseHeaderModifier` filter.
It also fixes a panic in the destination controller.
* Added an option for disabling the network validator's security context for
environments that provide their own
* Added high-availability (HA) mode for the multicluster service-mirror
* Added support for HttpRoute `parent_refs` that do not specify a port
* Fixed a Grafana error caused by an incorrect datasource (thanks @albundy83!)
* Added support for HttpRoutes defined in the consumer namespace
* Improved the granularity of logging levels in the control plane
* Fixed a race condition in the destination controller that could cause it to
panic
* Added support for the `ResponseHeaderModifier` HttpRoute filter
* Updated extension CLI commands to prefer the `--registry` flag over the
  `LINKERD_DOCKER_REGISTRY` environment variable, making the precedence more
  consistent (thanks @harsh020!)
Signed-off-by: Alex Leong <alex@buoyant.io>
Problem:
The `jaeger install` and `multicluster link` commands give precedence to the `LINKERD_DOCKER_REGISTRY` env var, whereas the `install`, `upgrade`, and `inject` commands give precedence to the `--registry` flag.
Solution:
Make the commands consistent by giving precedence to the `--registry` flag in all of them.
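A minimal sketch of the intended precedence (an illustrative helper, not the actual CLI code):

```go
import "os"

// dockerRegistry prefers an explicitly passed --registry flag value and only
// falls back to the environment variable when the flag is unset.
func dockerRegistry(flagValue string) string {
	if flagValue != "" {
		return flagValue
	}
	return os.Getenv("LINKERD_DOCKER_REGISTRY")
}
```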
Fixes: #11115
Signed-off-by: Harsh Soni <devilincarcerated020@yahoo.com>
Fixes #11163
The `servicePublisher.updateServer` function will iterate through all registered listeners and update them. However, a nil listener may temporarily be in the list of listeners if an unsubscribe is in progress. This results in a nil pointer dereference.
All functions which result in updating the listeners must therefore be protected by the mutex so that we don't try to act on the list of listeners while it is being modified.
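A rough sketch of the locking discipline (the types and method names are illustrative):

```go
import "sync"

type server struct{} // stand-in for the Server update payload

type listener interface{ Update(*server) }

type portPublisher struct {
	mu        sync.Mutex
	listeners []listener
}

// unsubscribe removes a listener, holding the mutex for the whole removal.
func (p *portPublisher) unsubscribe(l listener) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for i, existing := range p.listeners {
		if existing == l {
			p.listeners = append(p.listeners[:i], p.listeners[i+1:]...)
			return
		}
	}
}

// updateServer holds the same mutex, so it can never observe the list
// mid-modification (and thus never hits a nil entry).
func (p *portPublisher) updateServer(srv *server) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, l := range p.listeners {
		l.Update(srv)
	}
}
```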
Signed-off-by: Alex Leong <alex@buoyant.io>
The `--log-level` flag did not support a `trace` level, despite the
underlying `logrus` library supporting it. Also, at `debug` level, the
Control Plane components were setting klog at v=12, which includes
sensitive data.
Introduce a `trace` log level. Keep klog at `v=12` for `trace`, change
it to `v=6` for `debug`.
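A minimal sketch of the mapping, assuming logrus and klog (the helper is illustrative, not the actual control plane code):

```go
import (
	"flag"

	log "github.com/sirupsen/logrus"
	"k8s.io/klog/v2"
)

// setLogLevel maps the --log-level value onto logrus and klog verbosity:
// trace keeps klog at v=12; debug drops to v=6 to avoid logging sensitive data.
func setLogLevel(level string) error {
	parsed, err := log.ParseLevel(level) // logrus already understands "trace"
	if err != nil {
		return err
	}
	log.SetLevel(parsed)

	klogFlags := flag.NewFlagSet("klog", flag.ContinueOnError)
	klog.InitFlags(klogFlags)
	switch parsed {
	case log.TraceLevel:
		return klogFlags.Set("v", "12")
	case log.DebugLevel:
		return klogFlags.Set("v", "6")
	}
	return nil
}
```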
Fixes linkerd/linkerd2#11132
Signed-off-by: Andrew Seigner <siggy@buoyant.io>
The [xRoute Binding GEP](https://gateway-api.sigs.k8s.io/geps/gep-1426/#namespace-boundaries) states that HttpRoutes may be created either in the namespace of their parent Service (producer routes) or in the namespace of a client initiating requests to the service (consumer routes). Linkerd currently indexes only producer routes and ignores consumer routes.
We add support for consumer routes by changing the way that HttpRoutes are indexed. We now index each route by the namespace of its parent service instead of by the namespace of the HttpRoute resource. We then further subdivide the `ServiceRoutes` struct to have a watch per client namespace instead of just a single watch. This is because clients from different namespaces will have a different view of the routes for a service.
When an HttpRoute is updated, if it is a producer route, we apply it to the watches for all client namespaces. If it is a consumer route, we apply it only to the watches for that consumer's namespace.
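A rough sketch of that dispatch (the types and field names are illustrative, not the actual ones):

```go
type httpRoute struct{ namespace string }

type watch struct{}

func (w *watch) apply(r *httpRoute) { /* push the route to subscribers */ }

type serviceRoutes struct {
	serviceNamespace  string
	watchesByClientNS map[string]*watch
}

// routeUpdated applies a producer route to every client namespace's watch,
// but a consumer route only to watches from the route's own namespace.
func (sr *serviceRoutes) routeUpdated(route *httpRoute) {
	if route.namespace == sr.serviceNamespace {
		// Producer route: visible to all clients of the service.
		for _, w := range sr.watchesByClientNS {
			w.apply(route)
		}
		return
	}
	// Consumer route: visible only to clients in that namespace.
	if w, ok := sr.watchesByClientNS[route.namespace]; ok {
		w.apply(route)
	}
}
```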
We also add API tests for consumer and producer routes.
A few noteworthy changes:
* Because the namespace of the client factors into the lookup, we had to change the discovery target to a type which includes the client namespace.
* Because a service may have routes from different namespaces, the route metadata now needs to track group, kind, name, AND namespace instead of just using the namespace of the service. This means that many uses of the `GroupKindName` type are replaced with a `GroupKindNamespaceName` type.
Signed-off-by: Alex Leong <alex@buoyant.io>
In #11008 I added Go support for the `timeouts` field in the
`HTTPRouteRule` struct. That used Go's built-in `time.Duration` type,
but based on my reading of kubernetes-sigs/gateway-api#2013, we should
instead be using apimachinery's `metav1.Duration` type.
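For context, a sketch of the field using `metav1.Duration`, which (un)marshals as a duration string such as "10s" (matching the Gateway API's textual format) rather than as integer nanoseconds; the field names follow my reading of the upstream proposal:

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// HTTPRouteTimeouts, sketched with apimachinery's duration type.
type HTTPRouteTimeouts struct {
	// Request bounds the time from receiving a request to sending a response.
	Request *metav1.Duration `json:"request,omitempty"`
	// BackendRequest bounds a single request from the gateway to a backend.
	BackendRequest *metav1.Duration `json:"backendRequest,omitempty"`
}
```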
Signed-off-by: Kevin Ingelman <ki@buoyant.io>