This PR adds a few notable changes associated with the egress functionality of Linkerd:
- `EgressNetwork` objects are indexed into the outbound index
- outbound policy lookups are classified as either in-cluster or egress based on the `ip:port` combination
- `TCPRoute`, `TLSRoute`, `GRPCRoute` and `HTTPRoute` attachments are reflected for both `EgressNetwork` and `Service` targets
- the default traffic policy for `EgressNetwork` is honored by returning the appropriate default (failure/success) routes for all protocols
Note that this PR depends on an unreleased version of the linkerd2-proxy-api repo.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
This PR adds an `EgressNetwork` CRD, whose purpose is to describe networks that are external to the cluster.
In addition, it adds the `TLSRoute` and `TCPRoute` Gateway API CRDs.
Most of the work in this change is focused on introducing these CRDs and correctly setting their status based on route specificity rules described in: https://gateway-api.sigs.k8s.io/geps/gep-1426/#route-types.
Notable changes include:
- ability to attach TCP and TLS routes to both `EgressNetwork` and `Service` objects (see the example below)
- conflict resolution between routes
- admission validation on the newly introduced resources
- module + integration tests
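For illustration, an `EgressNetwork` with an attached `TLSRoute` might look like the following sketch (resource names are illustrative, and the exact spec fields are assumptions based on the description above):
```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: EgressNetwork
metadata:
  name: all-egress
  namespace: egress-test
spec:
  # Deny all external traffic by default; attached routes can open
  # specific paths through this network.
  trafficPolicy: Deny
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: allow-example-com
  namespace: egress-test
spec:
  parentRefs:
    - name: all-egress
      kind: EgressNetwork
      group: policy.linkerd.io
      port: 443
  hostnames:
    - example.com
  rules:
    - backendRefs:
        - name: all-egress
          kind: EgressNetwork
          group: policy.linkerd.io
          port: 443
```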
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Followup to #12845
This expands the policy controller index in the following ways:
- Adds the new Audit variant to the DefaultPolicy enum
- Expands the function that synthesizes the authorizations for a given default policy (DefaultPolicy::default_authzs) so that it also creates an Unauthenticated client auth and an allow-all NetworkMatch for the new Audit default policy.
- Now that a Server can have a default policy other than Deny, when generating InboundServer authorizations (PolicyIndex::client_authzs), we make sure to append the default authorizations when DefaultPolicy is Allow or Audit
Also, the admission controller ensures the new accessPolicy field contains a valid value.
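For example, a Server opting into the new mode might look like this (a minimal sketch; the apiVersion and resource names are assumptions, not taken from this PR):
```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: Server
metadata:
  name: web-http
  namespace: emojivoto
spec:
  podSelector:
    matchLabels:
      app: web-svc
  port: http
  # Unauthorized connections are let through and reported,
  # rather than denied outright.
  accessPolicy: audit
```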
## Tests
New integration tests added:
- e2e_audit.rs, exercising the audit policy first at the Server level and then at the namespace level
- in admit_server.rs, a new test checks that invalid accessPolicy values are rejected
- in inbound_api.rs, server_with_audit_policy verifies that the synthesized audit authorization is returned for a Server with accessPolicy=audit
> [!NOTE]
> Please check linkerd/website#1805 for how this is supposed to work from the user's perspective.
Adds support for configuring retries and timeouts as outbound policy. HTTP retries can be configured as annotations on HttpRoute or Service resources like
```
retry.linkerd.io/http: 5xx,gateway-error
retry.linkerd.io/limit: "2"
retry.linkerd.io/timeout: 400ms
```
If any of these retry annotations are specified on an HttpRoute resource, they will override ALL retry annotations on the parent Service resource.
Similarly, gRPC retries can be configured as annotations on GrpcRoute or Service resources like
```
retry.linkerd.io/grpc: cancelled,deadline-exceeded,internal,resource-exhausted,unavailable
retry.linkerd.io/limit: "2"
retry.linkerd.io/timeout: 400ms
```
Outbound timeouts can be configured on HttpRoute, GrpcRoute, or Service resources like
```
timeout.linkerd.io/request: 500ms
timeout.linkerd.io/response: 100ms
timeout.linkerd.io/idle: 50ms
```
If any of these timeout annotations are specified on an HttpRoute or GrpcRoute resource, they will override ALL timeout annotations on the parent Service resource.
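Putting it together, a sketch of an HttpRoute carrying both retry and timeout annotations (the route and Service names, and the apiVersion, are illustrative):
```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: web-route
  namespace: emojivoto
  annotations:
    # These override ALL retry/timeout annotations on the parent Service.
    retry.linkerd.io/http: 5xx
    retry.linkerd.io/limit: "2"
    timeout.linkerd.io/request: 500ms
spec:
  parentRefs:
    - name: web-svc
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: web-svc
          port: 80
```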
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
We add support for GrpcRoute in the inbound policy API. When a Server resource has its proxy protocol set to gRPC, we will now serve gRPC as the protocol in the inbound policy API, along with any GrpcRoutes which have been defined and attached to the Server. If gRPC is specified as the proxy protocol but no GrpcRoutes are attached, we return a default catch-all gRPC route.
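A minimal sketch of such a Server (resource names are illustrative; the apiVersion is an assumption):
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: Server
metadata:
  name: voting-grpc
  namespace: emojivoto
spec:
  podSelector:
    matchLabels:
      app: voting
  port: grpc
  # Directs the inbound policy API to serve gRPC as the protocol,
  # along with any attached GrpcRoutes.
  proxyProtocol: gRPC
```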
Signed-off-by: Alex Leong <alex@buoyant.io>
We update the policy-controller to support GrpcRoutes in the outbound policy API. When a GrpcRoute resource has a Service as a parent ref, outbound policy requests for that service may return a proxy protocol of gRPC with gRPC routes (a sketch follows the list below).
* if a service has no HttpRoutes or GrpcRoutes as children, we continue to return the default http route with a proxy protocol of Detect (so that the proxy is directed to detect HTTP/1 vs HTTP/2)
* similarly, if a service has HttpRoute children only, we continue to return those routes with a proxy protocol of Detect
* if a service has GrpcRoute children only, we return those routes with a proxy protocol of Grpc
* if a service has both types of routes as children, we determine the proxy protocol based on which route type has the oldest created_at timestamp as described in https://gateway-api.sigs.k8s.io/geps/gep-1016/#cross-serving and only return routes of the determined type
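A sketch of a GrpcRoute whose parent is a Service (names and the method match are illustrative):
```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: voting-grpc-route
  namespace: emojivoto
spec:
  parentRefs:
    - name: voting-svc
      kind: Service
      group: core
      port: 8080
  rules:
    - matches:
        - method:
            service: emojivoto.v1.VotingService
      backendRefs:
        - name: voting-svc
          port: 8080
```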
Signed-off-by: Alex Leong <alex@buoyant.io>
## Subject
Prepare to expand the subset of [`Gateway API`](https://gateway-api.sigs.k8s.io/api-types/grpcroute/) route types `linkerd` supports for driving outbound traffic in [`linkerd-policy-controller-k8s-index`](https://github.com/linkerd/linkerd2/tree/main/policy-controller/k8s/index).
## Problem
Currently, the policy controller's `index` component is written with `HTTPRoute` support (effectively) exclusively, both in its structure/organization as well as its naming (e.g. `HttpRoute` as a primary type name, `convert_http_route` as a method name, etc...).
In order to expand the subset of route types defined by the Gateway API that `linkerd` supports for driving outbound traffic policy, the policy controller's `index` component needs to be made more generic in both respects.
## Solution
PR introduces structural and naming changes making the codebase generic with respect to the type of route being handled (e.g. `HTTPRoute` -> `Route`). Changes are largely cosmetic, with no _behavioral_ changes introduced by any functional refactoring.
Signed-off-by: Mark S <the@wondersmith.dev>
## Subject
Prepare to expand `linkerd`'s repertoire of supported [`Gateway API`](https://gateway-api.sigs.k8s.io/api-types/grpcroute/) route types in [`linkerd-policy-controller-k8s-status`](https://github.com/linkerd/linkerd2/tree/main/policy-controller/k8s/status).
## Problem
Currently, the policy controller's `status` component is written with `HTTPRoute` support (effectively) exclusively, both in its structure/organization as well as its naming (e.g. `HttpRoute` as a primary type name, `update_http_route` as a method name, etc...).
In order to expand `linkerd`'s support for the route types defined by the Gateway API, the policy controller's `status` component needs to be made more generic in both respects.
## Solution
> **NOTE:** PR was opened out of order and should only be merged _after_ #12662
PR introduces structural and naming changes making the codebase generic with respect to the type of route being handled (e.g. `HTTPRoute` -> `Route`). Changes are almost entirely cosmetic introducing only a couple of minor functional changes, most notably:
- making the `status` argument to [`make_patch`](8d6cd57b70/policy-controller/k8s/status/src/index.rs (L734)) generic
- adding a type-aware `api_version` helper method to [`NamespaceGroupKindName`](8d6cd57b70/policy-controller/k8s/status/src/resource_id.rs (L27))
- **note:** *required for proper handling of different route types in the future*
## Validation
- [X] maintainer review
- [X] tests pass

## ~~Fixes~~ *Lays Groundwork For Addressing*
- #12404
Signed-off-by: Mark S <the@wondersmith.dev>
If an HTTPRoute references a backend service that does not exist, the policy controller synthesizes a FailureInjector in the outbound policy so that requests to that backend will fail with a 500 status code. However, we do not update the policy when backend services are created or deleted, which can result in an outbound policy that synthesizes 500s for backends, even if the backend currently exists (or vice versa).
This is often papered over because, when a backend service is created or deleted, the HTTPRoute's ResolvedRefs status condition changes, which causes a reindex of the HTTPRoute and a recomputation of the backends. However, depending on the specific order in which these updates are processed, the outbound policy can still be left with incorrect backend state.
In order to be able to update the backends of an outbound policy when backend services are created or deleted, we change the way these backends are represented in the index. Previously, we had represented backends whose services did not exist as `Backend::Invalid`. However, this discards the backend information necessary to recreate the backend if the service is later created. Instead, we represent these backends as a `Backend::Service` with a new field, `exists`, set to false. This allows us to update this field as backend services are created or deleted.
Signed-off-by: Alex Leong <alex@buoyant.io>
An HTTPRoute whose parentRef is a Service in the same namespace is called a producer route. Producer routes should be used in outbound policy by all clients calling that Service, even if the client is in a different namespace. The policy controller has a bug where, when an outbound policy watch is started, the initial outbound policy returned does not include any producer routes which already exist.
We correct this bug and add tests.
Signed-off-by: Alex Leong <alex@buoyant.io>
Adds index metrics to the outbound policy index.
```
# HELP outbound_index_service_index_size The number of entries in the service index
# TYPE outbound_index_service_index_size gauge
outbound_index_service_index_size 20
# HELP outbound_index_service_info_index_size The number of entries in the service info index
# TYPE outbound_index_service_info_index_size gauge
outbound_index_service_info_index_size 23
# HELP outbound_index_service_route_index_size The number of entries in the service route index
# TYPE outbound_index_service_route_index_size gauge
outbound_index_service_route_index_size{namespace="kube-system"} 0
outbound_index_service_route_index_size{namespace="cert-manager"} 0
outbound_index_service_route_index_size{namespace="default"} 0
outbound_index_service_route_index_size{namespace="linkerd"} 0
outbound_index_service_route_index_size{namespace="emojivoto"} 0
outbound_index_service_route_index_size{namespace="linkerd-viz"} 0
# HELP outbound_index_service_port_route_index_size The number of entries in the service port route index
# TYPE outbound_index_service_port_route_index_size gauge
outbound_index_service_port_route_index_size{namespace="kube-system"} 0
outbound_index_service_port_route_index_size{namespace="cert-manager"} 0
outbound_index_service_port_route_index_size{namespace="default"} 1
outbound_index_service_port_route_index_size{namespace="linkerd"} 0
outbound_index_service_port_route_index_size{namespace="emojivoto"} 3
outbound_index_service_port_route_index_size{namespace="linkerd-viz"} 0
```
Signed-off-by: Alex Leong <alex@buoyant.io>
We add index size gauges for each of the resource indexes in the inbound policy index.
```
# HELP inbound_index_meshtls_authentication_index_size The number of MeshTLS authentications in index
# TYPE inbound_index_meshtls_authentication_index_size gauge
inbound_index_meshtls_authentication_index_size{namespace="linkerd-viz"} 1
# HELP inbound_index_network_authentication_index_size The number of Network authentications in index
# TYPE inbound_index_network_authentication_index_size gauge
inbound_index_network_authentication_index_size{namespace="linkerd-viz"} 2
# HELP inbound_index_pod_index_size The number of pods in index
# TYPE inbound_index_pod_index_size gauge
inbound_index_pod_index_size{namespace="linkerd"} 3
inbound_index_pod_index_size{namespace="emojivoto"} 4
inbound_index_pod_index_size{namespace="linkerd-viz"} 5
# HELP inbound_index_external_workload_index_size The number of external workloads in index
# TYPE inbound_index_external_workload_index_size gauge
inbound_index_external_workload_index_size{namespace="linkerd"} 0
inbound_index_external_workload_index_size{namespace="emojivoto"} 0
inbound_index_external_workload_index_size{namespace="linkerd-viz"} 0
# HELP inbound_index_server_index_size The number of servers in index
# TYPE inbound_index_server_index_size gauge
inbound_index_server_index_size{namespace="linkerd"} 0
inbound_index_server_index_size{namespace="emojivoto"} 1
inbound_index_server_index_size{namespace="linkerd-viz"} 4
# HELP inbound_index_server_authorization_index_size The number of server authorizations in index
# TYPE inbound_index_server_authorization_index_size gauge
inbound_index_server_authorization_index_size{namespace="linkerd"} 0
inbound_index_server_authorization_index_size{namespace="emojivoto"} 0
inbound_index_server_authorization_index_size{namespace="linkerd-viz"} 0
# HELP inbound_index_authorization_policy_index_size The number of authorization policies in index
# TYPE inbound_index_authorization_policy_index_size gauge
inbound_index_authorization_policy_index_size{namespace="linkerd"} 0
inbound_index_authorization_policy_index_size{namespace="emojivoto"} 0
inbound_index_authorization_policy_index_size{namespace="linkerd-viz"} 4
# HELP inbound_index_http_route_index_size The number of HTTP routes in index
# TYPE inbound_index_http_route_index_size gauge
inbound_index_http_route_index_size{namespace="linkerd"} 0
inbound_index_http_route_index_size{namespace="emojivoto"} 600
inbound_index_http_route_index_size{namespace="linkerd-viz"} 0
```
Signed-off-by: Alex Leong <alex@buoyant.io>
This changes the policy controller service indexer to index
services by their `spec.clusterIPs` instead of `spec.clusterIP`, to
account for dual-stack services that hold both an IPv4 and an IPv6
address in `spec.clusterIPs`.
This change is fully backwards-compatible.
This assists the ongoing effort to support IPv6/dual-stack in linkerd,
but doesn't imply full support yet.
PR https://github.com/linkerd/linkerd2/pull/12088 fixed an issue where removing and then re-adding certain policy resources could leave the policy index in an incorrect state. We add a test for the specific condition that triggered this behavior to prevent future regressions.
Verified that this test fails prior to https://github.com/linkerd/linkerd2/pull/12088 but passes on main.
Signed-off-by: Alex Leong <alex@buoyant.io>
In the inbound policy index, we maintain a policy index per namespace which holds various policy resources for that namespace. When a per namespace index becomes empty, we remove it. However, we were not considering authorization policy resources when determining if the index is empty. This could result in the index being removed even while it contained authorization policy resources, as long as all other resource types did not exist.
This can lead to incorrect inbound policy responses when the per-namespace index is recreated, since it will no longer contain the authorization policy.
We update the `is_empty()` function to properly consider authorization policies as well. We also add some generally useful logging at debug and trace level.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
ExternalWorkload resources represent configuration associated with a process (or
a group of processes) that is foreign to a Kubernetes cluster. They allow Linkerd
to read, write, and store configuration for mesh
expansion. Since VMs will be able to receive inbound traffic from a variety of
resources, the proxy should be able to dynamically discover inbound
authorisation policies.
This change introduces a set of callbacks in the indexer that will apply (or
delete) ExternalWorkload resources. In addition, we ensure that
ExternalWorkloads can be processed in a similar fashion to pods (where
applicable, of course) with respect to server matching and defaulting. To serve
discovery requests for a VM, the policy controller will now also start a
watcher for external workloads and allow requests to reference an
`external_workload` target.
A quick list of changes:
* ExternalWorkloads can now be indexed in the inbound (policy) index.
* Renamed the pod module in the inbound index to be more generic ("workload"); the module has some re-usable building blocks that we can use for external workloads.
* Moved common functions (e.g. building a default inbound server) around to share what's already been done without abstracting more or introducing generics.
* Changed gRPC target types to a tuple of `(Workload, port)` from a tuple of `(String, String, port)`.
* Added RBAC to watch external workloads.
Signed-off-by: Matei David <matei@buoyant.io>
This PR adds the ability for a `Server` resource to select over `ExternalWorkload`
resources in addition to `Pod`s. For the time being, only one of these selector types
can be specified. This has been realized by incrementing the version of the resource
to `v1beta2`.
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
* kube 0.87.1
* k8s-openapi 0.20.0
* kubert 0.21.1
* k8s-gateway-api 0.15
* ring 0.17
Furthermore, the policy controller's metrics endpoint has been updated
to include tokio runtime metrics.
The policy controller sets incorrect backend metadata when (1) there is
no explicit backend reference specified, and (2) a backend reference
crosses namespaces.
This change fixes these backend references so that proxy logs and
metrics have the proper metadata references. Outbound policy tests are
updated to validate this.
Add native sidecar support
Kubernetes will be providing beta support for native sidecar containers in version 1.29. This feature improves network proxy sidecar compatibility for jobs and initContainers.
Introduce a new annotation, `config.alpha.linkerd.io/proxy-enable-native-sidecar`, and a configuration option, `Proxy.NativeSidecar`, that causes the proxy container to run as an init container.
Fixes: #11461
Signed-off-by: TJ Miller <millert@us.ibm.com>
This branch updates the policy-controller's dependency on Kubert to
v0.18, `kube-rs` to v0.85, `k8s-gateway-api` to v0.13, and `k8s-openapi`
to v0.19.
All of these crates depend on `kube-rs` and `k8s-openapi`, so they must
all be updated together in one commit.
The [xRoute Binding KEP](https://gateway-api.sigs.k8s.io/geps/gep-1426/#namespace-boundaries) states that HttpRoutes may be created in either the namespace of their parent Service (producer routes) or in the namespace of the client initiating requests to the service (consumer routes). Linkerd currently only indexes producer routes and ignores consumer routes.
We add support for consumer routes by changing the way that HttpRoutes are indexed. We now index each route by the namespace of its parent service instead of by the namespace of the HttpRoute resource. We then further subdivide the `ServiceRoutes` struct to have a watch per-client-namespace instead of just a single watch. This is because clients from different namespaces will have a different view of the routes for a service.
When an HttpRoute is updated, if it is a producer route, we apply that HttpRoute to the watches for all client namespaces. If it is a consumer route, we apply it only to the watches for that consumer namespace.
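To make the distinction concrete, here is a sketch of a producer route and a consumer route for the same Service (all names, namespaces, and the apiVersion are illustrative):
```yaml
# Producer route: defined in the parent Service's namespace;
# applied to the watches of ALL client namespaces.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: producer-route
  namespace: emojivoto
spec:
  parentRefs:
    - name: web-svc
      kind: Service
      group: core
      port: 80
---
# Consumer route: defined in the client's namespace;
# applied only to watches from that namespace.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: consumer-route
  namespace: client-ns
spec:
  parentRefs:
    - name: web-svc
      namespace: emojivoto
      kind: Service
      group: core
      port: 80
```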
We also add API tests for consumer and producer routes.
A few noteworthy changes:
* Because the namespace of the client factors into the lookup, we had to change the discovery target to a type which includes the client namespace.
* Because a service may have routes from different namespaces, the route metadata now needs to track group, kind, name, AND namespace instead of just using the namespace of the service. This means that many uses of the `GroupKindName` type are replaced with a `GroupKindNamespaceName` type.
Signed-off-by: Alex Leong <alex@buoyant.io>
According to the [xRoutes Mesh Binding KEP](https://gateway-api.sigs.k8s.io/geps/gep-1426/#ports), the port in a parent reference is optional:
> By default, a Service attachment applies to all ports in the service. Users may want to attach routes to only a specific port in a Service. To do so, the parentRef.port field should be used.
> If port is set, the implementation MUST associate the route only with that port. If port is not set, the implementation MUST associate the route with all ports defined in the Service.
However, we currently ignore any HttpRoutes which don't have a port specified in the parent ref.
We update the policy controller to apply HttpRoutes which do not specify a port in the parent ref to all ports of the parent service.
We do this by storing these "portless" HttpRoutes in the index and then copying these routes into every port-specific watch for that service.
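For example, a route like the following sketch (names and apiVersion illustrative) is now associated with every port defined on `web-svc` rather than being ignored:
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: portless-route
  namespace: emojivoto
spec:
  parentRefs:
    # No `port` field: the route applies to ALL ports
    # defined on the parent Service.
    - name: web-svc
      kind: Service
      group: core
```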
Signed-off-by: Alex Leong <alex@buoyant.io>
We add support for the RequestHeaderModifier and RequestRedirect HTTP filters. The policy controller reads these filters in any HttpRoute resource that it indexes (both policy.linkerd.io and gateway.networking.k8s.io) and returns them in the outbound policy API. These filters may be added at the route rule level and at the backend level.
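A sketch of both filters on a single route rule (field layout follows the Gateway API HTTPRoute spec; names are illustrative):
```yaml
rules:
  - filters:
      # Route-rule-level filter: modify request headers.
      - type: RequestHeaderModifier
        requestHeaderModifier:
          set:
            - name: x-route
              value: web
    backendRefs:
      - name: web-svc
        port: 80
        filters:
          # Backend-level filter: redirect the request.
          - type: RequestRedirect
            requestRedirect:
              scheme: https
              statusCode: 301
```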
We add outbound api tests for this behavior for both types of HttpRoute.
Incidentally we also fix a flaky test in the outbound api tests where a watch was being recreated partway through a test, leading to a race condition.
Signed-off-by: Alex Leong <alex@buoyant.io>
Updates the policy-controller to watch `httproute.gateway.networking.k8s.io` resources in addition to watching `httproute.policy.linkerd.io` resources. Routes of either or both types can be returned in policy responses and will be appropriately identified by the `group` field on their metadata. Furthermore we update the Status of these resources to correctly reflect when they are accepted.
We add the `httproute.gateway.networking.k8s.io` CRD to the Linkerd installed CRD list and add the appropriate RBAC to the policy controller so that it may watch these resources.
Signed-off-by: Alex Leong <alex@buoyant.io>
PR #10969 adds support for the GEP-1742 `timeouts` field to the
HTTPRoute CRD. This branch implements actual support for these fields in
the policy controller. The timeout fields are now read and used to set
the timeout fields added to the proxy-api in
linkerd/linkerd2-proxy-api#243.
In addition, I've added code to ensure that the timeout fields are
parsed correctly when a JSON manifest is deserialized. The current
implementation represents timeouts in the bindings as a Rust
`std::time::Duration` type. `Duration` does implement
`serde::Deserialize` and `serde::Serialize`, but its serialization
implementation attempts to (de)serialize it as a struct consisting of a
number of seconds and a number of subsecond nanoseconds. The timeout
fields are instead supposed to be represented as strings in the Go
standard library's `time.ParseDuration` format. Therefore, I've added a
newtype which wraps the Rust `std::time::Duration` and implements the
same parsing logic as Go. Eventually, I'd like to upstream the
implementation of this to `kube-rs`; see kube-rs/kube#1222 for details.
Depends on #10969
Depends on linkerd/linkerd2-proxy-api#243
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Add a new version to the HttpRoute CRD: `v1beta3`. This version adds a new `timeouts` struct to the http route rule. This mirrors a corresponding new field in the Gateway API, as described in [GEP-1742](https://github.com/kubernetes-sigs/gateway-api/pull/1997). This field is currently unused, but will eventually be read by the policy controller and used to configure timeouts enforced by the proxy.
The diff between v1beta2 and v1beta3 is:
```
timeouts:
  description: "Timeouts defines the timeouts that can be configured
    for an HTTP request. \n Support: Core \n <gateway:experimental>"
  properties:
    backendRequest:
      description: "BackendRequest specifies a timeout for an
        individual request from the gateway to a backend service.
        Typically used in conjunction with automatic retries,
        if supported by an implementation. Default is the value
        of Request timeout. \n Support: Extended"
      format: duration
      type: string
    request:
      description: "Request specifies a timeout for responding
        to client HTTP requests, disabled by default. \n For example,
        the following rule will timeout if a client request is
        taking longer than 10 seconds to complete: \n ``` rules:
        - timeouts: request: 10s backendRefs: ... ``` \n Support:
        Core"
      format: duration
      type: string
  type: object
```
We update the `storage` version of HttpRoute to be v1beta3 but continue to serve all versions. Since this new field is optional, the Kubernetes API will be able to automatically convert between versions.
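For reference, a route rule using the new field would look like this sketch (names are illustrative):
```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: web-route
  namespace: emojivoto
spec:
  parentRefs:
    - name: web-svc
      kind: Service
      group: core
      port: 80
  rules:
    - timeouts:
        # Durations use the Go time.ParseDuration string format.
        request: 10s
        backendRequest: 3s
      backendRefs:
        - name: web-svc
          port: 80
```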
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #10877
Linkerd reads the list of container readiness and liveness probes for a pod in order to generate authorizations which allow probes by default. However, when reading the `path` field of a probe, we interpret this field literally rather than parsing it as a URI. This means that any non-path parts of the URI (such as query parameters) are included in the match against the path of a probe request, causing these authorizations to fail.
Instead, we parse this field as a URI and only use the path part for path matching. Invalid URIs are skipped and a warning is logged.
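For example, given a probe like the following sketch, the generated authorization now matches on the path `/live` rather than on the literal string `/live?verbose=true`:
```yaml
livenessProbe:
  httpGet:
    # Only the path component (`/live`) is used for path matching;
    # the query string is ignored.
    path: /live?verbose=true
    port: 8080
```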
Signed-off-by: Alex Leong <alex@buoyant.io>
When the `namespace` field of a `backend_ref` of an `HttpRoute` is set, Linkerd ignores this field and instead assumes that the backend is in the same namespace as the parent `Service`.
To properly handle the case where the backend is in a different namespace from the parent `Service`, we change the way that service metadata is stored in the policy controller outbound index. Instead of keeping a separate service metadata map per namespace, we maintain one global service metadata map which is shared between all namespaces using an RwLock. This allows us to make the two necessary changes:
1. When validating the existence of a backend service, we now look for it in the appropriate namespace instead of the Service's namespace
2. When constructing the backend authority, we use the appropriate namespace instead of the Service's namespace
We also add an API test for this situation.
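A sketch of the case in question (all names and the apiVersion are illustrative): the route's parent Service is in `emojivoto`, while its backend lives in `other-ns`:
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: cross-ns-route
  namespace: emojivoto
spec:
  parentRefs:
    - name: web-svc
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        # The `namespace` field is now honored both when validating
        # that the backend exists and when constructing the backend
        # authority.
        - name: backend-svc
          namespace: other-ns
          port: 8080
```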
Signed-off-by: Alex Leong <alex@buoyant.io>
Fixes #10782
Added the `minItems: 1` field to `spec.identities` and `spec.identityRefs`. This is a backwards-compatible change, so bumping the CRD version is not required; besides, current CRs not abiding by this constraint would be broken anyway.
```bash
$ cat << EOF | k apply -f -
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: "test"
spec:
  identities: []
EOF
The MeshTLSAuthentication "test" is invalid: spec.identities: Invalid value: 0: spec.identities in body should have at least 1 items
```
Also refactored the MeshTLSAuthentication index reset loop so that a failing item no longer stops the processing of the remaining items.
The controller currently serves hardcoded configuration values for failure
accrual parameters. This change adds support for discovering that configuration
from annotations on Service objects.
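A sketch of what this could look like on a Service (the annotation names follow Linkerd's circuit-breaking documentation and should be treated as illustrative here):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: emojivoto
  annotations:
    # Enable consecutive-failure accrual and tune its parameters.
    balancer.outbound.linkerd.io/failure-accrual: consecutive
    balancer.outbound.linkerd.io/failure-accrual-consecutive-max-failures: "7"
    balancer.outbound.linkerd.io/failure-accrual-consecutive-max-penalty: 60s
spec:
  selector:
    app: web-svc
  ports:
    - port: 80
```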
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Alex Leong <alex@buoyant.io>
We update the outbound policy API to ignore any HTTPRoute if it does not have an Accepted status of True. Additionally, we remove spurious logging from both the inbound and outbound index when they ignore an HTTPRoute which does not have parents which are relevant to that API. This means that, for example, an HTTPRoute that has a Service as a parent will no longer trigger log warnings from the inbound index and vice versa.
Signed-off-by: Alex Leong <alex@buoyant.io>
This change improves the modularity of the policy controller by splitting code related to inbound policy and outbound policy into separate modules. There are no functional changes in this PR, only moves and renames. We have moved any code which is used by both inbound and outbound into common modules so that there are no cross dependencies between inbound and outbound.
Additionally, several types that had names beginning with "Inbound" or "Outbound" have had this prefix removed since they are now distinguishable by their modules. e.g. `InboundHttpRoute` becomes `inbound::HttpRoute`.
Signed-off-by: Alex Leong <alex@buoyant.io>
When the outbound policy API returned backends which corresponded to Service resources, those backends were missing metadata information.
We update the policy controller to populate Service metadata for those backends.
Signed-off-by: Alex Leong <alex@buoyant.io>
This removes the separate naming of “status controller” from the policy
controller resources, code, and comments. There is a single controller in all of
this — the policy controller. Part of the policy controller is maintaining the
status of policy resources. We can therefore remove this separate naming that
has been used as well as reorganize some of the code to use single naming
consts.
The lease resource has been renamed from `status-controller` to
`policy-controller-write`.
The controller name value has been renamed from
`policy.linkerd.io/status-controller` to `linkerd.io/policy-controller`. This
field appears in the `status` of HTTPRoutes indicating which controller applied
the status.
Lastly, I’ve updated the comments to remove the use of “status controller” and
moved the use of the name const to the `linkerd-policy-controller-core` package
so that it can be shared.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
When the policy controller creates the inbound server containing the HTTPRoutes
that a policy client should consider, it now consults the `status` field of each
HTTPRoute. Statuses which are not from the policy status controller are filtered
out; for statuses which are, the controller ensures that the route has been
accepted by a Server on the cluster. HTTPRoutes which have not been accepted are
excluded from the inbound server.
The index tests have been updated; they were failing without an
`Accepted` status.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
We have a `ClusterInfo` type that includes all cluster-level
configuration. The outbound indexer does not use this type and, instead,
passes around individual configuration values.
This change updates the outbound indexer to use the ClusterInfo
configuration type. It also moves the type to its own module.
This is largely a reorganization PR, though there is one notable change:
InvalidDst responses no longer reference a backend. Instead, the backend
is left empty, since it is not possible to route requests to it.