The policy controller has several indexes: the inbound index, the outbound index, and the status index. Each serves a different purpose, but each needs an overlapping set of kubert resource watches. This has led to duplicated kubert watches for the same resource type when it is needed by multiple indexes (e.g. all three indexes need to watch HttpRoute). We previously introduced an `IndexPair` type which fans out watch updates to two indexes, but this is no longer sufficient now that we have three indexes.
We replace `IndexPair` with `IndexList`, which can hold any number of indexes for a resource. We reorganize the policy controller's `main` to establish only one kubert watch for each resource and then use `IndexList` to fan updates out to the relevant indexes.
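As a rough illustration of the fan-out shape (not the exact implementation; the trait here is a stand-in for kubert's `IndexNamespacedResource`):

```rust
use std::sync::{Arc, RwLock};

// Stand-in for kubert's `IndexNamespacedResource` trait; the real trait lives
// in the `kubert` crate and has a compatible `apply`/`delete` shape.
trait NamespacedIndex<R> {
    fn apply(&mut self, resource: R);
    fn delete(&mut self, namespace: String, name: String);
}

/// Fans a single resource watch out to any number of indexes.
struct IndexList<R>(Vec<Arc<RwLock<dyn NamespacedIndex<R> + Send + Sync>>>);

impl<R: Clone> NamespacedIndex<R> for IndexList<R> {
    fn apply(&mut self, resource: R) {
        for index in &self.0 {
            // Every index receives its own copy of the update.
            index.write().unwrap().apply(resource.clone());
        }
    }

    fn delete(&mut self, namespace: String, name: String) {
        for index in &self.0 {
            index.write().unwrap().delete(namespace.clone(), name.clone());
        }
    }
}
```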
Signed-off-by: Alex Leong <alex@buoyant.io>
This change improves the modularity of the policy controller by splitting code related to inbound policy and outbound policy into separate modules. There are no functional changes in this PR, only moves and renames. Any code used by both inbound and outbound has been moved into common modules so that there are no cross dependencies between inbound and outbound.
Additionally, several types that had names beginning with "Inbound" or "Outbound" have had this prefix removed since they are now distinguishable by their modules. For example, `InboundHttpRoute` becomes `inbound::HttpRoute`.
Signed-off-by: Alex Leong <alex@buoyant.io>
Proxies in ingress and gateway configurations may discover policies by
name instead of network address.
This change updates the policy controller to parse service names from
lookups.
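For illustration, parsing a service name from a DNS-like lookup authority might look roughly like this (a simplified sketch, not the exact code):

```rust
/// Parses a `<name>.<namespace>.svc.<cluster-domain>` authority into a service
/// name and namespace (simplified; port handling and validation omitted).
fn parse_service(authority: &str, cluster_domain: &str) -> Option<(String, String)> {
    let host = authority.split(':').next()?;
    let suffix = format!(".svc.{cluster_domain}");
    let stem = host
        .strip_suffix(&suffix)
        .or_else(|| host.strip_suffix(".svc"))?;
    let mut parts = stem.split('.');
    let name = parts.next()?.to_string();
    let namespace = parts.next()?.to_string();
    if parts.next().is_some() {
        return None; // not a plain service name
    }
    Some((name, namespace))
}
```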
This removes the separate naming of “status controller” from the policy
controller resources, code, and comments. There is a single controller in all of
this — the policy controller. Part of the policy controller is maintaining the
status of policy resources. We can therefore remove this separate naming and
reorganize some of the code to use a single shared naming constant.
The lease resource has been renamed from `status-controller` to
`policy-controller-write`.
The controller name value has been renamed from
`policy.linkerd.io/status-controller` to `linkerd.io/policy-controller`. This
field appears in the `status` of HTTPRoutes indicating which controller applied
the status.
Lastly, I’ve updated the comments to remove the use of “status controller” and
moved the use of the name const to the `linkerd-policy-controller-core` package
so that it can be shared.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
We have a `ClusterInfo` type that includes all cluster-level
configuration. The outbound indexer does not use this type and, instead,
passes around individual configuration values.
This change updates the outbound indexer to use the ClusterInfo
configuration type. It also moves the type to its own module.
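For illustration, a cluster-level configuration type of this kind might look roughly like the following (field names are illustrative, not the exact definition):

```rust
use std::time::Duration;
use ipnet::IpNet;

/// Cluster-level configuration shared by the indexers (illustrative fields).
#[derive(Clone, Debug)]
pub struct ClusterInfo {
    /// Networks considered part of the cluster.
    pub networks: Vec<IpNet>,
    /// The namespace in which the control plane runs.
    pub control_plane_ns: String,
    /// The trust domain used when synthesizing mesh identities.
    pub identity_domain: String,
    /// The cluster's DNS suffix, e.g. `cluster.local`.
    pub dns_domain: String,
    /// How long to wait for protocol detection before applying defaults.
    pub default_detect_timeout: Duration,
}
```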
This is largely a reorganization PR, though there is one notable change:
`InvalidDst` responses no longer reference a backend. Instead, the backend
is left empty, since it's not possible to route requests to it.
Implement the outbound policy API as defined in the proxy api: https://github.com/linkerd/linkerd2-proxy-api/blob/main/proto/outbound.proto
This API is consumed by the proxy to route outbound traffic. It is intended to replace the GetProfile API which is currently served by the destination controller. It has not yet been included in a proxy-api release, so we take a git dependency on it in the meantime.
This PR adds a new index to the policy controller which indexes HTTPRoutes and Services and uses this information to serve the outbound API. We also add outbound API tests to validate the behavior of this implementation.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
https://github.com/linkerd/linkerd2/pull/10424 added the status-controller lease
used by the policy status controller. When the status controller starts up, it
tries to become the holder of that lease; if it fails to become the holder
because another status controller already holds the lease, it keeps trying to
become the holder for its lifetime.
This change continues on that work and now considers the holder of the
status-controller lease when applying patches to HTTPRoutes. If a status
controller is not the holder of the lease, its `Index` does not send patch
updates to its `Controller`; it does, however, keep the index up to date so that
if it becomes the holder in the future it has a correct view of the cluster.
Additionally, the status controller `Controller` will now attempt to reconcile
the cluster periodically. This covers any failure scenario in which applying a
patch fails.
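A rough sketch of the intended control flow, assuming claim updates arrive over a `tokio::sync::watch` channel (the `Claim` type and helpers here are illustrative):

```rust
use std::time::Duration;
use tokio::sync::{mpsc, watch};

/// Illustrative claim state published by the lease task.
#[derive(Clone)]
struct Claim {
    holder: String,
}

impl Claim {
    fn is_current_for(&self, claimant: &str) -> bool {
        self.holder == claimant
    }
}

async fn run_controller(
    name: String,                        // this controller's claimant name
    claims: watch::Receiver<Claim>,      // updated by the lease task
    mut patches: mpsc::Receiver<String>, // patches generated by the index (placeholder type)
) {
    // Periodically reconcile the whole cluster to recover from failed patches.
    let mut reconcile = tokio::time::interval(Duration::from_secs(10));
    loop {
        tokio::select! {
            Some(patch) = patches.recv() => {
                // Only the lease holder applies patches; a non-holder keeps
                // indexing so it can take over later with a correct view.
                if claims.borrow().is_current_for(&name) {
                    apply_patch(patch).await;
                }
            }
            _ = reconcile.tick() => {
                if claims.borrow().is_current_for(&name) {
                    reconcile_all().await;
                }
            }
        }
    }
}

async fn apply_patch(_patch: String) { /* apply the patch via the Kubernetes API */ }
async fn reconcile_all() { /* regenerate and apply statuses for all HTTPRoutes */ }
```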
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
The previous version of k8s-gateway (`v0.10.0`) did not include backendRefs
for HTTP Routes, since the policy controller did not use them for any
specific task or validation. BackendRef support is currently being added
for the status controller, and will be used as more and more route
functionality is added to Linkerd.
This change bumps k8s-gateway to the most recent version and updates the
internal model of the route to include backendRefs. Additionally, it fixes
any compiler errors that cropped up from adding a field to the struct.
Signed-off-by: Matei David <matei@buoyant.io>
This adds lease claims to the policy status controller so that upon startup, a
status controller attempts to claim the `status-controller` lease in the
`linkerd` namespace. With this lease, we can enforce leader election and ensure
that only one status controller on a cluster is attempting to patch HTTPRoute’s
`status` field.
Upon startup, the status controller now attempts to create the
`status-controller` lease — it will handle failure if the lease is already
present on the cluster. It then spawns a task for attempting to claim this lease
and sends all claim updates to the index `Index`.
Currently, `Index.claims` is not used, but in follow-up changes we can check
against the current claim for determining if the status controller is the
current leader on the cluster. If it is, we can make decisions about sending
updates or not to the controller `Controller`.
### Testing
Currently I’ve only manually tested this, but integration tests will definitely
be helpful follow-ups. For manually testing, I’ve asserted that the
`status-controller` is claimed when one or more status controllers startup and
are running on a cluster. I’ve also asserted that when the current leader is
deleted, another status controller claims the lease. Below is a summary of how
I tested it:
```shell
$ linkerd install --ha | kubectl apply -f -
…
$ kubectl get -n linkerd leases status-controller
NAME                HOLDER                                 AGE
status-controller   linkerd-destination-747b456876-dcwlb   15h
$ kubectl delete -n linkerd pod linkerd-destination-747b456876-dcwlb
pod "linkerd-destination-747b456876-dcwlb" deleted
$ kubectl get -n linkerd leases status-controller
NAME                HOLDER                                 AGE
status-controller   linkerd-destination-747b456876-5zpwd   15h
```
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
### Overview
This adds a policy status controller which is responsible for patching Linkerd’s
HTTPRoute resource with a `status` field. The `status` field has a list of
parent statuses — one status for each of its parent references. Each status
indicates whether or not this parent has “accepted” the HTTPRoute.
The status controller runs on its own task in the policy controller and watches
for updates to the resources that it cares about, similar to the policy
controller’s index. One of the main differences is that while the policy
controller’s index watches many resources, the status controller currently only
cares about HTTPRoutes and Servers; HTTPRoutes can still only have parent
references that are Servers so we don’t currently need to consider any other
parent reference resources.
The status controller maintains its own index of resources so that it is
completely separated from the policy controller’s index. This allows the index
to be simpler in its structure, in how it handles `apply` and `delete`, and in
what information it needs to store.
### Follow-ups
There are several important follow-ups to this change. #10124 contains changes
for the policy controller index filtering out HTTPRoutes that are not accepted
by a Server. We don't want those changes yet. Leaving those out, the status
controller does not actually have any effect on Linkerd policy in the cluster.
We can probably add additional logging in several places in the status
controller; that may even take place as part of the reviews on this.
Additionally, we could tune the queue size for updates to be processed.
Currently if the status controller fails in any of its potential places, we do
not re-queue updates. We probably should do that so that it is more robust
against failure.
In an HA installation, there could be multiple status controllers trying to
patch the same resource. We should explore the k8s lease API so that only one
status controller can patch a resource at a time.
### Implementation
The status controller `Controller` has a k8s client for patching resources,
`index` for tracking resources, and an `updates` channel which handles
asynchronous updates to resources.
#### Index
`Index` synchronously observes changes to resources. It determines which Servers
accept each HTTPRoute and generates a status patch for that HTTPRoute. Again,
the status contains a list of parent statuses, one for each of the HTTPRoute's
parent references.
When a Server is added or deleted, the status controller needs to recalculate
the status for all HTTPRoutes. This is because an HTTPRoute can reference
Servers in other namespaces, so if a Server is added or deleted anywhere in the
cluster it could affect any of the HTTPRoutes on the cluster.
When an HTTPRoute is added, we need to determine the status only for that
HTTPRoute. When it’s deleted we just need to make sure it’s removed from the
index.
The patches that the `Index` creates are sent to the `Controller` which is
responsible only for applying those patches to HTTPRoutes.
#### Controller
`Controller` asynchronously processes updates and applies patches to HTTPRoutes.
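For illustration, applying such a status patch with the `kube` client might look like the following; the `k8s-gateway-api` bindings and the patch body are used here purely as an example, not as the controller's exact code:

```rust
use kube::api::{Api, Patch, PatchParams};
use kube::Client;
use k8s_gateway_api::HttpRoute;
use serde_json::json;

/// Applies a merge patch to an HTTPRoute's `status` subresource (illustrative).
async fn patch_route_status(
    client: Client,
    namespace: &str,
    name: &str,
) -> Result<(), kube::Error> {
    let api: Api<HttpRoute> = Api::namespaced(client, namespace);
    // One parent status per parentRef; all values below are placeholders.
    let patch = json!({
        "status": {
            "parents": [{
                "parentRef": { "group": "policy.linkerd.io", "kind": "Server", "name": "example" },
                "controllerName": "policy.linkerd.io/status-controller",
                "conditions": [{
                    "type": "Accepted",
                    "status": "True",
                    "reason": "Accepted",
                    "message": "accepted by Server example",
                    "lastTransitionTime": "1970-01-01T00:00:00Z",
                }],
            }],
        },
    });
    api.patch_status(name, &PatchParams::default(), &Patch::Merge(patch))
        .await?;
    Ok(())
}
```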
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Currently, when the policy controller's validating admission webhook
rejects a Server because it collides with an existing one, it's
difficult to determine which resource the new Server would collide with
(see #10153). Therefore, we should update the error message to include
the existing Server. Additionally, the current error message uses the
word "identical", which suggests to the user that the two Server specs
have the same pod selector. However, this may not actually be the case:
the conflict occurs if the two Servers' pod selectors would select *any*
overlapping pods.
This branch changes the error message to include the name and namespace
of the existing Server whose pod selector overlaps with the new Server.
Additionally, I've reworded the error message to avoid the use of
"identical", and tried to make it clearer that the collision is because
the pod selectors would select one or more overlapping pods, rather than
selecting all the same pods.
Fixes #10153
Fixes #9965
Adds a `path` property to the RedirectRequestFilter in all versions. This property was absent from the CRD even though it appears in the Gateway API documentation and is represented in the internal types. Adding this property to the CRD will also allow users to specify it.
Add a new version to the HTTPRoute CRD: v1beta2. This new version includes two changes from v1beta1:
* Added `port` property to `parentRef` for use when the parentRef is a Service
* Added `backendRefs` property to HTTPRoute rules
We switch the storage version of the HTTPRoute CRD from v1alpha1 to v1beta2 so that these new fields may be persisted.
We also update the policy admission controller to allow an HTTPRoute parentRef type to be Service (in addition to Server).
Signed-off-by: Alex Leong <alex@buoyant.io>
This change updates the policy controller's indexer to add default, unauthenticated routes for
endpoints referenced in a Pod's readiness/liveness/startup probe configuration. These default routes
are included when:
1. the policy controller is configured with a list of networks from which probes may originate; and
2. no other routes are configured for the server.
If a user defines routes for a Server, then they must also explicitly account for probe endpoints.
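A simplified sketch of how probe paths can be read from a pod spec using the `k8s-openapi` types (not the exact indexer code):

```rust
use k8s_openapi::api::core::v1::{Container, Pod, Probe};

/// HTTP GET paths used by a container's liveness/readiness/startup probes.
fn probe_paths(container: &Container) -> Vec<String> {
    [
        container.liveness_probe.as_ref(),
        container.readiness_probe.as_ref(),
        container.startup_probe.as_ref(),
    ]
    .into_iter()
    .flatten()
    .filter_map(|probe: &Probe| probe.http_get.as_ref())
    .filter_map(|http_get| http_get.path.clone())
    .collect()
}

/// All probe paths for a pod; the indexer can turn these into default,
/// unauthenticated routes when no user-defined routes exist for the server.
fn pod_probe_paths(pod: &Pod) -> Vec<String> {
    pod.spec
        .iter()
        .flat_map(|spec| spec.containers.iter())
        .flat_map(probe_paths)
        .collect()
}
```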
An e2e test has been added which asserts the following:
1. When no Server is configured for a Pod:port, the probe routes are authorized.
2. When a Server is configured, but there are no routes, the probe routes are still authorized.
3. When a route is configured for the Server, the probe routes are no longer authorized by default.
Related to #8961, #8945
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
This branch updates the policy controller's validating admission
controller to use the same validation functions as the indexer when
validating `HTTPRoute` resources. This way, we can ensure that any
`HTTPRoute` spec that passes validation will also convert to a valid
`InboundRouteBinding` in the indexer.
The proxy won't handle httproute paths (in URI rewrites or matches) when
paths are relative. The policy admission controller and indexer should
catch this case and fail to handle routes that deal in paths that do not
start in `/`.
This branch adds validation to the admission controller and indexer to
ensure that all paths in an `httproute` rule are absolute.
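Conceptually, the check amounts to something like this (error type and message are illustrative):

```rust
/// Rejects relative paths in HTTPRoute matches and rewrites; the proxy only
/// handles absolute paths.
fn validate_path(path: &str) -> Result<(), String> {
    if path.starts_with('/') {
        return Ok(());
    }
    Err(format!("HTTPRoute paths must be absolute (start with '/'): {path:?}"))
}
```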
As discussed in #8944, Linkerd's current use of the
`gateway.networking.k8s.io` `HTTPRoute` CRD is not a spec-compliant use
of the Gateway API, because we don't support some "core" features of the
Gateway API that don't make sense in Linkerd's use-case. Therefore,
we've chosen to replace the `gateway.networking.k8s.io` `HTTPRoute` CRD
with our own `HTTPRoute` CRD in the `policy.linkerd.io` API group, which
removes the unsupported features.
PR #8949 added the Linkerd versions of those CRDs, but did not remove
support for the Gateway API CRDs. This branch removes the Gateway API
CRDs from the policy controller and `linkerd install`/Helm charts.
The various helper functions for converting the Gateway API resource
binding types from `k8s-gateway-api` to the policy controller's internal
representation are kept in place, but the actual use of that code in the
indexer is disabled. This way, we can add support for the Gateway API
CRDs again easily. Similarly, I've kept the validation code for Gateway
API types in the policy admission controller, but the admission
controller no longer actually tries to validate those resources.
Depends on #8949. Closes #8944
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Our use of the `gateway.networking.k8s.io` types is not compliant with
the gateway API spec in at least a few ways:
1. We do not support the `Gateway` types. This is considered a "core"
feature of the `HTTPRoute` type.
2. We do not currently update `HTTPRoute` status fields as dictated by
the spec.
3. Our use of Linkerd-specific `parentRef` types may not work well with
the gateway project's admission controller (untested).
Issue #8944 proposes solving this by replacing our use of
`gateway.networking.k8s.io`'s `HTTPRoute` type with our own
`policy.linkerd.io` version of the same type. That issue suggests that
the new `policy.linkerd.io` types be added separately from the change
that removes support for the `gateway.networking.k8s.io` versions, so
that the migration can be done incrementally.
This branch does the following:
* Add new `HTTPRoute` CRDs. These are based on the
`gateway.networking.k8s.io` CRDs, with the following changes:
- The group is `policy.linkerd.io`,
- The API version is `v1alpha1`,
- `backendRefs` fields are removed, as Linkerd does not support them,
- filter types Linkerd does not support (`RequestMirror` and
`ExtensionRef`), are removed.
* Add Rust bindings for the new `policy.linkerd.io` versions of
`HTTPRoute` types in `linkerd-policy-controller-k8s-api`.
The Rust bindings define their own versions of the `HttpRoute`,
`HttpRouteRule`, and `HttpRouteFilter` types, because these types'
structures are changed from the Gateway API versions (due to the
removal of unsupported filter types and fields). For other types,
which are identical to the upstream Gateway API versions (such as the
various match types and filter types), we re-export the existing
bindings from the `k8s-gateway-api` crate to minimize duplication.
* Add conversions to `InboundRouteBinding` from the `policy.linkerd.io`
`HTTPRoute` types.
When possible, I tried to factor out the code that was shared between
the conversions for Linkerd's `HTTPRoute` types and the upstream
Gateway API versions.
* Implement `kubert`'s `IndexNamespacedResource` trait for
`linkerd_policy_controller_k8s_api::policy::HttpRoute`, so that the
policy controller can index both versions of the `HTTPRoute` CRD.
* Add validation for `policy.linkerd.io` `HTTPRoute`s to the policy
controller's validating admission webhook.
* Update the policy controller tests to test both versions of
`HTTPRoute`.
## Notes
A couple questions I had about this approach:
- Is re-using bindings from the `k8s-gateway-api` crate appropriate
here, when the type has not changed from the Gateway API version? If
not, I can change this PR to vendor those types as well, but it will
result in a lot more code duplication.
- Right now, the indexer stores all `HTTPRoute`s in the same index.
This means that applying a `policy.linkerd.io` version of `HTTPRoute`
and then applying the Gateway API version with the same ns/name will
update the same value in the index. Is this what we want? I wasn't
entirely sure...
See #8944.
This change updates the policy controller to admit `AuthorizationPolicy` resources
that reference `HTTPRoute` parents. These policies configure proxies to augment
server-level authorizations with per-route authorizations.
Fixes #8890
Signed-off-by: Alex Leong <alex@buoyant.io>
In various places we read port configurations from external sources
(either the Kubernetes API or gRPC clients). We have manual checks in
place to ensure that port values are never zero. We can instead assert
this with the type system by using `NonZeroU16`.
This change updates the policy controller to use `NonZeroU16` for port
values. This allows us to replace our manual port value checks with
`NonZero::try_from`, etc.
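For example, a port read from an untrusted source can be converted directly instead of being checked by hand (illustrative sketch):

```rust
use std::num::NonZeroU16;

/// Converts a port read from an external source (e.g. a gRPC field) into a
/// NonZeroU16, rejecting zero and out-of-range values in one step.
fn parse_port(raw: u32) -> Result<NonZeroU16, String> {
    let port = u16::try_from(raw).map_err(|_| format!("port {raw} out of range"))?;
    NonZeroU16::try_from(port).map_err(|_| "port must not be zero".to_string())
}
```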
Signed-off-by: Oliver Gould <ver@buoyant.io>
linkerd2-proxy-api v0.6.0 adds support for inbound proxies to discover
route configurations based on the Gateway API HTTPRoute types. This
change updates the policy controller to index
`gateway.networking.k8s.io/v1beta1` `HTTPRoute` types to discover these
policies from the Kubernetes API.
`HTTPRoute` resources may target `Server` resources (as a `parentRef`)
to attach policies to an inbound proxy. When no routes are configured,
a default route is synthesized to allow traffic; but when at least one
route attaches to a server, only requests that match a route are
permitted (other requests are failed with a 404).
Only the *core* subset of the `HTTPRoute` filters are supported:
`RequestRedirect` and `RequestHeaderModifier`. Backends may *not* be
configured on these routes (since they may only apply to inbound/server-
side proxies). No `status` updates are currently performed on these
`HTTPRoute` resources.
This change does not yet allow `AuthorizationPolicy` resources to target
`HTTPRoute` resources. This will be added in a follow-up change.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Kubernetes resource type names are not case-sensitive. This change
updates `kind` and `group` comparisons to ignore case.
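Conceptually, the comparison becomes something like this (function name is illustrative):

```rust
/// Compares a reference's group/kind against an expected pair, ignoring case,
/// since Kubernetes resource type names are not case-sensitive.
fn references(group: &str, kind: &str, expected_group: &str, expected_kind: &str) -> bool {
    group.eq_ignore_ascii_case(expected_group) && kind.eq_ignore_ascii_case(expected_kind)
}
```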
Signed-off-by: Oliver Gould <ver@buoyant.io>
In 1a0c1c31 we updated the admission controller to allow
`AuthorizationPolicy` resources with an empty
`requiredAuthenticationRefs`. But we did NOT update the indexer, so we
would allow these resources to be created but then fail to honor them in
the API.
To fix this:
1. The `AuthorizationPolicy` admission controller is updated to exercise
the indexer's validation so that it is impossible to admit resources
that will be discarded by the indexer;
2. An e2e test is added to exercise this configuration;
3. The indexer's validation is updated to accept resources with no
authentications.
Signed-off-by: Oliver Gould <ver@buoyant.io>
* Replace deprecated uses of `ResourceExt::name` with
`ResourceExt::name_unchecked`;
* Update k8s-gateway-api to v0.6;
* Update kubert to v0.9.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Fixes #8665
We add validation for HTTPRoute resources to the policy admission controller. We validate that any HTTPRoute which has a Server as a parent_ref does not have unsupported filters. For the moment we do not support any HTTP filters. As we add support for HTTP filter types, we should update the validator accordingly.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Closes #8565.
With this change, `AuthorizationPolicy` resources can now reference ServiceAccounts for
their target authentications. This allows users to avoid the requirement of
creating a MeshTLSAuthentication resource that references a single
ServiceAccount.
The policy admission controller only allows an AuthorizationPolicy to reference
a single MeshTLSAuthentication _or_ a ServiceAccount; it cannot reference both.
Additionally, if a ServiceAccount is referenced it can only be a single
one—similar to MeshTLSAuthentications.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
There is no convenient way to create an `AuthorizationPolicy` that targets all
`Server` resources in a `Namespace`. A distinct policy resource must be created
for each `Server`. This is cumbersome.
This change updates the policy controller to allow `AuthorizationPolicy`
resources to target `Namespace` resources. Policies may only target the
namespace in which the policy resource exists. The admission controller rejects
resources that target other namespaces.
Fixes #8297
The `identity_refs` section of the `MeshTLSAuthentication` resource currently
only supports ServiceAccount resources. We add support for Namespace resources
as well. When a `MeshTLSAuthentication` has a Namespace identity_ref, this
means that all ServiceAccounts in that Namespace are authenticated.
Fixes #8298
Signed-off-by: Alex Leong <alex@buoyant.io>
Currently, the policy admission controller requires that the
`AuthorizationPolicy` resources include a non-empty
`requiredAuthenticationRefs` field. This means that all authorization
policies require at least a `NetworkAuthentication` to permit traffic.
For example:
```yaml
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: ingress
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: ingress-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: NetworkAuthentication
      name: all-nets
---
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  name: ingress-all-nets
spec:
  networks:
    - cidr: 0.0.0.0/0
    - cidr: ::/0
```
This is needlessly verbose and can more simply be expressed as:
```yaml
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: ingress
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: ingress-http
  requiredAuthenticationRefs: []
```
That is: there are explicitly no required authentications for this
policy.
This change updates the admission controller to permit such a policy.
Note that the `requiredAuthenticationRefs` field is still required so
that it's harder for simple misconfigurations to result in allowing
traffic.
This change also removes the `Default` implementations for resources where
they do not make sense because there are required fields.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Issue #7709 proposes new Custom Resource types to support generalized
authorization policies:
- `AuthorizationPolicy`
- `MeshTLSAuthentication`
- `NetworkAuthentication`
This change introduces these CRDs to the default linkerd installation
(via the `linkerd-crds` chart) and updates the policy controller
to handle these resource types. The policy admission controller
validates that these resources reference only supported types.
This new functionality is tested at multiple levels:
* `linkerd-policy-controller-k8s-index` includes unit tests for the
indexer to test how events update the index;
* `linkerd-policy-test` includes integration tests that run in-cluster
to validate that the gRPC API updates as resources are manipulated;
* `linkerd-policy-test` includes integration tests that exercise the
admission controller's resource validation; and
* `linkerd-policy-test` includes integration tests that ensure that
proxies honor authorization resources.
This change does NOT update Linkerd's control plane and extensions to
use these new authorization primitives. Furthermore, the `linkerd` CLI
does not yet support inspecting these new resource types. These
enhancements will be made in followup changes.
Signed-off-by: Oliver Gould <ver@buoyant.io>
In preparation for new policy CRD resources, this change adds end-to-end
tests to validate policy enforcement for `ServerAuthorization`
resources.
In adding these tests, it became clear that the OpenAPI validation for
`ServerAuthorization` resources is too strict. Various `oneof`
constraints have been removed in favor of admission controller
validation. These changes are semantically compatible and do not
necessitate an API version change.
The end-to-end tests work by creating `curl` pods that call an `nginx`
pod. In order to test network policies, the `curl` pod may be created
before the nginx pod, in which case an init container blocks execution
until a `curl-lock` configmap is deleted from the cluster. If the
configmap is not present to begin with, no blocking occurs.
Signed-off-by: Oliver Gould <ver@buoyant.io>
The policy controller's indexing module spans several files and relies
on an unnecessarily complex double-watch. It's generally confusing and
therefore difficult to change.
This change attempts to simplify the logic somewhat:
* All of the indexing code is now in the
`linkerd_policy_controller_k8s_index::index` module. No other files
have any dependencies on the internals of this data structure. It
exposes one public API, `Index::pod_server_rx`, used by discovery
clients.
* It uses the new `kubert::index` API so that we can avoid redundant
event-handling code. We now let kubert drive event processing so that
our indexing code is solely responsible for updating per-port server
configurations.
* A single watch is maintained for each pod:port.
* Watches are constructed lazily. The policy controller no longer
requires that all ports be documented on a pod. (The proxy still
requires this, however). This sets up for more flexible port
discovery.
Signed-off-by: Oliver Gould <ver@buoyant.io>
`ServerAuthorization` resources are not validated by the admission
controller.
This change enables validation for `ServerAuthorization` resources,
based on changes to the admission controller proposed as a part of
linkerd/linkerd2#8007. This admission controller is generalized to
support arbitrary resource types. The `ServerAuthorization` validation
currently only ensures that network blocks are valid CIDRs and that they
are coherent. We use the new _schemars_ feature of `ipnet` v2.4.0 to
support using IpNet data structures directly in the custom resource
type bindings.
This change also adds an integration test to validate that the admission
controller behaves as expected.
Signed-off-by: Oliver Gould <ver@buoyant.io>
In preparation for introducing new policy types, this change reorganizes
the policy controller to keep more of each indexing module private.
Signed-off-by: Oliver Gould <ver@buoyant.io>
The policy controller has a validating webhook for `Server` resources,
but this functionality is not really tested.
Before adding more policy resources that need validation, let's add an
integration test that exercises resource validation. The initial test is
pretty simplistic, but this is just setup.
These tests also help expose two issues:
1. The change in 8760c5f--to solely use the index for validation--is
problematic, especially in CI where quick updates can pass validation
when they should not. This is fixed by going back to making API calls
when validating `Server` resources.
2. Our pod selector overlap detection is overly simplistic. This change
updates it to at least detect when a server selects _all_ pods.
There's probably more we can do here in followup changes.
Tests are added in a new `policy-test` crate that only includes these
tests and the utilities they need. This crate is excluded when running
unit tests and is only executed when it has a Kubernetes cluster it can
execute against. A temporary namespace is created before each test is
run and deleted as the test completes.
The policy controller's CI workflow is updated to build the core control
plane, run a k3d cluster, and exercise tests. This workflow has minimal
dependencies on the existing script/CI tooling so that the dependencies
are explicit and we can avoid some of the complexity of the existing
test infrastructure.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Currently, the policy-controller's admission server issues a request to
Kubernetes for every admission review request. This is unnecessary, since
we already maintain a cache of the relevant data structure in the
process.
This change updates the admission server to hold a reference to the
process's `Index` data structure. Each time an admission request is
received, the admission server accesses the index to look up servers that
already exist in the cluster.
Note that it's possible that the index may not be 100% up to date, as
event delivery can be delayed. I think it's okay to make the tradeoff
that the admission controller is best-effort in favor of reducing
resource usage.
Signed-off-by: Oliver Gould <ver@buoyant.io>
`kubert` provides a runtime utility that helps reduce boilerplate around
process lifecycle management, construction of admin and HTTPS servers,
etc.
The admission controller server preserves the certificate reloading
functionality introduced in 96131b5 and updates the utility to read both
RSA and PKCS#8 keys to close #7963.
Signed-off-by: Oliver Gould <ver@buoyant.io>
The validating webhook admission controller in the policy controller loads its
TLS credentials from files at startup and uses them for the lifetime of the
process. This means that if the credentials are rotated, the admission
controller will not use the updated credentials until the process is restarted.
We instead load these credentials each time a connection is established so that
new connections will always use the newest credentials. In doing so, we remove
warp and instead create a hyper server manually.
Fixes #7519
The policy controller synthesizes identity strings based on service account
names; but it assumed that `linkerd` was the name of the control plane
namespace. This change updates the policy controller to take a
`--control-plane-namespace` command-line argument to set this value in
identity strings. The helm templates have been updated to configure the policy
controller appropriately.
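For reference, the synthesized identities follow Linkerd's usual form, so the new flag effectively parameterizes the control-plane namespace segment (simplified sketch; the helper is illustrative):

```rust
/// Builds the mesh identity string for a service account (simplified sketch).
fn identity(
    service_account: &str,
    namespace: &str,
    control_plane_ns: &str, // now configurable via --control-plane-namespace
    identity_domain: &str,  // e.g. cluster.local
) -> String {
    format!("{service_account}.{namespace}.serviceaccount.identity.{control_plane_ns}.{identity_domain}")
}
```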
Fixes #7204
Co-authored-by: Oliver Gould <ver@buoyant.io>
While testing the proxy with various allocators, we've seen that
jemalloc generally uses less memory without incurring CPU or latency
costs.
This change updates the policy-controller to use jemalloc on x86_64
gnu/linux. We continue to use the system allocator on other platforms
(especially arm), since the jemalloc tests do not pass on these
platforms (according to the jemallocator readme).
The policy controller currently logs a warning message every
time a Server resource is applied. There is a mismatch between
the format of the blob that we're trying to deserialise and the
type we are deserialising into. To fix this, I've changed the
`parse_server` function to deserialise only the spec;
the function signature has also changed to return the name
of the Server as a string.
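A minimal sketch of that shape (types and JSON handling are illustrative, not the exact code):

```rust
use serde::Deserialize;
use serde_json::Value;

/// Only the fields we actually index (kept loose here for illustration).
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct ServerSpec {
    pod_selector: Value,
    port: Value,
    proxy_protocol: Option<String>,
}

/// Deserialises only the `spec` of a Server object and returns the Server's
/// name as a string, avoiding the whole-object mismatch behind the warning.
fn parse_server(obj: Value) -> anyhow::Result<(String, ServerSpec)> {
    let name = obj["metadata"]["name"]
        .as_str()
        .ok_or_else(|| anyhow::anyhow!("Server has no name"))?
        .to_string();
    let spec = serde_json::from_value(obj["spec"].clone())?;
    Ok((name, spec))
}
```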
Closes #6860
The policy controller only emitted logs in the default plain format.
This change adds new CLI flags to the policy-controller: `--log-format`
and `--log-level` that configure logging (replacing the `RUST_LOG`
environment variable). The helm chart is updated to configure these
flags--the `controllerLogLevel` variable is used to configure the policy
controller as well.
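A minimal sketch of how these flags might map onto `tracing_subscriber` (assuming its `env-filter` and `json` features; not the exact code):

```rust
use tracing_subscriber::EnvFilter;

/// Wires the new CLI flags into a tracing subscriber (illustrative sketch).
fn init_logging(log_level: &str, log_format: &str) {
    let filter = EnvFilter::new(log_level);
    match log_format {
        "json" => tracing_subscriber::fmt().with_env_filter(filter).json().init(),
        _ => tracing_subscriber::fmt().with_env_filter(filter).init(),
    }
}
```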
Example:
```
{"timestamp":"2021-09-15T03:30:49.552704Z","level":"INFO","fields":{"message":"HTTP admin server listening","addr":"0.0.0.0:8080"},"target":"linkerd_policy_controller::admin","spans":[{"addr":"0.0.0.0:8080","name":"serve"}]}
{"timestamp":"2021-09-15T03:30:49.552689Z","level":"INFO","fields":{"message":"gRPC server listening","addr":"0.0.0.0:8090"},"target":"linkerd_policy_controller","spans":[{"addr":"0.0.0.0:8090","cluster_networks":"[10.0.0.0/8, 100.64.0.0/10, 172.16.0.0/12, 192.168.0.0/16]","name":"grpc"}]}
{"timestamp":"2021-09-15T03:30:49.567734Z","level":"DEBUG","fields":{"message":"Ready"},"target":"linkerd_policy_controller_k8s_index"}
^C{"timestamp":"2021-09-15T03:30:51.245387Z","level":"DEBUG","fields":{"message":"Received ctrl-c"},"target":"linkerd_policy_controller"}
{"timestamp":"2021-09-15T03:30:51.245473Z","level":"INFO","fields":{"message":"Shutting down"},"target":"linkerd_policy_controller"}
```
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
We add a validating admission controller to the policy controller which validates `Server` resources. When a `Server` admission request is received, we look at all existing `Server` resources in the cluster and ensure that no other `Server` has an identical selector and port.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Pods may be annotated with annotations like
`config.linkerd.io/opaque-ports` and
`config.linkerd.io/proxy-require-identity-inbound-ports`--these
annotations configure default behavior that should be honored when a
`Server` does not match the workload's ports. As it stands now, the
policy controller would break opaque-ports configurations that aren't
reflected in a `Server`.
This change reworks the pod indexer to create a default server watch for
each _port_ (rather than for each pod). The cache of default server
watches is now lazy, creating watches as needed for all used
combinations of default policies. These watches are never dropped, but
there are only a few possible combinations of port configurations, so
this doesn't pose any concerns re: memory usage.
While doing this, the names used to describe these default policies are
updated to be prefixed with `default:`. This generally makes these names
more descriptive and easier to understand.
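A rough sketch of such a lazy, per-policy watch cache (all types here are illustrative):

```rust
use std::collections::HashMap;
use tokio::sync::watch;

/// Illustrative key: the handful of possible default inbound policies.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
enum DefaultPolicy {
    AllAuthenticated,
    AllUnauthenticated,
    ClusterAuthenticated,
    ClusterUnauthenticated,
    Deny,
}

/// Lazily-created default server watches, shared across pods and ports. The
/// senders are retained so the watches are never dropped; because the key
/// space is tiny, this costs effectively no memory.
#[derive(Default)]
struct DefaultServers {
    // The String value stands in for the real per-port server configuration.
    watches: HashMap<DefaultPolicy, (watch::Sender<String>, watch::Receiver<String>)>,
}

impl DefaultServers {
    fn get(&mut self, policy: DefaultPolicy) -> watch::Receiver<String> {
        let (_, rx) = self
            .watches
            .entry(policy.clone())
            .or_insert_with(|| watch::channel(format!("default:{policy:?}")));
        rx.clone()
    }
}
```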
We've implemented a new controller--in Rust!--that implements discovery
APIs for inbound server policies. This change imports this code from
linkerd/polixy@25af9b5e.
This policy controller watches nodes, pods, and the recently-introduced
`policy.linkerd.io` CRD resources. It indexes these resources and serves
a gRPC API that will be used by proxies to configure the inbound proxy
for policy enforcement.
This change introduces a new policy-controller container image and adds a
container to the `linkerd-destination` pod along with a `linkerd-policy` service
to be used by proxies.
This change adds a `policyController` object to the Helm `values.yaml` that
supports configuring the policy controller at runtime.
Proxies are not currently configured to use the policy controller at runtime. This
will change in an upcoming proxy release.