Add Go client codegen for HttpRoute v1beta3. This will be necessary for any of the Go controllers (e.g. metrics-api) or Go CLI commands to interact with HttpRoute v1beta3 resources in Kubernetes.
Signed-off-by: Kevin Ingelman <ki@buoyant.io>
PR #10969 adds support for the GEP-1742 `timeouts` field to the
HTTPRoute CRD. This branch implements actual support for these fields in
the policy controller. The timeout fields are now read and used to set
the timeout fields added to the proxy-api in
linkerd/linkerd2-proxy-api#243.
In addition, I've added code to ensure that the timeout fields are
parsed correctly when a JSON manifest is deserialized. The current
implementation represents timeouts in the bindings as a Rust
`std::time::Duration` type. `Duration` does implement
`serde::Deserialize` and `serde::Serialize`, but its serialization
implementation attempts to (de)serialize it as a struct consisting of a
number of seconds and a number of subsecond nanoseconds. The timeout
fields are instead supposed to be represented as strings in the Go
standard library's `time.ParseDuration` format. Therefore, I've added a
newtype which wraps the Rust `std::time::Duration` and implements the
same parsing logic as Go. Eventually, I'd like to upstream the
implementation of this to `kube-rs`; see kube-rs/kube#1222 for details.
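For illustration, the general shape of such a newtype is sketched below. This is a simplified example rather than the code added in this branch: the type name, the supported units, and the error handling are all assumptions, and the matching `Serialize` side (needed so that values round-trip as strings) is omitted.

```rust
// A simplified sketch of a Go-style duration newtype; names and error
// handling are illustrative, not the actual code added in this branch.
use serde::Deserialize;
use std::{str::FromStr, time::Duration};

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct GoDuration(pub Duration);

impl FromStr for GoDuration {
    type Err = String;

    // Parses a subset of Go's `time.ParseDuration` syntax: one or more
    // `<number><unit>` pairs, e.g. "10s", "1m30s", "500ms".
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.is_empty() {
            return Err("empty duration".to_string());
        }
        let mut total = Duration::ZERO;
        let mut rest = s;
        while !rest.is_empty() {
            // Split off the numeric part, then the unit that follows it.
            let digits_end = rest
                .find(|c: char| !c.is_ascii_digit() && c != '.')
                .ok_or_else(|| format!("missing unit in duration {s:?}"))?;
            let (number, tail) = rest.split_at(digits_end);
            let value: f64 = number.parse().map_err(|_| format!("bad number in {s:?}"))?;
            let unit_end = tail.find(|c: char| c.is_ascii_digit()).unwrap_or(tail.len());
            let (unit, tail) = tail.split_at(unit_end);
            let unit_secs = match unit {
                "ns" => 1e-9,
                "us" | "µs" => 1e-6,
                "ms" => 1e-3,
                "s" => 1.0,
                "m" => 60.0,
                "h" => 3600.0,
                _ => return Err(format!("unknown unit {unit:?} in {s:?}")),
            };
            total += Duration::from_secs_f64(value * unit_secs);
            rest = tail;
        }
        Ok(GoDuration(total))
    }
}

impl<'de> Deserialize<'de> for GoDuration {
    fn deserialize<D: serde::Deserializer<'de>>(d: D) -> Result<Self, D::Error> {
        let s = String::deserialize(d)?;
        s.parse().map_err(serde::de::Error::custom)
    }
}
```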
Depends on #10969
Depends on linkerd/linkerd2-proxy-api#243
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Add a new version to the HttpRoute CRD: `v1beta3`. This version adds a new `timeouts` struct to the http route rule. This mirrors a corresponding new field in the Gateway API, as described in [GEP-1742](https://github.com/kubernetes-sigs/gateway-api/pull/1997). This field is currently unused, but will eventually be read by the policy controller and used to configure timeouts enforced by the proxy.
The diff between v1beta2 and v1beta3 is:
```
timeouts:
  description: "Timeouts defines the timeouts that can be configured
    for an HTTP request. \n Support: Core \n <gateway:experimental>"
  properties:
    backendRequest:
      description: "BackendRequest specifies a timeout for an
        individual request from the gateway to a backend service.
        Typically used in conjunction with automatic retries,
        if supported by an implementation. Default is the value
        of Request timeout. \n Support: Extended"
      format: duration
      type: string
    request:
      description: "Request specifies a timeout for responding
        to client HTTP requests, disabled by default. \n For example,
        the following rule will timeout if a client request is
        taking longer than 10 seconds to complete: \n ``` rules:
        - timeouts: request: 10s backendRefs: ... ``` \n Support:
        Core"
      format: duration
      type: string
  type: object
```
We update the `storage` version of HttpRoute to be v1beta3 but continue to serve all versions. Since this new field is optional, the Kubernetes API will be able to automatically convert between versions.
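For illustration only, a Rust binding for the new rule field might look roughly like the sketch below; the type and field names are assumptions rather than the actual definitions, and the timeouts are shown as plain strings since the CRD declares them as `format: duration` strings.

```rust
// Hypothetical sketch of a binding for the rule's `timeouts` field; not the
// actual linkerd2 definition.
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Default, Deserialize, Serialize, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct HttpRouteTimeouts {
    /// Timeout for responding to a client request; disabled when unset.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub request: Option<String>,
    /// Timeout for an individual request to a backend; defaults to `request`.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub backend_request: Option<String>,
}
```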
Signed-off-by: Alex Leong <alex@buoyant.io>
A route may have two conditions in a parent status: a condition that
states whether it has been accepted by the parents, and a condition that
states whether all backend references -- the destinations that traffic matched
by the route is sent to -- have resolved successfully. Currently, the policy
controller does not support the latter.
This change introduces support for checking and setting a backendRef
specific condition. A successful condition (ResolvedRefs = True) is met
when all backend references point to a supported type, and that type
exists in the cluster. Currently, only Service objects are supported. A
nonexistent object, or an unsupported kind will reject the entire
condition; the particular reason will be reflected in the condition's
message.
Since statuses are set on a route's parents, the same condition will
apply to _all_ parents in a route (since there is no way to elicit
different backends for different parents).
If a route does not have any backend references, then the parent
reference type will be used instead. As such, any parents that are not Services
will automatically get an invalid backend condition (an exception to the rule
above that the condition is shared by all parents). When the parent is a
supported type (i.e. a Service), we needn't check its existence, since the
parent condition will already reflect that.
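A rough sketch of the shape of this check is below. The `BackendRef` stand-in, the `service_exists` lookup, and the helper name are assumptions for illustration; the reason strings follow Gateway API conventions and may not match the controller's exact output.

```rust
// Illustrative only: building a ResolvedRefs condition from a route's
// backend references.
use k8s_openapi::apimachinery::pkg::apis::meta::v1::{Condition, Time};
use k8s_openapi::chrono::Utc;

// Simplified stand-in for a route's backendRef.
pub struct BackendRef {
    pub kind: Option<String>,
    pub name: String,
}

pub fn resolved_refs_condition(
    backend_refs: &[BackendRef],
    service_exists: impl Fn(&str) -> bool,
) -> Condition {
    let mut status = "True";
    let mut reason = "ResolvedRefs";
    let mut message = "all backend references resolved".to_string();

    for backend in backend_refs {
        let kind = backend.kind.as_deref().unwrap_or("Service");
        if kind != "Service" {
            // An unsupported kind rejects the whole condition.
            status = "False";
            reason = "InvalidKind";
            message = format!("unsupported backendRef kind {kind:?}");
            break;
        }
        if !service_exists(&backend.name) {
            // A nonexistent backend also rejects the whole condition.
            status = "False";
            reason = "BackendNotFound";
            message = format!("Service {:?} not found", backend.name);
            break;
        }
    }

    Condition {
        type_: "ResolvedRefs".to_string(),
        status: status.to_string(),
        reason: reason.to_string(),
        message,
        last_transition_time: Time(Utc::now()),
        observed_generation: None,
    }
}
```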
---
Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Implement the outbound policy API as defined in the proxy api: https://github.com/linkerd/linkerd2-proxy-api/blob/main/proto/outbound.proto
This API is consumed by the proxy for routing outbound traffic. It is intended to replace the GetProfile API, which is currently served by the destination controller. It has not yet been included in a proxy-api release, so we take a git dependency on it in the meantime.
This PR adds a new index to the policy controller which indexes HTTPRoutes and Services and uses this information to serve the outbound API. We also add outbound API tests to validate the behavior of this implementation.
Signed-off-by: Alex Leong <alex@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
The policy controller uses the v1alpha1 HTTPRoute type as its internal
representation of HTTPRoute resources. This change updates the resource
version to v1beta2 in anticipation of adding outbound policy support.
To do so, we need to update the e2e tests to create HTTPRoute resources
properly. They currently include a `port` value, though it is not
allowed by our validator. The older resource type does not support this
field and so it was silently ignored.
The previous version of k8s-gateway (`v0.10.0`) did not include backendRefs
for HTTP Routes, since the policy controller did not use them for any
specific task or validation. BackendRef support is currently being added
for the status controller, and will be used as more and more route
functionality is added to Linkerd.
This change bumps k8s-gateway to the most recent version and updates the
internal model of the route to include backendRefs. Additionally, it fixes
any compiler issues that cropped up from adding a field to the struct.
Signed-off-by: Matei David <matei@buoyant.io>
This adds lease claims to the policy status controller so that upon startup, a
status controller attempts to claim the `status-controller` lease in the
`linkerd` namespace. With this lease, we can enforce leader election and ensure
that only one status controller on a cluster is attempting to patch HTTPRoute’s
`status` field.
Upon startup, the status controller now attempts to create the
`status-controller` lease — it will handle failure if the lease is already
present on the cluster. It then spawns a task for attempting to claim this lease
and sends all claim updates to the index `Index`.
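The creation step looks roughly like the following sketch, which tolerates an "already exists" response from the API server. The function name and error handling here are illustrative, not the controller's actual code.

```rust
// Illustrative sketch: create the status-controller Lease, tolerating the
// case where another replica created it first.
use k8s_openapi::api::coordination::v1::Lease;
use kube::{api::PostParams, core::ObjectMeta, Api, Client};

async fn ensure_status_lease(client: Client) -> kube::Result<()> {
    let api: Api<Lease> = Api::namespaced(client, "linkerd");
    let lease = Lease {
        metadata: ObjectMeta {
            name: Some("status-controller".to_string()),
            namespace: Some("linkerd".to_string()),
            ..Default::default()
        },
        ..Default::default()
    };
    match api.create(&PostParams::default(), &lease).await {
        Ok(_) => Ok(()),
        // 409 Conflict: the lease already exists on the cluster; that's fine.
        Err(kube::Error::Api(rsp)) if rsp.code == 409 => Ok(()),
        Err(error) => Err(error),
    }
}
```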
Currently, `Index.claims` is not used, but in follow-up changes we can check
against the current claim for determining if the status controller is the
current leader on the cluster. If it is, we can make decisions about sending
updates or not to the controller `Controller`.
### Testing
Currently I’ve only manually tested this, but integration tests will definitely
be helpful follow-ups. For manual testing, I’ve asserted that the
`status-controller` is claimed when one or more status controllers startup and
are running on a cluster. I’ve also asserted that when the current leader is
deleted, another status controller claims the lease. Below is the summary of how
I tested it
```shell
$ linkerd install --ha | kubectl apply -f -
…
$ kubectl get -n linkerd leases status-controller
NAME                HOLDER                                 AGE
status-controller   linkerd-destination-747b456876-dcwlb   15h
$ kubectl delete -n linkerd pod linkerd-destination-747b456876-dcwlb
pod "linkerd-destination-747b456876-dcwlb" deleted
$ kubectl get -n linkerd leases status-controller
NAME                HOLDER                                 AGE
status-controller   linkerd-destination-747b456876-5zpwd   15h
```
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
This branch updates the dependency on `kubert` to 0.13.0.
- [Release notes](https://github.com/olix0r/kubert/releases)
- [Commits](https://github.com/olix0r/kubert/compare/release/v0.12.0...release/v0.13.0)
Since `kubert` and other Kubernetes API dependencies must be updated in
lockstep, this branch also updates `kube` to 0.78, `k8s-openapi` to
0.17, and `k8s-gateway-api` to 0.9.
`kube-runtime` now depends on a version of the `base64` crate which has
diverged significantly from the version `rustls-pemfile` depends on.
Since both `base64` deps are transitive dependencies which we have no
control over, this branch adds a `cargo deny` exception for duplicate
dependencies on `base64`.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
### Overview
This adds a policy status controller which is responsible for patching Linkerd’s
HTTPRoute resource with a `status` field. The `status` field has a list of
parent statuses — one status for each of its parent references. Each status
indicates whether or not this parent has “accepted” the HTTPRoute.
The status controller runs on its own task in the policy controller and watches
for updates to the resources that it cares about, similar to the policy
controller’s index. One of the main differences is that while the policy
controller’s index watches many resources, the status controller currently only
cares about HTTPRoutes and Servers; HTTPRoutes can still only have parent
references that are Servers so we don’t currently need to consider any other
parent reference resources.
The status controller maintains its own index of resources so that it is
completely separated from the policy controller’s index. This allows the index
to be simpler in its structure, in how it handles `apply` and `delete`, and in
what information it needs to store.
### Follow-ups
There are several important follow-ups to this change. #10124 contains changes
for the policy controller index filtering out HTTPRoutes that are not accepted
by a Server. We don’t want those changes yet. Leaving those out, the status
controller does not actually have any effect on Linkerd policy in the cluster.
We can probably add additional logging in several places in the status
controller; that may even take place as part of the reviews on this change.
Additionally, we could experiment with the size of the queue of updates to be
processed.
Currently, if the status controller fails at any point in this process, we do
not re-queue updates. We probably should do that so that it is more robust
against failure.
In an HA installation, there could be multiple status controllers trying to
patch the same resource. We should explore the k8s lease API so that only one
status controller can patch a resource at a time.
### Implementation
The status controller `Controller` has a k8s client for patching resources,
an `index` for tracking resources, and an `updates` channel which handles
asynchronous updates to resources.
#### Index
`Index` synchronously observes changes to resources. It determines which Servers
accept each HTTPRoute and generates a status patch for that HTTPRoute. Again,
the status contains a list of parent statuses, one for each of the HTTPRoute's
parent references.
When a Server is added or deleted, the status controller needs to recalculate
the status for all HTTPRoutes. This is because an HTTPRoute can reference
Servers in other namespaces, so if a Server is added or deleted anywhere in the
cluster it could affect any of the HTTPRoutes on the cluster.
When an HTTPRoute is added, we need to determine the status only for that
HTTPRoute. When it’s deleted we just need to make sure it’s removed from the
index.
The patches that the `Index` creates are sent to the `Controller` which is
responsible only for applying those patches to HTTPRoutes.
#### Controller
`Controller` asynchronously processes updates and applies patches to HTTPRoutes.
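In rough terms, that loop looks like the sketch below: drain an update channel and apply each patch to the route's `status` subresource. The `Update` type, the channel shape, and the use of the upstream `k8s-gateway-api` bindings as a stand-in for our HTTPRoute type are all assumptions for illustration.

```rust
// Illustrative sketch of the patch-applying loop; types and names are
// assumed, not the actual controller code.
use k8s_gateway_api::HttpRoute;
use kube::{
    api::{Patch, PatchParams},
    Api, Client,
};
use tokio::sync::mpsc;

// A status patch produced by the index for one HTTPRoute.
pub struct Update {
    pub namespace: String,
    pub name: String,
    // e.g. {"status": {"parents": [...]}}
    pub patch: serde_json::Value,
}

pub async fn process_updates(client: Client, mut updates: mpsc::Receiver<Update>) {
    while let Some(update) = updates.recv().await {
        let api: Api<HttpRoute> = Api::namespaced(client.clone(), &update.namespace);
        let result = api
            .patch_status(
                &update.name,
                &PatchParams::default(),
                &Patch::Merge(&update.patch),
            )
            .await;
        if let Err(error) = result {
            tracing::warn!(%error, route = %update.name, "failed to patch status");
        }
    }
}
```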
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
Currently, when the policy controller's validating admission webhook
rejects a Server because it collides with an existing one, it's
difficult to determine which resource the new Server would collide with
(see #10153). Therefore, we should update the error message to include
the existing Server. Additionally, the current error message uses the
word "identical", which suggests to the user that the two Server specs
have the same pod selector. However, this may not actually be the case:
the conflict occurs if the two Servers' pod selectors would select *any*
overlapping pods.
This branch changes the error message to include the name and namespace
of the existing Server whose pod selector overlaps with the new Server.
Additionally, I've reworded the error message to avoid the use of
"identical", and tried to make it clearer that the collision is because
the pod selectors would select one or more overlapping pods, rather than
selecting all the same pods.
Fixes #10153
Fixes #9965
Adds a `path` property to the RedirectRequestFilter in all versions. This property was absent from the CRD even though it appears in the Gateway API documentation and is represented in the internal types. Adding this property to the CRD will also allow users to specify it.
Add a new version to the HTTPRoute CRD: v1beta2. This new version includes two changes from v1beta1:
* Added `port` property to `parentRef` for use when the parentRef is a Service
* Added `backendRefs` property to HTTPRoute rules
We switch the storage version of the HTTPRoute CRD from v1alpha1 to v1beta2 so that these new fields may be persisted.
We also update the policy admission controller to allow an HTTPRoute parentRef type to be Service (in addition to Server).
Signed-off-by: Alex Leong <alex@buoyant.io>
The implementation of the `NotIn` pod selector expression in the policy
controller is backwards. If a value exists for the label in the
expression, and it is contained in the `NotIn` set, the expression will
return `true`, and it will return `false` when the value is _not_ in the
set. This is because it calls `values.contains(v)`, just like the `In`
expression.
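The fix amounts to negating the set membership check for `NotIn`. A minimal sketch with assumed types (not the controller's actual ones):

```rust
// Minimal sketch of the corrected expression matching.
use std::collections::{BTreeMap, BTreeSet};

pub enum Operator {
    In(BTreeSet<String>),
    NotIn(BTreeSet<String>),
}

pub struct Expression {
    pub key: String,
    pub operator: Operator,
}

pub fn matches(expr: &Expression, labels: &BTreeMap<String, String>) -> bool {
    match (&expr.operator, labels.get(&expr.key)) {
        (Operator::In(values), Some(value)) => values.contains(value),
        // The fix: `NotIn` is the negation of `In`, not a copy of it.
        (Operator::NotIn(values), Some(value)) => !values.contains(value),
        (Operator::In(_), None) => false,
        // Kubernetes `NotIn` semantics also match pods without the label.
        (Operator::NotIn(_), None) => true,
    }
}
```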
* Update kubert to v0.10
* Update kube-rs to v0.75 (fixes #9339)
* Update k8s-openapi to v0.16
* Update k8s-gateway-api to v0.7
Signed-off-by: Oliver Gould <ver@buoyant.io>
This branch updates the policy controller's validating admission
controller to use the same validation functions as the indexer when
validating `HTTPRoute` resources. This way, we can ensure that any
`HTTPRoute` spec that passes validation will also convert to a valid
`InboundRouteBinding` in the indexer.
When there are multiple equivalent routes (e.g., two routes with the
same match), the proxy will use the first route in the returned list. We
need to ensure that the policy controller returns routes in a
deterministic order--and the Gateway API defines such an order:
> If ties still exist across multiple Routes, matching precedence MUST
> be determined in order of the following criteria, continuing on ties:
>
> * The oldest Route based on creation timestamp.
> * The Route appearing first in alphabetical order by
> "{namespace}/{name}".
This branch updates the policy controller to return the list of
`HttpRoute`s for an inbound server with a deterministic ordering based
on these rules. This is done by tracking the creation timestamp for
indexed `HTTPRoute` resources, and sorting the list of protobuf
`HttpRoute`s when the API server constructs an `InboundServer` response.
The implementation is *somewhat* hairy, because we can't just define a
custom `Ord` implementation for the protobuf `HttpRoute` type that
includes the timestamp --- doing so would require actually storing the
creation timestamp in the protobuf type, which would be a change in
`linkerd2-proxy-api` (and would result in serializing additional
information that the proxy itself doesn't actually care about). Instead,
we use `slice::sort_by` with a closure that looks up routes by name in
the hash map stored by the indexer in order to determine their
timestamps, and implements a custom ordering that first compares the
timestamp, and falls back to comparing the route's name if the
timestamps are equal. Note that we don't include the namespace in that
comparison, because all the routes for a given `InboundServer` are
already known to be in the same namespace.
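The core of the ordering is just a comparator over (timestamp, name). A rough sketch with assumed data shapes:

```rust
// Illustrative sketch: sort routes by creation timestamp, then by name.
use std::collections::HashMap;

/// `routes` pairs each route's name with its protobuf representation `R`;
/// `created_at` is the indexer's map of route name to creation timestamp `T`.
pub fn sort_routes<R, T: Ord>(routes: &mut [(String, R)], created_at: &HashMap<String, T>) {
    routes.sort_by(|(a, _), (b, _)| {
        created_at
            .get(a)
            .cmp(&created_at.get(b))
            .then_with(|| a.cmp(b))
    });
}
```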
I've also added an end-to-end test that the API returns the route list
in the correct order. Unfortunately, this test has 4 seconds of `sleep`s
in it, because the minimum resolution of Kubernetes creation timestamps
is 1 second. I figured a test that takes five or six seconds to run was
probably not a huge deal in the end to end tests --- some of the policy
tests take as long as a minute to run, at least on my machine.
Closes #8946
The proxy won't handle httproute paths (in URI rewrites or matches) when
paths are relative. The policy admission controller and indexer should
catch this case and fail to handle routes that deal in paths that do not
start with `/`.
This branch adds validation to the admission controller and indexer to
ensure that all paths in an `httproute` rule are absolute.
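The check itself is small; roughly, with an assumed error type:

```rust
// Illustrative sketch: reject relative paths in HTTPRoute matches and rewrites.
fn validate_path(path: &str) -> Result<(), String> {
    if path.starts_with('/') {
        Ok(())
    } else {
        Err(format!("HTTPRoute paths must be absolute; {path:?} is relative"))
    }
}
```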
Part of #8945
In order for default inbound Servers to authorize probe routes, we must track
the probe ports and their expected paths when indexing Pods.
This introduces a new `_probes` field to Pod (which will be used in a follow-up
change) which maps probe ports to their expected paths.
For example, if a Pod’s container configures the following probes
```yaml
livenessProbe:
  httpGet:
    path: /live
    port: 4191
  ...
readinessProbe:
  httpGet:
    path: /ready
    port: 4191
  ...
```
Then we expect `_probes == {4191: {"/live", "/ready"}}`
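A rough sketch of how such a map could be built from a Pod's containers follows; this is not the actual indexer code, and named probe ports are skipped for simplicity.

```rust
// Illustrative sketch: collect probe ports and paths from a Pod's containers
// into a map like {4191: {"/live", "/ready"}}.
use k8s_openapi::api::core::v1::Container;
use k8s_openapi::apimachinery::pkg::util::intstr::IntOrString;
use std::collections::{BTreeSet, HashMap};

fn probe_paths_by_port(containers: &[Container]) -> HashMap<u16, BTreeSet<String>> {
    let mut probes = HashMap::<u16, BTreeSet<String>>::new();
    for container in containers {
        let container_probes = [&container.liveness_probe, &container.readiness_probe];
        for probe in container_probes.into_iter().flatten() {
            if let Some(http) = &probe.http_get {
                // Named ports would need to be resolved against the container
                // spec; this sketch only handles numeric ports.
                if let IntOrString::Int(port) = &http.port {
                    if let Ok(port) = u16::try_from(*port) {
                        let path = http.path.clone().unwrap_or_else(|| "/".to_string());
                        probes.entry(port).or_default().insert(path);
                    }
                }
            }
        }
    }
    probes
}
```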
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
As discussed in #8944, Linkerd's current use of the
`gateway.networking.k8s.io` `HTTPRoute` CRD is not a spec-compliant use
of the Gateway API, because we don't support some "core" features of the
Gateway API that don't make sense in Linkerd's use-case. Therefore,
we've chosen to replace the `gateway.networking.k8s.io` `HTTPRoute` CRD
with our own `HTTPRoute` CRD in the `policy.linkerd.io` API group, which
removes the unsupported features.
PR #8949 added the Linkerd versions of those CRDs, but did not remove
support for the Gateway API CRDs. This branch removes the Gateway API
CRDs from the policy controller and `linkerd install`/Helm charts.
The various helper functions for converting the Gateway API resource
binding types from `k8s-gateway-api` to the policy controller's internal
representation are kept in place, but the actual use of that code in the
indexer is disabled. This way, we can add support for the Gateway API
CRDs again easily. Similarly, I've kept the validation code for Gateway
API types in the policy admission controller, but the admission
controller no longer actually tries to validate those resources.
Depends on #8949
Closes #8944
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Our use of the `gateway.networking.k8s.io` types is not compliant with
the gateway API spec in at least a few ways:
1. We do not support the `Gateway` types. This is considered a "core"
feature of the `HTTPRoute` type.
2. We do not currently update `HTTPRoute` status fields as dictated by
the spec.
3. Our use of Linkerd-specific `parentRef` types may not work well with
the gateway project's admission controller (untested).
Issue #8944 proposes solving this by replacing our use of
`gateway.networking.k8s.io`'s `HTTPRoute` type with our own
`policy.linkerd.io` version of the same type. That issue suggests that
the new `policy.linkerd.io` types be added separately from the change
that removes support for the `gateway.networking.k8s.io` versions, so
that the migration can be done incrementally.
This branch does the following:
* Add new `HTTPRoute` CRDs. These are based on the
`gateway.networking.k8s.io` CRDs, with the following changes:
- The group is `policy.linkerd.io`,
- The API version is `v1alpha1`,
- `backendRefs` fields are removed, as Linkerd does not support them,
- filter types Linkerd does not support (`RequestMirror` and
`ExtensionRef`) are removed.
* Add Rust bindings for the new `policy.linkerd.io` versions of
`HTTPRoute` types in `linkerd-policy-controller-k8s-api`.
The Rust bindings define their own versions of the `HttpRoute`,
`HttpRouteRule`, and `HttpRouteFilter` types, because these types'
structures are changed from the Gateway API versions (due to the
removal of unsupported filter types and fields). For other types,
which are identical to the upstream Gateway API versions (such as the
various match types and filter types), we re-export the existing
bindings from the `k8s-gateway-api` crate to minimize duplication (see
the sketch after this list).
* Add conversions to `InboundRouteBinding` from the `policy.linkerd.io`
`HTTPRoute` types.
When possible, I tried to factor out the code that was shared between
the conversions for Linkerd's `HTTPRoute` types and the upstream
Gateway API versions.
* Implement `kubert`'s `IndexNamespacedResource` trait for
`linkerd_policy_controller_k8s_api::policy::HttpRoute`, so that the
policy controller can index both versions of the `HTTPRoute` CRD.
* Adds validation for `policy.linkerd.io` `HTTPRoute`s to the policy
controller's validating admission webhook.
* Updated the policy controller tests to test both versions of
`HTTPRoute`.
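To illustrate the re-export approach mentioned in the list above, a hypothetical module might look like the following; the exact set of re-exported types and the local rule definition are assumptions, not the actual bindings.

```rust
// Hypothetical sketch of the re-export pattern; not the actual bindings.
pub mod httproute {
    // Types whose structure is unchanged from the Gateway API come straight
    // from the upstream k8s-gateway-api bindings.
    pub use k8s_gateway_api::{HttpPathMatch, HttpRouteMatch};

    // Types whose structure changed (e.g. the rule, with unsupported fields
    // removed) are redefined locally; the fields here are illustrative.
    #[derive(Clone, Debug, serde::Deserialize, serde::Serialize, schemars::JsonSchema)]
    #[serde(rename_all = "camelCase")]
    pub struct HttpRouteRule {
        #[serde(default, skip_serializing_if = "Option::is_none")]
        pub matches: Option<Vec<HttpRouteMatch>>,
    }
}
```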
## Notes
A couple questions I had about this approach:
- Is re-using bindings from the `k8s-gateway-api` crate appropriate
here, when the type has not changed from the Gateway API version? If
not, I can change this PR to vendor those types as well, but it will
result in a lot more code duplication.
- Right now, the indexer stores all `HTTPRoute`s in the same index.
This means that applying a `policy.linkerd.io` version of `HTTPRoute`
and then applying the Gateway API version with the same ns/name will
update the same value in the index. Is this what we want? I wasn't
entirely sure...
See #8944.
In various places we read port configurations from external sources
(either the Kubernetes API or gRPC clients). We have manual checks in
place to ensure that port values are never zero. We can instead assert
this with the type system by using `NonZeroU16`.
This change updates the policy controller to use `NonZeroU16` for port
values. This allows us to replace our manual port value checks with
`NonZero::try_from`, etc.
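For example, instead of a manual `port != 0` check, the conversion itself enforces the invariant:

```rust
// Illustrative sketch: a zero port fails at the type boundary.
use std::num::NonZeroU16;

fn parse_port(port: u16) -> Result<NonZeroU16, String> {
    NonZeroU16::try_from(port).map_err(|_| "port must not be zero".to_string())
}
```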
Signed-off-by: Oliver Gould <ver@buoyant.io>
* Replace deprecated uses of `ResourceExt::name` with
`ResourceExt::name_unchecked`;
* Update k8s-gateway-api to v0.6;
* Update kubert to v0.9.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Currently, the policy admission controller requires that the
`AuthorizationPolicy` resources include a non-empty
`requiredAuthenticationRefs` field. This means that all authorization
policies require at least a `NetworkAuthentication` to permit traffic.
For example:
```yaml
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: ingress
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: ingress-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: NetworkAuthentication
      name: all-nets
---
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  name: ingress-all-nets
spec:
  networks:
    - cidr: 0.0.0.0/0
    - cidr: ::/0
```
This is needlessly verbose and can more simply be expressed as:
```yaml
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: ingress
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: ingress-http
  requiredAuthenticationRefs: []
```
That is: there are explicitly no required authentications for this
policy.
This change updates the admission controller to permit such a policy.
Note that the `requiredAuthenticationRefs` field is still required so
that it's harder for simple misconfigurations to result in allowing
traffic.
This change also removes the `Default` implementations for resources where
they do not make sense because there are required fields.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Issue #7709 proposes new Custom Resource types to support generalized
authorization policies:
- `AuthorizationPolicy`
- `MeshTLSAuthentication`
- `NetworkAuthentication`
This change introduces these CRDs to the default linkerd installation
(via the `linkerd-crds` chart) and updates the policy controller
to handle these resource types. The policy admission controller
validates that these resources reference only supported types.
This new functionality is tested at multiple levels:
* `linkerd-policy-controller-k8s-index` includes unit tests for the
indexer to test how events update the index;
* `linkerd-policy-test` includes integration tests that run in-cluster
to validate that the gRPC API updates as resources are manipulated;
* `linkerd-policy-test` includes integration tests that exercise the
admission controller's resource validation; and
* `linkerd-policy-test` includes integration tests that ensure that
proxies honor authorization resources.
This change does NOT update Linkerd's control plane and extensions to
use these new authorization primitives. Furthermore, the `linkerd` CLI
does not yet support inspecting these new resource types. These
enhancements will be made in followup changes.
Signed-off-by: Oliver Gould <ver@buoyant.io>
In preparation for new policy CRD resources, this change adds end-to-end
tests to validate policy enforcement for `ServerAuthorization`
resources.
In adding these tests, it became clear that the OpenAPI validation for
`ServerAuthorization` resources is too strict. Various `oneof`
constraints have been removed in favor of admission controller
validation. These changes are semantically compatible and do not
necessitate an API version change.
The end-to-end tests work by creating `curl` pods that call an `nginx`
pod. In order to test network policies, the `curl` pod may be created
before the nginx pod, in which case an init container blocks execution
until a `curl-lock` configmap is deleted from the cluster. If the
configmap is not present to begin with, no blocking occurs.
Signed-off-by: Oliver Gould <ver@buoyant.io>
The policy controller's indexing module spans several files and relies
on an unnecessarily complex double-watch. It's generally confusing and
therefore difficult to change.
This change attempts to simplify the logic somewhat:
* All of the indexing code is now in the
`linkerd_policy_controller_k8s_index::index` module. No other files
have any dependencies on the internals of this data structure. It
exposes one public API, `Index::pod_server_rx`, used by discovery
clients.
* It uses the new `kubert::index` API so that we can avoid redundant
event-handling code. We now let kubert drive event processing so that
our indexing code is solely responsible for updating per-port server
configurations.
* A single watch is maintained for each pod:port.
* Watches are constructed lazily. The policy controller no longer
requires that all ports be documented on a pod. (The proxy still
requires this, however). This sets up for more flexible port
discovery (see the sketch below).
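A rough sketch of the per-pod:port watch shape described above, with assumed names and a string standing in for the per-port server configuration:

```rust
// Illustrative sketch: one lazily-created watch per pod:port.
use std::collections::HashMap;
use tokio::sync::watch;

type PortServer = String; // stand-in for the per-port server configuration

#[derive(Default)]
pub struct PodIndex {
    servers: HashMap<u16, (watch::Sender<PortServer>, watch::Receiver<PortServer>)>,
}

impl PodIndex {
    /// Returns the receiver for a pod's port, creating the watch on first use.
    pub fn pod_server_rx(&mut self, port: u16) -> watch::Receiver<PortServer> {
        self.servers
            .entry(port)
            .or_insert_with(|| watch::channel(PortServer::default()))
            .1
            .clone()
    }
}
```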
Signed-off-by: Oliver Gould <ver@buoyant.io>
`ServerAuthorization` resources are not validated by the admission
controller.
This change enables validation for `ServerAuthorization` resources,
based on changes to the admission controller proposed as a part of
linkerd/linkerd2#8007. This admission controller is generalized to
support arbitrary resource types. The `ServerAuthorization` validation
currently only ensures that network blocks are valid CIDRs and that they
are coherent. We use the new _schemars_ feature of `ipnet` v2.4.0 to
support using IpNet data structures directly in the custom resource
type bindings.
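Concretely, this lets network blocks be declared with `IpNet` fields directly; a small sketch with illustrative field names:

```rust
// Illustrative sketch: IpNet used directly in a CRD binding via ipnet's
// `schemars` feature.
use ipnet::IpNet;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Deserialize, Serialize, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct Network {
    pub cidr: IpNet,
    #[serde(default, skip_serializing_if = "Vec::is_empty")]
    pub except: Vec<IpNet>,
}
```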
This change also adds an integration test to validate that the admission
controller behaves as expected.
Signed-off-by: Oliver Gould <ver@buoyant.io>
In preparation for introducing new policy types, this change reorganizes
the policy controller to keep more of each indexing module private.
Signed-off-by: Oliver Gould <ver@buoyant.io>
The policy controller has a validating webhook for `Server` resources,
but this functionality is not really tested.
Before adding more policy resources that need validation, let's add an
integration test that exercises resource validation. The initial test is
pretty simplistic, but this is just setup.
These tests also helped expose two issues:
1. The change in 8760c5f (to solely use the index for validation) is
problematic, especially in CI where quick updates can pass validation
when they should not. This is fixed by going back to making API calls
when validating `Server` resources.
2. Our pod selector overlap detection is overly simplistic. This change
updates it to at least detect when a server selects _all_ pods.
There's probably more we can do here in followup changes.
Tests are added in a new `policy-test` crate that only includes these
tests and the utilities they need. This crate is excluded when running
unit tests and is only executed when it has a Kubernetes cluster it can
execute against. A temporary namespace is created before each test is
run and deleted as the test completes.
The policy controller's CI workflow is updated to build the core control
plane, run a k3d cluster, and exercise tests. This workflow has minimal
dependencies on the existing script/CI tooling so that the dependencies
are explicit and we can avoid some of the complexity of the existing
test infrastructure.
Signed-off-by: Oliver Gould <ver@buoyant.io>
`kubert` provides a runtime utility that helps reduce boilerplate around
process lifecycle management, construction of admin and HTTPS servers,
etc.
The admission controller server preserves the certificate reloading
functionality introduced in 96131b5 and updates the utility to read both
RSA and PKCS#8 keys to close #7963.
Signed-off-by: Oliver Gould <ver@buoyant.io>
Fixes #7904
Allow the `Server` CRD to have the `PodSelector` entry be an empty object by removing the `omitempty` tag from its Go type definition and the `oneof` section in the CRD. No update to the CRD version is required, as this is a backwards-compatible change; the CRD override was tested and works fine.
Also added some unit tests to confirm podSelector conditions are ANDed, and some minor refactorings in the `Selector` constructors.
Co-authored-by: Oliver Gould <ver@buoyant.io>
Kubernetes v1.19 is reaching its end-of-life date on 2021-10-28. In
anticipation of this, we should explicitly update our minimum supported
version to v1.20. This allows us to keep our dependencies up-to-date and
ensures that we can actually test against our minimum supported version.
Fixes#7171
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>