Commit Graph

7 Commits

Author SHA1 Message Date
Zahari Dichev 40cdb7fc23
add mc crd to codegen (#7335)
Currently, the MC `Link` CRD is handled using the dynamic k8s client. It would be useful for consumers of this API if there were a typed API for this CRD.

The solution is to update the codegen script to generate this code.
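
For context, a rough sketch of what consumers currently have to do with the dynamic client (the group/version `multicluster.linkerd.io/v1alpha1`, the `linkerd-multicluster` namespace, and the kubeconfig handling here are assumptions for illustration, not code from this PR); the typed client produced by codegen replaces the unstructured handling with concrete `Link` structs:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// With the dynamic client, Link resources come back as
	// unstructured.Unstructured and callers must dig fields out of nested
	// maps by hand; a generated typed client removes that boilerplate.
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	links := schema.GroupVersionResource{
		Group:    "multicluster.linkerd.io",
		Version:  "v1alpha1",
		Resource: "links",
	}
	list, err := dyn.Resource(links).Namespace("linkerd-multicluster").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, l := range list.Items {
		fmt.Println(l.GetName())
	}
}
```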

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
2021-11-23 15:49:14 -07:00
Kevin Leimkuhler 00e018d277
Add policy CRD APIs (#7095)
This adds the policy CRD APIs for `Server` and `ServerAuthorization` CRDs.

The structure of each (in their respective `types.go`) is based off the `policy-crd.yaml` specs for each CRD.

Unlike service profiles, servers and server authorizations use `oneof` fields extensively, so I encoded each `oneof` as a struct with a pointer for each possible member. For example, a server's `PodSelector` is either `MatchExpressions` or `MatchLabels`. Therefore, a `PodSelector` is defined as:

```go
type PodSelector struct {
	MatchExpressions *MatchExpressions
	MatchLabels      *MatchLabels
}
```
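
As a small illustration of the `oneof` convention (hypothetical consumer code, not part of this PR), exactly one of the pointer fields is expected to be set, and callers can branch on which one is non-nil:

```go
// Hypothetical consumer-side check of the oneof-style PodSelector defined
// above: exactly one pointer field should be set.
func selectorKind(s PodSelector) string {
	switch {
	case s.MatchExpressions != nil:
		return "matchExpressions"
	case s.MatchLabels != nil:
		return "matchLabels"
	default:
		return "empty"
	}
}
```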

Closes #6970 

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2021-10-22 15:54:09 -06:00
Tarun Pothulapati 395cc3677e
sp: prevent `sp.Spec.Routes` from being null'ed (#6271)
## Context

Currently, whenever a `SP` is created without the `Spec.Routes` field set in the [golang types](https://github.com/linkerd/linkerd2/blob/main/controller/gen/apis/serviceprofile/v1alpha2/types.go#L13), the k8s API rejects it with the following error:

```bash
ServiceProfile.linkerd.io \"backend-svc.linkerd-smi-app.svc.cluster.local\" is invalid: spec.routes: Invalid value: \"null\": spec.routes in body must be of type array: \"null\"
```

This happens because Go automatically renders the unset field as `routes: null` when the object is marshaled into JSON, and the k8s API server rejects this because it expects that field to be an array.

[This is fixed in k8s >= 1.20](https://github.com/kubernetes/kubernetes/pull/95423) as non-nullable nulls are defaulted, and hence this error happens only in `<=1.19`.

## Problem

As `1.19` is a fairly recent version of k8s and still widely used, consumers like [smi-adaptor](https://github.com/linkerd/linkerd-smi/pull) should not have to make sure `Spec.Routes` is never null.

## Fix

This can easily be fixed by marking `Spec.Routes` as `omitempty` in its JSON tags, which means the field is omitted whenever it is not set during marshaling.

This means that the k8s API won't error out, as that field isn't set to anything invalid.
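
A minimal, self-contained sketch of the behavior (the field names are simplified stand-ins for the real ServiceProfile types):

```go
// Without omitempty an unset slice marshals as "routes": null; with
// omitempty the key is dropped entirely.
package main

import (
	"encoding/json"
	"fmt"
)

type specWithoutOmitempty struct {
	Routes []string `json:"routes"`
}

type specWithOmitempty struct {
	Routes []string `json:"routes,omitempty"`
}

func main() {
	a, _ := json.Marshal(specWithoutOmitempty{})
	b, _ := json.Marshal(specWithOmitempty{})
	fmt.Println(string(a)) // {"routes":null}  -- rejected by k8s <= 1.19
	fmt.Println(string(b)) // {}               -- accepted
}
```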

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
2021-06-17 12:08:32 +05:30
Alex Leong 964ce11559
Update generated serviceprofile code (#5580)
I ran `bin/update-codegen.sh` to update the generated code to include the opaque ports in the generated deepcopy function for service profiles.
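
For illustration, this is roughly the shape of deepcopy code that the script regenerates for a map-valued field such as opaque ports; the type below is a simplified stand-in, not the real generated `ServiceProfileSpec`:

```go
package main

// spec is a simplified stand-in for a spec type with an opaque-ports field.
type spec struct {
	OpaquePorts map[uint32]struct{}
}

// DeepCopyInto copies the receiver into out, allocating a fresh map so the
// copy does not alias the original.
func (in *spec) DeepCopyInto(out *spec) {
	*out = *in
	if in.OpaquePorts != nil {
		m := make(map[uint32]struct{}, len(in.OpaquePorts))
		for k, v := range in.OpaquePorts {
			m[k] = v
		}
		out.OpaquePorts = m
	}
}

func main() {}
```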

Signed-off-by: Alex Leong <alex@buoyant.io>
2021-01-22 14:34:49 -08:00
Kevin Leimkuhler 7c0843a823
Add opaque ports to destination service updates (#5294)
## Summary

This changes the destination service to start indicating whether a profile is an
opaque protocol or not.

Currently, profiles returned by the destination service are built by chaining
together updates from the Profile and Traffic Split watches.

With this change, we now also watch updates to Opaque Port annotations on pods
and namespaces; if an update occurs this is now included in building a profile
update and is sent to the client.

## Details

Watching updates to Profiles and Traffic Splits is straightforward: we watch
those resources, and if an update occurs on one associated with a service we
care about, the update is passed through.

For opaque ports this is a little different because they are set as an
annotation on pods or namespaces. To account for this, we watch the endpoints
that we care about.

### When host is a Pod IP

When getting the profile for a Pod IP, we check for the opaque ports annotation
on the pod and on the pod's namespace. If one is found, the profile is marked
as an opaque protocol when the requested port is listed in the annotation.

We do not subscribe for updates to this pod IP. The only update we really care
about is if the pod is deleted and this is already handled by the proxy.
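
A simplified sketch of that check (the annotation key `config.linkerd.io/opaque-ports` is the one linkerd uses for this feature, but the helper below is illustrative only and ignores details such as port ranges):

```go
package main

import (
	"strconv"
	"strings"
)

const opaquePortsAnnotation = "config.linkerd.io/opaque-ports"

// isOpaque reports whether port is listed in the pod's opaque-ports
// annotation, falling back to the namespace's annotation when the pod
// has none.
func isOpaque(podAnnotations, nsAnnotations map[string]string, port uint32) bool {
	raw, ok := podAnnotations[opaquePortsAnnotation]
	if !ok {
		raw, ok = nsAnnotations[opaquePortsAnnotation]
	}
	if !ok {
		return false
	}
	for _, p := range strings.Split(raw, ",") {
		if n, err := strconv.ParseUint(strings.TrimSpace(p), 10, 32); err == nil && uint32(n) == port {
			return true
		}
	}
	return false
}

func main() {}
```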

### When host is a Service

When getting the profile for a Service, we check whether the requested port
appears in the opaque ports annotation on any of the service's pods.

Unlike the Pod IP case, here we do subscribe for updates to the endpoints of
the service, since the endpoints backing a service can be added and removed.
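
A companion sketch for the Service case, reusing the `isOpaque` helper from the previous sketch (again illustrative only): the requested port is treated as opaque if any pod backing the service lists it.

```go
// isOpaqueForService checks the requested port against the opaque-ports
// annotations of every pod currently backing the service.
func isOpaqueForService(podAnnotationSets []map[string]string, nsAnnotations map[string]string, port uint32) bool {
	for _, podAnnotations := range podAnnotationSets {
		if isOpaque(podAnnotations, nsAnnotations, port) {
			return true
		}
	}
	return false
}
```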

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
2020-12-18 12:38:59 -05:00
Alex Leong d8eebee4f7
Upgrade to client-go 0.17.4 and smi-sdk-go 0.3.0 (#4221)
Here we upgrade our dependencies on client-go to 0.17.4 and smi-sdk-go to 0.3.0.  Since smi-sdk-go uses client-go 0.17.4, these upgrades must be performed simultaneously.

This also requires simultaneously upgrading our dependency on linkerd/stern to a SHA which also uses client-go 0.17.4.  This keeps all of our transitive dependencies synchronized on one version of client-go.

This ALSO requires updating our codegen scripts to use the 0.17.4 version of code-generator and running it to generate 0.17.4 compatible generated code.  I took this opportunity to update our code generation script to properly use the version of code-generator from `go.mod` rather than a hardcoded SHA.
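
As a hypothetical illustration of that idea (the actual change lives in a shell script), the version can be resolved from `go.mod` via the Go tooling rather than pinned to a SHA:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the module system which version of code-generator go.mod resolves to.
	out, err := exec.Command("go", "list", "-m", "-f", "{{.Version}}", "k8s.io/code-generator").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("k8s.io/code-generator", strings.TrimSpace(string(out)))
}
```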

Signed-off-by: Alex Leong <alex@buoyant.io>
2020-04-01 10:07:23 -07:00
Alex Leong d6ef9ea460
Update ServiceProfile CRD to version v1alpha2 and remove validation (#3078)
The openAPIV3Schema validation in the ServiceProfiles CRD is very limited in what it can validate and is obviated by more sophisticated validation done by the validating admission controller.  Therefore, we would like to remove the openAPIV3Schema validation to reduce the size and complexity of the CRD object.

To do so, we must also bump the version of the ServiceProfile custom resource from v1alpha1 to v1alpha2.  This ensures that when the controller is upgraded, it will attempt to watch the v1alpha2 resource.  If it cannot (because, for example, the controller pod started before the ServiceProfile CRD was updated and therefore the v1alpha2 version does not exist) then it will go into a crash loop backoff until it can.  This essentially means that the controller will wait for the CRD to be upgraded to include v1alpha2 before it will start.  

Bumping the version is necessary because otherwise the controller could start before the CRD is updated (i.e. before the validation is removed).  In that case, when the CRD is edited, the controller would lose its list watch on ServiceProfiles and stop getting updates.

Signed-off-by: Alex Leong <alex@buoyant.io>
2019-07-23 11:46:31 -07:00