A distributed claim allows the OIDC provider to delegate a claim to a
separate URL. Distributed claims have the form shown below and are
defined in OpenID Connect Core 1.0, section 5.6.2.
See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims
Example claim:
```
{
  ... (other normal claims) ...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    }
  }
}
```
An example response to a follow-up request to https://www.example.com is a
JWT-encoded claim token:
```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```
Apart from the indirection, a distributed claim behaves exactly the same
as a standard claim. For Kubernetes, this means that the claim token must
be verified using the same approach as the original OIDC token, which
requires the presence of the "iss", "aud", and "exp" claims in addition
to "groups".
All existing OIDC options (e.g., the groups prefix) apply.
Any claim can be made distributed, although the "groups" claim is
the primary use case.
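As a rough illustration of the flow described above, here is a minimal
sketch of resolving and verifying a distributed "groups" claim. The
function name resolveDistributedGroups is hypothetical, and the use of a
github.com/coreos/go-oidc verifier is an assumption for illustration; the
real authenticator code differs.
```go
package oidcclaims

import (
	"context"
	"fmt"
	"io/ioutil"
	"net/http"

	oidc "github.com/coreos/go-oidc"
)

// resolveDistributedGroups fetches and verifies a distributed "groups" claim.
// endpoint and accessToken come from the "_claim_sources" entry of the primary
// ID token; verifier is configured with the same issuer and client ID used to
// verify the primary token. (Sketch only; not the actual authenticator code.)
func resolveDistributedGroups(ctx context.Context, endpoint, accessToken string, verifier *oidc.IDTokenVerifier) ([]string, error) {
	req, err := http.NewRequest("GET", endpoint, nil)
	if err != nil {
		return nil, err
	}
	// The access token from the claim source is presented as a Bearer token.
	req.Header.Set("Authorization", "Bearer "+accessToken)

	resp, err := http.DefaultClient.Do(req.WithContext(ctx))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Per the OIDC spec, the endpoint returns a JWT-encoded claim token.
	raw, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// Verify "iss", "aud", "exp", and the signature exactly as for the
	// original ID token.
	idToken, err := verifier.Verify(ctx, string(raw))
	if err != nil {
		return nil, fmt.Errorf("distributed claim verification failed: %v", err)
	}

	var claims struct {
		Groups []string `json:"groups"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return nil, err
	}
	return claims.Groups, nil
}
```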
The "groups" claim is allowed to be a single string due to
https://github.com/kubernetes/kubernetes/issues/33290, even though
OIDC defines the "groups" claim to be an array of strings. So the
following will be parsed correctly:
```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```
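A minimal sketch of how such a string-or-array "groups" value can be
handled with a custom JSON unmarshaller; the type name stringOrArray is
hypothetical and does not match the real implementation.
```go
package oidcclaims

import "encoding/json"

// stringOrArray accepts either a JSON string or a JSON array of strings,
// so that "groups": "team1" and "groups": ["team1", "team2"] both parse.
type stringOrArray []string

func (s *stringOrArray) UnmarshalJSON(data []byte) error {
	// Try the spec-compliant array form first.
	var arr []string
	if err := json.Unmarshal(data, &arr); err == nil {
		*s = arr
		return nil
	}
	// Fall back to a single string, per kubernetes/kubernetes#33290.
	var single string
	if err := json.Unmarshal(data, &single); err != nil {
		return err
	}
	*s = []string{single}
	return nil
}
```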
Distributed claim endpoints are expected to return a JWT, per the OIDC spec.
If both a standard and a distributed claim with the same name exist, the
standard claim wins; the spec seems undecided about the correct approach
here.
Distributed claims are resolved serially. This could be parallelized for
performance if needed.
Aggregated claims are silently skipped. Support could be added if needed.
Kubernetes-commit: dfb527843ca1720ad64383fa5d6baea4113daa3e
experimental-keystone-url and experimental-keystone-ca-file were always
experimental, so we don't need a deprecation period.
KeystoneAuthenticator lived on the server side and required a user ID and
password to be passed in, which it used to authenticate with Keystone. We
now have authentication and authorization webhooks that can be used
instead. There is an external repo with a webhook for Keystone that works
fine along with the kubectl auth provider that was added in:
a0cebcb559c5c0ab8a2e50b1ee11cc62f9ebb3a8
So we don't need this older-style, hard-coded, experimental code
anymore.
Kubernetes-commit: 18590378c4491eacdea5cd05f98c92fe84020263
This change does three things:
1. use auditinternal for the unit tests in the filter stage
2. add a separate unit test for the Audit-ID HTTP header (a sketch of such a test is shown below)
3. add a unit test for the audit log backend
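A minimal sketch of the kind of test meant in item 2, using
net/http/httptest against a hypothetical withAuditID middleware; this is
not the actual filter or test code from this change.
```go
package auditsketch

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// withAuditID is a hypothetical stand-in for the audit filter: it attaches an
// Audit-ID header to every response so clients can correlate requests with
// audit log entries.
func withAuditID(handler http.Handler, newID func() string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Audit-ID", newID())
		handler.ServeHTTP(w, req)
	})
}

func TestAuditIDHeader(t *testing.T) {
	const wantID = "test-audit-id"
	inner := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	handler := withAuditID(inner, func() string { return wantID })

	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, httptest.NewRequest("GET", "/api/v1/namespaces", nil))

	if got := rec.Header().Get("Audit-ID"); got != wantID {
		t.Errorf("Audit-ID header = %q, want %q", got, wantID)
	}
}
```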
Kubernetes-commit: c030026b544da2dd7ef7201019bdc0ac255c2d23
Add the following flags to control the prefixing of usernames and
groups authenticated using OpenID Connect tokens.
--oidc-username-prefix
--oidc-groups-prefix
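For example (illustrative values, not defaults), with
--oidc-username-prefix=oidc: and --oidc-groups-prefix=oidc:, a token whose
username claim resolves to "jane" and whose groups claim contains "team1"
would authenticate as user "oidc:jane" in group "oidc:team1".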
Kubernetes-commit: 1f8ee7fe13490a8e8e0e7801492770caca9f9b5c
I found some dead code in the audit webhook backend.
This change does some cleanup for: 2bbe72d4e0
Kubernetes-commit: 7b5c7bb711e7f15a1bf216a7a51fd40148110fba
WebhookAuthorizer's Authorize should send *all* of the information
present in the user.Info data structure. Currently we are not sending
the UID.
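A minimal sketch of the intended behavior, assuming the webhook authorizer
builds a SubjectAccessReviewSpec from user.Info; the helper name
subjectAccessReviewSpecFor and the use of the authorization.k8s.io/v1 types
are illustrative assumptions, not the actual webhook code.
```go
package webhooksketch

import (
	authorizationv1 "k8s.io/api/authorization/v1"
	"k8s.io/apiserver/pkg/authentication/user"
)

// subjectAccessReviewSpecFor copies *all* of the user.Info fields, including
// the UID, into the SubjectAccessReviewSpec sent to the webhook.
func subjectAccessReviewSpecFor(u user.Info) authorizationv1.SubjectAccessReviewSpec {
	extra := map[string]authorizationv1.ExtraValue{}
	for k, v := range u.GetExtra() {
		extra[k] = authorizationv1.ExtraValue(v)
	}
	return authorizationv1.SubjectAccessReviewSpec{
		User:   u.GetName(),
		Groups: u.GetGroups(),
		UID:    u.GetUID(), // previously omitted
		Extra:  extra,
	}
}
```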
Kubernetes-commit: 9a761b16c1558106800222dbc52f6ab03c40c64c
e2e and integration tests have been switched over to the tokenfile
authenticator instead.
```release-note
The --insecure-allow-any-token flag has been removed from kube-apiserver. Users of the flag should use impersonation headers instead for debugging.
```
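For example (names are illustrative), a caller granted impersonation
permissions can debug as another identity with
`kubectl get pods --as=jane --as-group=team1`, which sets the
Impersonate-User and Impersonate-Group headers on the request.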
Kubernetes-commit: e2f2ab67f29d3e859e0b3e6668d8d770d93132fc
- port direct calls to deepcopy funcs
- apimachinery: fix types in unstructured converter test
- federation: fix deepcopy registration
Kubernetes-commit: 2bbe72d4e09f7c95e1ad851187d4733a54644fbe