- as soon as a request is received by the apiserver, determine the
timeout of the request and set a new request context with the deadline.
- the timeout filter that times out non-long-running requests should
use the request context instead of the fixed 60s wait used today.
- the admission and storage layers use the same request context with the
deadline specified.
We fall back to the default timeout enforced by the apiserver in two cases (a sketch of the filter follows this list):
- if the user has specified a timeout of 0s, which implies no timeout on the user's part.
- if the user has specified a timeout that exceeds the maximum deadline allowed by the apiserver.
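A minimal sketch of the filter idea in Go; the function name and the default/maximum values are illustrative, not the actual apiserver code:
```go
package filters

import (
	"context"
	"net/http"
	"time"
)

const (
	defaultRequestTimeout = 60 * time.Second // illustrative default
	maximumRequestTimeout = 10 * time.Minute // illustrative upper bound
)

// withRequestDeadline derives the per-request deadline from the ?timeout=
// query parameter and stores it in the request context, so the timeout
// filter, admission, and the storage layer all observe the same deadline
// instead of a fixed 60s wait.
func withRequestDeadline(handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		timeout := defaultRequestTimeout
		if v := req.URL.Query().Get("timeout"); v != "" {
			if d, err := time.ParseDuration(v); err == nil && d > 0 && d <= maximumRequestTimeout {
				timeout = d
			}
			// A timeout of 0s, or one above the maximum, falls back to the default.
		}
		ctx, cancel := context.WithTimeout(req.Context(), timeout)
		defer cancel()
		handler.ServeHTTP(w, req.WithContext(ctx))
	})
}
```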
Kubernetes-commit: e416c9e574c49fd0190c8cdac58322aa33a935cf
apiserver dedups and adds warning in CREATE/UPDATE/PATCH requests;
also handles duplication caused by mutating admission.
Kubernetes-commit: 8bcf34a203efa596ac3b65da9afd6b6c764e78a9
For CREATE and UPDATE requests, we check for duplication before the managedFields
update and after mutating admission; for PATCH requests, we check for
duplication after mutating admission.
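A minimal sketch of the warning side of this in Go; the function name and the way duplicates are detected are illustrative, only warning.AddWarning is the real apiserver helper:
```go
package fieldmanager

import (
	"context"
	"fmt"

	"k8s.io/apiserver/pkg/warning"
)

// warnOnDuplicates surfaces duplicated entries to the client as API warnings
// rather than rejecting the request. The caller runs it after mutating
// admission (and, for CREATE/UPDATE, before the managedFields update); how
// the duplicated paths are computed is out of scope for this sketch.
func warnOnDuplicates(ctx context.Context, duplicatedPaths []string) {
	for _, p := range duplicatedPaths {
		warning.AddWarning(ctx, "", fmt.Sprintf("duplicate entries detected at %s", p))
	}
}
```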
Kubernetes-commit: ffc54ed1d2cbf4396fcc498beeb6ad34ac3df69c
- as soon as a request is received by the apiserver, determine the
timeout of the request and set a new request context with the deadline.
- the timeout filter that times out non-long-running requests should
use the request context instead of the fixed 60s wait used today.
- the admission and storage layers use the same request context with the
deadline specified.
Kubernetes-commit: 83f869ee1350da1b65d508725749fb70d0f535f2
The PATCH verb is used when creating a namespace using server-side apply,
while the POST verb is used when creating a namespace using client-side
apply.
The difference in code path between the two ways of creating a namespace led to
an inconsistency when calling webhooks. When server-side apply is used,
the request sent to webhooks has the "namespace" field populated with
the name of the namespace being created. On the other hand, when using
client-side apply the "namespace" field is omitted.
This commit makes the behaviour consistent by populating the
"namespace" field when creating a namespace using the POST verb (i.e.
client-side apply).
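A minimal sketch of the defaulting idea in Go; the helper is hypothetical, not the actual handler code:
```go
package handlers

// namespaceForAdmission is an illustrative helper: when the object being
// created is itself a Namespace, admission webhooks should see the
// "namespace" attribute populated with the object's own name, whether the
// request arrived via POST or via server-side apply.
func namespaceForAdmission(kind, requestNamespace, objectName string) string {
	if kind == "Namespace" && requestNamespace == "" {
		return objectName
	}
	return requestNamespace
}
```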
Kubernetes-commit: 3cb510e33eecbdc37aad14f121396ccfbf5268cb
If a request tries to change managedFields, the response returns the
managedFields of the live object.
Kubernetes-commit: c522ee08a3d248ec1097e3673119ffa7a4e1ef7b
Previously, all sorts of errors, including a data race, were possible because deferredResponseWriter resets the writer and returns it to the pool.
An attempt to write to a nil writer leads to "invalid memory address or nil pointer dereference".
Sharing the same instance of deferredResponseWriter can lead to "index out of range [43] with length 30" and "recovered from err index > windowEnd" errors.
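A minimal sketch of the safe pooling pattern in Go; the type and function names are illustrative, not the actual deferredResponseWriter code:
```go
package responsewriters

import (
	"net/http"
	"sync"
)

// deferredWriter stands in for deferredResponseWriter.
type deferredWriter struct {
	w          http.ResponseWriter
	statusCode int
}

var writerPool = sync.Pool{New: func() interface{} { return &deferredWriter{} }}

// Each request must take its own instance from the pool; returning an
// instance to the pool while another goroutine still holds a reference is
// what produced the nil-pointer and index-out-of-range panics described above.
func getWriter(w http.ResponseWriter, statusCode int) *deferredWriter {
	dw := writerPool.Get().(*deferredWriter)
	dw.w = w
	dw.statusCode = statusCode
	return dw
}

func putWriter(dw *deferredWriter) {
	dw.w = nil // reset so a stale reference fails loudly instead of corrupting another response
	writerPool.Put(dw)
}
```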
Kubernetes-commit: e6f98311d00f083c1b980ed7434d2e9769fa921f
- Test that client-side apply users don't encounter a conflict with
server-side apply for objects that previously didn't track managedFields
- Test that we stop tracking managed fields with `managedFields: []`
- Test that we stop tracking managed fields when the feature is disabled
Kubernetes-commit: f2deb2417a6c542c54606ab17376b26ef1552b87
- Allow client-side to server-side apply upgrade.
Ensure that a user can change management of an object from client-side apply to
server-side apply without conflicts.
- Allow server-side apply to client-side downgrade.
For an object managed with client-side apply, a user may upgrade to
managing the object with server-side apply, then decide to downgrade.
We can support this downgrade by keeping the last-applied-configuration
annotation for client-side apply updated with server-side apply (see the sketch below).
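A minimal sketch in Go of keeping the annotation in sync; the helper name is illustrative, only the annotation key is the real kubectl one:
```go
package fieldmanager

import (
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const lastAppliedAnnotation = "kubectl.kubernetes.io/last-applied-configuration"

// setLastApplied keeps the client-side apply annotation in sync when the
// object is managed with server-side apply, so a later downgrade back to
// client-side apply starts from an up-to-date baseline.
func setLastApplied(obj metav1.Object, appliedConfig interface{}) error {
	b, err := json.Marshal(appliedConfig)
	if err != nil {
		return err
	}
	annotations := obj.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[lastAppliedAnnotation] = string(b)
	obj.SetAnnotations(annotations)
	return nil
}
```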
Kubernetes-commit: e4368eb67e363d3d03f81214a8929268d2fe88ff
Beta OS/arch labels have been deprecated since 1.14.
This change replaces these labels with the GA ones.
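For reference, the GA keys and a sketch of a selector that uses them (the constant names are illustrative):
```go
package nodelabels

// GA label keys that replace the beta ones deprecated since 1.14.
const (
	labelOS   = "kubernetes.io/os"   // was beta.kubernetes.io/os
	labelArch = "kubernetes.io/arch" // was beta.kubernetes.io/arch
)

// Example node selector using the GA keys.
var linuxAMD64Selector = map[string]string{
	labelOS:   "linux",
	labelArch: "amd64",
}
```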
Kubernetes-commit: bcd975aa6575ae37ec3be3481e44cd0dccd02337
I've also moved the deserialization of the object outside the benchmark
since we're not trying to benchmark the yaml parser.
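A sketch of the resulting benchmark shape in Go; the sample YAML and the DeepCopy stand-in for the measured operation are illustrative:
```go
package benchmarks

import (
	"testing"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

const sampleYAML = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample
data:
  key: value
`

// BenchmarkOnObject deserializes the YAML once, outside the timed loop, so
// the benchmark measures the operation under test rather than the YAML
// parser. DeepCopy stands in for the real apply/update work.
func BenchmarkOnObject(b *testing.B) {
	var obj unstructured.Unstructured
	if err := yaml.Unmarshal([]byte(sampleYAML), &obj.Object); err != nil {
		b.Fatal(err)
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = obj.DeepCopy()
	}
}
```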
Kubernetes-commit: a52776fbfb305374d87bb553739f712e055b2206
Lowers probability of managedField population on create/update to 0%
until serialization/normalization issues are resolved
Kubernetes-commit: ba23aa98f6574bd1f9781f0d3e61d0496f16fc53
This is super expensive and not needed at all since we don't have to
reparse the entire object. Remove all allocations but the first one.
Kubernetes-commit: 31c644a1e79c685b52683ed1e84964186a37f3ff
We don't have a lot of data on allocations and how much time it takes to
run apply or update on objects, so adding some benchmark will help us
investigate possible improvements.
Kubernetes-commit: 92cf3764f979e63317c8f483d8e841e0358599f4
The previous HTTP compression implementation functioned as a filter, which
required it to deal with a number of special cases that complicated the
implementation.
Instead, when we write an API object to a response, handle only that one
case. This will allow a more limited implementation that does not impact
other code flows.
Also, to prevent excessive CPU use on small objects, compression is
disabled on responses smaller than 128 KB in size.
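A minimal sketch of the idea in Go; the names and the exact threshold handling are illustrative, not the actual implementation:
```go
package responsewriters

import (
	"compress/gzip"
	"net/http"
	"strings"
)

// minGzipSize mirrors the idea of the threshold below which compressing a
// response costs more CPU than it saves; the exact value is illustrative.
const minGzipSize = 128 * 1024

// writeMaybeCompressed handles the single case of writing one serialized API
// object: compress only when the client accepts gzip and the payload is
// large enough to be worth it.
func writeMaybeCompressed(w http.ResponseWriter, req *http.Request, data []byte) error {
	if len(data) < minGzipSize || !strings.Contains(req.Header.Get("Accept-Encoding"), "gzip") {
		_, err := w.Write(data)
		return err
	}
	w.Header().Set("Content-Encoding", "gzip")
	gz := gzip.NewWriter(w)
	if _, err := gz.Write(data); err != nil {
		return err
	}
	return gz.Close()
}
```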
Kubernetes-commit: 4ed2b9875d0498b5c577095075bda341e96fcec2
Normal files should have permissions 644 by default,
and does not require the last bit to be
executable
Signed-off-by: Odin Ugedal <odin@ugedal.com>
Kubernetes-commit: 35cb87f9cf71776e99a970dfff751cd29ba7ebfb
Correctly ensure CRDs can be watched using protobuf when transformed to
PartialObjectMetadata. To do this we add a set of serializers allowed to
be used for "normal" requests (that return CRDs), while the serializers
supported by the infrastructure are broader and include protobuf. During
negotiation we check for transformation requests, and protobuf is
excluded from non-transform requests.
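For illustration, this is roughly how a client asks for the transformed representation; the helper is hypothetical, and the media-type parameters (`as`, `g`, `v`) are what negotiation inspects:
```go
package clientexample

import (
	"fmt"
	"net/http"
)

// newPartialMetadataRequest builds a GET whose Accept header requests the
// PartialObjectMetadata transformation. Protobuf is acceptable here because
// the transformed meta.k8s.io type has a protobuf encoding even when the
// underlying CRD does not. (The meta.k8s.io version is shown as v1; the beta
// version works the same way.)
func newPartialMetadataRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", fmt.Sprintf("%s, %s",
		"application/vnd.kubernetes.protobuf;as=PartialObjectMetadata;g=meta.k8s.io;v=v1",
		"application/json;as=PartialObjectMetadata;g=meta.k8s.io;v=v1"))
	return req, nil
}
```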
As part of the change, correct an error message when the server returns
a 406 but the client doesn't accept the format to avoid confusing users
who set impossible Accept rules for CRDs (the dynamic client doesn't
support Protobuf, so if the server responds with a protobuf status the
message from the server is lost and the generic error was confusing).
Kubernetes-commit: 89e752add07f443248f66e4798d160f2d7529a19
Retry deleteValidation on conflict.
Add a unit test verifying that deleteValidation is retried.
Add an e2e test verifying that the webhook can intercept configmap and custom
resource deletion, and that the existing object is sent via
admissionreview.OldObject.
Update the admission integration test to verify that the existing object
is passed to the deletion admission webhook as oldObject, both in the case of an
immediate deletion and in the case of an update-on-delete.
Kubernetes-commit: 7bb4a3bace048cb9cd93d0221a7bf7c4accbf6be
Errors on updates are bad because they usually come from controllers and
it's very hard to act on them. We also don't want to start
breaking Kubernetes clusters if something in a schema happens in a way we
didn't foresee (even though we've tried to be diligent and test as much
as possible, these things can still happen).
Log an identifiable error when they happen. Ideally people can look in
the logs to find these and report them, or providers can look for these
in logs and make sure they don't happen.
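A minimal sketch of that logging in Go; the function name and the prefix are illustrative:
```go
package fieldmanager

import "k8s.io/klog"

// logIgnoredUpdateError sketches the "log and ignore" behaviour: the error is
// written with a greppable prefix so operators and providers can find and
// report it, while the update itself is allowed to proceed.
func logIgnoredUpdateError(err error, operation string) {
	if err != nil {
		klog.Errorf("[SHOULD NOT HAPPEN] failed to update managedFields for %s: %v", operation, err)
	}
}
```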
Only conversions to internal types are going to be logged and ignored.
It means that we're still failing for:
- Version conversions. If we can't convert the object from one version
to another.
- Unions. If we can't normalize the union.
- Invalid ManagedFields sent in the object. If something has changed the
ManagedFields to an invalid value.
- Failure to serialize the manager information; this really shouldn't
happen.
- Encoding the ManagedFields.
Kubernetes-commit: 4e32d183d0257c9f6c7f8342d1f9aa7f28458f2f
Fix a typo from setting up PartialObjectMetadataList: it should be a slice
of `PartialObjectMetadata`, not a slice of `*PartialObjectMetadata`.
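A small illustration in Go (shown with the metav1 types as they exist today):
```go
package listexample

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// newPartialList compiles only because Items holds values: the field is
// []PartialObjectMetadata, not []*PartialObjectMetadata.
func newPartialList(items ...metav1.PartialObjectMetadata) *metav1.PartialObjectMetadataList {
	return &metav1.PartialObjectMetadataList{Items: items}
}
```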
Kubernetes-commit: f25efd12e63f1d7db5f29fe28831ad0126200c0b
Now that internal types are equivalent, allow the apiserver to serve
metav1 and metav1beta1 depending on the client. Test that in the
apiserver integration test and ensure we get the appropriate responses.
Register the metav1 type in the appropriate external locations.
Kubernetes-commit: 33a3e325f754d179b25558dee116fca1c67d353a
We have an existing helper function for this: runtime.SerializerInfoForMediaType()
This is common prep-work for encoding runtime.Objects into JSON/YAML for transmission over the wire or writing to ComponentConfigs.
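A minimal usage sketch in Go, assuming the client-go scheme's codec factory:
```go
package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	clientsetscheme "k8s.io/client-go/kubernetes/scheme"
)

// encodeTo looks up the serializer registered for a media type
// ("application/json" or "application/yaml") via
// runtime.SerializerInfoForMediaType() and encodes the object with it.
func encodeTo(obj runtime.Object, mediaType string) ([]byte, error) {
	info, ok := runtime.SerializerInfoForMediaType(clientsetscheme.Codecs.SupportedMediaTypes(), mediaType)
	if !ok {
		return nil, fmt.Errorf("no serializer registered for media type %q", mediaType)
	}
	return runtime.Encode(info.Serializer, obj)
}
```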
Kubernetes-commit: 47e52d2981dc2a5c5950042f50688cf24dd92eda
Clarifies that requesting no conversion is part of the codec factory, and
future refactors will make the codec factory less opinionated about conversion.
Kubernetes-commit: 7f9dfe58f4cbe1e1b9e80f52addff70bac87bed4
RequestScope is a large struct and causes stack growth when we pass
it by value into multiple stack levels. Avoid the allocations for
this read-only struct by passing a pointer.
Kubernetes-commit: 8fede0b18a81a6fb1acc1a48857f482857c25286
A self link should only require one allocation, and we should skip
url.PathEscape() except when the path actually needs it.
Add a fuzz test to build random strings and verify them against
the optimized implementation. Add a new BenchmarkWatchHTTP_UTF8 that
covers when we have unicode names in the self link.
```
> before
BenchmarkGet-12 10000 118863 ns/op 17482 B/op 130 allocs/op
BenchmarkWatchHTTP-12 30000 38346 ns/op 1893 B/op 29 allocs/op
> after
BenchmarkGet-12 10000 116218 ns/op 17456 B/op 130 allocs/op
BenchmarkWatchHTTP-12 50000 35988 ns/op 1571 B/op 26 allocs/op
BenchmarkWatchHTTP_UTF8-12 50000 41467 ns/op 1928 B/op 28 allocs/op
```
Saves 3 allocations in the fast path and 1 in the slow path (the
slow path has to build the buffer and then call url.EscapedPath
which always allocates).
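A minimal sketch of the fast-path check in Go; the helper name and the unreserved-byte set shown are illustrative:
```go
package handlers

import "net/url"

// appendSegment sketches the fast path: most names are plain unreserved
// ASCII and can be appended to the self link verbatim; url.PathEscape (which
// always allocates) is only called when a byte actually requires escaping.
func appendSegment(selfLink, segment string) string {
	for i := 0; i < len(segment); i++ {
		c := segment[i]
		if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c >= '0' && c <= '9' ||
			c == '-' || c == '.' || c == '_' || c == '~' {
			continue
		}
		return selfLink + "/" + url.PathEscape(segment)
	}
	return selfLink + "/" + segment
}
```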
Kubernetes-commit: 389a8436b52db4936b56e08f07984da362c91f6b
There was no reason to have two types and this avoids ~10% of allocations
on the GET code path.
```
BenchmarkGet-12 100000 109045 ns/op 17608 B/op 146 allocs/op
BenchmarkGet-12 100000 108850 ns/op 15942 B/op 132 allocs/op
```
Kubernetes-commit: 0489d0b1cf139253b82f73b072578073bc5616d6
IsListType was causing ~100 allocations for a non list object. It is
used in a wide range of code, so perform a more targeted check.
The use of `%#v` in a hot return path for `fmt.Errorf()` was the main
culprit.
Replace `%#v` with a typed error and create a cache of types that are
lists with a bounded size (probably not necessary, but safer).
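A minimal sketch of the typed-error replacement in Go; the type name is illustrative:
```go
package listcheck

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

// errNotAList sketches the typed error that replaces
// fmt.Errorf("... %#v", obj): rendering the whole object with %#v on the hot
// return path was the main source of the extra allocations.
type errNotAList struct {
	obj runtime.Object
}

func (e errNotAList) Error() string {
	// %T needs only the dynamic type name, so the cost is small and is paid
	// only if the error message is actually rendered.
	return fmt.Sprintf("%T is not a list", e.obj)
}
```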
```
BenchmarkGet-12 100000 119635 ns/op 20110 B/op 206 allocs/op
BenchmarkWatchHTTP-12 100000 65761 ns/op 7296 B/op 139 allocs/op
BenchmarkGet-12 100000 109085 ns/op 17831 B/op 152 allocs/op
BenchmarkWatchHTTP-12 200000 33966 ns/op 1913 B/op 30 allocs/op
```
Kubernetes-commit: 58fb665646aa4c1b63f1322a50e3af7a60e39886
Clean up the code paths that lead to objects being transformed and output with negotiation.
Remove some duplicate code that was not consistent. Now, watch will respond correctly to
Table and PartialObjectMetadata requests. Add unit and integration tests.
When transforming responses to Tables, only the first watch event for a given type will
include the columns. Columns will not change unless the watch is restarted.
Add a volume attachment printer and tighten up table validation error cases.
Disable protobuf for Table conversion because Tables don't have a protobuf representation, since they
use `interface{}`.
Kubernetes-commit: 3230a0b4fd14a6166f8362d4732e199e8779c426
What is the problem being solved?
https://github.com/kubernetes/kubernetes/pull/75135
Currently version compatibility is not being checked in server-side apply between the patch object and the live object. This causes a merge that is bound to fail to be attempted, and the apiserver returns a 500 error. The request should fail if the apiVersion provided in the object is different from the apiVersion in the URL, but it should fail before trying to merge, and with a 4xx error, probably a bad request error.
Why is this the best approach?
The approach of serializing the patch byte array and then checking for version equality with the already serialized live object is the simplest and most straightforward solution.
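A minimal sketch of the proposed check in Go; the helper name is illustrative:
```go
package handlers

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// checkApplyVersion compares the apiVersion declared in the apply patch with
// the GroupVersion of the request URL before attempting the merge, turning a
// mismatch into a 4xx instead of a 500.
func checkApplyVersion(patchGV, requestGV schema.GroupVersion) error {
	if patchGV != requestGV {
		return errors.NewBadRequest(fmt.Sprintf(
			"the apiVersion in the patch (%s) does not match the apiVersion in the request URL (%s)",
			patchGV, requestGV))
	}
	return nil
}
```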
Kubernetes-commit: d5bd17cda0c134e5ef5c03c3eac79a9ce4e18003
And add a corresponding flag in kubectl (for apply), even though the
value is defaulted to "kubectl" in kubectl.
The flag is required for the Apply patch type, and optional for other PATCH,
CREATE and UPDATE requests (in which case we fall back on the user-agent).
Kubernetes-commit: eb904d8fa89da491f400614f99458ed3f0d529fb
DefaultEndpointRestrictions is only used in the module,
so this renames it to defaultEndpointRestrictions.
Kubernetes-commit: 302ec9859113f322a32ed03673865b32ca5a130a
refactor fieldstrip and update tests
add checks and remove empty fields
shorten test and check for nil manager
fix gofmt
panic on nil manager
Kubernetes-commit: 9082cac48240ebc316015dabb466e5b24a113dc1
Move to github.com/munnerz/goautoneg as bitbucket is flaky!
Change-Id: Iaa6e964ef0d6f308eea59bcc6f365ecd7dbf0784
Kubernetes-commit: 16fd72d6c91ba466a0e955a1d59a6c8d9e8791bc
Prepare to support watch by cleaning up the conversion method and
splitting out each transition into a smaller method.
Kubernetes-commit: 63c49ba55a8da571522a9615dfa64471c5e9041e
Make setLink and setListLink the same, and make them happen in transformResponseObject.
Make those methods also responsible for ensuring an empty list. Then move outputMediaType
negotiation before all other calls in the specific methods, to ensure we fail fast.
Refactoring in preparation to support type conversion on watch.
Kubernetes-commit: 56a25d8c5f04ec5401b99c8eb29e980b1e8123d3
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has explicit InitFlags(), so we add them as necessary (see the sketch after this list)
- we update the other repositories that we vendor that made a similar
change from glog to klog
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicit InitFlags in their init() methods
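A minimal sketch of the explicit wiring in Go:
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

// klog, unlike glog, does not register its flags on flag.CommandLine by
// itself, so we call InitFlags explicitly before parsing.
func main() {
	klog.InitFlags(nil) // nil registers klog flags on flag.CommandLine
	flag.Parse()

	klog.Info("logging through klog instead of glog")
	klog.Flush()
}
```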
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Kubernetes-commit: 954996e231074dc7429f7be1256a579bedd8344c
Added tracing for use cases where etcd is not the cause of long-running
requests.
Fixed spelling.
Factored in Wojtek-t feedback.
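A minimal sketch of the tracing pattern in Go, using k8s.io/utils/trace; the step names and handler shape are illustrative:
```go
package handlers

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

// handleGet records steps inside the handler so that slow requests whose
// latency is not spent in etcd still show where the time went.
func handleGet(doAuthz, doDecode, doStore func() error) error {
	trace := utiltrace.New("Get /example/resource")
	defer trace.LogIfLong(500 * time.Millisecond)

	if err := doAuthz(); err != nil {
		return err
	}
	trace.Step("About to check admission control")

	if err := doDecode(); err != nil {
		return err
	}
	trace.Step("About to store object in database")

	return doStore()
}
```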
Kubernetes-commit: 99ebe8747176a10c718d5e3276c64d8c507bfb3b
- Added metav1.Status() that enforces a '406 Not Acceptable' response if
protobuf serialization is not fully supported for the API resource type.
- JSON and YAML serialization are supposed to be more completely baked
in, so serialization involving those, and general errors while serializing
protobuf, will return '500 Internal Server Error'.
- If a serialization failure occurs and the original HTTP status code is
an error, use the original status code; otherwise use the serialization-failure
status code.
- Write encoded API responses to intermediate buffer
- Use apimachinery/runtime::Encode() instead of
apimachinery/runtime/protocol::Encode() in
apiserver/endpoints/handlers/responsewriters/writers::SerializeObject()
- This allows for intended encoder error handling to fully work, facilitated by
apiserver/endpoints/handlers/responsewriters/status::ErrorToAPIResponse() before officially
writing to the http.ResponseWriter
- The specific part that wasn't working via ErrorToAPIResponse() was setting the
HTTP status code. A direct call to
http.ResponseWriter::WriteHeader(statusCode) was made in
SerializeObject() with the original response status code, before
performing the encode. Once this
method is called, the status code cannot be updated at a later
time with, say, an error status code due to an encode failure.
- Updated relevant apiserver unit test to reflect the new behavior
(TestWriteJSONDecodeError())
- Add build deps from make update for protobuf serializer
50342: Code review suggestion impl
- Ensure that http.ResponseWriter::Header().Set() is called before http.ResponseWriter::WriteHeader()
- This will avert a potential issue where changing the response media type to text/plain wouldn't work.
- We want to respond with plain text if serialization of the original response fails and serialization of the resultant error response also fails.
50342: wrapper for http.ResponseWriter
- Prevent potential performance regression caused by modifying encode to use a buffer instead of streaming
- This is achieved by creating a wrapper type for http.ResponseWriter that will use WriteHeader(statusCode) on the first
call to Write(). Thus, on encode success, Write() will write the original statusCode. On encode failure, we pass control
to responsewriters::errSerializationFatal(), which will process the error to obtain, potentially, a new status code, depending
on whether or not the original status code was itself an error (a sketch of such a wrapper follows).
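A minimal sketch of such a wrapper in Go; the type and method names are illustrative, not the actual httpResponseWriterWithInit code:
```go
package responsewriters

import "net/http"

// deferredStatusWriter defers WriteHeader until the first Write, so a late
// encode failure can still change the status code before anything is sent to
// the client.
type deferredStatusWriter struct {
	http.ResponseWriter
	statusCode int
	wrote      bool
}

func (w *deferredStatusWriter) Write(p []byte) (int, error) {
	if !w.wrote {
		w.ResponseWriter.WriteHeader(w.statusCode)
		w.wrote = true
	}
	return w.ResponseWriter.Write(p)
}

// SetStatus may be called any time before the first Write, e.g. to swap in
// an error status after encoding fails.
func (w *deferredStatusWriter) SetStatus(code int) {
	if !w.wrote {
		w.statusCode = code
	}
}
```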
50342: code review suggestions
- Remove historical note from unit test comment
- Don't export httpResponseWriterWithInit type (for now)
Kubernetes-commit: bcdf3bb64333ce12f15b1beebef48f554d69027f
This ensures that request cancellation will be propagated properly to
the client used to create the stream. Without this fix, the apiserver
and the kubelet may leak resources (e.g., goroutine, inotify watches).
One such example: if a user runs `kubectl logs -f <container that
doesn't produce new logs>` and then enters ctrl-c, both the kubelet and the
apiserver will hold on to the connection and its resources indefinitely.
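A minimal sketch of the propagation idea in Go; the helper is hypothetical:
```go
package proxy

import "net/http"

// newUpstreamRequest derives the upstream (kubelet) request from the incoming
// request's context, so cancelling `kubectl logs -f ...` cancels the stream
// end to end instead of leaking goroutines and inotify watches.
func newUpstreamRequest(incoming *http.Request, upstreamURL string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, upstreamURL, nil)
	if err != nil {
		return nil, err
	}
	return req.WithContext(incoming.Context()), nil
}
```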
Kubernetes-commit: 31d1607a514b62ef46452e402f5438d827314b98
Make apiserver pass connectRequest.Options directly to the admission layer. All
the information in rest.ConnectRequest is present in admission attributes.
Kubernetes-commit: 355691d310803ea3a0cd8ff284a39ead38857602
This makes the error consistent with the timeout filter and also helps
the user understand that they requested a specific timeout.
Kubernetes-commit: 8a2d037bc51c97758c0a68f2726f104953846cd5
remove create-on-update logic for quota controller
review: add more error checks
remove unused args
revert changes in patch.go
use hasUID to determine whether it's a create-on-update
Kubernetes-commit: ccb1ec7a3695082326fe60ec06890f91004dc043
Got
```
E0628 00:23:07.106285 1 watch.go:274] unable to encode watch object: expected pointer, but got invalid kind
```
on a production system and had no way to debug what type was being sent.
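A minimal sketch in Go of making the error actionable by including the concrete type; the helper name is illustrative:
```go
package watch

import "fmt"

// encodeError includes the concrete Go type of the object, so a log line
// like the one above tells us what was actually being sent.
func encodeError(obj interface{}, err error) error {
	return fmt.Errorf("unable to encode watch object %T: %v", obj, err)
}
```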
Kubernetes-commit: 307849baef076d8ee61a3b9649f9260a765f7ac0
builds on #62868
1. When the incoming patch specified a resourceVersion that failed as a precondition,
the patch handler would retry uselessly 5 times. This PR collapses onto GuaranteedUpdate,
which immediately stops retrying in that case.
2. When the incoming patch did not specify a resourceVersion, and persisting to etcd
contended with other etcd updates, the retry would try to detect patch conflicts with
deltas from the first 'current object' retrieved from etcd and fail with a conflict error
in that case. Given that the user did not provide any information about the starting version
they expected their patch to apply to, this does not make sense, and results in arbitrary
conflict errors, depending on when the patch was submitted relative to other changes made
to the resource. This PR changes the patch application to be performed on the object retrieved
from etcd identically on every attempt.
fixes #58017
SMP is no longer computed for CRD objects
fixes #42644
No special state is retained on the first attempt, so the patch handler correctly handles
the cached storage optimistically trying with a cached object first
Kubernetes-commit: fbd6f3808480d27a83643e82a11c217601b76cbc
This is the combination of a series of changes which individually don't
make any behavioral changes. The original commits are preserved in my
own fork in the refactor-patch-complete branch, as when squashed this is
impossible to review.
This turned a big function with lots of parameters and closures into an
object with multiple functions, fewer closures and more well documented
state transitions.
Kubernetes-commit: 349a99b80e7e6c0c92218c814ae0858fd71609fc
Since we have a custom handler for apiextensions-apiserver,
we need to record the metrics here.
Kubernetes-commit: 74cd45fb21b349dd037e3bfd844459ca5834cca1