efficiently without checking conflicts, and wire up the CRD discovery
controller to serve the OpenAPI spec.
Kubernetes-commit: 3222a7033cf9128b76c0677887f4e383821d0475
- Move from the old github.com/golang/glog to k8s.io/klog
- klog requires an explicit InitFlags() call to register its flags, so we add those calls as necessary
- we update the other vendored repositories that made a similar
change from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() functions, as sketched below
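A minimal sketch of what that explicit registration looks like (the package name is illustrative; passing nil to klog.InitFlags registers the flags on flag.CommandLine):

```go
package logtest

import (
	"flag"

	"k8s.io/klog"
)

func init() {
	// klog no longer registers its flags on the global flag set by
	// itself; register them explicitly so -v, -logtostderr, etc.
	// keep working in tests and binaries.
	klog.InitFlags(nil)
	_ = flag.Set("logtostderr", "true")
}
```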
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Kubernetes-commit: 954996e231074dc7429f7be1256a579bedd8344c
The striped cache used by the token cache is slightly more sophisticated;
however, the simple cache provides essentially the same behavior. I used
the striped cache rather than the simple cache because:
* It has been used without issue as the primary token cache.
* It performs better under load.
* It is already exposed in the public API of the token cache package.
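For context, a striped cache shards keys across several independently locked caches so that concurrent lookups rarely contend on the same mutex. A rough sketch of the idea (illustrative only, not the actual token cache package API):

```go
package stripedcache

import (
	"hash/fnv"
	"sync"
)

// stripedCache spreads keys across several independently locked maps
// so concurrent lookups rarely block on one mutex.
type stripedCache struct {
	stripes []*stripe
}

type stripe struct {
	mu    sync.RWMutex
	items map[string]interface{}
}

func newStripedCache(n int) *stripedCache {
	c := &stripedCache{stripes: make([]*stripe, n)}
	for i := range c.stripes {
		c.stripes[i] = &stripe{items: map[string]interface{}{}}
	}
	return c
}

// stripeFor hashes the key to pick a stripe deterministically.
func (c *stripedCache) stripeFor(key string) *stripe {
	h := fnv.New32()
	h.Write([]byte(key))
	return c.stripes[h.Sum32()%uint32(len(c.stripes))]
}

func (c *stripedCache) get(key string) (interface{}, bool) {
	s := c.stripeFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.items[key]
	return v, ok
}

func (c *stripedCache) set(key string, v interface{}) {
	s := c.stripeFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = v
}
```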
Kubernetes-commit: 0ec4d6d396f237ccb3ae0e96922a90600befb83d
Added tracing for use cases where etcd is not the cause of long running
requests.
Fixed spelling.
Factored in Wojtek-t feedback.
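Purely as an illustration of the kind of trace points this refers to, using the utiltrace-style helper used elsewhere in the apiserver (the package path and call sites here are assumptions, not the actual change):

```go
package registry

import (
	"time"

	utiltrace "k8s.io/apiserver/pkg/util/trace"
)

// updateWithTrace records steps around work that happens outside etcd,
// and logs the trace only if the whole request ran long.
func updateWithTrace(doWork func(), writeToEtcd func()) {
	trace := utiltrace.New("Update")
	defer trace.LogIfLong(500 * time.Millisecond)

	doWork()
	trace.Step("About to write to etcd")

	writeToEtcd()
	trace.Step("Write to etcd finished")
}
```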
Kubernetes-commit: 99ebe8747176a10c718d5e3276c64d8c507bfb3b
Watch can return type "ERROR" and a metav1.Status object. We need to
handle that during wait, and make it easy to handle the status object.
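A hedged sketch of how a wait helper might surface that status object (function and package names are illustrative):

```go
package watchwait

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// handleEvent turns a watch "ERROR" event into a Go error so callers
// don't have to unpack the metav1.Status themselves.
func handleEvent(event watch.Event) error {
	switch event.Type {
	case watch.Error:
		// The object is usually a *metav1.Status describing the failure.
		if status, ok := event.Object.(*metav1.Status); ok {
			return &apierrors.StatusError{ErrStatus: *status}
		}
		return fmt.Errorf("watch error with unexpected object: %#v", event.Object)
	default:
		return nil
	}
}
```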
Kubernetes-commit: 5a8afa073f6b8cbb8b09f997f6db747c39dffb6e
Suppress common name verify warning log and roll up into returned error
remove glog test dependency
Kubernetes-commit: bb3124c48a4d276ed280175e5825ea9db022d699
kubernetes/kubernetes#67768 accidentally removed population of the ClientCA
in the delegating auth setup code. This restores it.
Kubernetes-commit: 65cea86e4413cb5899c3b89bda375bb326de5093
Like we do everywhere else, we use TransformFromStorage. The current
behavior is causing all service account tokens to be regenerated,
invalidating old service account tokens and unrecoverably breaking apps
that are using InClusterConfig or exported service account tokens.
If we are going to break stuff, let's just break the Lists so that
misconfiguration of the encryption config or checkpoint corruption is
obvious.
Kubernetes-commit: e7bda4431da05b55b4e8f66ed308d4ed90efd2df
Adding a blank line between the comment tag and the package name in
doc.go, so that comment tags such as '+k8s:deepcopy-gen=package' do not
show up in GoDoc.
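For example, a doc.go laid out like this keeps the tag out of the rendered package comment (package name illustrative):

```go
// +k8s:deepcopy-gen=package

// Package example holds the API types; because of the blank line above,
// GoDoc renders only this comment and not the generator tag.
package example
```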
Kubernetes-commit: 61117761cd4a1b2e6ad9ff2d7eb915f3d2739dc6
adding validation for componentconfig
adding validation to cmd kube-scheduler
Add support for ipv6 in IsValidSocketAddr function
updating copyright date in componentconfig/validation/validation.go
updating copyright date in componentconfig/validation/validation_test.go
adding validation for cli options
adding BUILD files
updating validate function to return []errors in cmd/kube-scheduler
ok, really returning []error this time
adding comments for exported componentconfig Validation functions
silly me, not checking structs along the way :'(
refactor to avoid else statement
moving policy nil check up one function
rejigging some deprecated cmd validations
stumbling my way around validation slowly but surely
updating according to review from @bsalamat
- not validating leader election config unless leader election is enabled
- leader election time values cannot be zero
- removing validation for KubeConfigFile
- removing validation for scheduler policy
leader elect options should be non-negative
adding test cases for renewDeadline and leaseDuration being zero
fixing logic in componentconfig validation 😅
removing KubeConfigFile reference from tests as it was removed in master
2ff9bd6699
removing bogus space after var assignment
adding more tests for componentconfig based on feedback
making updates to validation because types were moved on master
update bazel build
adding validation for staging/apimachinery
adding validation for staging/apiserver
adding fieldPaths for staging validations
moving staging validations out of componentconfig
updating test case scenario for staging/apimachinery
./hack/update-bazel.sh
moving kube-scheduler validations from componentconfig
./hack/update-bazel.sh
removing non-negative check for QPS
resourceLock required
adding HardPodAffinitySymmetricWeight 0-100 range to cmd flag help section
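A rough sketch of the shape these checks take, mirroring the rules listed above (types, names, and messages are illustrative, not the final scheduler code):

```go
package options

import (
	"fmt"
	"time"
)

// illustrative subset of the kube-scheduler leader election config
type leaderElectionConfig struct {
	LeaderElect   bool
	LeaseDuration time.Duration
	RenewDeadline time.Duration
	ResourceLock  string
}

// validate only checks leader election fields when leader election is
// enabled; durations must be greater than zero and a resource lock is
// required.
func (c leaderElectionConfig) validate() []error {
	var errs []error
	if !c.LeaderElect {
		return errs
	}
	if c.LeaseDuration <= 0 {
		errs = append(errs, fmt.Errorf("leaseDuration must be greater than zero"))
	}
	if c.RenewDeadline <= 0 {
		errs = append(errs, fmt.Errorf("renewDeadline must be greater than zero"))
	}
	if c.ResourceLock == "" {
		errs = append(errs, fmt.Errorf("resourceLock is required"))
	}
	return errs
}

// hard-pod-affinity-symmetric-weight must stay within [0, 100], as
// surfaced in the CLI flag help text.
func validateHardPodAffinityWeight(w int32) []error {
	if w < 0 || w > 100 {
		return []error{fmt.Errorf("hard-pod-affinity-symmetric-weight %d is not in range [0, 100]", w)}
	}
	return nil
}
```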
Kubernetes-commit: 0334a34e4af9b56ffa4d8fe17514c931c69db84b
This is the old behaviour, and we did not intend to change it by enabling
authn/z in general. As in the kube-apiserver, this sets the
"system:unsecured" user info.
Kubernetes-commit: 8aa0eefce8fbd801a38da46c8704f2d74996e5cd
pkg/kubectl/util/logs & staging/src/k8s.io/apiserver/pkg/util/logs
use `glog.Info(...)`, but this function is not made to be wrapped because
the underlying mechanism uses a fixed call trace depth to determine
where the log has been emitted.
This results in having `logs.go:49` in the logs, which points at the body
of the wrapper function and is thus useless.
Instead use `glog.InfoDepth(1, ...)`, which tells the underlying mechanism
to go back 1 more level in the call trace to determine where the log
has been emitted.
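Roughly, the wrapper pattern looks like this (the wrapper name is illustrative):

```go
package logs

import "github.com/golang/glog"

// Info wraps glog; InfoDepth(1, ...) makes glog walk one extra frame up
// the call stack, so the emitted file:line points at our caller instead
// of at this wrapper (i.e. no more fixed logs.go:49).
func Info(args ...interface{}) {
	glog.InfoDepth(1, args...)
}
```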
Signed-off-by: Sylvain Rabot <sylvain@abstraction.fr>
Kubernetes-commit: 7e7b01fa3103e272ca4acc5a4fac6a9119c2623d
For ObjectMeta pruning, we round trip through marshalling and
unmarshalling. If the ObjectMeta contained any strings with "" (or other
fields with empty values) _and_ the respective fields are omitempty,
those fields will be lost in the round trip process.
This makes ObjectMeta after the no-op write different from the one
before the write.
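As a generic illustration of the mechanism (not the actual ObjectMeta code path): with omitempty, an explicitly set empty value does not survive a marshal/unmarshal round trip.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type meta struct {
	// With omitempty, an explicitly set "" is dropped during Marshal,
	// so it cannot be restored by Unmarshal.
	ClusterName string `json:"clusterName,omitempty"`
}

func main() {
	in := meta{ClusterName: ""} // explicitly empty
	b, _ := json.Marshal(in)
	fmt.Println(string(b)) // {} - the field is gone

	var out meta
	_ = json.Unmarshal(b, &out)
	fmt.Printf("%+v\n", out) // {ClusterName:}
}
```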
Resource version is incremented every time data is written to etcd.
Writes to etcd short-circuit if the bytes being written are identical
to the bytes already present; because the round trip alters the object,
they no longer are, so this ends up incrementing the resourceVersion
even on no-op writes.
The zero values are set in BeforeCreate and BeforeUpdate. This commit
updates BeforeUpdate such that zero values are only set when the
object does not have a zero value for the respective field.
Kubernetes-commit: d691748aa64376e4b90ae2c01aca14d1be2b442c
- Added a metav1.Status response that enforces '406 Not Acceptable' if
protobuf serialization is not fully supported for the API resource type.
- JSON and YAML serialization are supposed to be more completely baked
in, so serialization involving those, and general errors with serializing
protobuf, will return '500 Internal Server Error'.
- If a serialization failure occurs and the original HTTP status code is
an error, use the original status code; otherwise use the serialization
failure status code.
- Write encoded API responses to an intermediate buffer
- Use apimachinery/runtime::Encode() instead of
apimachinery/runtime/protocol::Encode() in
apiserver/endpoints/handlers/responsewriters/writers::SerializeObject()
- This allows for intended encoder error handling to fully work, facilitated by
apiserver/endpoints/handlers/responsewriters/status::ErrorToAPIResponse() before officially
writing to the http.ResponseWriter
- The specific part that wasn't working via ErrorToAPIResponse() was
setting the HTTP status code. A direct call to
http.ResponseWriter::WriteHeader(statusCode) was made in
SerializeObject() with the original response status code, before
performing the encode. Once this method is called, the status code can
no longer be updated later with, say, an error status code caused by an
encode failure.
- Updated relevant apiserver unit test to reflect the new behavior
(TestWriteJSONDecodeError())
- Add build deps from make update for protobuf serializer
50342: Code review suggestion impl
- Ensure that http.ResponseWriter::Header().Set() is called before http.ResponseWriter::WriteHeader()
- This will avert a potential issue where changing the response media type to text/plain wouldn't work.
- We want to respond with plain text if serialization of the original response fails, and serialization of the resultant error response also fails.
50342: wrapper for http.ResponseWriter
- Prevent potential performance regression caused by modifying encode to use a buffer instead of streaming
- This is achieved by creating a wrapper type for http.ResponseWriter that will use WriteHeader(statusCode) on the first
call to Write(). Thus, on encode success, Write() will write the original statusCode. On encode failure, we pass control
onto responsewriters::errSerializationFatal(), which will process the error to obtain potentially a new status code, depending
on whether or not the original status code was itself an error.
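A sketch of that wrapper idea, using only the standard library (the real type in the change may differ in name and detail):

```go
package responsewriters

import "net/http"

// deferredWriter postpones WriteHeader until the first Write, so the
// status code can still be replaced if encoding fails before any bytes
// have been streamed to the client.
type deferredWriter struct {
	http.ResponseWriter
	statusCode int
	headerSent bool
}

func (w *deferredWriter) Write(p []byte) (int, error) {
	if !w.headerSent {
		w.ResponseWriter.WriteHeader(w.statusCode)
		w.headerSent = true
	}
	return w.ResponseWriter.Write(p)
}
```

On encode success the first Write flushes the original status code; on encode failure nothing has been written yet, so the error path is still free to choose a different one.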
50342: code review suggestions
- Remove historical note from unit test comment
- Don't export httpResponseWriterWithInit type (for now)
Kubernetes-commit: bcdf3bb64333ce12f15b1beebef48f554d69027f
This ensures that request cancellation will be propagated properly to
the client used to create the stream. Without this fix, the apiserver
and the kubelet may leak resources (e.g., goroutines, inotify watches).
One such example: if a user runs `kubectl logs -f <container that
doesn't produce new logs>` and then hits ctrl-c, both the kubelet and the
apiserver will hold on to the connection and its resources indefinitely.
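The underlying idea, sketched with the standard library (illustrative, not the actual apiserver/kubelet proxy code):

```go
package proxy

import (
	"io"
	"net/http"
)

// streamLogs forwards a log-follow stream from the kubelet to the client.
// Tying the upstream request to the downstream request's context means
// that when the client hits ctrl-c, req.Context() is cancelled, the
// upstream body is closed, and neither side leaks the connection.
func streamLogs(w http.ResponseWriter, req *http.Request, upstreamURL string) error {
	upstreamReq, err := http.NewRequest(http.MethodGet, upstreamURL, nil)
	if err != nil {
		return err
	}
	upstreamReq = upstreamReq.WithContext(req.Context())

	resp, err := http.DefaultClient.Do(upstreamReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	_, err = io.Copy(w, resp.Body) // returns once either side goes away
	return err
}
```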
Kubernetes-commit: 31d1607a514b62ef46452e402f5438d827314b98
This commit prevents the extension API server from erroring out during bootstrap when the core
API server doesn't support certificate-based authentication for its clients, i.e. client-ca isn't
present in the extension-apiserver-authentication ConfigMap in kube-system.
This can happen in cluster setups where the core API server uses webhook token authentication.
Fixes: https://github.com/kubernetes/kubernetes/issues/65724
Kubernetes-commit: db828a44406efe09e2db91e6dc88d1292c9a29e1
Make apiserver pass connectRequest.Options directly to the admission layer. All
the information in rest.ConnectRequest is present in admission attributes.
Kubernetes-commit: 355691d310803ea3a0cd8ff284a39ead38857602
This makes the error consistent with the timeout filter and also helps
the user understand that they requested a specific timeout.
Kubernetes-commit: 8a2d037bc51c97758c0a68f2726f104953846cd5
There's code to automatically populate OpenAPI info based on existing
generic apiserver config, but it only fires if security definitions are
present. This doesn't make much sense, since this info is both required
and independent of security definitions, and there's no easy, generic
way to generate security definitions for an aggregated API server.
Kubernetes-commit: ef73bb684bcc4402f66160f254193d2690b80f11