Merge pull request #3151 from eduartua/issue-3064-grouping-by-sig-architecture
Grouping files in /devel by SIG - SIG Architecture
commit 3f8bf88a06
@@ -44,7 +44,7 @@ that does not contain a discriminator.
|---|---|
| non-inlined non-discriminated union | Yes |
| non-inlined discriminated union | Yes |
-| inlined union with [patchMergeKey](/contributors/devel/api-conventions.md#strategic-merge-patch) only | Yes |
+| inlined union with [patchMergeKey](/contributors/devel/sig-architecture/api-conventions.md#strategic-merge-patch) only | Yes |
| other inlined union | No |

For the inlined union with patchMergeKey, we move the tag to the parent struct's instead of
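For context on the table above: a patchMergeKey is declared as a struct tag on the parent field of a Kubernetes API type. A minimal sketch, modeled on the core v1 `Container.Ports` field (illustrative only):

```go
package api

// Sketch of how a patchMergeKey is declared on a Kubernetes API type,
// modeled on the core v1 Container.Ports field (illustrative only).
type Container struct {
	// patchStrategy/patchMergeKey tell strategic merge patch to merge the
	// list element-wise, matching elements by their containerPort value,
	// instead of replacing the whole list.
	Ports []ContainerPort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"containerPort"`
}

type ContainerPort struct {
	ContainerPort int32  `json:"containerPort"`
	Protocol      string `json:"protocol,omitempty"`
}
```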
@@ -80,7 +80,7 @@ There are two configurations in which it makes sense to run `kube-aggregator`.
`api.mycompany.com/v1/grobinators` from different apiservers. This restriction
allows us to limit the scope of `kube-aggregator` to a manageable level.
* Follow API conventions: APIs exposed by every API server should adhere to [kubernetes API
-  conventions](../../devel/api-conventions.md).
+  conventions](/contributors/devel/sig-architecture/api-conventions.md).
* Support discovery API: Each API server should support the kubernetes discovery API
  (list the supported groupVersions at `/apis` and list the supported resources
  at `/apis/<groupVersion>/`)
@@ -148,7 +148,7 @@ in *CRD v1* (apiextensions.k8s.io/v1), there will be only version list with no t

#### Alternative approaches considered

-First, a defaulting approach was considered, in which per-version fields would default to the top-level fields; but that is a backward-incompatible change. Quoting from the API [guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#backward-compatibility-gotchas):
+First, a defaulting approach was considered, in which per-version fields would default to the top-level fields; but that is a backward-incompatible change. Quoting from the API [guidelines](/contributors/devel/sig-architecture/api_changes.md#backward-compatibility-gotchas):

> A single feature/property cannot be represented using multiple spec fields in the same API version simultaneously
@@ -31,7 +31,7 @@ The `Version` object currently only specifies:
## Expectations about third party objects

Every object that is added to a third-party Kubernetes object store is expected
-to contain Kubernetes compatible [object metadata](../devel/api-conventions.md#metadata).
+to contain Kubernetes compatible [object metadata](/contributors/devel/sig-architecture/api-conventions.md#metadata).
This requirement enables the Kubernetes API server to provide the following
features:
* Filtering lists of objects via label queries.
@@ -6,7 +6,7 @@ Most users will deploy a combination of applications they build themselves, also

In the case of the latter, users sometimes have the choice of using hosted SaaS products that are entirely managed by the service provider and are therefore opaque, also known as **_blackbox_** *services*. However, they often run open-source components themselves, and must configure, deploy, scale, secure, monitor, update, and otherwise manage the lifecycles of these **_whitebox_** *COTS applications*.

-This document proposes a unified method of managing both bespoke and off-the-shelf applications declaratively using the same tools and application operator workflow, while leveraging developer-friendly CLIs and UIs, streamlining common tasks, and avoiding common pitfalls. The approach is based on observations of several dozen configuration projects and hundreds of configured applications within Google and in the Kubernetes ecosystem, as well as quantitative analysis of Borg configurations and work on the Kubernetes [system architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture.md), [API](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md), and command-line tool ([kubectl](https://github.com/kubernetes/community/wiki/Roadmap:-kubectl)).
+This document proposes a unified method of managing both bespoke and off-the-shelf applications declaratively using the same tools and application operator workflow, while leveraging developer-friendly CLIs and UIs, streamlining common tasks, and avoiding common pitfalls. The approach is based on observations of several dozen configuration projects and hundreds of configured applications within Google and in the Kubernetes ecosystem, as well as quantitative analysis of Borg configurations and work on the Kubernetes [system architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture.md), [API](/contributors/devel/sig-architecture/api-conventions.md), and command-line tool ([kubectl](https://github.com/kubernetes/community/wiki/Roadmap:-kubectl)).

The central idea is that a toolbox of composable configuration tools should manipulate configuration data in the form of declarative API resource specifications, which serve as a [declarative data model](https://docs.google.com/document/d/1RmHXdLhNbyOWPW_AtnnowaRfGejw-qlKQIuLKQWlwzs/edit#), not express configuration as code or some other representation that is restrictive, non-standard, and/or difficult to manipulate.
@@ -4,7 +4,7 @@ Principles to follow when extending Kubernetes.

## API

-See also the [API conventions](../../devel/api-conventions.md).
+See also the [API conventions](/contributors/devel/sig-architecture/api-conventions.md).

* All APIs should be declarative.
* API objects should be complementary and composable, not opaque wrappers.
@@ -89,7 +89,7 @@ API groups may be exposed as a unified API surface while being served by distinc

Each API server supports a custom [discovery API](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go) to enable clients to discover available API groups, versions, and types, and also [OpenAPI](https://kubernetes.io/blog/2016/12/kubernetes-supports-openapi/), which can be used to extract documentation and validation information about the resource types.

-See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md) for more details.
+See the [Kubernetes API conventions](/contributors/devel/sig-architecture/api-conventions.md) for more details.
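As a hedged sketch of consuming that discovery endpoint with client-go (the kubeconfig path is illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from a kubeconfig file (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// List every API group the aggregated surface advertises at /apis.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name, g.PreferredVersion.GroupVersion)
	}
}
```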
## Resource semantics and lifecycle
@@ -97,12 +97,12 @@ Each API resource undergoes [a common sequence of behaviors](https://kubernetes.

1. [Authentication](https://kubernetes.io/docs/admin/authentication/)
2. [Authorization](https://kubernetes.io/docs/admin/authorization/): [Built-in](https://kubernetes.io/docs/admin/authorization/rbac/) and/or [administrator-defined](https://kubernetes.io/docs/admin/authorization/webhook/) identity-based policies
-3. [Defaulting](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#defaulting): API-version-specific default values are made explicit and persisted
+3. [Defaulting](/contributors/devel/sig-architecture/api-conventions.md#defaulting): API-version-specific default values are made explicit and persisted
4. Conversion: The apiserver converts between the client-requested [API version](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#API-versioning) and the version it uses to store each resource type in etcd
5. [Admission control](https://kubernetes.io/docs/admin/admission-controllers/): [Built-in](https://kubernetes.io/docs/admin/admission-controllers/) and/or [administrator-defined](https://kubernetes.io/docs/admin/extensible-admission-controllers/) resource-type-specific policies
-6. [Validation](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#validation): Resource field values are validated. Other than the presence of required fields, the API resource schema is not currently validated, but optional validation may be added in the future
+6. [Validation](/contributors/devel/sig-architecture/api-conventions.md#validation): Resource field values are validated. Other than the presence of required fields, the API resource schema is not currently validated, but optional validation may be added in the future
7. Idempotence: Resources are accessed via immutable client-provided, declarative-friendly names
-8. [Optimistic concurrency](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency): Writes may specify a precondition that the **resourceVersion** last reported for a resource has not changed
+8. [Optimistic concurrency](/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency): Writes may specify a precondition that the **resourceVersion** last reported for a resource has not changed
9. [Audit logging](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/): Records the sequence of changes to each resource by all actors

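A hedged sketch of the precondition in item 8, using client-go's pre-1.17 method signatures (all names illustrative):

```go
package example

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap mutates a ConfigMap read-modify-write style. The Update
// carries the resourceVersion observed at Get time as a precondition; if
// another writer changed the object in between, the apiserver rejects the
// write with 409 Conflict rather than silently clobbering it.
func bumpConfigMap(clientset kubernetes.Interface) error {
	cm, err := clientset.CoreV1().ConfigMaps("default").Get("example", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["key"] = "new-value"
	_, err = clientset.CoreV1().ConfigMaps("default").Update(cm)
	if apierrors.IsConflict(err) {
		// Stale resourceVersion: the caller should re-read and retry.
		return err
	}
	return err
}
```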
Additional behaviors are supported upon deletion:
@@ -63,7 +63,7 @@ and is supported on several
# Alpha Design

This section describes the proposed design for
-[alpha-level](../devel/api_changes.md#alpha-beta-and-stable-versions) support, although
+[alpha-level](/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) support, although
additional features are described in [future work](#future-work). For AppArmor alpha support
(targeted for Kubernetes 1.4) we will enable:
@@ -47,7 +47,7 @@ feature's owner(s). The following are suggested conventions:
- Features that touch multiple components should reserve the same key
  in each component to toggle on/off.
- Alpha features should be disabled by default. Beta features may
-  be enabled by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions
+  be enabled by default. Refer to [this file](/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
  for more detailed guidance on alpha vs. beta.

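A minimal sketch of parsing such key=value toggles into a gate map (gate names and defaults are illustrative, not an existing component's flags):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseFeatureGates turns a flag value like "AlphaFeature=true,BetaFeature=false"
// into a map, mirroring the key=value convention suggested above. Defaults
// encode "alpha off by default, beta may be on by default".
func parseFeatureGates(spec string, defaults map[string]bool) (map[string]bool, error) {
	gates := make(map[string]bool, len(defaults))
	for k, v := range defaults {
		gates[k] = v
	}
	if spec == "" {
		return gates, nil
	}
	for _, pair := range strings.Split(spec, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("invalid feature gate %q, expected key=value", pair)
		}
		enabled, err := strconv.ParseBool(kv[1])
		if err != nil {
			return nil, fmt.Errorf("invalid value for %q: %v", kv[0], err)
		}
		gates[strings.TrimSpace(kv[0])] = enabled
	}
	return gates, nil
}

func main() {
	gates, err := parseFeatureGates("AlphaFeature=true",
		map[string]bool{"AlphaFeature": false, "BetaFeature": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(gates)
}
```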
## Upgrade support
@@ -84,7 +84,7 @@ Optional API operations:
support WATCH for this API. Implementations can choose to support or not
support this operation. An implementation that does not support the
operation should return HTTP error 405, StatusMethodNotAllowed, per the
-[relevant Kubernetes API conventions](/contributors/devel/api-conventions.md#error-codes).
+[relevant Kubernetes API conventions](/contributors/devel/sig-architecture/api-conventions.md#error-codes).

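A hedged sketch of such a rejection with Go's net/http, using a local struct that mirrors the shape of the Kubernetes Status object (illustrative, not the apiserver's actual implementation):

```go
package main

import (
	"encoding/json"
	"net/http"
)

// status mirrors the Kubernetes metav1.Status shape closely enough for this
// sketch; field names follow the API conventions cited above.
type status struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	Status     string `json:"status"`
	Reason     string `json:"reason"`
	Code       int    `json:"code"`
}

// watchNotSupported rejects an unsupported WATCH with 405 StatusMethodNotAllowed.
func watchNotSupported(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusMethodNotAllowed)
	_ = json.NewEncoder(w).Encode(status{
		Kind:       "Status",
		APIVersion: "v1",
		Status:     "Failure",
		Reason:     "MethodNotAllowed",
		Code:       http.StatusMethodNotAllowed,
	})
}

func main() {
	http.HandleFunc("/apis/example.com/v1/widgets", watchNotSupported)
	_ = http.ListenAndServe(":8080", nil)
}
```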
We also intend to support a use case where the server returns a file that can be
stored for later use. We expect this to be doable with the standard API
@@ -107,7 +107,7 @@ objects that contain a value for the `ClusterName` field. The `Cluster` object's
of namespace scoped.

The `Cluster` object will have `Spec` and `Status` fields, following the
-[Kubernetes API conventions](/contributors/devel/api-conventions.md#spec-and-status).
+[Kubernetes API conventions](/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
There was argument in favor of a `State` field instead of `Spec` and `Status`
fields, since the `Cluster` in the registry does not necessarily hold a user's
intent about the cluster being represented, but instead may hold descriptive
@@ -50,7 +50,7 @@ lot of applications and customer use-cases.
# Alpha Design

This section describes the proposed design for
-[alpha-level](../devel/api_changes.md#alpha-beta-and-stable-versions) support, although
+[alpha-level](/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) support, although
additional features are described in [future work](#future-work).

## Overview
@@ -49,7 +49,7 @@ This was asked on the mailing list here[2] and here[3], too.
Several alternatives have been considered:

* Add a mode to the API definition when using secrets: this is backward
-  compatible as described in (docs/devel/api_changes.md) IIUC and seems like the
+  compatible as described [here](/contributors/devel/sig-architecture/api_changes.md) IIUC and seems like the
  way to go. Also @thockin said in the ML that he would consider such an
  approach. But it might be worth considering whether we want to do the same for
  configmaps or owners; there is no need to do it now either.
@@ -243,7 +243,7 @@ type VolumeAttachment struct {
    metav1.TypeMeta `json:",inline"`

    // Standard object metadata.
-    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`

@@ -59,7 +59,7 @@ The API design of VolumeSnapshot and VolumeSnapshotContent is modeled after Pers
type VolumeSnapshot struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
-    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -144,7 +144,7 @@ Note that if an error occurs before the snapshot is cut, `Error` will be set and
type VolumeSnapshotContent struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
-    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -234,7 +234,7 @@ A new VolumeSnapshotClass API object will be added instead of reusing the existi
type VolumeSnapshotClass struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
-    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -26,7 +26,7 @@ Guide](http://kubernetes.io/docs/admin/).

* **Testing** ([testing.md](sig-testing/testing.md)): How to run unit, integration, and end-to-end tests in your development sandbox.

-* **Conformance Testing** ([conformance-tests.md](conformance-tests.md))
+* **Conformance Testing** ([conformance-tests.md](sig-architecture/conformance-tests.md))
  What is conformance testing and how to create/manage them.

* **Hunting flaky tests** ([flaky-tests.md](sig-testing/flaky-tests.md)): We have a goal of 99.9% flake free tests.
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,221 +1,3 @@
# Component Configuration Conventions
This file has moved to https://git.k8s.io/community/contributors/devel/sig-architecture/component-config-conventions.md.

# Objective

This document concerns the configuration of Kubernetes system components (as
opposed to the configuration of user workloads running on Kubernetes).
Component configuration is a major operational burden for operators of
Kubernetes clusters. To date, much literature has been written on and much
effort expended to improve component configuration. Despite this, the state of
component configuration remains dissonant. This document attempts to aggregate
that literature and propose a set of guidelines that component owners can
follow to improve consistency across the project.

# Background

Currently, component configuration is primarily driven through command line
flags. Command line driven configuration poses certain problems which are
discussed below. Attempts to improve component configuration as a whole have
been slow to make progress and have petered out (ref componentconfig api group,
configmap driven config issues). Some component owners have made use case
specific improvements on a per-need basis. Various comments in issues recommend
subsets of best design practice but no coherent, complete story exists.

## Pain Points of Current Configuration

Flag based configuration has poor qualities such as:

1. Flags exist in a flat namespace, hampering the ability to organize them and expose them in helpful documentation. --help becomes useless as a reference as the number of knobs grows. It's impossible to distinguish useful knobs from cruft.
1. Flags can't easily have different values for different instances of a class. To adjust the resync period in the informers of O(n) controllers requires O(n) different flags in a global namespace.
1. Changing a process's command line necessitates a binary restart. This negatively impacts availability.
1. Flags are unsuitable for passing confidential configuration. The command line of a process is available to unprivileged processes running in the host pid namespace.
1. Flags are a public API but are unversioned and unversionable.
1. Many arguments against using global variables apply to flags.

Configuration in general has poor qualities such as:

1. Configuration changes have the same forward/backward compatibility requirements as releases, but rollout/rollback of configuration is largely untested. Examples of configuration changes that might break a cluster: kubelet CNI plugin, etcd storage version.
1. Configuration options often exist only to test a specific feature where the default is reasonable for all real use cases. Examples: many sync periods.
1. Configuration options often exist to defer a "hard" design decision and to pay forward the "TODO(someone-else): think critically".
1. Configuration options are often used to work around deficiencies of the API. For example `--register-with-labels` and `--register-with-taints` could be solved with a node initializer, if initializers existed.
1. Configuration options often exist to take testing shortcuts. There is a mentality that because a feature is opt-in, it can be released as a flag without robust testing.
1. Configuration accumulates new knobs, knobs accumulate new behaviors, knobs are forgotten and bitrot, reducing code quality over time.
1. The number of configuration options is inversely proportional to test coverage. The size of the configuration state space grows >O(2^n) with the number of configuration bits. A handful of states in that space are ever tested.
1. Configuration options hamper troubleshooting efforts. On github, users frequently file tickets from environments that are neither consistent nor reproducible.

## Types Of Configuration

Configuration can only come from three sources:

1. Command line flags.
1. API types serialized and stored on disk.
1. API types serialized and stored in the kubernetes API.

Configuration options can be partitioned along certain lines. To name a few
important partitions:

1. Bootstrap: This is configuration that is required before the component can contact the API. Examples include the kubeconfig and the filepath to the kubeconfig.
1. Dynamic vs Static: Dynamic config is config that is expected to change as part of normal operations, such as a scheduler configuration or a node entering maintenance mode. Static config is config that is unlikely to change over subsequent deployments and even releases of a component.
1. Shared vs Per-Instance: Per-Instance configuration is configuration whose value is unique to the instance that the node runs on (e.g. Kubelet's `--hostname-override`).
1. Feature Gates: Feature gates are configuration options that enable a feature that has been deemed unsafe to enable by default.
1. Request context dependent: Request context dependent config is config that should probably be scoped to an attribute of the request (such as the user). We do a pretty good job of keeping these out of config and in policy objects (e.g. Quota, RBAC) but we could do more (e.g. rate limits).
1. Environment information: This is configuration that is available through downwards and OS APIs, e.g. node name, pod name, number of cpus, IP address.

# Requirements

Desired qualities of a configuration solution:

1. Secure: We need to control who can change configuration. We need to control who can read sensitive configuration.
1. Manageable: We need to control which instances of a component use which configuration, especially when those instances differ in version.
1. Reliable: Configuration pushes should just work. If they fail, they should fail early in the rollout, roll back config if possible, and alert noisily.
1. Recoverable: We need to be able to update (e.g. rollback) configuration when a component is down.
1. Monitorable: Both humans and computers need to monitor configuration; humans through json interfaces like /configz, computers through interfaces like prometheus /streamz. Confidential configuration needs to be accounted for, but can also be useful to monitor in an unredacted or partially redacted (i.e. hashed) form.
1. Verifiable: We need to be able to verify that a configuration is good. We need to verify the integrity of the received configuration and we need to validate that the encoded configuration state is sensible.
1. Auditable: We need to be able to trace the origin of a configuration change.
1. Accountable: We need to correlate a configuration push with its impact to the system. We need to be able to do this at the time of the push and later when analyzing logs.
1. Available: We should avoid high frequency configuration updates that require service disruption. We need to take into account system component SLA.
1. Scalable: We need to support distributing configuration to O(10,000) components at our current supported scalability limits.
1. Consistent: There should exist conventions that hold across components.
1. Composable: We should favor composition of configuration sources over layering/templating/inheritance.
1. Normalized: Redundant specification of configuration data should be avoided.
1. Testable: We need to be able to test the system under many different configurations. We also need to test configuration changes, both dynamic changes and those that require process restarts.
1. Maintainable: We need to push back on ever increasing cyclomatic complexity in our codebase. Each if statement and function argument added to support a configuration option negatively impacts the maintainability of our code.
1. Evolvable: We need to be able to extend our configuration API like we extend our other user facing APIs. We need to hold our configuration API to the same SLA and deprecation policy as public facing APIs. (e.g. [dynamic admission control](https://github.com/kubernetes/community/pull/611) and [hooks](https://github.com/kubernetes/kubernetes/issues/3585))

These don't need to be implemented immediately but are good to keep in mind. At
some point these should be ranked by priority and implemented.

# Two Part Solution

## Part 1: Don't Make It Configuration

The most effective way to reduce the operational burden of configuration is to
minimize the amount of configuration. When adding a configuration option, ask
whether alternatives might be a better fit.

1. Policy objects: Create first class Kubernetes objects to encompass how the system should behave. These are especially useful for request context dependent configuration. We do this already in places such as RBAC and ResourceQuota but we could do more, such as rate limiting. We should never hardcode groups or usermaps in configuration.
1. API features: Use (or implement) functionality of the API (e.g. think through and implement initializers instead of --register-with-label). Allowing for extension in the right places is a better way to give users control.
1. Feature discovery: Write components that introspect the existing API to decide whether to enable a feature or not. E.g. controller-manager should start an app controller if the app API is available, kubelet should enable zram if zram is set in the node spec.
1. Downwards API: Use the APIs that the OS and pod environment expose directly before opting to pass in new configuration options.
1. const's: If you don't know whether tweaking a value will be useful, make the value const. Only give it a configuration option once there becomes a need to tweak the value at runtime.
1. Autotuning: Build systems that incorporate feedback and do the best thing under the given circumstances. This makes the system more robust. (e.g. prefer congestion control, load shedding, backoff rather than explicit limiting).
1. Avoid feature flags: Turn on features when they are tested and ready for production. Don't use feature flags as a fallback for poorly tested code.
1. Configuration profiles: Instead of allowing individual configuration options to be modified, try to encompass a broader desire as a configuration profile. For example: instead of enabling individual alpha features, have an EnableAlpha option that enables all. Instead of allowing individual controller knobs to be modified, have a TestMode option that sets a broad number of parameters to be suitable for tests.

## Part 2: Component Configuration

### Versioning Configuration

We create configuration API groups per component that live in the source tree of
the component. Each component has its own API group for configuration.
Components will use the same API machinery that we use for other API groups.
Configuration API serialization doesn't have the same performance requirements
as other APIs, so much of the codegen can be avoided (e.g. ugorji, generated
conversions) and we can instead fall back to the reflection based implementations
where they exist.

Configuration API groups for component config should be named according to the
scheme `<component>.config.k8s.io`. The `.config.k8s.io` suffix serves to
disambiguate config API groups from served APIs.
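As a rough illustration of this scheme, a minimal sketch of a versioned config type follows; the group, version, and fields are illustrative (borrowing the `kubeproxy.config.k8s.io/v1beta3` example used later in this document), not an existing API:

```go
// Package v1beta3 sketches a component config API group named per the
// <component>.config.k8s.io scheme, e.g. kubeproxy.config.k8s.io/v1beta3.
package v1beta3

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// KubeProxyConfiguration is an illustrative versioned config object.
type KubeProxyConfiguration struct {
	// TypeMeta lets a serialized config file self-identify its apiVersion
	// and kind, just like any other Kubernetes API object.
	metav1.TypeMeta `json:",inline"`

	IPTables IPTablesConfiguration `json:"ipTables"`
}

// IPTablesConfiguration groups the iptables-related knobs into a subobject.
type IPTablesConfiguration struct {
	SyncPeriod metav1.Duration `json:"syncPeriod"`
}
```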
### Retrieving Configuration

The primary mechanism for retrieving static configuration should be
deserialization from files. For the majority of components (with the possible
exception of the kubelet, see
[here](https://github.com/kubernetes/kubernetes/pull/29459)), these files will
be sourced from the configmap API and managed by the kubelet. Reliability of
this mechanism is predicated on kubelet checkpointing of pod dependencies.
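A hedged sketch of what file-based retrieval could look like, assuming the `sigs.k8s.io/yaml` package and an illustrative config type and file path:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"sigs.k8s.io/yaml"
)

// KubeProxyConfiguration is a minimal stand-in for a component's versioned
// config type (illustrative only).
type KubeProxyConfiguration struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	IPTables   struct {
		SyncPeriod int `json:"syncPeriod"`
	} `json:"ipTables"`
}

// loadConfig deserializes a component config file; in a real component the
// file would typically be projected from a ConfigMap by the kubelet.
func loadConfig(path string) (*KubeProxyConfiguration, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg KubeProxyConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}

func main() {
	cfg, err := loadConfig("/etc/kubernetes/kube-proxy-config.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```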
### Structuring Configuration

Group related options into distinct objects and subobjects. Instead of writing:

```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1beta3
ipTablesSyncPeriod: 2
ipTablesConntrackHashSize: 2
ipTablesConntrackTableSize: 2
```

Write:

```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1beta3
ipTables:
  syncPeriod: 2
  conntrack:
    hashSize: 2
    tableSize: 2
```

We should avoid passing around full configuration options to deeply constructed
modules. For example, instead of calling NewSomethingController in the
controller-manager with the full controller-manager config, group relevant
config into a subobject and only pass the subobject. We should expose the
smallest possible necessary configuration to the SomethingController.
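A minimal sketch of that narrowing, with illustrative names:

```go
package controllers

import "time"

// SomethingControllerConfig is the narrow subobject carved out of the full
// controller-manager configuration; only what this controller needs.
type SomethingControllerConfig struct {
	ResyncPeriod time.Duration
	Workers      int
}

type SomethingController struct {
	cfg SomethingControllerConfig
}

// NewSomethingController sees only its own options, not the whole
// controller-manager config object.
func NewSomethingController(cfg SomethingControllerConfig) *SomethingController {
	return &SomethingController{cfg: cfg}
}
```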
### Handling Different Types Of Configuration

Above in "Types Of Configuration" we introduced a few ways to partition
configuration options. Environment information, request context dependent
configuration, feature gates, and static configuration should be avoided if at
all possible using a configuration alternative. We should maintain separate
objects along these partitions and consider retrieving these configurations
from separate sources (i.e. files). For example: the kubeconfig (which falls into
the bootstrapping category) should not be part of the main config object (nor
should the filepath to the kubeconfig), and per-instance config should be stored
separately from shared config. This allows for composition and obviates the
need for layering/templating solutions.

### In-Process Representation Of Configuration

We should separate structs for flags, serializable config, and runtime config.

1. Structs for flags should have enough information for the process startup to retrieve its full configuration. Examples include: path to the kubeconfig, path to the configuration file, namespace and name of the configmap to use for configuration.
1. Structs for serializable configuration: This struct contains the full set of options in a serializable form (e.g. to represent an IP address, use `string` instead of `net.IP`). This is the struct that is versioned and serialized to disk using API machinery.
1. Structs for runtime: This struct holds data in the most appropriate format for execution. Its fields can hold non-serializable types (e.g. have a `kubeClient` field instead of a `kubeConfig` field, store IP addresses as `net.IP`).

The flag struct is transformed into the configuration struct, which is
transformed into the runtime struct. A sketch of this separation follows.
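A compact sketch of the three structs and the transformations between them, with illustrative fields:

```go
package app

import (
	"net"

	"k8s.io/client-go/kubernetes"
)

// Flags: just enough information to locate the full configuration.
type Flags struct {
	ConfigFile string // path to the serialized config file
	Kubeconfig string // path to the kubeconfig
}

// Config: the versioned, serializable representation.
type Config struct {
	BindAddress string `json:"bindAddress"` // serializable form of an IP
}

// Runtime: the most convenient in-process form.
type Runtime struct {
	BindAddress net.IP               // parsed, not a string
	Client      kubernetes.Interface // a live client, not a kubeconfig path
}

// buildRuntime performs the config -> runtime transformation described above;
// the flags -> config step would deserialize Flags.ConfigFile into Config.
func buildRuntime(cfg Config, client kubernetes.Interface) Runtime {
	return Runtime{
		BindAddress: net.ParseIP(cfg.BindAddress),
		Client:      client,
	}
}
```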
### Migrating Away From Flags

Migrating to component configuration can happen incrementally (per component).
By versioning each component's API group separately, we can allow each API
group to advance to beta and GA independently. APIs should be approved by
component owners and reviewers familiar with the component configuration
conventions. We can incentivize operators to migrate away from flags by making
new configuration options only available through the component configuration
APIs.

# Caveats

These are proposed as guidelines, not laws, and as such we've favored completeness
over consistency. There will thus be a need for exceptions.

1. Components (especially those that are not self hosted, such as the kubelet) will require custom rollout strategies for new config.
1. Pod checkpointing by the kubelet would make this strategy simpler to make reliable.

# Miscellaneous Considerations

1. **This document intentionally takes a very zealous stance against configuration.** Often configuration alternatives are not possible in Kubernetes as they are in proprietary software, because Kubernetes has to run in diverse environments, with diverse users, managed by diverse operators.
1. More frequent releases of kubernetes would make "skipping the config knob" more enticing, because fixing a bad guess at a const wouldn't take O(4 months) best case to roll out. Factoring in our support for old versions, it takes closer to a year.
1. Self-hosting resolves much of the distribution issue (except for maybe the Kubelet) but reliability is predicated on to-be-implemented features such as kubelet checkpointing of pod dependencies and sound operational practices such as incremental rollout of new configuration using Deployments/DaemonSets.
1. Validating config is hard. Fatal logs lead to crash loops and error logs are ignored. Both options are suboptimal.
1. Configuration needs to be updatable when components are down.
1. Naming style guide:
   1. No negatives, e.g. prefer --enable-foo over --disable-foo
   1. Use the active voice
1. We should actually enforce deprecation. Can we have a test that fails when a comment exists beyond its deadline to be removed? See [#44248](https://github.com/kubernetes/kubernetes/issues/44248)
1. Use different implementations of the same interface rather than if statements to toggle features. This makes deprecation and deletion easy, improving maintainability.
1. How does the proposed solution meet the requirements? Which desired qualities are missed?
1. Configuration changes should trigger predictable and reproducible actions. From a given system state and a given component configuration, we should be able to simulate the actions that the system will take.

This file is a placeholder to preserve links. Please remove by April 24, 2019 or the release of kubernetes 1.13, whichever comes first.
@@ -1,216 +1,3 @@
# Conformance Testing in Kubernetes
This file has moved to https://git.k8s.io/community/contributors/devel/sig-architecture/conformance-tests.md.

The Kubernetes Conformance test suite is a subset of e2e tests that SIG
Architecture has approved to define the core set of interoperable features that
all conformant Kubernetes clusters must support. The tests verify that the
expected behavior works as a user might encounter it in the wild.

The process to add new conformance tests is intended to decouple the development
of useful tests from their promotion to conformance:
- Contributors write and submit e2e tests, to be approved by owning SIGs
- Tests are proven to meet the [conformance test requirements] by review
  and by accumulation of data on flakiness and reliability
- A follow up PR is submitted to [promote the test to conformance](#promoting-tests-to-conformance)

NB: This should be viewed as a living document in a few key areas:
- The desired set of conformant behaviors is not adequately expressed by the
  current set of e2e tests; as such, this document is currently intended to
  guide us in the addition of new e2e tests that can fill this gap
- This document currently focuses solely on the requirements for GA,
  non-optional features or APIs. The list of requirements will be refined over
  time to the point where it is as concrete and complete as possible.
- There are currently conformance tests that violate some of the requirements
  (e.g., require privileged access); we will be categorizing these tests and
  deciding what to do once we have a better understanding of the situation
- Once we resolve the above issues, we plan on identifying the appropriate areas
  to relax requirements to allow for the concept of conformance Profiles that
  cover optional or additional behaviors

## Conformance Test Requirements

Conformance tests currently test only GA, non-optional features or APIs. More
specifically, a test is eligible for promotion to conformance if:

- it tests only GA, non-optional features or APIs (e.g., no alpha or beta
  endpoints, no feature flags required, no deprecated features)
- it works for all providers (e.g., no `SkipIfProviderIs`/`SkipUnlessProviderIs`
  calls)
- it is non-privileged (e.g., does not require root on nodes, access to raw
  network interfaces, or cluster admin permissions)
- it works without access to the public internet (short of whatever is required
  to pre-pull images for conformance tests)
- it works without non-standard filesystem permissions granted to pods
- it does not rely on any binaries that would not be required for the linux
  kernel or kubelet to run (e.g., can't rely on git)
- any container images used within the test support all architectures for which
  kubernetes releases are built
- it passes against the appropriate versions of kubernetes as spelled out in
  the [conformance test version skew policy]
- it is stable and runs consistently (e.g., no flakes)

Examples of features which are not currently eligible for conformance tests:

- node/platform-reliant features, eg: multiple disk mounts, GPUs, high density,
  etc.
- optional features, eg: policy enforcement
- cloud-provider-specific features, eg: GCE monitoring, S3 Bucketing, etc.
- anything that requires a non-default admission plugin

Examples of tests which are not eligible for promotion to conformance:
- anything that checks specific Events are generated, as we make no guarantees
  about the contents of events, nor their delivery
- anything that checks optional Condition fields, such as Reason or Message, as
  these may change over time (however it is reasonable to verify these fields
  exist or are non-empty)

Examples of areas we may want to relax these requirements once we have a
sufficient corpus of tests that define out of the box functionality in all
reasonable production worthy environments:
- tests may need to create or set objects or fields that are alpha or beta that
  bypass policies that are not yet GA, but which may reasonably be enabled on a
  conformant cluster (e.g., pod security policy, non-GA scheduler annotations)

## Conformance Test Version Skew Policy

As each new release of Kubernetes provides new functionality, the subset of
tests necessary to demonstrate conformance grows with each release. Conformance
is thus considered versioned, with the same backwards compatibility guarantees
as laid out in the [kubernetes versioning policy].

To quote:

> For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and
> should work with v1.2, v1.3, and v1.4 clients.

Conformance tests for a given version should be run off of the release branch
that corresponds to that version. Thus `v1.2` conformance tests would be run
from the head of the `release-1.2` branch.

For example, suppose we're in the midst of developing kubernetes v1.3. Clusters
with the following versions must pass conformance tests built from the
following branches:

| cluster version | master | release-1.3 | release-1.2 | release-1.1 |
| --------------- | ------ | ----------- | ----------- | ----------- |
| v1.3.0-alpha    | yes    | yes         | yes         | no          |
| v1.2.x          | no     | no          | yes         | yes         |
| v1.1.x          | no     | no          | no          | yes         |

## Running Conformance Tests

Conformance tests are designed to be run even when there is no cloud provider
configured. Conformance tests must be able to be run against clusters that have
not been created with `hack/e2e.go`; just provide a kubeconfig with the
appropriate endpoint and credentials.

These commands are intended to be run within a kubernetes directory, either
cloned from source, or extracted from release artifacts such as
`kubernetes.tar.gz`. They assume you have a valid golang installation.

```sh
# ensure kubetest is installed
go get -u k8s.io/test-infra/kubetest

# build test binaries, ginkgo, and kubectl first:
make WHAT="test/e2e/e2e.test vendor/github.com/onsi/ginkgo/ginkgo cmd/kubectl"

# setup for conformance tests
export KUBECONFIG=/path/to/kubeconfig
export KUBERNETES_CONFORMANCE_TEST=y

# Option A: run all conformance tests serially
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]"

# Option B: run parallel conformance tests first, then serial conformance tests serially
GINKGO_PARALLEL=y kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]"
```

## Kubernetes Conformance Document

For each Kubernetes release, a Conformance Document will be generated that lists
all of the tests that comprise the conformance test suite, along with the formal
specification of each test. For an example, see the [v1.9 conformance doc].
This document will help people understand what features are being tested without
having to look through the testcase's code directly.

## Promoting Tests to Conformance

To promote a test to the conformance test suite, open a PR as follows:
- is titled "Promote xxx e2e test to Conformance"
- includes information and metadata in the description as follows:
  - "/area conformance" on a newline
  - "@kubernetes/sig-architecture-pr-reviews @kubernetes/sig-foo-pr-reviews
    @kubernetes/cncf-conformance-wg" on a new line, where sig-foo is whichever
    sig owns this test
- any necessary information in the description to verify that the test meets
  [conformance test requirements], such as links to reports or dashboards that
  prove lack of flakiness
- contains no other modifications to test source code other than the following:
  - modifies the testcase to use the `framework.ConformanceIt()` function rather
    than the `framework.It()` function
  - adds a comment immediately before the `ConformanceIt()` call that includes
    all of the required [conformance test comment metadata]
- add the PR to SIG Architecture's [Conformance Test Review board]

### Conformance Test Comment Metadata

Each conformance test must include the following piece of metadata
within its associated comment:

- `Release`: indicates the Kubernetes release in which the test was added to the
  conformance test suite. If the test was modified in subsequent releases
  then those releases should be included as well (comma separated)
- `Testname`: a human readable short name of the test
- `Description`: a detailed description of the test. This field must describe
  the required behaviour of the Kubernetes components being tested using
  [RFC2119](https://tools.ietf.org/html/rfc2119) keywords. This field
  is meant to be a "specification" of the tested Kubernetes features, as
  such, it must be detailed enough so that readers can fully understand
  the aspects of Kubernetes that are being tested without having to read
  the test's code directly. Additionally, this test should provide a clear
  distinction between the parts of the test that are there for the purpose
  of validating Kubernetes rather than simply infrastructure logic that
  is necessary to set up, or clean up, the test.

### Sample Conformance Test

The following snippet of code shows a sample conformance test's metadata:

```
/*
Release : v1.9
Testname: Kubelet: log output
Description: By default the stdout and stderr from the process being
executed in a pod MUST be sent to the pod's logs.
*/
framework.ConformanceIt("it should print the output to logs", func() {
    ...
})
```

The corresponding portion of the Kubernetes Conformance Document for this test
would then look like this:

> ## [Kubelet: log output](https://github.com/kubernetes/kubernetes/tree/release-1.9/test/e2e_node/kubelet_test.go#L47)
>
> Release : v1.9
>
> By default the stdout and stderr from the process being executed in a pod MUST be sent to the pod's logs.

### Reporting Conformance Test Results

Conformance test results, by provider and releases, can be viewed in the
[testgrid conformance dashboard]. If you wish to contribute test results
for your provider, please see the [testgrid conformance README].

[kubernetes versioning policy]: /contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew
[Conformance Test Review board]: https://github.com/kubernetes-sigs/architecture-tracking/projects/1
[conformance test requirements]: #conformance-test-requirements
[conformance test comment metadata]: #conformance-test-comment-metadata
[conformance test version skew policy]: #conformance-test-version-skew-policy
[testgrid conformance dashboard]: https://testgrid.k8s.io/conformance-all
[testgrid conformance README]: https://github.com/kubernetes/test-infra/blob/master/testgrid/conformance/README.md
[v1.9 conformance doc]: https://github.com/cncf/k8s-conformance/blob/master/docs/KubeConformance-1.9.md
This file is a placeholder to preserve links. Please remove by April 24, 2019 or the release of kubernetes 1.13, whichever comes first.
@@ -167,7 +167,7 @@ Kubernetes uses [`godep`](https://github.com/tools/godep) to manage
dependencies.

Developers who need to manage dependencies in the `vendor/` tree should read
-the docs on [using godep to manage dependencies](godep.md).
+the docs on [using godep to manage dependencies](sig-architecture/godep.md).

## Build with Bazel/Gazel
@ -1,251 +1,3 @@
|
|||
# Using godep to manage dependencies
|
||||
This file has moved to https://git.k8s.io/community/contributors/devel/sig-architecture/godep.md.
|
||||
|
||||
This document is intended to show a way for managing `vendor/` tree dependencies
|
||||
in Kubernetes. If you do not need to manage vendored dependencies, you probably
|
||||
do not need to read this.
|
||||
|
||||
## Background
|
||||
|
||||
As a tool, `godep` leaves much to be desired. It builds on `go get`, and adds
|
||||
the ability to pin dependencies to exact git version. The `go get` tool itself
|
||||
doesn't have any concept of versions, and tends to blow up if it finds a git
|
||||
repo synced to anything but `master`, but that is exactly the state that
|
||||
`godep` leaves repos. This is a recipe for frustration when people try to use
|
||||
the tools.
|
||||
|
||||
This doc will focus on predictability and reproducibility.
|
||||
|
||||
## Justifications for an update
|
||||
|
||||
Before you update a dependency, take a moment to consider why it should be
|
||||
updated. Valid reasons include:
|
||||
1. We need new functionality that is in a later version.
|
||||
2. New or improved APIs in the dependency significantly improve Kubernetes code.
|
||||
3. Bugs were fixed that impact Kubernetes.
|
||||
4. Security issues were fixed even if they don't impact Kubernetes yet.
|
||||
5. Performance, scale, or efficiency was meaningfully improved.
|
||||
6. We need dependency A and there is a transitive dependency B.
|
||||
7. Kubernetes has an older level of a dependency that is precluding being able
|
||||
to work with other projects in the ecosystem.
|
||||
|
||||
## Theory of operation
|
||||
|
||||
The `go` toolchain assumes a global workspace that hosts all of your Go code.
|
||||
|
||||
The `godep` tool operates by first "restoring" dependencies into your `$GOPATH`.
|
||||
This reads the `Godeps.json` file, downloads all of the dependencies from the
|
||||
internet, and syncs them to the specified revisions. You can then make
|
||||
changes - sync to different revisions or edit Kubernetes code to use new
|
||||
dependencies (and satisfy them with `go get`). When ready, you tell `godep` to
|
||||
"save" everything, which it does by walking the Kubernetes code, finding all
|
||||
required dependencies, copying them from `$GOPATH` into the `vendor/` directory,
|
||||
and rewriting `Godeps.json`.
|
||||
|
||||
This does not work well, when combined with a global Go workspace. Instead, we
|
||||
will set up a private workspace for this process.
|
||||
|
||||
The Kubernetes build process uses this same technique, and offers a tool called
|
||||
`run-in-gopath.sh` which sets up and switches to a local, private workspace,
|
||||
including setting up `$GOPATH` and `$PATH`. If you wrap commands with this
|
||||
tool, they will use the private workspace, which will not conflict with other
|
||||
projects and is easily cleaned up and recreated.
|
||||
|
||||
To see this in action, you can run an interactive shell in this environment:
|
||||
|
||||
```sh
|
||||
# Run a shell, but don't run your own shell initializations.
|
||||
hack/run-in-gopath.sh bash --norc --noprofile
|
||||
```
|
||||
|
||||
## Restoring deps
|
||||
|
||||
To extract and download dependencies into `$GOPATH` we provide a script:
|
||||
`hack/godep-restore.sh`. If you run this tool, it will restore into your own
|
||||
`$GOPATH`. If you wrap it in `run-in-gopath.sh` it will restore into your
|
||||
`_output/` directory.
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-restore.sh
|
||||
```
|
||||
|
||||
This script will try to optimize what it needs to download, and if it seems the
|
||||
dependencies are all present already, it will return very quickly.
|
||||
|
||||
If there's every any doubt about the correctness of your dependencies, you can
|
||||
simply `make clean` or `rm -rf _output`, and run it again.
|
||||
|
||||
Now you should have a clean copy of all of the Kubernetes dependencies.
|
||||
|
||||
Downloading dependencies might take a while, so if you want to see progress
|
||||
information use the `-v` flag:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-restore.sh -v
|
||||
```
|
||||
|
||||
## Making changes
|
||||
|
||||
The most common things people need to do with deps are add and update them.
|
||||
These are similar but different.
|
||||
|
||||
### Adding a dep
|
||||
|
||||
For the sake of examples, consider that we have discovered a wonderful Go
|
||||
library at `example.com/go/frob`. The first thing you need to do is get that
|
||||
code into your workspace:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh go get -d example.com/go/frob
|
||||
```
|
||||
|
||||
This will fetch, but not compile (omit the `-d` if you want to compile it now),
|
||||
the library into your private `$GOPATH`. It will pull whatever the default
|
||||
revision of that library is, typically the `master` branch for git repositories.
|
||||
If this is not the revision you need, you can change it, for example to
|
||||
`v1.0.0`:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'git -C $GOPATH/src/example.com/go/frob checkout v1.0.0'
|
||||
```
|
||||
|
||||
Now that the code is present, you can start to use it in Kubernetes code.
|
||||
Because it is in your private workspace's `$GOPATH`, it might not be part of
|
||||
your own `$GOPATH`, so tools like `goimports` might not find it. This is an
|
||||
unfortunate side-effect of this process. You can either add the whole private
|
||||
workspace to your own `$GOPATH` or you can `go get` the library into your own
|
||||
`$GOPATH` until it is properly vendored into Kubernetes.
|
||||
|
||||
Another possible complication is a dep that uses `gopdep` itself. In that case,
|
||||
you need to restore its dependencies, too:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'cd $GOPATH/src/example.com/go/frob && godep restore'
|
||||
```
|
||||
|
||||
If the transitive deps collide with Kubernetes deps, you may have to manually
|
||||
resolve things. This is where the ability to run a shell in this environment
|
||||
comes in handy:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash --norc --noprofile
|
||||
```
|
||||
|
||||
### Updating a dep
|
||||
|
||||
Sometimes we already have a dep, but the version of it is wrong. Because of the
|
||||
way that `godep` and `go get` interact (badly) it's generally easiest to hit it
|
||||
with a big hammer:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'rm -rf $GOPATH/src/example.com/go/frob'
|
||||
hack/run-in-gopath.sh go get -d example.com/go/frob
|
||||
hack/run-in-gopath.sh bash -c 'git -C $GOPATH/src/example.com/go/frob checkout v2.0.0'
|
||||
```
|
||||
|
||||
This will remove the code, re-fetch it, and sync to your desired version.
|
||||
|
||||
### Removing a dep
|
||||
|
||||
This happens almost for free. If you edit Kubernetes code and remove the last
|
||||
use of a given dependency, you only need to restore and save the deps, and the
|
||||
`godep` tool will figure out that you don't need that dep any more:
|
||||
|
||||
## Saving deps
|
||||
|
||||
Now that you have made your changes - adding, updating, or removing the use of a
|
||||
dep - you need to rebuild the dependency database and make changes to the
|
||||
`vendor/` directory.
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-save.sh
|
||||
```
|
||||
|
||||
This will run through all of the primary targets for the Kubernetes project,
|
||||
calculate which deps are needed, and rebuild the database. It will also
|
||||
regenerate other metadata files which the project needs, such as BUILD files and
|
||||
the LICENSE database.
|
||||
|
||||
Commit the changes before updating deps in staging repos.
|
||||
|
||||
## Saving deps in staging repos
|
||||
|
||||
Kubernetes stores some code in a directory called `staging` which is handled
|
||||
specially, and is not covered by the above. If you modified any code under
|
||||
staging, or if you changed a dependency of code under staging (even
|
||||
transitively), you'll also need to update deps there:
|
||||
|
||||
```sh
|
||||
./hack/update-staging-godeps.sh
|
||||
```
|
||||
|
||||
Then commit the changes generated by the above script.
|
||||
|
||||
## Commit messages
|
||||
|
||||
Terse messages like "Update foo.org/bar to 0.42" are problematic
|
||||
for maintainability. Please include in your commit message the
|
||||
detailed reason why the dependencies were modified.
|
||||
|
||||
Too commonly dependency changes have a ripple effect where something
|
||||
else breaks unexpectedly. The first instinct during issue triage
|
||||
is to revert a change. If the change was made to fix some other
|
||||
issue and that issue was not documented, then a revert simply
|
||||
continues the ripple by fixing one issue and reintroducing another
|
||||
which then needs refixed. This can needlessly span multiple days
|
||||
as CI results bubble in and subsequent patches fix and refix and
|
||||
rerefix issues. This may be avoided if the original modifications
|
||||
recorded artifacts of the change rationale.
|
||||
|
||||
## Sanity checking
|
||||
|
||||
After all of this is done, `git status` should show you what files have been
|
||||
modified and added/removed. Make sure to sanity-check them with `git diff`, and
|
||||
to `git add` and `git rm` them, as needed. It is commonly advised to make one
|
||||
`git commit` which includes just the dependencies and Godeps files, and
|
||||
another `git commit` that includes changes to Kubernetes code to use (or stop
|
||||
using) the new/updated/removed dependency. These commits can go into a single
|
||||
pull request.
|
||||
|
||||
Before sending your PR, it's a good idea to sanity check that your
|
||||
`Godeps.json` file and the contents of `vendor/` are OK:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/verify-godeps.sh
|
||||
```
|
||||
|
||||
All this script will do is a restore, followed by a save, and then look for
|
||||
changes. If you followed the above instructions, it should be clean. If it is
|
||||
not, you get to figure out why.
|
||||
|
||||
## Manual updates
|
||||
|
||||
It is sometimes expedient to manually fix the `Godeps.json` file to
|
||||
minimize the changes. However, without great care this can lead to failures
|
||||
with the verifier scripts. The kubernetes codebase does "interesting things"
|
||||
with symlinks between `vendor/` and `staging/` to allow multiple Go import
|
||||
paths to coexist in the same git repo.
|
||||
|
||||
The verifiers, including `hack/verify-godeps.sh` *must* pass for every pull
|
||||
request.
|
||||
|
||||
## Reviewing and approving dependency changes
|
||||
|
||||
Particular attention to detail should be exercised when reviewing and approving
|
||||
PRs that add/remove/update dependencies. Importing a new dependency should bring
|
||||
a certain degree of value, as there is an ongoing overhead to maintaining
|
||||
dependencies into the future.
|
||||
|
||||
When importing a new dependency, be sure to keep an eye out for the following:
|
||||
- Is the dependency maintained?
|
||||
- Does the dependency bring value to the project? Could this be done without
|
||||
adding a new dependency?
|
||||
- Is the target dependency the original source, or a fork?
|
||||
- Is there already a dependency in the project that does something similar?
|
||||
- Does the dependency have a license that is compatible with the Kubernetes
|
||||
project?
|
||||
|
||||
All new dependency licenses should be reviewed by either Tim Hockin (@thockin)
|
||||
or the Steering Committee (@kubernetes/steering-committee) to ensure that they
|
||||
are compatible with the Kubernetes project license. It is also important to note
|
||||
and flag if a license has changed when updating a dependency, so that these can
|
||||
also be reviewed.
|
||||
This file is a placeholder to preserve links. Please remove by April 24, 2019 or the release of kubernetes 1.13, whichever comes first.
|
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -0,0 +1,221 @@
|
|||
# Component Configuration Conventions
|
||||
|
||||
# Objective
|
||||
|
||||
This document concerns the configuration of Kubernetes system components (as
|
||||
opposed to the configuration of user workloads running on Kubernetes).
|
||||
Component configuration is a major operational burden for operators of
|
||||
Kubernetes clusters. To date, much literature has been written on and much
|
||||
effort expended to improve component configuration. Despite this, the state of
|
||||
component configuration remains dissonant. This document attempts to aggregate
|
||||
that literature and propose a set of guidelines that component owners can
|
||||
follow to improve consistency across the project.
|
||||
|
||||
# Background
|
||||
|
||||
Currently, component configuration is primarily driven through command line
|
||||
flags. Command line driven configuration poses certain problems which are
|
||||
discussed below. Attempts to improve component configuration as a whole have
|
||||
been slow to make progress and have petered out (ref componentconfig api group,
|
||||
configmap driven config issues). Some component owners have made use case
|
||||
specific improvements on a per-need basis. Various comments in issues recommend
|
||||
subsets of best design practice but no coherent, complete story exists.
|
||||
|
||||
## Pain Points of Current Configuration
|
||||
|
||||
Flag based configuration has poor qualities such as:
|
||||
|
||||
1. Flags exist in a flat namespace, hampering the ability to organize them and expose them in helpful documentation. --help becomes useless as a reference as the number of knobs grows. It's impossible to distinguish useful knobs from cruft.
|
||||
1. Flags can't easily have different values for different instances of a class. To adjust the resync period in the informers of O(n) controllers requires O(n) different flags in a global namespace.
|
||||
1. Changing a process's command line necessitates a binary restart. This negatively impacts availability.
|
||||
1. Flags are unsuitable for passing confidential configuration. The command line of a process is available to unprivileged processes running in the host PID namespace.
|
||||
1. Flags are a public API but are unversioned and unversionable.
|
||||
1. Many arguments against using global variables apply to flags.
|
||||
|
||||
Configuration in general has poor qualities such as:
|
||||
|
||||
1. Configuration changes have the same forward/backward compatibility requirements as releases, but rollout/rollback of configuration is largely untested. Examples of configuration changes that might break a cluster: kubelet CNI plugin, etcd storage version.
|
||||
1. Configuration options often exist only to test a specific feature where the default is reasonable for all real use cases. Examples: many sync periods.
|
||||
1. Configuration options often exist to defer a "hard" design decision and to pay forward the "TODO(someone-else): think critically".
|
||||
1. Configuration options are often used to workaround deficiencies of the API. For example `--register-with-labels` and `--register-with-taints` could be solved with a node initializer, if initializers existed.
|
||||
1. Configuration options often exist to take testing shortcuts. There is a mentality that because a feature is opt-in, it can be released as a flag without robust testing.
|
||||
1. Configuration accumulates new knobs, knobs accumulate new behaviors, knobs are forgotten and bitrot reducing code quality over time.
|
||||
1. The number of configuration options is inversely proportional to test coverage. The size of the configuration state space grows as O(2^n) with the number of configuration bits; only a handful of states in that space are ever tested.
|
||||
1. Configuration options hamper troubleshooting efforts. On GitHub, users frequently file tickets from environments that are neither consistent nor reproducible.
|
||||
|
||||
## Types Of Configuration
|
||||
|
||||
Configuration can only come from three sources:
|
||||
|
||||
1. Command line flags.
|
||||
1. API types serialized and stored on disk.
|
||||
1. API types serialized and stored in the kubernetes API.
|
||||
|
||||
Configuration options can be partitioned along certain lines. To name a few
|
||||
important partitions:
|
||||
|
||||
1. Bootstrap: This is configuration that is required before the component can contact the API. Examples include the kubeconfig and the filepath to the kubeconfig.
|
||||
1. Dynamic vs Static: Dynamic config is config that is expected to change as part of normal operations such as a scheduler configuration or a node entering maintenance mode. Static config is config that is unlikely to change over subsequent deployments and even releases of a component.
|
||||
1. Shared vs Per-Instance: Per-Instance configuration is configuration whose value is unique to the instance that the node runs on (e.g. Kubelet's `--hostname-override`).
|
||||
1. Feature Gates: Feature gates are configuration options that enable a feature that has been deemed unsafe to enable by default.
|
||||
1. Request context dependent: Request context dependent config is config that should probably be scoped to an attribute of the request (such as the user). We do a pretty good job of keeping these out of config and in policy objects (e.g. Quota, RBAC) but we could do more (e.g. rate limits).
|
||||
1. Environment information: This is configuration that is available through the downward and OS APIs, e.g. node name, pod name, number of CPUs, IP address.
|
||||
|
||||
# Requirements
|
||||
|
||||
Desired qualities of a configuration solution:
|
||||
|
||||
1. Secure: We need to control who can change configuration. We need to control who can read sensitive configuration.
|
||||
1. Manageable: We need to control which instances of a component uses which configuration, especially when those instances differ in version.
|
||||
1. Reliable: Configuration pushes should just work. If they fail, they should fail early in the rollout, rollback config if possible, and alert noisily.
|
||||
1. Recoverable: We need to be able to update (e.g. rollback) configuration when a component is down.
|
||||
1. Monitorable: Both humans and computers need to monitor configuration; humans through JSON interfaces like /configz, computers through interfaces like Prometheus /streamz. Confidential configuration needs to be accounted for, but can also be useful to monitor in an unredacted or partially redacted (i.e. hashed) form.
|
||||
1. Verifiable: We need to be able to verify that a configuration is good. We need to verify the integrity of the received configuration and we need to validate that the encoded configuration state is sensible.
|
||||
1. Auditable: We need to be able to trace the origin of a configuration change.
|
||||
1. Accountable: We need to correlate a configuration push with its impact to the system. We need to be able to do this at the time of the push and later when analyzing logs.
|
||||
1. Available: We should avoid high frequency configuration updates that require service disruption. We need to take into account system component SLA.
|
||||
1. Scalable: We need to support distributing configuration to O(10,000) components at our current supported scalability limits.
|
||||
1. Consistent: There should exist conventions that hold across components.
|
||||
1. Composable: We should favor composition of configuration sources over layering/templating/inheritance.
|
||||
1. Normalized: Redundant specification of configuration data should be avoided.
|
||||
1. Testable: We need to be able to test the system under many different configurations. We also need to test configuration changes, both dynamic changes and those that require process restarts.
|
||||
1. Maintainable: We need to push back on ever increasing cyclomatic complexity in our codebase. Each if statement and function argument added to support a configuration option negatively impacts the maintainability of our code.
|
||||
1. Evolvable: We need to be able to extend our configuration API like we extend our other user facing APIs. We need to hold our configuration API to the same SLA and deprecation policy of public facing APIs. (e.g. [dynamic admission control](https://github.com/kubernetes/community/pull/611) and [hooks](https://github.com/kubernetes/kubernetes/issues/3585))
|
||||
|
||||
These don't need to be implemented immediately but are good to keep in mind. At
|
||||
some point these should be ranked by priority and implemented.
|
||||
|
||||
# Two Part Solution:
|
||||
|
||||
## Part 1: Don't Make It Configuration
|
||||
|
||||
The most effective way to reduce the operational burden of configuration is to
|
||||
minimize the amount of configuration. When adding a configuration option, ask
|
||||
whether alternatives might be a better fit.
|
||||
|
||||
1. Policy objects: Create first class Kubernetes objects to encompass how the system should behave. These are especially useful for request context dependent configuration. We do this already in places such as RBAC and ResourceQuota but we could do more such as rate limiting. We should never hardcode groups or usermaps in configuration.
|
||||
1. API features: Use (or implement) functionality of the API (e.g. think through and implement initializers instead of --register-with-label). Allowing for extension in the right places is a better way to give users control.
|
||||
1. Feature discovery: Write components that introspect the existing API to decide whether to enable a feature or not. E.g. controller-manager should start an app controller if the app API is available, kubelet should enable zram if zram is set in the node spec. (A sketch follows this list.)
|
||||
1. Downward API: Use the APIs that the OS and pod environment expose directly before opting to pass in new configuration options.
|
||||
1. const's: If you don't know whether tweaking a value will be useful, make the value const. Only give it a configuration option once there is a demonstrated need to tweak the value at runtime.
|
||||
1. Autotuning: Build systems that incorporate feedback and do the best thing under the given circumstances. This makes the system more robust. (e.g. prefer congestion control, load shedding, backoff rather than explicit limiting).
|
||||
1. Avoid feature flags: Turn on features when they are tested and ready for production. Don't use feature flags as a fallback for poorly tested code.
|
||||
1. Configuration profiles: Instead of allowing individual configuration options to be modified, try to encompass a broader desire as a configuration profile. For example: instead of enabling individual alpha features, have an EnableAlpha option that enables all. Instead of allowing individual controller knobs to be modified, have a TestMode option that sets a broad number of parameters to be suitable for tests.
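To make the feature-discovery item above concrete, here is a minimal sketch (not part of this proposal itself) of a component that probes the discovery API to decide whether to start a controller; the kubeconfig path and the `startAppController` helper are hypothetical.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func startAppController() { /* hypothetical controller startup */ }

func main() {
	// Build a client config from a kubeconfig file (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/component/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Start the controller only if the API it drives is actually served,
	// instead of gating it behind yet another configuration flag.
	if _, err := dc.ServerResourcesForGroupVersion("apps/v1"); err == nil {
		startAppController()
	} else {
		fmt.Println("apps/v1 not served; app controller stays disabled")
	}
}
```

The controller's availability is then derived from cluster state rather than from an operator-managed knob.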
|
||||
|
||||
## Part 2: Component Configuration
|
||||
|
||||
### Versioning Configuration
|
||||
|
||||
We create configuration API groups per component that live in the source tree of
|
||||
the component. Each component has its own API group for configuration.
|
||||
Components will use the same API machinery that we use for other API groups.
|
||||
Configuration API serialization doesn't have the same performance requirements
|
||||
as other APIs, so much of the codegen can be avoided (e.g. ugorji, generated
|
||||
conversions), and we can instead fall back to the reflection-based implementations
|
||||
where they exist.
|
||||
|
||||
Configuration API groups for component config should be named according to the
|
||||
scheme `<component>.config.k8s.io`. The `.config.k8s.io` suffix serves to
|
||||
disambiguate config API groups from served APIs.
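As a sketch of what this could look like with the usual API machinery (the type and field names here are hypothetical, not a published API; deepcopy code is normally generated):

```go
package v1beta3

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion names this component's config API group and version.
var SchemeGroupVersion = schema.GroupVersion{Group: "kubeproxy.config.k8s.io", Version: "v1beta3"}

// KubeProxyConfiguration is a versioned, serializable configuration object.
type KubeProxyConfiguration struct {
	metav1.TypeMeta `json:",inline"`

	IPTables IPTablesConfiguration `json:"ipTables"`
}

// IPTablesConfiguration groups the iptables-related options (see
// "Structuring Configuration" below).
type IPTablesConfiguration struct {
	SyncPeriod metav1.Duration `json:"syncPeriod"`
}

// DeepCopyObject is normally generated by deepcopy-gen; a hand-rolled
// version is shown here only to keep the sketch self-contained (a shallow
// copy suffices because this type has no pointer or slice fields).
func (c *KubeProxyConfiguration) DeepCopyObject() runtime.Object {
	out := *c
	return &out
}

// AddToScheme registers the config type like any other API type.
func AddToScheme(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion, &KubeProxyConfiguration{})
	return nil
}
```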
|
||||
|
||||
### Retrieving Configuration
|
||||
|
||||
The primary mechanism for retrieving static configuration should be
|
||||
deserialization from files. For the majority of components (with the possible
|
||||
exception of the kubelet, see
|
||||
[here](https://github.com/kubernetes/kubernetes/pull/29459)), these files will
|
||||
be sourced from the configmap API and managed by the kubelet. Reliability of
|
||||
this mechanism is predicated on kubelet checkpointing of pod dependencies.
|
||||
|
||||
|
||||
### Structuring Configuration
|
||||
|
||||
Group related options into distinct objects and subobjects. Instead of writing:
|
||||
|
||||
|
||||
```yaml
|
||||
kind: KubeProxyConfiguration
|
||||
apiVersion: kubeproxy.config.k8s.io/v1beta3
|
||||
ipTablesSyncPeriod: 2
|
||||
ipTablesConntrackHashSize: 2
|
||||
ipTablesConntrackTableSize: 2
|
||||
```
|
||||
|
||||
Write:
|
||||
|
||||
```yaml
|
||||
kind: KubeProxyConfiguration
|
||||
apiVersion: kubeproxy.config.k8s.io/v1beta3
|
||||
ipTables:
|
||||
syncPeriod: 2
|
||||
conntrack:
|
||||
hashSize: 2
|
||||
tableSize: 2
|
||||
```
|
||||
|
||||
We should avoid passing around full configuration options to deeply constructed
|
||||
modules. For example, instead of calling NewSomethingController in the
|
||||
controller-manager with the full controller-manager config, group relevant
|
||||
config into a subobject and only pass the subobject. We should expose the
|
||||
smallest possible necessary configuration to the SomethingController.
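As an illustration (the names are hypothetical), the constructor receives only the subobject it needs:

```go
package proxy

import "time"

// IPTablesConfiguration is the narrow slice of the component configuration
// that the syncer actually consumes.
type IPTablesConfiguration struct {
	SyncPeriod time.Duration
}

// IPTablesSyncer never sees the rest of the component configuration, which
// keeps its dependencies explicit and its tests small.
type IPTablesSyncer struct {
	syncPeriod time.Duration
}

func NewIPTablesSyncer(cfg IPTablesConfiguration) *IPTablesSyncer {
	return &IPTablesSyncer{syncPeriod: cfg.SyncPeriod}
}
```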
|
||||
|
||||
|
||||
### Handling Different Types Of Configuration
|
||||
|
||||
Above in "Type Of Configuration" we introduce a few ways to partition
|
||||
configuration options. Environment information, request-context-dependent
|
||||
configuration, feature gates, and static configuration should be avoided if at
|
||||
all possible by using a configuration alternative. We should maintain separate
|
||||
objects along these partitions and consider retrieving these configurations
|
||||
from separate sources (i.e. files). For example: kubeconfig (which falls into
|
||||
the bootstrapping category) should not be part of the main config option (nor
|
||||
should the filepath to the kubeconfig), per-instance config should be stored
|
||||
separately from shared config. This allows for composition and obviates the
|
||||
need for layering/templating solutions.
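A sketch of this composition, assuming two hypothetical files and structs (shared config and per-instance config read from separate sources and combined at load time):

```go
package config

import (
	"os"

	"sigs.k8s.io/yaml"
)

// SharedConfiguration is identical for every instance of the component.
type SharedConfiguration struct {
	ClusterDomain string `json:"clusterDomain"`
}

// InstanceConfiguration is unique to the instance (cf. --hostname-override).
type InstanceConfiguration struct {
	HostnameOverride string `json:"hostnameOverride"`
}

// Load composes configuration from two independent sources rather than
// layering or templating a single document.
func Load(sharedPath, instancePath string) (SharedConfiguration, InstanceConfiguration, error) {
	var shared SharedConfiguration
	var instance InstanceConfiguration

	data, err := os.ReadFile(sharedPath)
	if err != nil {
		return shared, instance, err
	}
	if err := yaml.Unmarshal(data, &shared); err != nil {
		return shared, instance, err
	}

	data, err = os.ReadFile(instancePath)
	if err != nil {
		return shared, instance, err
	}
	err = yaml.Unmarshal(data, &instance)
	return shared, instance, err
}
```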
|
||||
|
||||
|
||||
### In-Process Representation Of Configuration
|
||||
|
||||
We should separate structs for flags, serializable config, and runtime config.
|
||||
|
||||
1. Structs for flags should have enough information for the process at startup to retrieve its full configuration. Examples include: path to the kubeconfig, path to the configuration file, namespace and name of the configmap to use for configuration.
|
||||
1. Structs for serializable configuration: This struct contains the full set of options in a serializable form (e.g. to represent an ip address instead of `net.IP`, use `string`). This is the struct that is versioned and serialized to disk using API machinery.
|
||||
1. Structs for runtime: This struct holds data in the most appropriate format for execution. This field can hold non-serializable types (e.g. have a `kubeClient` field instead of a `kubeConfig` field, store ip addresses as `net.IP`).
|
||||
|
||||
The flag struct is transformed into the configuration struct which is
|
||||
transformed into the runtime struct.
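A minimal sketch of the three structs and the two transformations (names and fields hypothetical):

```go
package app

import (
	"fmt"
	"net"
	"os"

	"sigs.k8s.io/yaml"
)

// Options is the flag struct: just enough to locate the real configuration.
type Options struct {
	ConfigFile string // e.g. the value of a --config flag
}

// Configuration is the serializable struct: every field is a plain,
// versionable type (the bind address is a string here).
type Configuration struct {
	BindAddress string `json:"bindAddress"`
}

// Runtime is the runtime struct: fields are in the most useful form for
// execution (a parsed net.IP instead of a string).
type Runtime struct {
	BindAddress net.IP
}

// Options -> Configuration: deserialize the configuration file.
func (o Options) Load() (Configuration, error) {
	var c Configuration
	data, err := os.ReadFile(o.ConfigFile)
	if err != nil {
		return c, err
	}
	err = yaml.Unmarshal(data, &c)
	return c, err
}

// Configuration -> Runtime: validate and convert into runtime form.
func (c Configuration) Runtime() (Runtime, error) {
	ip := net.ParseIP(c.BindAddress)
	if ip == nil {
		return Runtime{}, fmt.Errorf("invalid bindAddress %q", c.BindAddress)
	}
	return Runtime{BindAddress: ip}, nil
}
```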
|
||||
|
||||
|
||||
### Migrating Away From Flags
|
||||
|
||||
Migrating to component configuration can happen incrementally (per component).
|
||||
By versioning each component's API group separately, we can allow each API
|
||||
group to advance to beta and GA independently. APIs should be approved by
|
||||
component owners and reviewers familiar with the component configuration
|
||||
conventions. We can incentivize operators to migrate away from flags by making
|
||||
new configuration options only available through the component configuration
|
||||
APIs.
|
||||
|
||||
# Caveats
|
||||
|
||||
What we propose here are guidelines, not laws, and as such we've favored completeness
|
||||
over consistency. There will thus be a need for exceptions.
|
||||
|
||||
1. Components (especially those that are not self-hosted, such as the kubelet) will require custom rollout strategies for new config.
|
||||
1. Pod checkpointing by the kubelet would make it simpler to make this strategy reliable.
|
||||
|
||||
|
||||
# Miscellaneous Considerations
|
||||
|
||||
1. **This document intentionally takes a very zealous stance against configuration.** Configuration alternatives that are possible in proprietary software are often not possible in Kubernetes, because Kubernetes has to run in diverse environments, with diverse users, managed by diverse operators.
|
||||
1. More frequent releases of kubernetes would make "skipping the config knob" more enticing, because fixing a bad guess at a const wouldn't take O(4 months) best case to roll out. Factoring in our support for old versions, it takes closer to a year.
|
||||
1. Self-hosting resolves much of the distribution issue (except for maybe the Kubelet) but reliability is predicated on to-be-implemented features such as kubelet checkpointing of pod dependencies and sound operational practices such as incremental rollout of new configuration using Deployments/DaemonSets.
|
||||
1. Validating config is hard. Fatal logs lead to crash loops and error logs are ignored. Both options are suboptimal.
|
||||
1. Configuration needs to be updatable when components are down.
|
||||
1. Naming style guide:
|
||||
1. No negatives, e.g. prefer --enable-foo over --disable-foo
|
||||
1. Use the active voice
|
||||
1. We should actually enforce deprecation. Can we have a test that fails when a comment exists beyond its deadline to be removed? See [#44248](https://github.com/kubernetes/kubernetes/issues/44248)
|
||||
1. Use different implementations of the same interface rather than if statements to toggle features. This makes deprecation and deletion easy, improving maintainability. (A sketch follows this list.)
|
||||
1. How does the proposed solution meet the requirements? Which desired qualities are missed?
|
||||
1. Configuration changes should trigger predictable and reproducible actions. From a given system state and a given component configuration, we should be able to simulate the actions that the system will take.
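A sketch of the interface-over-if-statements point above (all names hypothetical):

```go
package featureexample

// Syncer abstracts the behavior behind a feature toggle.
type Syncer interface {
	Sync() error
}

type realSyncer struct{}

func (realSyncer) Sync() error {
	// ... do the actual work ...
	return nil
}

// noopSyncer is the "feature off" implementation.
type noopSyncer struct{}

func (noopSyncer) Sync() error { return nil }

// NewSyncer picks an implementation once, at construction time, so the
// toggle never leaks if-statements into the rest of the code. Removing the
// feature later means deleting one implementation, not hunting branches.
func NewSyncer(enabled bool) Syncer {
	if enabled {
		return realSyncer{}
	}
	return noopSyncer{}
}
```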
|
|
@ -0,0 +1,216 @@
|
|||
# Conformance Testing in Kubernetes
|
||||
|
||||
The Kubernetes Conformance test suite is a subset of e2e tests that SIG
|
||||
Architecture has approved to define the core set of interoperable features that
|
||||
all conformant Kubernetes clusters must support. The tests verify that the
|
||||
expected behavior works as a user might encounter it in the wild.
|
||||
|
||||
The process to add new conformance tests is intended to decouple the development
|
||||
of useful tests from their promotion to conformance:
|
||||
- Contributors write and submit e2e tests, to be approved by owning SIGs
|
||||
- Tests are proven to meet the [conformance test requirements] by review
|
||||
and by accumulation of data on flakiness and reliability
|
||||
- A follow up PR is submitted to [promote the test to conformance](#promoting-tests-to-conformance)
|
||||
|
||||
NB: This should be viewed as a living document in a few key areas:
|
||||
- The desired set of conformant behaviors is not adequately expressed by the
|
||||
current set of e2e tests; as such, this document is currently intended to
|
||||
guide us in the addition of new e2e tests that can fill this gap
|
||||
- This document currently focuses solely on the requirements for GA,
|
||||
non-optional features or APIs. The list of requirements will be refined over
|
||||
time to the point where it is as concrete and complete as possible.
|
||||
- There are currently conformance tests that violate some of the requirements
|
||||
(e.g., they require privileged access); we will be categorizing these tests and
|
||||
deciding what to do once we have a better understanding of the situation
|
||||
- Once we resolve the above issues, we plan on identifying the appropriate areas
|
||||
to relax requirements to allow for the concept of conformance Profiles that
|
||||
cover optional or additional behaviors
|
||||
|
||||
## Conformance Test Requirements
|
||||
|
||||
Conformance tests currently test only GA, non-optional features or APIs. More
|
||||
specifically, a test is eligible for promotion to conformance if:
|
||||
|
||||
- it tests only GA, non-optional features or APIs (e.g., no alpha or beta
|
||||
endpoints, no feature flags required, no deprecated features)
|
||||
- it works for all providers (e.g., no `SkipIfProviderIs`/`SkipUnlessProviderIs`
|
||||
calls)
|
||||
- it is non-privileged (e.g., does not require root on nodes, access to raw
|
||||
network interfaces, or cluster admin permissions)
|
||||
- it works without access to the public internet (short of whatever is required
|
||||
to pre-pull images for conformance tests)
|
||||
- it works without non-standard filesystem permissions granted to pods
|
||||
- it does not rely on any binaries that would not be required for the linux
|
||||
kernel or kubelet to run (e.g., can't rely on git)
|
||||
- any container images used within the test support all architectures for which
|
||||
kubernetes releases are built
|
||||
- it passes against the appropriate versions of kubernetes as spelled out in
|
||||
the [conformance test version skew policy]
|
||||
- it is stable and runs consistently (e.g., no flakes)
|
||||
|
||||
Examples of features which are not currently eligible for conformance tests:
|
||||
|
||||
- node/platform-reliant features, eg: multiple disk mounts, GPUs, high density,
|
||||
etc.
|
||||
- optional features, eg: policy enforcement
|
||||
- cloud-provider-specific features, eg: GCE monitoring, S3 Bucketing, etc.
|
||||
- anything that requires a non-default admission plugin
|
||||
|
||||
Examples of tests which are not eligible for promotion to conformance:
|
||||
- anything that checks specific Events are generated, as we make no guarantees
|
||||
about the contents of events, nor their delivery
|
||||
- anything that checks optional Condition fields, such as Reason or Message, as
|
||||
these may change over time (however it is reasonable to verify these fields
|
||||
exist or are non-empty)
|
||||
|
||||
Examples of areas we may want to relax these requirements once we have a
|
||||
sufficient corpus of tests that define out of the box functionality in all
|
||||
reasonable production worthy environments:
|
||||
- tests may need to create or set objects or fields that are alpha or beta that
|
||||
bypass policies that are not yet GA, but which may reasonably be enabled on a
|
||||
conformant cluster (e.g., pod security policy, non-GA scheduler annotations)
|
||||
|
||||
## Conformance Test Version Skew Policy
|
||||
|
||||
As each new release of Kubernetes provides new functionality, the subset of
|
||||
tests necessary to demonstrate conformance grows with each release. Conformance
|
||||
is thus considered versioned, with the same backwards compatibility guarantees
|
||||
as laid out in the [kubernetes versioning policy].
|
||||
|
||||
To quote:
|
||||
|
||||
> For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and
|
||||
> should work with v1.2, v1.3, and v1.4 clients.
|
||||
|
||||
Conformance tests for a given version should be run off of the release branch
|
||||
that corresponds to that version. Thus `v1.2` conformance tests would be run
|
||||
from the head of the `release-1.2` branch.
|
||||
|
||||
For example, suppose we're in the midst of developing kubernetes v1.3. Clusters
|
||||
with the following versions must pass conformance tests built from the
|
||||
following branches:
|
||||
|
||||
| cluster version | master | release-1.3 | release-1.2 | release-1.1 |
|
||||
| --------------- | ----- | ----------- | ----------- | ----------- |
|
||||
| v1.3.0-alpha | yes | yes | yes | no |
|
||||
| v1.2.x | no | no | yes | yes |
|
||||
| v1.1.x | no | no | no | yes |
|
||||
|
||||
## Running Conformance Tests
|
||||
|
||||
Conformance tests are designed to be run even when there is no cloud provider
|
||||
configured. Conformance tests must be able to be run against clusters that have
|
||||
not been created with `hack/e2e.go`; just provide a kubeconfig with the
|
||||
appropriate endpoint and credentials.
|
||||
|
||||
These commands are intended to be run within a kubernetes directory, either
|
||||
cloned from source, or extracted from release artifacts such as
|
||||
`kubernetes.tar.gz`. They assume you have a valid golang installation.
|
||||
|
||||
```sh
|
||||
# ensure kubetest is installed
|
||||
go get -u k8s.io/test-infra/kubetest
|
||||
|
||||
# build test binaries, ginkgo, and kubectl first:
|
||||
make WHAT="test/e2e/e2e.test vendor/github.com/onsi/ginkgo/ginkgo cmd/kubectl"
|
||||
|
||||
# setup for conformance tests
|
||||
export KUBECONFIG=/path/to/kubeconfig
|
||||
export KUBERNETES_CONFORMANCE_TEST=y
|
||||
|
||||
# Option A: run all conformance tests serially
|
||||
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]"
|
||||
|
||||
# Option B: run parallel conformance tests first, then serial conformance tests serially
|
||||
GINKGO_PARALLEL=y kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
|
||||
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]"
|
||||
```
|
||||
|
||||
## Kubernetes Conformance Document
|
||||
|
||||
For each Kubernetes release, a Conformance Document will be generated that lists
|
||||
all of the tests that comprise the conformance test suite, along with the formal
|
||||
specification of each test. For an example, see the [v1.9 conformance doc].
|
||||
This document will help people understand what features are being tested without
|
||||
having to look through the testcase's code directly.
|
||||
|
||||
|
||||
## Promoting Tests to Conformance
|
||||
|
||||
To promote a test to the conformance test suite, open a PR that:
|
||||
- is titled "Promote xxx e2e test to Conformance"
|
||||
- includes information and metadata in the description as follows:
|
||||
- "/area conformance" on a newline
|
||||
- "@kubernetes/sig-architecture-pr-reviews @kubernetes/sig-foo-pr-reviews
|
||||
@kubernetes/cncf-conformance-wg" on a new line, where sig-foo is whichever
|
||||
sig owns this test
|
||||
- any necessary information in the description to verify that the test meets
|
||||
[conformance test requirements], such as links to reports or dashboards that
|
||||
prove lack of flakiness
|
||||
- contains no other modifications to test source code other than the following:
|
||||
- modifies the testcase to use the `framework.ConformanceIt()` function rather
|
||||
than the `framework.It()` function
|
||||
- adds a comment immediately before the `ConformanceIt()` call that includes
|
||||
all of the required [conformance test comment metadata]
|
||||
- is added to SIG Architecture's [Conformance Test Review board]
|
||||
|
||||
|
||||
### Conformance Test Comment Metadata
|
||||
|
||||
Each conformance test must include the following piece of metadata
|
||||
within its associated comment:
|
||||
|
||||
- `Release`: indicates the Kubernetes release in which the test was added to the
|
||||
conformance test suite. If the test was modified in subsequent releases
|
||||
then those releases should be included as well (comma separated)
|
||||
- `Testname`: a human readable short name of the test
|
||||
- `Description`: a detailed description of the test. This field must describe
|
||||
the required behaviour of the Kubernetes components being tested using
|
||||
[RFC2119](https://tools.ietf.org/html/rfc2119) keywords. This field
|
||||
is meant to be a "specification" of the tested Kubernetes features, as
|
||||
such, it must be detailed enough so that readers can fully understand
|
||||
the aspects of Kubernetes that are being tested without having to read
|
||||
the test's code directly. Additionally, this description should provide a clear
|
||||
distinction between the parts of the test that are there for the purpose
|
||||
of validating Kubernetes rather than simply infrastructure logic that
|
||||
is necessary to set up, or clean up, the test.
|
||||
|
||||
### Sample Conformance Test
|
||||
|
||||
The following snippet of code shows a sample conformance test's metadata:
|
||||
|
||||
```
|
||||
/*
|
||||
Release : v1.9
|
||||
Testname: Kubelet: log output
|
||||
Description: By default the stdout and stderr from the process being
|
||||
executed in a pod MUST be sent to the pod's logs.
|
||||
*/
|
||||
framework.ConformanceIt("it should print the output to logs", func() {
|
||||
...
|
||||
})
|
||||
```
|
||||
|
||||
The corresponding portion of the Kubernetes Conformance Document for this test
|
||||
would then look like this:
|
||||
|
||||
> ## [Kubelet: log output](https://github.com/kubernetes/kubernetes/tree/release-1.9/test/e2e_node/kubelet_test.go#L47)
|
||||
>
|
||||
> Release : v1.9
|
||||
>
|
||||
> By default the stdout and stderr from the process being executed in a pod MUST be sent to the pod's logs.
|
||||
|
||||
### Reporting Conformance Test Results
|
||||
|
||||
Conformance test results, by provider and releases, can be viewed in the
|
||||
[testgrid conformance dashboard]. If you wish to contribute test results
|
||||
for your provider, please see the [testgrid conformance README].
|
||||
|
||||
[kubernetes versioning policy]: /contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew
|
||||
[Conformance Test Review board]: https://github.com/kubernetes-sigs/architecture-tracking/projects/1
|
||||
[conformance test requirements]: #conformance-test-requirements
|
||||
[conformance test comment metadata]: #conformance-test-comment-metadata
|
||||
[conformance test version skew policy]: #conformance-test-version-skew-policy
|
||||
[testgrid conformance dashboard]: https://testgrid.k8s.io/conformance-all
|
||||
[testgrid conformance README]: https://github.com/kubernetes/test-infra/blob/master/testgrid/conformance/README.md
|
||||
[v1.9 conformance doc]: https://github.com/cncf/k8s-conformance/blob/master/docs/KubeConformance-1.9.md
|
|
@ -0,0 +1,251 @@
|
|||
# Using godep to manage dependencies
|
||||
|
||||
This document is intended to show a way to manage `vendor/` tree dependencies
|
||||
in Kubernetes. If you do not need to manage vendored dependencies, you probably
|
||||
do not need to read this.
|
||||
|
||||
## Background
|
||||
|
||||
As a tool, `godep` leaves much to be desired. It builds on `go get`, and adds
|
||||
the ability to pin dependencies to an exact git version. The `go get` tool itself
|
||||
doesn't have any concept of versions, and tends to blow up if it finds a git
|
||||
repo synced to anything but `master`, but that is exactly the state that
|
||||
`godep` leaves repos in. This is a recipe for frustration when people try to use
|
||||
the tools.
|
||||
|
||||
This doc will focus on predictability and reproducibility.
|
||||
|
||||
## Justifications for an update
|
||||
|
||||
Before you update a dependency, take a moment to consider why it should be
|
||||
updated. Valid reasons include:
|
||||
1. We need new functionality that is in a later version.
|
||||
2. New or improved APIs in the dependency significantly improve Kubernetes code.
|
||||
3. Bugs were fixed that impact Kubernetes.
|
||||
4. Security issues were fixed even if they don't impact Kubernetes yet.
|
||||
5. Performance, scale, or efficiency was meaningfully improved.
|
||||
6. We need dependency A and there is a transitive dependency B.
|
||||
7. Kubernetes has an older level of a dependency that is precluding being able
|
||||
to work with other projects in the ecosystem.
|
||||
|
||||
## Theory of operation
|
||||
|
||||
The `go` toolchain assumes a global workspace that hosts all of your Go code.
|
||||
|
||||
The `godep` tool operates by first "restoring" dependencies into your `$GOPATH`.
|
||||
This reads the `Godeps.json` file, downloads all of the dependencies from the
|
||||
internet, and syncs them to the specified revisions. You can then make
|
||||
changes - sync to different revisions or edit Kubernetes code to use new
|
||||
dependencies (and satisfy them with `go get`). When ready, you tell `godep` to
|
||||
"save" everything, which it does by walking the Kubernetes code, finding all
|
||||
required dependencies, copying them from `$GOPATH` into the `vendor/` directory,
|
||||
and rewriting `Godeps.json`.
|
||||
|
||||
This does not work well when combined with a global Go workspace. Instead, we
|
||||
will set up a private workspace for this process.
|
||||
|
||||
The Kubernetes build process uses this same technique, and offers a tool called
|
||||
`run-in-gopath.sh` which sets up and switches to a local, private workspace,
|
||||
including setting up `$GOPATH` and `$PATH`. If you wrap commands with this
|
||||
tool, they will use the private workspace, which will not conflict with other
|
||||
projects and is easily cleaned up and recreated.
|
||||
|
||||
To see this in action, you can run an interactive shell in this environment:
|
||||
|
||||
```sh
|
||||
# Run a shell, but don't run your own shell initializations.
|
||||
hack/run-in-gopath.sh bash --norc --noprofile
|
||||
```
|
||||
|
||||
## Restoring deps
|
||||
|
||||
To extract and download dependencies into `$GOPATH` we provide a script:
|
||||
`hack/godep-restore.sh`. If you run this tool, it will restore into your own
|
||||
`$GOPATH`. If you wrap it in `run-in-gopath.sh` it will restore into your
|
||||
`_output/` directory.
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-restore.sh
|
||||
```
|
||||
|
||||
This script will try to optimize what it needs to download, and if it seems the
|
||||
dependencies are all present already, it will return very quickly.
|
||||
|
||||
If there's ever any doubt about the correctness of your dependencies, you can
|
||||
simply `make clean` or `rm -rf _output`, and run it again.
|
||||
|
||||
Now you should have a clean copy of all of the Kubernetes dependencies.
|
||||
|
||||
Downloading dependencies might take a while, so if you want to see progress
|
||||
information use the `-v` flag:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-restore.sh -v
|
||||
```
|
||||
|
||||
## Making changes
|
||||
|
||||
The most common things people need to do with deps are adding and updating them.
|
||||
These are similar but different.
|
||||
|
||||
### Adding a dep
|
||||
|
||||
For the sake of examples, consider that we have discovered a wonderful Go
|
||||
library at `example.com/go/frob`. The first thing you need to do is get that
|
||||
code into your workspace:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh go get -d example.com/go/frob
|
||||
```
|
||||
|
||||
This will fetch, but not compile (omit the `-d` if you want to compile it now),
|
||||
the library into your private `$GOPATH`. It will pull whatever the default
|
||||
revision of that library is, typically the `master` branch for git repositories.
|
||||
If this is not the revision you need, you can change it, for example to
|
||||
`v1.0.0`:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'git -C $GOPATH/src/example.com/go/frob checkout v1.0.0'
|
||||
```
|
||||
|
||||
Now that the code is present, you can start to use it in Kubernetes code.
|
||||
Because it is in your private workspace's `$GOPATH`, it might not be part of
|
||||
your own `$GOPATH`, so tools like `goimports` might not find it. This is an
|
||||
unfortunate side-effect of this process. You can either add the whole private
|
||||
workspace to your own `$GOPATH` or you can `go get` the library into your own
|
||||
`$GOPATH` until it is properly vendored into Kubernetes.
|
||||
|
||||
Another possible complication is a dep that uses `godep` itself. In that case,
|
||||
you need to restore its dependencies, too:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'cd $GOPATH/src/example.com/go/frob && godep restore'
|
||||
```
|
||||
|
||||
If the transitive deps collide with Kubernetes deps, you may have to manually
|
||||
resolve things. This is where the ability to run a shell in this environment
|
||||
comes in handy:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash --norc --noprofile
|
||||
```
|
||||
|
||||
### Updating a dep
|
||||
|
||||
Sometimes we already have a dep, but the version of it is wrong. Because of the
|
||||
way that `godep` and `go get` interact (badly) it's generally easiest to hit it
|
||||
with a big hammer:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh bash -c 'rm -rf $GOPATH/src/example.com/go/frob'
|
||||
hack/run-in-gopath.sh go get -d example.com/go/frob
|
||||
hack/run-in-gopath.sh bash -c 'git -C $GOPATH/src/example.com/go/frob checkout v2.0.0'
|
||||
```
|
||||
|
||||
This will remove the code, re-fetch it, and sync to your desired version.
|
||||
|
||||
### Removing a dep
|
||||
|
||||
This happens almost for free. If you edit Kubernetes code and remove the last
|
||||
use of a given dependency, you only need to restore and save the deps, and the
|
||||
`godep` tool will figure out that you don't need that dep any more.
|
||||
|
||||
## Saving deps
|
||||
|
||||
Now that you have made your changes - adding, updating, or removing the use of a
|
||||
dep - you need to rebuild the dependency database and make changes to the
|
||||
`vendor/` directory.
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/godep-save.sh
|
||||
```
|
||||
|
||||
This will run through all of the primary targets for the Kubernetes project,
|
||||
calculate which deps are needed, and rebuild the database. It will also
|
||||
regenerate other metadata files which the project needs, such as BUILD files and
|
||||
the LICENSE database.
|
||||
|
||||
Commit the changes before updating deps in staging repos.
|
||||
|
||||
## Saving deps in staging repos
|
||||
|
||||
Kubernetes stores some code in a directory called `staging` which is handled
|
||||
specially, and is not covered by the above. If you modified any code under
|
||||
staging, or if you changed a dependency of code under staging (even
|
||||
transitively), you'll also need to update deps there:
|
||||
|
||||
```sh
|
||||
./hack/update-staging-godeps.sh
|
||||
```
|
||||
|
||||
Then commit the changes generated by the above script.
|
||||
|
||||
## Commit messages
|
||||
|
||||
Terse messages like "Update foo.org/bar to 0.42" are problematic
|
||||
for maintainability. Please include in your commit message the
|
||||
detailed reason why the dependencies were modified.
|
||||
|
||||
Too commonly dependency changes have a ripple effect where something
|
||||
else breaks unexpectedly. The first instinct during issue triage
|
||||
is to revert a change. If the change was made to fix some other
|
||||
issue and that issue was not documented, then a revert simply
|
||||
continues the ripple by fixing one issue and reintroducing another
|
||||
which then needs to be refixed. This can needlessly span multiple days
|
||||
as CI results bubble in and subsequent patches fix and refix and
|
||||
rerefix issues. This may be avoided if the original modifications
|
||||
recorded the rationale for the change.
|
||||
|
||||
## Sanity checking
|
||||
|
||||
After all of this is done, `git status` should show you what files have been
|
||||
modified and added/removed. Make sure to sanity-check them with `git diff`, and
|
||||
to `git add` and `git rm` them, as needed. It is commonly advised to make one
|
||||
`git commit` which includes just the dependencies and Godeps files, and
|
||||
another `git commit` that includes changes to Kubernetes code to use (or stop
|
||||
using) the new/updated/removed dependency. These commits can go into a single
|
||||
pull request.
|
||||
|
||||
Before sending your PR, it's a good idea to sanity check that your
|
||||
`Godeps.json` file and the contents of `vendor/` are OK:
|
||||
|
||||
```sh
|
||||
hack/run-in-gopath.sh hack/verify-godeps.sh
|
||||
```
|
||||
|
||||
All this script will do is a restore, followed by a save, and then look for
|
||||
changes. If you followed the above instructions, it should be clean. If it is
|
||||
not, you get to figure out why.
|
||||
|
||||
## Manual updates
|
||||
|
||||
It is sometimes expedient to manually fix the `Godeps.json` file to
|
||||
minimize the changes. However, without great care this can lead to failures
|
||||
with the verifier scripts. The kubernetes codebase does "interesting things"
|
||||
with symlinks between `vendor/` and `staging/` to allow multiple Go import
|
||||
paths to coexist in the same git repo.
|
||||
|
||||
The verifiers, including `hack/verify-godeps.sh` *must* pass for every pull
|
||||
request.
|
||||
|
||||
## Reviewing and approving dependency changes
|
||||
|
||||
Particular attention to detail should be exercised when reviewing and approving
|
||||
PRs that add/remove/update dependencies. Importing a new dependency should bring
|
||||
a certain degree of value, as there is an ongoing overhead to maintaining
|
||||
dependencies into the future.
|
||||
|
||||
When importing a new dependency, be sure to keep an eye out for the following:
|
||||
- Is the dependency maintained?
|
||||
- Does the dependency bring value to the project? Could this be done without
|
||||
adding a new dependency?
|
||||
- Is the target dependency the original source, or a fork?
|
||||
- Is there already a dependency in the project that does something similar?
|
||||
- Does the dependency have a license that is compatible with the Kubernetes
|
||||
project?
|
||||
|
||||
All new dependency licenses should be reviewed by either Tim Hockin (@thockin)
|
||||
or the Steering Committee (@kubernetes/steering-committee) to ensure that they
|
||||
are compatible with the Kubernetes project license. It is also important to note
|
||||
and flag if a license has changed when updating a dependency, so that these can
|
||||
also be reviewed.
|
|
@ -0,0 +1,34 @@
|
|||
# Staging Directory and Publishing
|
||||
|
||||
The [staging/ directory](https://git.k8s.io/kubernetes/staging) of Kubernetes contains a number of pseudo repositories ("staging repos"). They are symlinked into Kubernetes' [vendor/ directory](https://git.k8s.io/kubernetes/vendor/k8s.io) for Golang to pick them up.
|
||||
|
||||
We publish the staging repos using the [publishing bot](https://git.k8s.io/publishing-bot). It essentially uses `git filter-branch` to [cut the staging directories into separate git trees](https://de.slideshare.net/sttts/cutting-the-kubernetes-monorepo-in-pieces-never-learnt-more-about-git) and push the new commits to the corresponding real repositories in the [kubernetes organization on GitHub](https://github.com/kubernetes).
|
||||
|
||||
The staging repositories and their published branches are listed in [publisher.go inside of the bot](https://git.k8s.io/publishing-bot/cmd/publishing-bot/publisher.go), though it is planned to move this list into the k8s.io/kubernetes repository.
|
||||
|
||||
At the time of this writing, this includes the branches
|
||||
|
||||
- master,
|
||||
- release-1.8 / release-5.0,
|
||||
- and release-1.9 / release-6.0
|
||||
|
||||
of the following staging repos in the k8s.io org:
|
||||
|
||||
- api
|
||||
- apiextensions-apiserver
|
||||
- apimachinery
|
||||
- apiserver
|
||||
- client-go
|
||||
- code-generator
|
||||
- kube-aggregator
|
||||
- metrics
|
||||
- sample-apiserver
|
||||
- sample-controller
|
||||
|
||||
Kubernetes tags (e.g., v1.9.1-beta1) are also applied automatically to the published repositories, prefixed with kubernetes- (e.g., kubernetes-1.9.1-beta1). The client-go semver tags (on client-go only!), including release notes, are still created manually.
|
||||
|
||||
The semver tags are still the (well tested) official releases. The kubernetes-1.x.y tags have limited test coverage (we have some automatic tests in place in the bot), but can be used by early adopters of client-go and the other libraries. Moreover, they help to vendor the correct version of k8s.io/api and k8s.io/apimachinery.
|
||||
|
||||
If further repos under staging are needed, adding them to the bot is easy. Contact one of the [owners of the bot](https://git.k8s.io/publishing-bot/OWNERS).
|
||||
|
||||
Currently, the bot is hosted on the CI cluster of Red Hat's OpenShift (ready to be moved to a public CNCF cluster if one becomes available in the future).
|
|
@ -1,34 +1,3 @@
|
|||
# Staging Directory and Publishing
|
||||
This file has moved to https://git.k8s.io/community/contributors/devel/sig-architecture/staging.md.
|
||||
|
||||
The [staging/ directory](https://git.k8s.io/kubernetes/staging) of Kubernetes contains a number of pseudo repositories ("staging repos"). They are symlinked into Kubernetes' [vendor/ directory](https://git.k8s.io/kubernetes/vendor/k8s.io) for Golang to pick them up.
|
||||
|
||||
We publish the staging repos using the [publishing bot](https://git.k8s.io/publishing-bot). It essentially uses `git filter-branch` to [cut the staging directories into separate git trees](https://de.slideshare.net/sttts/cutting-the-kubernetes-monorepo-in-pieces-never-learnt-more-about-git) and push the new commits to the corresponding real repositories in the [kubernetes organization on GitHub](https://github.com/kubernetes).
|
||||
|
||||
The staging repositories and their published branches are listed in [publisher.go inside of the bot](https://git.k8s.io/publishing-bot/cmd/publishing-bot/publisher.go), though it is planned to move this list into the k8s.io/kubernetes repository.
|
||||
|
||||
At the time of this writing, this includes the branches
|
||||
|
||||
- master,
|
||||
- release-1.8 / release-5.0,
|
||||
- and release-1.9 / release-6.0
|
||||
|
||||
of the following staging repos in the k8s.io org:
|
||||
|
||||
- api
|
||||
- apiextensions-apiserver
|
||||
- apimachinery
|
||||
- apiserver
|
||||
- client-go
|
||||
- code-generator
|
||||
- kube-aggregator
|
||||
- metrics
|
||||
- sample-apiserver
|
||||
- sample-controller
|
||||
|
||||
Kubernetes tags (e.g., v1.9.1-beta1) are also applied automatically to the published repositories, prefixed with kubernetes- (e.g., kubernetes-1.9.1-beta1). The client-go semver tags (on client-go only!), including release notes, are still created manually.
|
||||
|
||||
The semver tags are still the (well tested) official releases. The kubernetes-1.x.y tags have limited test coverage (we have some automatic tests in place in the bot), but can be used by early adopters of client-go and the other libraries. Moreover, they help to vendor the correct version of k8s.io/api and k8s.io/apimachinery.
|
||||
|
||||
If further repos under staging are needed, adding them to the bot is easy. Contact one of the [owners of the bot](https://git.k8s.io/publishing-bot/OWNERS).
|
||||
|
||||
Currently, the bot is hosted on the CI cluster of Red Hat's OpenShift (ready to be moved to a public CNCF cluster if one becomes available in the future).
|
||||
This file is a placeholder to preserve links. Please remove by April 24, 2019 or the release of kubernetes 1.13, whichever comes first.
|
|
@ -225,7 +225,7 @@ The location of the test code varies with type, as do the specifics of the envir
|
|||
* Unit: These confirm that a particular function behaves as intended. Golang includes a native ability for unit testing via the [testing](https://golang.org/pkg/testing/) package. Unit test source code can be found adjacent to the corresponding source code within a given package. For example: functions defined in [kubernetes/cmd/kubeadm/app/util/version.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version.go) will have unit tests in [kubernetes/cmd/kubeadm/app/util/version_test.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version_test.go). These are easily run locally by any developer on any OS.
|
||||
* Integration: These tests cover interactions of package components or interactions between kubernetes components and some other non-kubernetes system resource (eg: etcd). An example would be testing whether a piece of code can correctly store data to or retrieve data from etcd. Integration tests are stored in [kubernetes/test/integration/](https://git.k8s.io/kubernetes/test/integration). Running these can require the developer set up additional functionality on their development system.
|
||||
* End-to-end ("e2e"): These are broad tests of overall system behavior and coherence. These are more complicated as they require a functional kubernetes cluster built from the sources to be tested. A separate [document detailing e2e testing](/contributors/devel/sig-testing/e2e-tests.md) and test cases themselves can be found in [kubernetes/test/e2e/](https://git.k8s.io/kubernetes/test/e2e).
|
||||
* Conformance: These are a set of testcases, currently a subset of the integration/e2e tests, that the Architecture SIG has approved to define the core set of interoperable features that all Kubernetes deployments must support. For more information on Conformance tests please see the [Conformance Testing](/contributors/devel/conformance-tests.md) Document.
|
||||
* Conformance: These are a set of testcases, currently a subset of the integration/e2e tests, that the Architecture SIG has approved to define the core set of interoperable features that all Kubernetes deployments must support. For more information on Conformance tests please see the [Conformance Testing](/contributors/devel/sig-architecture/conformance-tests.md) Document.
|
||||
|
||||
Continuous integration will run these tests either as pre-submits on PRs, post-submits against master/release branches, or both.
|
||||
The results appear on [testgrid](https://testgrid.k8s.io).
|
||||
|
|
|
@ -55,7 +55,7 @@ the name of the directory in which the .go file exists.
|
|||
sync.Mutex`). When multiple locks are present, give each lock a distinct name
|
||||
following Go conventions - `stateLock`, `mapLock` etc.
|
||||
|
||||
- [API changes](/contributors/devel/api_changes.md)
|
||||
- [API changes](/contributors/devel/sig-architecture/api_changes.md)
|
||||
|
||||
- [API conventions](/contributors/devel/api-conventions.md)
|
||||
|
||||
|
@ -119,7 +119,7 @@ respectively. Actual application examples belong in /examples.
|
|||
|
||||
- Go code for normal third-party dependencies is managed using
|
||||
[Godep](https://github.com/tools/godep) and is described in the kubernetes
|
||||
[godep guide](/contributors/devel/godep.md)
|
||||
[godep guide](/contributors/devel/sig-architecture/godep.md)
|
||||
|
||||
- Other third-party code belongs in `/third_party`
|
||||
- forked third party Go code goes in `/third_party/forked`
|
||||
|
|
|
@ -100,7 +100,7 @@ Establishing and documenting conventions for system and user-facing APIs, define
|
|||
|
||||
* [Kubernetes Design and Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md)
|
||||
* [Design principles](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/principles.md)
|
||||
* [API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md)
|
||||
* [API conventions](/contributors/devel/sig-architecture/api-conventions.md)
|
||||
* [API Review process](https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md)
|
||||
* [Deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/)
|
||||
|
||||
|
@ -111,7 +111,7 @@ Please see the [Design documentation](https://github.com/kubernetes-sigs/archite
|
|||
Reviewing, approving, and driving changes to the conformance test suite; reviewing, guiding, and creating new conformance profiles
|
||||
|
||||
* [Conformance Tests](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/conformance.txt)
|
||||
* [Test Guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md)
|
||||
* [Test Guidelines](/contributors/devel/sig-architecture/conformance-tests.md)
|
||||
|
||||
Please see the [Conformance Test Review](https://github.com/kubernetes-sigs/architecture-tracking/projects/1) tracking board to follow the work for this sub-project. Please reach out to folks in the [OWNERS](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/OWNERS) file if you are interested in joining this effort. There is a lot of overlap with the [Kubernetes Software Conformance Working Group](https://github.com/cncf/k8s-conformance/blob/master/README-WG.md) with this sub project as well. The github group [cncf-conformance-wg](https://github.com/orgs/kubernetes/teams/cncf-conformance-wg) enumerates the folks on this working group. Look for the `area/conformance` label in the kubernetes repositories to mark [issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+label%3Aarea%2Fconformance) and [PRs](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+label%3Aarea%2Fconformance)
|
||||
|
||||
|
|
|
@ -4,7 +4,7 @@
|
|||
|
||||
# Process Overview and Motivations
|
||||
|
||||
Due to the importance of preserving usability and consistency in Kubernetes APIs, all changes and additions require expert oversight. The API review process is intended to maintain logical and functional integrity of the API over time, the consistency of user experience and the ability of previously written tools to function with new APIs. Wherever possible, the API review process should help change submitters follow [established conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md), and not simply reject without cause.
|
||||
Due to the importance of preserving usability and consistency in Kubernetes APIs, all changes and additions require expert oversight. The API review process is intended to maintain logical and functional integrity of the API over time, the consistency of user experience and the ability of previously written tools to function with new APIs. Wherever possible, the API review process should help change submitters follow [established conventions](/contributors/devel/sig-architecture/api-conventions.md), and not simply reject without cause.
|
||||
|
||||
Because expert reviewer bandwidth is extremely limited, the process provides a curated backlog with highest priority issues at the top. While this does mean some changes may be delayed in favor of other higher priority ones, this will help maintain critical project velocity, transparency, and equilibrium. Ideally, those whose API review priority is shifted in a release-impacting way will be proactively notified by the reviewers.
|
||||
|
||||
|
@ -227,9 +227,9 @@ Aspiring reviewers should reach out the moderator on slack. The moderator will
|
|||
|
||||
* [Updating-docs-for-feature-changes.md](https://git.k8s.io/sig-release/release-team/role-handbooks/documentation-guides/updating-docs-for-feature-changes.md#when-making-api-changes)
|
||||
|
||||
* [https://github.com/kubernetes/community/blob/be9eeca6ee3becfa5b4c96bedf62b5b3ff5b1f8d/contributors/devel/api_changes.md](https://github.com/kubernetes/community/blob/be9eeca6ee3becfa5b4c96bedf62b5b3ff5b1f8d/contributors/devel/api_changes.md)
|
||||
* [Api_changes.md](/contributors/devel/sig-architecture/api_changes.md)
|
||||
|
||||
* [https://github.com/kubernetes/community/blob/be9eeca6ee3becfa5b4c96bedf62b5b3ff5b1f8d/contributors/devel/api-conventions.md](https://github.com/kubernetes/community/blob/be9eeca6ee3becfa5b4c96bedf62b5b3ff5b1f8d/contributors/devel/api-conventions.md)
|
||||
* [Api-conventions.md](/contributors/devel/sig-architecture/api-conventions.md)
|
||||
|
||||
* [Pull-requests.md](https://github.com/kubernetes/community/blob/a74d906f0121c78114d79a3ac105aa2d36e24b57/contributors/devel/pull-requests.md#2-smaller-is-better-small-commits-small-prs) - should be updated to specifically call out API changes as important
|
||||
|
||||
|
|