Use git.k8s.io for links

Christoph Blecker 2017-12-21 17:53:39 -08:00
parent ce3044d912
commit 95a4a105cd
No known key found for this signature in database
GPG Key ID: B34A59A9D39F838B
103 changed files with 298 additions and 298 deletions


@ -1,7 +1,7 @@
# Files that should be ignored by tools which do not want to consider generated
# code.
#
# https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/size.go
# https://git.k8s.io/contrib/mungegithub/mungers/size.go
#
# This file is a series of lines, each of the form:
# <type> <name>


@ -1,6 +1,6 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
- If this is your first contribution, read our Getting Started guide https://github.com/kubernetes/community#your-first-contribution
- If you are editing SIG information, please follow these instructions: https://github.com/kubernetes/community/tree/master/generator
- If you are editing SIG information, please follow these instructions: https://git.k8s.io/community/generator
You will need to follow these steps:
1. Edit sigs.yaml with your change
2. Generate docs with `make generate`. To build docs for one sig, run `make WHAT=sig-apps generate`

CLA.md

@ -75,6 +75,6 @@ Someone from the CNCF will respond to your ticket to help.
[Corporation signup]: https://identity.linuxfoundation.org/node/285/organization-signup
[Individual signup]: https://identity.linuxfoundation.org/projects/cncf
[git email]: https://help.github.com/articles/setting-your-email-in-git
[third_party]: https://github.com/kubernetes/kubernetes/tree/master/third_party
[vendor]: https://github.com/kubernetes/kubernetes/tree/master/vendor
[third_party]: https://git.k8s.io/kubernetes/third_party
[vendor]: https://git.k8s.io/kubernetes/vendor
[documentation]: https://help.github.com/articles/setting-your-commit-email-address-on-github/


@ -36,4 +36,4 @@ Edits in SIG sub-directories should follow any additional guidelines described
by the respective SIG leads in the sub-directory's `CONTRIBUTING` file
(e.g. [sig-cli/CONTRIBUTING](sig-cli/CONTRIBUTING.md)).
Attending a [SIG meeting](https://github.com/kubernetes/community/blob/master/sig-list.md) or posting on their mailing list might be prudent if you want to make extensive contributions.
Attending a [SIG meeting](/sig-list.md) or posting on their mailing list might be prudent if you want to make extensive contributions.


@ -78,8 +78,8 @@ If you want to work on a new idea of relatively small scope:
1. Submit a [pull request] containing a tested change.
[architecture]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md
[cmd]: https://github.com/kubernetes/kubernetes/tree/master/cmd
[architecture]: /contributors/design-proposals/architecture/architecture.md
[cmd]: https://git.k8s.io/kubernetes/cmd
[CLA]: CLA.md
[Collaboration Guide]: contributors/devel/collab.md
[Developer's Guide]: contributors/devel/development.md


@ -77,8 +77,8 @@ Kubernetes is the main focus of CloudNativeCon/KubeCon, held every spring in Eur
[blog]: http://blog.kubernetes.io
[calendar.google.com]: https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles
[CNCF code of conduct]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[communication]: https://github.com/kubernetes/community/blob/master/communication.md
[community meeting]: https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting
[communication]: /communication.md
[community meeting]: /communication.md#weekly-meeting
[events]: https://www.cncf.io/events/
[file an issue]: https://github.com/kubernetes/kubernetes/issues/new
[Google+]: https://plus.google.com/u/0/b/116512812300813784482/116512812300813784482
@ -89,7 +89,7 @@ Kubernetes is the main focus of CloudNativeCon/KubeCon, held every spring in Eur
[kubernetes-users]: https://groups.google.com/forum/#!forum/kubernetes-users
[kubernetes.slackarchive.io]: http://kubernetes.slackarchive.io
[kubernetes.slack.com]: http://kubernetes.slack.com
[Special Interest Group]: https://github.com/kubernetes/community/blob/master/README.md#SIGs
[Special Interest Group]: /README.md#SIGs
[slack.k8s.io]: http://slack.k8s.io
[Stack Overflow]: http://stackoverflow.com/questions/tagged/kubernetes
[timezone table]: https://www.google.com/search?q=1000+am+in+pst


@ -229,7 +229,7 @@ TODO: Determine if this role is outdated and needs to be redefined or merged int
- Primary reviewer for 20 substantial PRs
- Reviewed or merged at least 50 PRs
- Apply to [`kubernetes-maintainers`](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers), with:
- A [Champion](https://github.com/kubernetes/community/blob/master/incubator.md#faq) from the existing
- A [Champion](/incubator.md#faq) from the existing
kubernetes-maintainers members
- A Sponsor from Project Approvers
- Summary of contributions to the project


@ -1,5 +1,5 @@
This is an announcement for the 2017 Kubernetes Leadership Summit, which will occur on June 2nd, 2017 in San Jose, CA.
This event will be similar to the [Kubernetes Developer's Summit](https://github.com/kubernetes/community/blob/master/community/2016-events/developer-summit-2016/Kubernetes_Dev_Summit.md) in November
This event will be similar to the [Kubernetes Developer's Summit](/community/2016-events/developer-summit-2016/Kubernetes_Dev_Summit.md) in November
2016, but involving a smaller audience composed solely of leaders and influencers of the community. These leaders and
influencers include the SIG leads, release managers, and representatives from several companies, including (but not limited to)
Google, Red Hat, CoreOS, WeaveWorks, Deis, and Mirantis.


@ -39,7 +39,7 @@ Ideally, code would be divided along lines of ownership, by SIG and subproject,
Next up:
* [kubeadm](https://github.com/kubernetes/kubernetes/issues/35314)
* Federation
* [cloudprovider](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/README.md) and [cluster](https://github.com/kubernetes/kubernetes/blob/master/cluster/README.md)
* [cloudprovider](https://git.k8s.io/kubernetes/pkg/cloudprovider/README.md) and [cluster](https://git.k8s.io/kubernetes/cluster/README.md)
* Scheduler
* [Kubelet](https://github.com/kubernetes/kubernetes/issues/444)


@ -18,7 +18,7 @@ Times in CST
- 01:00 pm - Sessions resume
- 05:00 pm - Happy Hour onsite
View the complete session schedule [here](https://github.com/kubernetes/community/blob/master/community/2017-events/12-contributor-summit/schedule.png).
View the complete session schedule [here](/community/2017-events/12-contributor-summit/schedule.png).
Session schedule will be available onsite in print as well as on TVs.
## Where:
@ -41,7 +41,7 @@ Level 3, Meeting Room 4, Meeting Room 6a, Meeting Room 6b - Breakout session roo
*_Note-Your summit registration is NOT your KubeCon registration. Tickets are almost sold out for KubeCon, please purchase those separately from the [event page](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america/attend/register)._
## Invitations:
Selected attendees are Kubernetes upstream contributors including SIG members and lottery winners from a [members of standing](https://github.com/kubernetes/community/blob/master/community-membership.md) pool.
Selected attendees are Kubernetes upstream contributors including SIG members and lottery winners from a [members of standing](/community-membership.md) pool.
We realize that this is not the best approach now that our community is growing exponentially. In 2018, we will be unfolding a new strategy for bringing our contributors together so all voices can be heard.
Invites went out on October 19th. There are ten spots left. Please reach out to parispittman@google.com to claim one.


@ -6,7 +6,7 @@ Kubernetes Dashboard UX breakout session 12.5.17, led by Rahul Dhide ([rahuldhid
* Dashboard [User Types and Use Cases](https://docs.google.com/document/d/1urAlgRP7AbcdsOMQ_piQQ6O1XTDIum_LmOUe8xsC4pE/edit)
* [SIG UI weekly](https://github.com/kubernetes/community/tree/master/sig-ui)
* [SIG UI weekly](/sig-ui)
* **Notes**


@ -146,7 +146,7 @@ Questions:
* Vision is that writing KEPs, know from them what the roadmap is; can write the blog post based on the value articulated in the KEPs.
* Right now they are [buried in community repo](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/0000-kep-template.md) 3 levels deep: a lot of great feedback so far. Joe wouldn't object to making it more discoverable.
* Right now they are [buried in community repo](/contributors/design-proposals/architecture/0000-kep-template.md) 3 levels deep: a lot of great feedback so far. Joe wouldn't object to making it more discoverable.
* Kep.k8s.io for rendered versions?


@ -8,7 +8,7 @@ Three high-level topics:
Early shout-out for the [Common LISP client](https://github.com/brendandburns/cl-k8s) :)
Currently Java, Python, .Net, Javascript clients are all silver. Reference to the [badge and capabilities descriptions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
Currently Java, Python, .Net, Javascript clients are all silver. Reference to the [badge and capabilities descriptions](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
Python still outside the [clients org](https://github.com/kubernetes-client/) - should this move from incubator?
Lots of code in client libraries to handle limitations in OpenAPI 2, some of which is fixed in 3. What is required/should we move to OpenAPI 3?


@ -1,6 +1,6 @@
## Kubernetes Elections
This document will outline how to conduct a Kubernetes Steering Committee Election. See the [Steering Committee Charter](https://github.com/kubernetes/steering/blob/master/charter.md) for more information of how the committee decides when to have an election, the method, and the maximal representation.
This document will outline how to conduct a Kubernetes Steering Committee Election. See the [Steering Committee Charter](https://git.k8s.io/steering/charter.md) for more information of how the committee decides when to have an election, the method, and the maximal representation.
## Steering Committee chooses Election Deadlines and Officers


@ -44,7 +44,7 @@ that does not contain a discriminator.
|---|---|
| non-inlined non-discriminated union | Yes |
| non-inlined discriminated union | Yes |
| inlined union with [patchMergeKey](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#strategic-merge-patch) only | Yes |
| inlined union with [patchMergeKey](/contributors/devel/api-conventions.md#strategic-merge-patch) only | Yes |
| other inlined union | No |
For the inlined union with patchMergeKey, we move the tag to the parent struct's instead of


@ -21,7 +21,7 @@ This document proposes a detailed plan for bringing Webhooks to Beta. Highlights
* Versioned rather than Internal data sent on hook
* Ordering behavior within webhooks, and with other admission phases, is better defined
This plan is compatible with the [original design doc]( https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md).
This plan is compatible with the [original design doc](/contributors/design-proposals/api-machinery/admission_control_extension.md).
# Definitions
@ -391,12 +391,12 @@ Specific Use cases:
* Kubernetes static Admission Controllers
* Documented [here](https://kubernetes.io/docs/admin/admission-controllers/)
* Discussed [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
* Discussed [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
* All are highly reliable. Most are simple. No external deps.
* Many need update checks.
* Can be separated into mutation and validate phases.
* OpenShift static Admission Controllers
* Discussed [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
* Discussed [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
* Similar to Kubernetes ones.
* Istio, Case 1: Add Container to all Pods.
* Currently uses Initializer but can use Mutating Webhook.
@ -411,7 +411,7 @@ Specific Use cases:
* Simple, can be highly reliable and fast. No external deps.
* No current use case for updates.
Good further discussion of use cases [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
Good further discussion of use cases [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
## Details of Porting Admission Controllers


@ -4,66 +4,66 @@
| Topic | Link |
| ----- | ---- |
| Admission Control | https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control.md |
| Admission Control | https://git.k8s.io/community/contributors/design-proposals/api-machinery/admission_control.md |
## Introduction
An admission controller is a piece of code that intercepts requests to the Kubernetes API - think of it as middleware.
The API server lets you have a whole chain of them. Each is run in sequence before a request is accepted
into the cluster. If any of the plugins in the sequence rejects the request, the entire request is rejected
immediately and an error is returned to the user.
Many features in Kubernetes require an admission control plugin to be enabled in order to properly support the feature.
In fact in the [documentation](https://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) you will find
a recommended set of them to use.
At the moment admission controllers are implemented as plugins and they have to be compiled into the
final binary in order to be used at a later time. Some even require access to a cache, an authorizer, etc.
This is where an admission plugin initializer kicks in. An admission plugin initializer is used to pass additional
configuration and runtime references to a cache, a client and an authorizer.
To streamline the process of adding new plugins especially for aggregated API servers we would like to build some plugins
into the generic API server library and provide a plugin initializer. While anyone can author and register one, having a known set of
provided references lets people focus on what they need their admission plugin to do instead of paying attention to wiring.
## Implementation
The first step would involve creating a "standard" plugin initializer that would be part of the
generic API server. It would use kubeconfig to populate
[external clients](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/admission/initializer.go#L29)
and [external informers](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/admission/initializer.go#L35).
[external clients](https://git.k8s.io/kubernetes/pkg/kubeapiserver/admission/initializer.go#L29)
and [external informers](https://git.k8s.io/kubernetes/pkg/kubeapiserver/admission/initializer.go#L35).
By default, for servers that would be run on the Kubernetes cluster, the in-cluster config would be used.
The standard initializer would also provide a client config for connecting to the core kube-apiserver.
Some API servers might be started as static pods, which don't have in-cluster configs.
In that case the config could be easily populated from the file.
The second step would be to move some plugins from [admission pkg](https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/admission)
The second step would be to move some plugins from [admission pkg](https://git.k8s.io/kubernetes/plugin/pkg/admission)
to the generic API server library. Some admission plugins are used to ensure consistent user expectations.
These plugins should be moved. One example is the Namespace Lifecycle plugin, which prevents users
from creating resources in non-existent namespaces.
*Note*:
For loading in-cluster configuration [visit](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
For loading the configuration directly from a file [visit](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go)
For loading in-cluster configuration [visit](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
For loading the configuration directly from a file [visit](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go)
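As a hedged illustration (not copied from the linked examples), a plugin initializer could obtain its client configuration roughly as follows; `newClient` and `kubeconfigPath` are illustrative names, not part of the proposal:

```go
package initializer

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient prefers the in-cluster config and falls back to an explicit
// kubeconfig file, e.g. for API servers started as static pods.
func newClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		cfg, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			return nil, err
		}
	}
	return kubernetes.NewForConfig(cfg)
}
```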
## How to add an admission plugin?
At this point adding an admission plugin is very simple and boils down to performing the
following series of steps:
1. Write an admission plugin
2. Register the plugin
3. Reference the plugin in the admission chain
## An example
The sample apiserver provides an example admission plugin that makes meaningful use of the "standard" plugin initializer.
The admission plugin ensures that a resource name is not on the list of banned names.
The source code of the plugin can be found [here](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go).
Having the plugin, the next step is the registration. [AdmissionOptions](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go)
provides two important things. Firstly it exposes [a register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go#L43)
under which all admission plugins are registered. In fact, that's exactly what the [Register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go#L33)
method does from our example admission plugin. It accepts a global registry as a parameter and then simply registers itself in that registry.
Secondly, it adds an admission chain to the server configuration via [ApplyTo](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go#L66) method.
The method accepts optional parameters in the form of `pluginInitializers`. This is useful when admission plugins need custom configuration that is not provided by the generic initializer.
The following code has been extracted from the sample server and illustrates how to register and wire an admission plugin:
@ -74,7 +74,7 @@ The following code has been extracted from the sample server and illustrates how
// create custom plugin initializer
informerFactory := informers.NewSharedInformerFactory(client, serverConfig.LoopbackClientConfig.Timeout)
admissionInitializer, _ := wardleinitializer.New(informerFactory)
// add admission chain to the server configuration
o.Admission.ApplyTo(serverConfig, admissionInitializer)
```
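For context, step 2 (registering the plugin) typically looks roughly like the sketch below. This is illustrative rather than copied from the linked sample: the plugin name "BanFlunder" mirrors the sample's banflunder plugin, and `New()` is assumed to be the plugin's constructor defined elsewhere in the same package.

```go
package banflunder

import (
	"io"

	"k8s.io/apiserver/pkg/admission"
)

// Register adds the plugin to the supplied admission plugin registry under a
// chosen name. New() is assumed to return the admission.Interface
// implementation that performs the actual check.
func Register(plugins *admission.Plugins) {
	plugins.Register("BanFlunder", func(config io.Reader) (admission.Interface, error) {
		return New(), nil
	})
}
```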


@ -132,7 +132,7 @@ the same time, we can introduce an additional etcd event type: EtcdResync
Thus, we need to create the EtcdResync event, extend watch.Interface and
its implementations to support it and handle those events appropriately
in places like
[Reflector](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/reflector.go)
[Reflector](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/reflector.go)
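A minimal sketch of what the proposed addition could look like; the constant name and value are hypothetical, while `watch.EventType` and the existing Added/Modified/Deleted/Error values come from the apimachinery watch package:

```go
package watchext

import "k8s.io/apimachinery/pkg/watch"

// EtcdResync is a hypothetical additional event type, alongside the existing
// watch.Added, watch.Modified, watch.Deleted and watch.Error values. A watcher
// would emit it after an etcd resync so that consumers such as the Reflector
// can relist instead of treating the stream as a plain incremental update.
const EtcdResync watch.EventType = "ETCD_RESYNC"
```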
However, this might turn out to be unnecessary optimization if apiserver
will always keep up (which is possible in the new design). We will work


@ -94,7 +94,7 @@ In the following, the second approach is described without a proxy. At which po
1. as one of the REST handlers (as in [#27087](https://github.com/kubernetes/kubernetes/pull/27087)),
2. as an admission controller.
The former approach (currently implemented) was picked over the other one, due to the need to be able to get information about both the user submitting the request and the impersonated user (and group), which is being overridden inside the [impersonation filter](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go). Additionally admission controller does not have access to the response and runs after authorization which will prevent logging failed authorization. All of that resulted in continuing the solution started in [#27087](https://github.com/kubernetes/kubernetes/pull/27087), which implements auditing as one of the REST handlers
The former approach (currently implemented) was picked over the other one, due to the need to be able to get information about both the user submitting the request and the impersonated user (and group), which is being overridden inside the [impersonation filter](https://git.k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go). Additionally admission controller does not have access to the response and runs after authorization which will prevent logging failed authorization. All of that resulted in continuing the solution started in [#27087](https://github.com/kubernetes/kubernetes/pull/27087), which implements auditing as one of the REST handlers
after authentication, but before impersonation and authorization.
## Proposed Design


@ -24,7 +24,7 @@ Development would be based on a generated client using OpenAPI and [swagger-code
### Client Capabilities
* Bronze Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Bronze-blue.svg?style=plastic&colorB=cd7f32&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Bronze Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Bronze-blue.svg?style=plastic&colorB=cd7f32&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support loading config from kube config file
@ -40,11 +40,11 @@ Development would be based on a generated client using OpenAPI and [swagger-code
* Works from within the cluster environment.
* Silver Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Silver-blue.svg?style=plastic&colorB=C0C0C0&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Silver Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Silver-blue.svg?style=plastic&colorB=C0C0C0&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support watch calls
* Gold Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Gold-blue.svg?style=plastic&colorB=FFD700&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Gold Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Gold-blue.svg?style=plastic&colorB=FFD700&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support exec, attach, port-forward calls (these are not normally supported out of the box from [swagger-codegen](https://github.com/swagger-api/swagger-codegen))
@ -54,11 +54,11 @@ Development would be based on a generated client using OpenAPI and [swagger-code
### Client Support Level
* Alpha [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-alpha-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Alpha [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-alpha-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Clients don't even have to meet bronze requirements
* Beta [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-beta-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Beta [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-beta-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Client at least meets bronze standards
@ -68,7 +68,7 @@ Development would be based on a generated client using OpenAPI and [swagger-code
* 2+ individual maintainers/owners of the repository
* Stable [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-stable-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Stable [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-stable-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Support level documented per-platform
@ -96,5 +96,5 @@ For each client language, well make a client-[lang]-base and client-[lang] re
# Support
These clients will be supported by the Kubernetes [API Machinery special interest group](https://github.com/kubernetes/community/tree/master/sig-api-machinery); however, individual owner(s) will be needed for each client language for them to be considered stable; the SIG won't be able to handle the support load otherwise. If the generated clients prove as easy to maintain as we hope, then a few individuals may be able to own multiple clients.
These clients will be supported by the Kubernetes [API Machinery special interest group](/sig-api-machinery); however, individual owner(s) will be needed for each client language for them to be considered stable; the SIG won't be able to handle the support load otherwise. If the generated clients prove as easy to maintain as we hope, then a few individuals may be able to own multiple clients.


@ -53,7 +53,7 @@ Each binary that generates events:
* Maintains a historical record of previously generated events:
* Implemented with
["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go)
in [`pkg/client/record/events_cache.go`](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/record/events_cache.go).
in [`pkg/client/record/events_cache.go`](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/record/events_cache.go).
* Implemented behind an `EventCorrelator` that manages two subcomponents:
`EventAggregator` and `EventLogger`.
* The `EventCorrelator` observes all incoming events and lets each
@ -98,7 +98,7 @@ of time and generates tons of unique events, the previously generated events
cache will not grow unchecked in memory. Instead, after 4096 unique events are
generated, the oldest events are evicted from the cache.
* When an event is generated, the previously generated events cache is checked
(see [`pkg/client/unversioned/record/event.go`](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/record/event.go)).
(see [`pkg/client/unversioned/record/event.go`](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/record/event.go)).
* If the key for the new event matches the key for a previously generated
event (meaning all of the above fields match between the new event and some
previously generated event), then the event is considered to be a duplicate and


@ -237,9 +237,9 @@ Service endpoints are found primarily via [DNS](https://kubernetes.io/docs/conce
### Add-ons and other dependencies
A number of components, called [*add-ons*](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons), typically run on Kubernetes
A number of components, called [*add-ons*](https://git.k8s.io/kubernetes/cluster/addons), typically run on Kubernetes
itself:
* [DNS](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns)
* [DNS](https://git.k8s.io/kubernetes/cluster/addons/dns)
* [Ingress controller](https://github.com/kubernetes/ingress-gce)
* [Heapster](https://github.com/kubernetes/heapster/) (resource monitoring)
* [Dashboard](https://github.com/kubernetes/dashboard/) (GUI)


@ -6,13 +6,13 @@ Support multi-fields merge key in Strategic Merge Patch.
## Background
Strategic Merge Patch is covered in this [doc](https://github.com/kubernetes/community/blob/master/contributors/devel/strategic-merge-patch.md).
Strategic Merge Patch is covered in this [doc](/contributors/devel/strategic-merge-patch.md).
In Strategic Merge Patch, we use Merge Key to identify the entries in the list of non-primitive types.
It must always be present and unique to perform the merge on the list of non-primitive types,
and will be preserved.
The merge key exists in the struct tag (e.g. in [types.go](https://github.com/kubernetes/kubernetes/blob/5a9759b0b41d5e9bbd90d5a8f3a4e0a6c0b23b47/pkg/api/v1/types.go#L2831))
and the [OpenAPI spec](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json).
and the [OpenAPI spec](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json).
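For illustration, a trimmed-down sketch of how such a merge key appears in a struct tag, modeled on the `containers` field of `PodSpec` (types reduced here to the relevant pieces):

```go
package example

// Container is reduced to the one field that matters for the merge key.
type Container struct {
	Name string `json:"name"`
}

// PodSpec (trimmed): the patchMergeKey tag tells Strategic Merge Patch to
// match entries in the containers list by their "name" when merging, rather
// than replacing the whole list.
type PodSpec struct {
	Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name"`
}
```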
## Motivation


@ -29,7 +29,7 @@ but we only focus on storage API calls here.
### Metric format and collection
Metrics emitted from the cloud provider will fall under the category of service metrics
as defined in [Kubernetes Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md).
as defined in [Kubernetes Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md).
The metrics will be emitted using [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and available for collection
@ -40,7 +40,7 @@ metrics on `/metrics` HTTP endpoint. This proposal merely extends available metr
Any collector which can parse Prometheus metric format should be able to collect
metrics from these endpoints.
A more detailed description of the monitoring pipeline can be found in the [Monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
A more detailed description of the monitoring pipeline can be found in the [Monitoring architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
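As a rough sketch of what such a service metric could look like with the Prometheus Go client (the metric name and label below are hypothetical, not ones defined by this proposal):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// cloudAPILatency is a hypothetical histogram tracking the duration of cloud
// provider API calls, labelled by request type, exposed on the component's
// existing /metrics endpoint.
var cloudAPILatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "cloudprovider_api_request_duration_seconds",
		Help: "Latency of cloud provider API calls, in seconds.",
	},
	[]string{"request"},
)

func init() {
	prometheus.MustRegister(cloudAPILatency)
	// A call site would then record each request, for example:
	// cloudAPILatency.WithLabelValues("create_disk").Observe(duration.Seconds())
}
```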
#### Metric Types


@ -221,7 +221,7 @@ Only one of `--discovery-file` or `--discovery-token` can be set. If more than
Our documentation (and output from `kubeadm`) should stress to users that the token, when configured for authentication and used for TLS bootstrap, is a pretty powerful credential, because any person with access to it can claim to be a node.
The highest risk of being able to claim a credential in the `system:nodes` group is that such a credential can read all Secrets in the cluster, which may compromise the cluster.
The [Node Authorizer](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
The [Node Authorizer](/contributors/design-proposals/node/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
guess a node's name, get such a credential, guess the name of the Secret and be able to get that.
Users should set a TTL on the token to limit the above mentioned risk. `kubeadm` sets a 24h TTL on the node bootstrap token by default in v1.8.


@ -140,7 +140,7 @@ Testing of this feature will occur in three parts.
* None at this time.
[0]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
[0]: /contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
[1]: https://github.com/kubernetes/community/pull/825
[2]: https://docs.google.com/document/d/1hhrCa_nv0Sg4O_zJYOnelE8a5ClieyewEsQM6c7-5-o/edit?ts=5988fba8#
[3]: https://docs.google.com/document/d/1qmK0Iq4fqxnd8COBFZHpip27fT-qSPkOgy1x2QqjYaQ/edit?ts=599b797c#


@ -3,7 +3,7 @@
> ***Please note: this proposal doesn't reflect final implementation, it's here for the purpose of capturing the original ideas.***
> ***You should probably [read `kubeadm` docs](http://kubernetes.io/docs/getting-started-guides/kubeadm/) to understand the end-result of this effort.***
Luke Marsden & many others in [SIG-cluster-lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle).
Luke Marsden & many others in [SIG-cluster-lifecycle](/sig-cluster-lifecycle).
17th August 2016


@ -100,4 +100,4 @@ Kubernetes self-hosted is working today. Bootkube is an implementation of the "t
- [Health check endpoints for components don't work correctly](https://github.com/kubernetes-incubator/bootkube/issues/64#issuecomment-228144345)
- [kubeadm does do self-hosted, but isn't tested yet](https://github.com/kubernetes/kubernetes/pull/40075)
- The Kubernetes [versioning policy](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.
- The Kubernetes [versioning policy](/contributors/design-proposals/release/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.


@ -28,22 +28,22 @@ This document proposes a design for the set of metrics included in an eventual C
### Definitions
"Kubelet": The daemon that runs on every kubernetes node and controls pod and container lifecycle, among many other things.
["cAdvisor":](https://github.com/google/cadvisor) An open source container monitoring solution which only monitors containers, and has no concept of kubernetes constructs like pods or volumes.
["Summary API":](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go) A kubelet API which currently exposes node metrics for use by both system components and monitoring systems.
["CRI":](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md) The Container Runtime Interface designed to provide an abstraction over runtimes (docker, rkt, etc).
"Core Metrics": A set of metrics described in the [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) whose purpose is to provide metrics for first-class resource isolation and utilization features, including [resource feasibility checking](https://github.com/eBay/Kubernetes/blob/master/docs/design/resources.md#the-resource-model) and node resource management.
["Summary API":](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go) A kubelet API which currently exposes node metrics for use by both system components and monitoring systems.
["CRI":](/contributors/devel/container-runtime-interface.md) The Container Runtime Interface designed to provide an abstraction over runtimes (docker, rkt, etc).
"Core Metrics": A set of metrics described in the [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) whose purpose is to provide metrics for first-class resource isolation and utilization features, including [resource feasibility checking](https://github.com/eBay/Kubernetes/blob/master/docs/design/resources.md#the-resource-model) and node resource management.
"Resource": A consumable element of a node (e.g. memory, disk space, CPU time, etc).
"First-class Resource": A resource critical for scheduling, whose requests and limits can be (or soon will be) set via the Pod/Container Spec.
"Metric": A measure of consumption of a Resource.
### Background
The [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal contains a blueprint for a set of metrics referred to as "Core Metrics". The purpose of this proposal is to specify what those metrics are, to enable work relating to the collection, by the kubelet, of the metrics.
The [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal contains a blueprint for a set of metrics referred to as "Core Metrics". The purpose of this proposal is to specify what those metrics are, to enable work relating to the collection, by the kubelet, of the metrics.
Kubernetes vendors cAdvisor into its codebase, and the kubelet uses cAdvisor as a library that enables it to collect metrics on containers. The kubelet can then combine container-level metrics from cAdvisor with the kubelet's knowledge of kubernetes constructs (e.g. pods) to produce the kubelet Summary statistics, which provides metrics for use by the kubelet, or by users through the [Summary API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go). cAdvisor works by collecting metrics at an interval (10 seconds, by default), and the kubelet then simply queries these cached metrics whenever it has a need for them.
Kubernetes vendors cAdvisor into its codebase, and the kubelet uses cAdvisor as a library that enables it to collect metrics on containers. The kubelet can then combine container-level metrics from cAdvisor with the kubelet's knowledge of kubernetes constructs (e.g. pods) to produce the kubelet Summary statistics, which provides metrics for use by the kubelet, or by users through the [Summary API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go). cAdvisor works by collecting metrics at an interval (10 seconds, by default), and the kubelet then simply queries these cached metrics whenever it has a need for them.
Currently, cAdvisor collects a large number of metrics related to system and container performance. However, only some of these metrics are consumed by the kubelet summary API, and many are not used. The kubelet [Summary API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go) is published to the kubelet summary API endpoint (stats/summary). Some of the metrics provided by the summary API are consumed by kubernetes system components, but many are included for the sole purpose of providing metrics for monitoring.
Currently, cAdvisor collects a large number of metrics related to system and container performance. However, only some of these metrics are consumed by the kubelet summary API, and many are not used. The kubelet [Summary API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go) is published to the kubelet summary API endpoint (stats/summary). Some of the metrics provided by the summary API are consumed by kubernetes system components, but many are included for the sole purpose of providing metrics for monitoring.
### Motivations
The [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal explains why a separate monitoring pipeline is required.
The [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal explains why a separate monitoring pipeline is required.
By publishing core metrics, the kubelet is relieved of its responsibility to provide metrics for monitoring.
The third party monitoring pipeline also is relieved of any responsibility to provide these metrics to system components.
@ -56,7 +56,7 @@ This proposal is to use this set of core metrics, collected by the kubelet, and
The target "Users" of this set of metrics are kubernetes components (though not necessarily directly). This set of metrics itself is not designed to be user-facing, but is designed to be general enough to support user-facing components.
### Non Goals
Everything covered in the [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) design doc will not be covered in this proposal. This includes the third party metrics pipeline, and the methods by which the metrics found in this proposal are provided to other kubernetes components.
Everything covered in the [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) design doc will not be covered in this proposal. This includes the third party metrics pipeline, and the methods by which the metrics found in this proposal are provided to other kubernetes components.
Integration with CRI will not be covered in this proposal. In future proposals, integrating with CRI may provide a better abstraction of information required by the core metrics pipeline to collect metrics.
@ -82,7 +82,7 @@ Metrics requirements for "First Class Resource Isolation and Utilization Feature
- Kubelet
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems and Memory
- Metrics Server (outlined in [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)), which exposes the [Resource Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md) to the following system components:
- Metrics Server (outlined in [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)), which exposes the [Resource Metrics API](/contributors/design-proposals/instrumentation/resource-metrics-api.md) to the following system components:
- Scheduler
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems, CPU, and Memory


@ -5,7 +5,7 @@ Resource Metrics API is an effort to provide a first-class Kubernetes API
(stable, versioned, discoverable, available through apiserver and with client support)
that serves resource usage metrics for pods and nodes. The use cases were discussed
and the API was proposed a while ago in
[another proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md).
[another proposal](/contributors/design-proposals/instrumentation/resource-metrics-api.md).
This document describes the architecture and the design of the second part of this effort:
making the mentioned API available in the same way as the other Kubernetes APIs.
@ -43,18 +43,18 @@ Previously metrics server was blocked on this dependency.
### Design ###
Metrics server will be implemented in line with
[Kubernetes monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)
[Kubernetes monitoring architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)
and inspired by [Heapster](https://github.com/kubernetes/heapster).
It will be a cluster level component which periodically scrapes metrics from all Kubernetes nodes
served by Kubelet through Summary API. Then metrics will be aggregated,
stored in memory (see Scalability limitations) and served in
[Metrics API](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1alpha1/types.go) format.
[Metrics API](https://git.k8s.io/metrics/pkg/apis/metrics/v1alpha1/types.go) format.
Metrics server will use apiserver library to implement http server functionality.
The library offers common Kubernetes functionality like authorization/authentication,
versioning, support for auto-generated client. To store data in memory we will replace
the default storage layer (etcd) by introducing in-memory store which will implement
[Storage interface](https://github.com/kubernetes/apiserver/blob/master/pkg/registry/rest/rest.go).
[Storage interface](https://git.k8s.io/apiserver/pkg/registry/rest/rest.go).
Only the most recent value of each metric will be remembered. If a user needs access
to historical data, they should either use a 3rd-party monitoring solution or
@ -71,13 +71,13 @@ due to security reasons (our policy allows only connection in the opposite direc
There will be only one instance of metrics server running in each cluster. In order to handle
high metrics volume, metrics server will be vertically autoscaled by
[addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer).
[addon-resizer](https://git.k8s.io/contrib/addon-resizer).
We will measure its resource usage characteristics. Our experience from profiling Heapster shows
that it scales vertically effectively. If we hit performance limits, we will consider scaling it
horizontally, though it's rather complicated and out of the scope of this doc.
Metrics server will be a Kubernetes addon, created by the kube-up script and managed by
[addon-manager](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager).
[addon-manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager).
Since there are a number of dependent components, it will be marked as a critical addon.
In the future, when the priority/preemption feature is introduced, we will migrate to that
mechanism for marking it as a high-priority system component.


@ -57,7 +57,7 @@ Basic ideas:
### REST call monitoring
We do measure REST call duration in the Density test, but we need API server monitoring as well, to avoid false failures caused, for example, by network traffic. We already have
some metrics in place (https://github.com/kubernetes/kubernetes/blob/master/pkg/apiserver/metrics/metrics.go), but we need to revisit the list and add some more.
some metrics in place (https://git.k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go), but we need to revisit the list and add some more.
Basic ideas:
- number of calls per verb, client, resource type


@ -73,7 +73,7 @@ This is a fairly long topic. If you're interested how to cross-compile, see [det
The easiest way of running Kubernetes on another architecture at the time of writing is probably by using the docker-multinode deployment. Of course, you may choose whatever deployment you want; the binaries are easily downloadable from the URL above.
[docker-multinode](https://github.com/kubernetes/kube-deploy/tree/master/docker-multinode) is intended to be a "kick-the-tires" multi-platform solution with Docker as the only real dependency (but it's not production ready)
[docker-multinode](https://git.k8s.io/kube-deploy/docker-multinode) is intended to be a "kick-the-tires" multi-platform solution with Docker as the only real dependency (but it's not production ready)
But when we (`sig-cluster-lifecycle`) have standardized the deployments to about three and made them production-ready, at least one deployment should support **all platforms**.
@ -377,7 +377,7 @@ In order to dynamically compile a go binary with `cgo`, we need `gcc` installed
The only Kubernetes binary that is using C code is the `kubelet`, or in fact `cAdvisor` on which `kubelet` depends. `hyperkube` is also dynamically linked as long as `kubelet` is. We should aim to make `kubelet` statically linked.
The normal `x86_64-linux-gnu` can't cross-compile binaries, so we have to install gcc cross-compilers for every platform. We do this in the [`kube-cross`](https://github.com/kubernetes/kubernetes/blob/master/build/build-image/cross/Dockerfile) image,
The normal `x86_64-linux-gnu` can't cross-compile binaries, so we have to install gcc cross-compilers for every platform. We do this in the [`kube-cross`](https://git.k8s.io/kubernetes/build/build-image/cross/Dockerfile) image,
and depend on the [`emdebian.org` repository](https://wiki.debian.org/CrossToolchains). Depending on `emdebian` isn't ideal, so we should consider using the latest `gcc` cross-compiler packages from the `ubuntu` main repositories in the future.
Here's an example of cross-compiling plain C code:


@ -63,7 +63,7 @@ use cases.
## API
This document defines the cluster registry API. It is an evolution of the
[current Federation cluster API](https://github.com/kubernetes/federation/blob/master/apis/federation/types.go#L99),
[current Federation cluster API](https://git.k8s.io/federation/apis/federation/types.go#L99),
and is designed more specifically for the "cluster registry" use case in
contrast to the Federation `Cluster` object, which was made for the
active-control-plane Federation.
@ -84,7 +84,7 @@ Optional API operations:
support WATCH for this API. Implementations can choose to support or not
support this operation. An implementation that does not support the
operation should return HTTP error 405, StatusMethodNotAllowed, per the
[relevant Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#error-codes).
[relevant Kubernetes API conventions](/contributors/devel/api-conventions.md#error-codes).
We also intend to support a use case where the server returns a file that can be
stored for later use. We expect this to be doable with the standard API
@ -92,7 +92,7 @@ machinery; and if the API is implemented not using the Kubernetes API machinery,
that the returned file must be interoperable with the response from a Kubernetes
API server.
[The API](https://github.com/kubernetes/cluster-registry/blob/master/pkg/apis/clusterregistry/v1alpha1/types.go)
[The API](https://git.k8s.io/cluster-registry/pkg/apis/clusterregistry/v1alpha1/types.go)
is defined in the cluster registry repo, and is not replicated here in order to
avoid mismatches.
@ -107,7 +107,7 @@ objects that contain a value for the `ClusterName` field. The `Cluster` object's
of namespace scoped.
The `Cluster` object will have `Spec` and `Status` fields, following the
[Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status).
[Kubernetes API conventions](/contributors/devel/api-conventions.md#spec-and-status).
There was an argument in favor of a `State` field instead of `Spec` and `Status`
fields, since the `Cluster` in the registry does not necessarily hold a user's
intent about the cluster being represented, but instead may hold descriptive
@ -141,7 +141,7 @@ extended appropriately.
The cluster registry API will not provide strongly-typed objects for returning
auth info. Instead, it will provide a generic type that clients can use as they
see fit. This is intended to mirror what `kubectl` does with its
[AuthProviderConfig](https://github.com/kubernetes/client-go/blob/master/tools/clientcmd/api/types.go#L144).
[AuthProviderConfig](https://git.k8s.io/client-go/tools/clientcmd/api/types.go#L144).
As open standards are developed for cluster auth, the API can be extended to
provide first-class support for these. We want to avoid baking non-open
standards into the API, and so having to support potentially a multiplicity of


@ -28,7 +28,7 @@ A simple example of a placement policy is
> compliance.
The [Kubernetes Cluster
Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md#policy-engine-and-migrationreplication-controllers)
Federation](/contributors/design-proposals/multicluster/federation.md#policy-engine-and-migrationreplication-controllers)
design proposal includes a pluggable policy engine component that decides how
applications/resources are placed across federated clusters.
@ -283,7 +283,7 @@ When the remediator component (in the sidecar) receives the notification it
sends a PATCH request to the federation-apiserver to update the affected
resource. This way, the actual rebalancing of ReplicaSets is still handled by
the [Rescheduling
Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federated-replicasets.md)
Algorithm](/contributors/design-proposals/multicluster/federated-replicasets.md)
in the Federated ReplicaSet controller.
The remediator component must be deployed with a kubeconfig for the


@ -34,7 +34,7 @@ Carrying forward the examples from above...
## Design
The proposed design uses a ClusterSelector annotation whose value is parsed into a struct that follows the same design as the [NodeSelector type used with nodeAffinity](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L1972); the [Matches function](https://github.com/kubernetes/apimachinery/blob/master/pkg/labels/selector.go#L172) of the apimachinery project is then used to determine whether an object should be sent on to federated clusters.
The proposed design uses a ClusterSelector annotation whose value is parsed into a struct that follows the same design as the [NodeSelector type used with nodeAffinity](https://git.k8s.io/kubernetes/pkg/api/types.go#L1972); the [Matches function](https://git.k8s.io/apimachinery/pkg/labels/selector.go#L172) of the apimachinery project is then used to determine whether an object should be sent on to federated clusters.
When an object is not to be forwarded to federated clusters, a delete API call is made instead, using the object definition; if the object does not exist there, the call is ignored.
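As a rough sketch of that matching step (the requirement values and wiring below are illustrative, not part of the proposal), a controller could build an apimachinery selector from a parsed requirement and test it against a cluster's labels:
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
)

func main() {
	// Hypothetical requirement parsed from a ClusterSelector annotation,
	// roughly equivalent to "environment in (pci)".
	req, err := labels.NewRequirement("environment", selection.In, []string{"pci"})
	if err != nil {
		panic(err)
	}
	selector := labels.NewSelector().Add(*req)

	// Labels reported by one member cluster of the federation (made up).
	clusterLabels := labels.Set{"environment": "pci", "region": "us-west"}

	if selector.Matches(clusterLabels) {
		fmt.Println("forward the object to this cluster")
	} else {
		fmt.Println("do not forward; issue a delete for the object instead")
	}
}
```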

View File

@ -10,7 +10,7 @@ Implementation Owner: @johnbelamaric
CoreDNS is another CNCF project and is the successor to SkyDNS, which kube-dns is based on. It is a flexible, extensible
authoritative DNS server and directly integrates with the Kubernetes API. It can serve as cluster DNS,
complying with the [dns spec](https://github.com/kubernetes/dns/blob/master/docs/specification.md).
complying with the [dns spec](https://git.k8s.io/dns/docs/specification.md).
CoreDNS has fewer moving parts than kube-dns, since it is a single executable and single process. It is written in Go so
it is memory-safe (kube-dns includes dnsmasq which is not). It supports a number of use cases that kube-dns does not
@ -80,7 +80,7 @@ of the lines within `{ }` represent individual plugins:
* `cache 30` enables [caching](https://coredns.io/plugins/cache/) of positive and negative responses for 30 seconds
* `health` opens an HTTP port to allow [health checks](https://coredns.io/plugins/health) from Kubernetes
* `prometheus` enables Prometheus [metrics](https://coredns.io/plugins/metrics)
* `kubernetes 10.0.0.0/8 cluster.local` connects to the Kubernetes API and [serves records](https://coredns.io/plugins/kubernetes/) for the `cluster.local` domain and reverse DNS for 10.0.0.0/8 per the [spec](https://github.com/kubernetes/dns/blob/master/docs/specification.md)
* `kubernetes 10.0.0.0/8 cluster.local` connects to the Kubernetes API and [serves records](https://coredns.io/plugins/kubernetes/) for the `cluster.local` domain and reverse DNS for 10.0.0.0/8 per the [spec](https://git.k8s.io/dns/docs/specification.md)
* `proxy . /etc/resolv.conf` [forwards](https://coredns.io/plugins/proxy) any queries not handled by other plugins (the `.` means the root domain) to the nameservers configured in `/etc/resolv.conf`
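Assembled into a server block, the plugins above might look roughly like this (the `.:53` header and the ordering are illustrative):
```
.:53 {
    health
    prometheus
    kubernetes 10.0.0.0/8 cluster.local
    cache 30
    proxy . /etc/resolv.conf
}
```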
### Configuring Stub Domains

View File

@ -206,5 +206,5 @@ The follow configurations will result in an invalid Pod spec:
# References
* [Kubernetes DNS name specification](https://github.com/kubernetes/dns/blob/master/docs/specification.md)
* [Kubernetes DNS name specification](https://git.k8s.io/dns/docs/specification.md)
* [`/etc/resolv.conf manpage`](http://manpages.ubuntu.com/manpages/zesty/man5/resolv.conf.5.html)

View File

@ -83,7 +83,7 @@ From the summary API, they will flow to heapster and stackdriver.
- Performance/Utilization testing: impact on cAdvisor/kubelet resource usage. Impact on GPU performance when we collect metrics.
## Alternatives Rejected
Why collect GPU metrics in cAdvisor? Why not collect them in [device plugins](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md)? The path forward if we collected GPU metrics in device plugins is not clear and could take a long time to finalize.
Why collect GPU metrics in cAdvisor? Why not collect them in [device plugins](/contributors/design-proposals/resource-management/device-plugin.md)? The path forward if we collected GPU metrics in device plugins is not clear and could take a long time to finalize.
Here's a rough sketch of how things could work:

View File

@ -418,7 +418,7 @@ func (p *dynamicPolicy) RemoveContainer(s State, containerID string) error {
[cpuset-files]: http://man7.org/linux/man-pages/man7/cpuset.7.html#FILES
[ht]: http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html
[hwloc]: https://www.open-mpi.org/projects/hwloc
[node-allocatable]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md#phase-2---enforce-allocatable-on-pods
[node-allocatable]: /contributors/design-proposals/node/node-allocatable.md#phase-2---enforce-allocatable-on-pods
[procfs]: http://man7.org/linux/man-pages/man5/proc.5.html
[qos]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md
[qos]: /contributors/design-proposals/node/resource-qos.md
[topo]: http://github.com/intelsdi-x/swan/tree/master/pkg/isolation/topo

View File

@ -180,5 +180,5 @@ Future work could further limit a kubelet's API access:
Features that expand or modify the APIs or objects accessed by the kubelet will need to involve the node authorizer.
Known features in the design or development stages that might modify kubelet API access are:
* [Dynamic kubelet configuration](https://github.com/kubernetes/features/issues/281)
* [Local storage management](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/local-storage-overview.md)
* [Local storage management](/contributors/design-proposals/storage/local-storage-overview.md)
* [Bulk watch of secrets/configmaps](https://github.com/kubernetes/community/pull/443)

View File

@ -242,7 +242,7 @@ the `kubelet` will select a subsequent pod.
## Eviction Strategy
The `kubelet` will implement an eviction strategy oriented around
[Priority](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-api.md)
[Priority](/contributors/design-proposals/scheduling/pod-priority-api.md)
and pod usage relative to requests. It will target pods that are the lowest
Priority, and are the largest consumers of the starved resource relative to
their scheduling request.

View File

@ -135,7 +135,7 @@ The `kubelet` should associate node bootstrapping semantics to the configured
### Node allocatable
The proposal makes no changes to the definition as presented here:
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/node-allocatable.md
https://git.k8s.io/kubernetes/docs/proposals/node-allocatable.md
The node will report a set of allocatable compute resources defined as follows:

View File

@ -28,7 +28,7 @@ pod cache, we can further improve Kubelet's CPU usage by
need to inspect containers with no state changes.
***Don't we already have a [container runtime cache]
(https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/container/runtime_cache.go)?***
(https://git.k8s.io/kubernetes/pkg/kubelet/container/runtime_cache.go)?***
The runtime cache is an optimization that reduces the number of `GetPods()`
calls from the workers. However,

View File

@ -124,7 +124,7 @@ Some real-world examples for the use of sysctls:
- a containerized IPv6 routing daemon requires e.g. `/proc/sys/net/ipv6/conf/all/forwarding` and
`/proc/sys/net/ipv6/conf/all/accept_redirects` (compare
[docker#4717](https://github.com/docker/docker/issues/4717#issuecomment-98653017))
- the [nginx ingress controller in kubernetes/contrib](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
- the [nginx ingress controller in kubernetes/contrib](https://git.k8s.io/contrib/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
uses a privileged sidekick container to set `net.core.somaxconn` and `net.ipv4.ip_local_port_range`.
- a huge software-as-a-service provider uses shared memory (`kernel.shm*`) and message queues (`kernel.msg*`) to
communicate between containers of their web-serving pods, configuring up to 20 GB of shared memory.
@ -251,7 +251,7 @@ Issues:
## Design Alternatives and Considerations
- Each pod has its own network stack that is shared among its containers.
A privileged side-kick or init container (compare https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
A privileged side-kick or init container (compare https://git.k8s.io/contrib/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
is able to set `net.*` sysctls.
Clearly, this is completely uncontrolled by the kubelet, but is a usable work-around if privileged

View File

@ -730,7 +730,7 @@ coupling it with container images.
* [CRI Tracking Issue](https://issues.k8s.io/28789)
* [CRI: expose optional runtime features](https://issues.k8s.io/32803)
* [Resource QoS in
Kubernetes](https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md)
Kubernetes](https://git.k8s.io/kubernetes/docs/design/resource-qos.md)
* Related Features
* [#1615](https://issues.k8s.io/1615) - Shared PID Namespace across
containers in a pod

View File

@ -75,7 +75,7 @@ When scheduling a pending pod, scheduler tries to place the pod on a node that d
#### Important notes
- When ordering the pods from lowest to highest priority for considering which pod(s) to preempt, among pods with equal priority the pods are ordered by their [QoS class](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
- When ordering the pods from lowest to highest priority for considering which pod(s) to preempt, among pods with equal priority the pods are ordered by their [QoS class](/contributors/design-proposals/node/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
- Scheduler respects pods' disruption budget when considering them for preemption.
- Scheduler will try to minimize the number of preempted pods. As a result, it may preempt a pod while leaving lower priority pods running if preemption of those lower priority pods is not enough to schedule the pending pod while preemption of the higher priority pod(s) is enough to schedule the pending pod. For example, if node capacity is 10, and pending pod is priority 10 and requires 5 units of resource, and the running pods are {priority 0 request 3, priority 1 request 1, priority 2 request 5, priority 3 request 1}, scheduler will preempt the priority 2 pod only and leaves priority 1 and priority 0 running.
- Scheduler does not have knowledge of the actual resource usage of pods. It makes scheduling decisions based on the requested resources ("requests") of the pods, and when it considers a pod for preemption, it assumes the "requests" will be freed on the node.
@ -183,6 +183,6 @@ To solve the problem, the user might try running his web server as Guaranteed, b
# References
- [Controlled Rescheduling in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/rescheduling.md)
- [Controlled Rescheduling in Kubernetes](/contributors/design-proposals/scheduling/rescheduling.md)
- [Resource sharing architecture for batch and serving workloads in Kubernetes](https://docs.google.com/document/d/1-H2hnZap7gQivcSU-9j4ZrJ8wE_WwcfOkTeAGjzUyLA)
- [Design proposal for adding priority to Kubernetes API](https://github.com/kubernetes/community/pull/604/files)

View File

@ -233,7 +233,7 @@ absolutely needed. Changing priority classes has the following disadvantages:
### Priority and QoS classes
Kubernetes has [three QoS
classes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes)
classes](/contributors/design-proposals/node/resource-qos.md#qos-classes)
which are derived from request and limit of pods. Priority is introduced as an
independent concept; meaning that any QoS class may have any valid priority.
When a node is out of resources and pods needs to be preempted, we give

View File

@ -313,7 +313,7 @@ scheduler to not put more than one pod from S in the same zone, and thus by
definition it will not put more than one pod from S on the same node, assuming
each node is in one zone. This rule is more useful as PreferredDuringScheduling
anti-affinity, e.g. one might expect it to be common in
[Cluster Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md) clusters.)
[Cluster Federation](/contributors/design-proposals/multicluster/federation.md) clusters.)
* **Don't co-locate pods of this service with pods from service "evilService"**:
`{LabelSelector: selector that matches evilService's pods, TopologyKey: "node"}`
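Expressed in a pod spec, the `evilService` rule above might look roughly like the following (the label key and values are made up, and real clusters typically use a node-level topology key such as `kubernetes.io/hostname` rather than the shorthand "node"):
```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values: ["evil-service"]
      topologyKey: kubernetes.io/hostname
```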

View File

@ -2,7 +2,7 @@
There are three ways to add new scheduling rules (predicates and priority
functions) to Kubernetes: (1) by adding these rules to the scheduler and
recompiling, [described here](https://github.com/kubernetes/community/blob/master/contributors/devel/scheduler.md),
recompiling, [described here](/contributors/devel/scheduler.md),
(2) implementing your own scheduler process that runs instead of, or alongside
of, the standard Kubernetes scheduler, (3) implementing a "scheduler extender"
process that the standard Kubernetes scheduler calls out to as a final pass when

View File

@ -29,7 +29,7 @@ Kubernetes volume plugins are currently “in-tree” meaning they are linked, c
4. Volume plugins get full privileges of kubernetes components (kubelet and kube-controller-manager).
5. Plugin developers are forced to make plugin source code available, and can not choose to release just a binary.
The existing [Flex Volume](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files.
The existing [Flex Volume](/contributors/devel/flexvolume.md) plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files.
Additionally, it doesn't address another pain point of in-tree volume plugins: dependencies. Volume plugins tend to have many external requirements: dependencies on mount and filesystem tools, for example. These dependencies are assumed to be available on the underlying host OS, which often is not the case, and installing them requires direct machine access. There are efforts underway, for example https://github.com/kubernetes/community/pull/589, that hope to address this for in-tree volume plugins. But enabling volume plugins to be completely containerized will make dependency management much easier.
@ -56,7 +56,7 @@ The objective of this document is to document all the requirements for enabling
* Recommend deployment process for Kubernetes compatible, third-party CSI Volume drivers on a Kubernetes cluster.
## Non-Goals
* Replace [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)
* Replace [Flex Volume plugin](/contributors/devel/flexvolume.md)
* The Flex volume plugin exists as an exec based mechanism to create “out-of-tree” volume plugins.
* Because Flex drivers exist and depend on the Flex interface, it will continue to be supported with a stable API.
* The CSI Volume plugin will co-exist with Flex volume plugin.
@ -85,9 +85,9 @@ This document recommends a standard mechanism for deploying an arbitrary contain
Kubelet (responsible for mount and unmount) will communicate with an external “CSI volume driver” running on the same host machine (whether containerized or not) via a Unix Domain Socket.
CSI volume drivers should create a socket at the following path on the node machine: `/var/lib/kubelet/plugins/[SanitizedCSIDriverName]/csi.sock`. For alpha, kubelet will assume this is the location for the Unix Domain Socket to talk to the CSI volume driver. For the beta implementation, we can consider using the [Device Plugin Unix Domain Socket Registration](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md#unix-socket) mechanism to register the Unix Domain Socket with kubelet. This mechanism would need to be extended to support registration of both CSI volume drivers and device plugins independently.
CSI volume drivers should create a socket at the following path on the node machine: `/var/lib/kubelet/plugins/[SanitizedCSIDriverName]/csi.sock`. For alpha, kubelet will assume this is the location for the Unix Domain Socket to talk to the CSI volume driver. For the beta implementation, we can consider using the [Device Plugin Unix Domain Socket Registration](/contributors/design-proposals/resource-management/device-plugin.md#unix-socket) mechanism to register the Unix Domain Socket with kubelet. This mechanism would need to be extended to support registration of both CSI volume drivers and device plugins independently.
`Sanitized CSIDriverName` is the CSI driver name stripped of any dangerous characters so that it can be used as an annotation name. It can follow the same pattern that we use for [volume plugins](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strings/escape.go#L27). Driver names that are too long or otherwise unusable can be rejected, i.e. all components described in this document will report an error and won't talk to that CSI driver. The exact sanitization method is an implementation detail (a SHA in the worst case).
`Sanitized CSIDriverName` is the CSI driver name stripped of any dangerous characters so that it can be used as an annotation name. It can follow the same pattern that we use for [volume plugins](https://git.k8s.io/kubernetes/pkg/util/strings/escape.go#L27). Driver names that are too long or otherwise unusable can be rejected, i.e. all components described in this document will report an error and won't talk to that CSI driver. The exact sanitization method is an implementation detail (a SHA in the worst case).
Upon initialization of the external “CSI volume driver”, some external component must call the CSI method `GetNodeId` to get the mapping from Kubernetes Node names to CSI driver NodeID. It must then add the CSI driver NodeID to the `csi.volume.kubernetes.io/nodeid` annotation on the Kubernetes Node API object. The key of the annotation must be `csi.volume.kubernetes.io/nodeid`. The value of the annotation is a JSON blob, containing key/value pairs for each CSI driver.
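For illustration only (the driver name and node ID below are made up), the resulting annotation on the Node object might look like:
```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"com.example.csi-driver": "node-1-csi-id"}'
```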
@ -385,7 +385,7 @@ To deploy a containerized third-party CSI volume driver, it is recommended that
* This is the primary means of communication between Kubelet and the “CSI volume driver” container (gRPC over UDS).
* Have cluster admins deploy the above `StatefulSet` and `DaemonSet` to add support for the storage system in their Kubernetes cluster.
Alternatively, deployment could be simplified by having all components (including external-provisioner and external-attacher) in the same pod (DaemonSet). Doing so, however, would consume more resources, and require a leader election protocol (likely https://github.com/kubernetes/contrib/tree/master/election) in the `external-provisioner` and `external-attacher` components.
Alternatively, deployment could be simplified by having all components (including external-provisioner and external-attacher) in the same pod (DaemonSet). Doing so, however, would consume more resources, and require a leader election protocol (likely https://git.k8s.io/contrib/election) in the `external-provisioner` and `external-attacher` components.
### Example Walkthrough
@ -477,7 +477,7 @@ Because the kubelet would be responsible for fetching and passing the mount secr
### Extending PersistentVolume Object
Instead of creating a new `VolumeAttachment` object, another option we considered was extending the existing `PersistentVolume` object.
Instead of creating a new `VolumeAttachment` object, another option we considered was extending the existing `PersistentVolume` object.
`PersistentVolumeSpec` would be extended to include:
* List of nodes to attach the volume to (initially empty).
@ -485,4 +485,4 @@ Instead of creating a new `VolumeAttachment` object, another option we considere
`PersistentVolumeStatus` would be extended to include:
* List of nodes the volume was successfully attached to.
We dismissed this approach because having attach/detach triggered by the creation/deletion of an object is much easier to manage (for both external-attacher and Kubernetes) and more robust (fewer corner cases to worry about).
We dismissed this approach because having attach/detach triggered by the creation/deletion of an object is much easier to manage (for both external-attacher and Kubernetes) and more robust (fewer corner cases to worry about).

View File

@ -10,7 +10,7 @@ Beginning in version 1.8, the Kubernetes Storage SIG is putting a stop to accept
[CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) provides a single interface that storage vendors can implement in order for their storage solutions to work across many different container orchestrators, and volume plugins are out-of-tree by design. This is a large effort, the full implementation of CSI is several quarters away, and there is a need for an immediate solution for storage vendors to continue adding volume plugins.
[Flexvolume](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) is an in-tree plugin, available today, that can run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location on each node, manually restarting kubelet, and users' limited access to machines.
[Flexvolume](/contributors/devel/flexvolume.md) is an in-tree plugin, available today, that can run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location on each node, manually restarting kubelet, and users' limited access to machines.
An automated deployment technique is discussed in [Recommended Driver Deployment Method](#recommended-driver-deployment-method). The crucial change required to enable this method is allowing kubelet and controller manager to dynamically discover plugin changes.

View File

@ -17,7 +17,7 @@ higher than individual volume plugins.
### Metric format and collection
Volume metrics emitted will fall under the category of service metrics
as defined in [Kubernetes Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md).
as defined in [Kubernetes Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md).
The metrics will be emitted using [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and available for collection
@ -27,7 +27,7 @@ from `/metrics` HTTP endpoint of kubelet and controller-manager.
Any collector which can parse Prometheus metric format should be able to collect
metrics from these endpoints.
A more detailed description of the monitoring pipeline can be found in the [Monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
A more detailed description of the monitoring pipeline can be found in the [Monitoring architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
### Metric Types

View File

@ -70,7 +70,7 @@ Guide](http://kubernetes.io/docs/admin/).
Authorization applies to all HTTP requests on the main apiserver port.
This doc explains the available authorization implementations.
* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control.md))
* **Admission Control Plugins** ([admission_control](/contributors/design-proposals/api-machinery/admission_control.md))
## Building releases

View File

@ -495,7 +495,7 @@ The generators that create go code have a `--go-header-file` flag
which should be a file that contains the header that should be
included. This header is the copyright that should be present at the
top of the generated file and should be checked with the
[`repo-infra/verify/verify-boilerplate.sh`](https://github.com/kubernetes/repo-infra/blob/master/verify/verify-boilerplate.sh)
[`repo-infra/verify/verify-boilerplate.sh`](https://git.k8s.io/repo-infra/verify/verify-boilerplate.sh)
script at a later stage of the build.
To invoke these generators, you can run `make update`, which runs a bunch of
@ -829,7 +829,7 @@ The preferred approach adds an alpha field to the existing object, and ensures i
1. Add a feature gate to the API server to control enablement of the new field (and associated function):
In [staging/src/k8s.io/apiserver/pkg/features/kube_features.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
In [staging/src/k8s.io/apiserver/pkg/features/kube_features.go](https://git.k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
```go
// owner: @you

View File

@ -252,7 +252,7 @@ Kubernetes cannot function without this basic API machinery and semantics, inclu
factor out functionality from existing components in running
clusters. At its core would be a pull-based declarative reconciler,
as provided by the [current add-on
manager](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager)
manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager)
and as described in the [whitebox app management
doc](https://docs.google.com/document/d/1S3l2F40LCwFKg6WG0srR6056IiZJBwDmDvzHWRffTWk/edit#heading=h.gh6cf96u8mlr). This
would be easier once we have [apply support in the
@ -528,7 +528,7 @@ routing APIs and functions include:
(NIY)
* Service DNS. DNS, using the [official Kubernetes
schema](https://github.com/kubernetes/dns/blob/master/docs/specification.md),
schema](https://git.k8s.io/dns/docs/specification.md),
is required.
The application layer may depend on:
@ -601,7 +601,7 @@ Automation APIs and functions:
* NIY: The vertical pod autoscaling API(s)
* [Cluster autoscaling and/or node
provisioning](https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler)
provisioning](https://git.k8s.io/contrib/cluster-autoscaler)
* The PodDisruptionBudget API
@ -649,7 +649,7 @@ The management layer may depend on:
* Replacement and/or additional horizontal and vertical pod
autoscalers
* [Cluster autoscaler and/or node provisioner](https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler)
* [Cluster autoscaler and/or node provisioner](https://git.k8s.io/contrib/cluster-autoscaler)
* Dynamic volume provisioners
@ -880,7 +880,7 @@ LoadBalancer API is present.
Extensions and their options should be registered via FooClass
resources, similar to
[StorageClass](https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/storage/v1beta1/types.go#L31),
[StorageClass](https://git.k8s.io/kubernetes/pkg/apis/storage/v1beta1/types.go#L31),
but with parameter descriptions, types (e.g., integer vs string),
constraints (e.g., range or regexp) for validation, and default
values, with a reference to fooClassName from the extended API. These

View File

@ -14,9 +14,9 @@ In an effort to
* maintain end-to-end test stability
* load test github's label feature
We have added an automated [submit-queue](https://github.com/kubernetes/test-infra/tree/master/mungegithub/submit-queue)
We have added an automated [submit-queue](https://git.k8s.io/test-infra/mungegithub/submit-queue)
to the
[github "munger"](https://github.com/kubernetes/test-infra/tree/master/mungegithub)
[github "munger"](https://git.k8s.io/test-infra/mungegithub)
for kubernetes.
The submit-queue does the following:
@ -48,7 +48,7 @@ If these tests pass a second time, the PR will be merged when this PR finishes r
## Github Munger
We run [github "mungers"](https://github.com/kubernetes/test-infra/tree/master/mungegithub).
We run [github "mungers"](https://git.k8s.io/test-infra/mungegithub).
This runs repeatedly over github pulls and issues and runs modular "mungers".
The mungers include the "submit-queue" referenced above along

View File

@ -3,7 +3,7 @@
Building and testing Kubernetes with Bazel is supported but not yet default.
Go rules are managed by the [`gazelle`](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle)
tool, with some additional rules managed by the [`kazel`](https://github.com/kubernetes/repo-infra/tree/master/kazel) tool.
tool, with some additional rules managed by the [`kazel`](https://git.k8s.io/repo-infra/kazel) tool.
These tools are called via the `hack/update-bazel.sh` script.
Instructions for installing Bazel
@ -26,7 +26,7 @@ $ bazel test //pkg/kubectl/...
## Planter
If you don't want to install Bazel, you can instead try using the unofficial
[Planter](https://github.com/kubernetes/test-infra/tree/master/planter) tool,
[Planter](https://git.k8s.io/test-infra/planter) tool,
which runs Bazel inside a Docker container.
For example, you can run

View File

@ -15,7 +15,7 @@ depending on the point in the release cycle.
to set the same label to confirm that no release note is needed.
1. `release-note` labeled PRs generate a release note using the PR title by
default OR the release-note block in the PR template if filled in.
* See the [PR template](https://github.com/kubernetes/kubernetes/blob/master/.github/PULL_REQUEST_TEMPLATE.md) for more details.
* See the [PR template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for more details.
* PR titles and body comments are mutable and can be modified at any time
prior to the release to reflect a release note friendly message.

View File

@ -3,9 +3,9 @@
## What is CRI?
CRI (_Container Runtime Interface_) consists of a
[protobuf API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto),
[protobuf API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto),
specifications/requirements (to-be-added),
and [libraries](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/server/streaming)
and [libraries](https://git.k8s.io/kubernetes/pkg/kubelet/server/streaming)
for container runtimes to integrate with kubelet on a node. CRI is currently in Alpha.
In the future, we plan to add more developer tools such as the CRI validation
@ -59,8 +59,8 @@ Below is a mixed list of CRI specifications/requirements, design docs and
proposals. We are working on adding more documentation for the API.
- [Original proposal](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/container-runtime-interface-v1.md)
- [Networking](https://github.com/kubernetes/community/blob/master/contributors/devel/kubelet-cri-networking.md)
- [Container metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/cri-container-stats.md)
- [Networking](/contributors/devel/kubelet-cri-networking.md)
- [Container metrics](/contributors/devel/cri-container-stats.md)
- [Exec/attach/port-forward streaming requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing)
- [Container stdout/stderr logs](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/kubelet-cri-logging.md)

View File

@ -13,14 +13,14 @@ A list of common resources when contributing to Kubernetes.
- [Gubernator Dashboard - k8s.reviews](https://k8s-gubernator.appspot.com/pr)
- [reviewable.kubernetes.io](https://reviewable.kubernetes.io/reviews#-)
- [Submit Queue](https://submit-queue.k8s.io)
- [Bot commands](https://github.com/kubernetes/test-infra/blob/master/commands.md)
- [Bot commands](https://git.k8s.io/test-infra/commands.md)
- [Release Buckets](http://gcsweb.k8s.io/gcs/kubernetes-release/)
- Developer Guide
- [Cherry Picking Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md) - [Queue](http://cherrypick.k8s.io/#/queue)
- [Cherry Picking Guide](/contributors/devel/cherry-picks.md) - [Queue](http://cherrypick.k8s.io/#/queue)
## SIGs and Working Groups
- [Master SIG list](https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list)
- [Master SIG list](/sig-list.md#master-sig-list)
## Community

View File

@ -32,7 +32,7 @@ When you're writing controllers, there are few guidelines that will help make su
1. Use `SharedInformers`. `SharedInformers` provide hooks to receive notifications of adds, updates, and deletes for a particular resource. They also provide convenience functions for accessing shared caches and determining when a cache is primed.
Use the factory methods down in https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/informers/factory.go to ensure that you are sharing the same instance of the cache as everyone else.
Use the factory methods down in https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/informers/factory.go to ensure that you are sharing the same instance of the cache as everyone else.
This saves us connections against the API server, duplicate serialization costs server-side, duplicate deserialization costs controller-side, and duplicate caching costs controller-side.
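A minimal sketch of this pattern, assuming a standard client-go setup (the kubeconfig path, resync period, and handler bodies are illustrative):
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One shared factory per controller process; every informer created from it
	// shares the same underlying cache and watch connections.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("pod added") },
		UpdateFunc: func(oldObj, newObj interface{}) { fmt.Println("pod updated") },
		DeleteFunc: func(obj interface{}) { fmt.Println("pod deleted") },
	})

	stopCh := make(chan struct{})
	defer close(stopCh)

	// Start all informers created so far and wait for their caches to prime
	// before doing any work that reads from them.
	factory.Start(stopCh)
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		panic("cache failed to sync")
	}

	select {} // a real controller would run its workers here
}
```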
@ -62,7 +62,7 @@ When you're writing controllers, there are few guidelines that will help make su
This lets clients know that the controller has processed a resource. Make sure that your controller is the main controller that is responsible for that resource, otherwise if you need to communicate observation via your own controller, you will need to create a different kind of ObservedGeneration in the Status of the resource.
1. Consider using owner references for resources that result in the creation of other resources (e.g. a ReplicaSet results in creating Pods). Thus you ensure that child resources are going to be garbage-collected once a resource managed by your controller is deleted. For more information on owner references, read more [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/controller-ref.md).
1. Consider using owner references for resources that result in the creation of other resources (e.g. a ReplicaSet results in creating Pods). Thus you ensure that child resources are going to be garbage-collected once a resource managed by your controller is deleted. For more information on owner references, read more [here](/contributors/design-proposals/api-machinery/controller-ref.md).
Pay special attention to the way you are doing adoption. You shouldn't adopt children for a resource when either the parent or the children are marked for deletion. If you are using a cache for your resources, you will likely need to bypass it with a direct API read in case you observe that an owner reference has been updated for one of the children. Thus, you ensure your controller is not racing with the garbage collector.
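For reference, an owner reference on a child object looks roughly like this (the names and UID are made up):
```yaml
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend-replicaset
    uid: d9607e19-f88f-11e6-a518-42010a800195
    controller: true
    blockOwnerDeletion: true
```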

View File

@ -1,7 +1,7 @@
# Container Runtime Interface: Container Metrics
[Container runtime interface
(CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md)
(CRI)](/contributors/devel/container-runtime-interface.md)
provides an abstraction for container runtimes to integrate with Kubernetes.
CRI expects the runtime to provide resource usage statistics for the
containers.
@ -12,7 +12,7 @@ Historically Kubelet relied on the [cAdvisor](https://github.com/google/cadvisor
library, an open-source project hosted in a separate repository, to retrieve
container metrics such as CPU and memory usage. These metrics are then aggregated
and exposed through Kubelet's [Summary
API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go)
API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go)
for the monitoring pipeline (and other components) to consume. Any container
runtime (e.g., Docker and Rkt) integrated with Kubernetes needed to add a
corresponding package in cAdvisor to support tracking container and image file
@ -23,9 +23,9 @@ progression to augment CRI to serve container metrics to eliminate a separate
integration point.
*See the [core metrics design
proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/core-metrics-pipeline.md)
proposal](/contributors/design-proposals/instrumentation/core-metrics-pipeline.md)
for more information on metrics exposed by Kubelet, and [monitoring
architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)
architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)
for the evolving monitoring pipeline in Kubernetes.*
# Container Metrics

View File

@ -293,13 +293,13 @@ make test
make test WHAT=./pkg/api/helper GOFLAGS=-v
# Run integration tests, requires etcd
# For more info, visit https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md#integration-tests
# For more info, visit https://git.k8s.io/community/contributors/devel/testing.md#integration-tests
make test-integration
# Run e2e tests by building test binaries, turn up a test cluster, run all tests, and tear the cluster down
# Equivalent to: go run hack/e2e.go -- -v --build --up --test --down
# Note: running all e2e tests takes a LONG time! To run specific e2e tests, visit:
# https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md#building-kubernetes-and-running-the-tests
# https://git.k8s.io/community/contributors/devel/e2e-tests.md#building-kubernetes-and-running-the-tests
make test-e2e
```
@ -398,9 +398,9 @@ masse. This makes reviews easier.
[OS X GNU tools]: https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x
[build/build-image/cross]: https://github.com/kubernetes/kubernetes/blob/master/build/build-image/cross
[build/common.sh]: https://github.com/kubernetes/kubernetes/blob/master/build/common.sh
[e2e-image]: https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image
[build/build-image/cross]: https://git.k8s.io/kubernetes/build/build-image/cross
[build/common.sh]: https://git.k8s.io/kubernetes/build/common.sh
[e2e-image]: https://git.k8s.io/test-infra/jenkins/e2e-image
[etcd-latest]: https://coreos.com/etcd/docs/latest
[etcd-install]: testing.md#install-etcd-dependency
<!-- https://github.com/coreos/etcd/releases -->
@ -409,5 +409,5 @@ masse. This makes reviews easier.
[kubectl user guide]: https://kubernetes.io/docs/user-guide/kubectl
[kubernetes.io]: https://kubernetes.io
[mercurial]: http://mercurial.selenic.com/wiki/Download
[test-image]: https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image
[test-image]: https://git.k8s.io/test-infra/jenkins/test-image
[Build with Bazel]: bazel.md

View File

@ -137,7 +137,7 @@ make test-e2e-node REMOTE=true IMAGE_PROJECT="<name-of-project-with-images>" IMA
```
Setting up your own host image may require additional steps such as installing etcd or docker. See
[setup_host.sh](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/environment/setup_host.sh) for common steps to setup hosts to run node tests.
[setup_host.sh](https://git.k8s.io/kubernetes/test/e2e_node/environment/setup_host.sh) for common steps to setup hosts to run node tests.
## Create instances using a different instance name prefix
@ -223,7 +223,7 @@ the bottom of the comments section. To re-run just the node e2e tests from the
`@k8s-bot node e2e test this issue: #<Flake-Issue-Number or IGNORE>` and **include a link to the test
failure logs if caused by a flake.**
The PR builder runs tests against the images listed in [jenkins-pull.properties](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/jenkins/jenkins-pull.properties)
The PR builder runs tests against the images listed in [jenkins-pull.properties](https://git.k8s.io/kubernetes/test/e2e_node/jenkins/jenkins-pull.properties)
The post submit tests run against the images listed in [jenkins-ci.properties](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/jenkins/jenkins-ci.properties)
The post submit tests run against the images listed in [jenkins-ci.properties](https://git.k8s.io/kubernetes/test/e2e_node/jenkins/jenkins-ci.properties)

View File

@ -146,7 +146,7 @@ go run hack/e2e.go -- -v --down
The logic in `e2e.go` moved out of the main kubernetes repo to test-infra.
The remaining code in `hack/e2e.go` installs `kubetest` and sends it flags.
It now lives in [kubernetes/test-infra/kubetest](https://github.com/kubernetes/test-infra/tree/master/kubetest).
It now lives in [kubernetes/test-infra/kubetest](https://git.k8s.io/test-infra/kubetest).
By default `hack/e2e.go` updates and installs `kubetest` once per day.
Control the updater behavior with the `--get` and `--old` flags:
The `--` flag separates updater and kubetest flags (kubetest flags on the right).
@ -446,7 +446,7 @@ similarly enough to older versions. The general strategy is to cover the follow
same version (e.g. a cluster upgraded to v1.3 passes the same v1.3 tests as
a newly-created v1.3 cluster).
[hack/e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/e2e-image/e2e-runner.sh) is
[hack/e2e-runner.sh](https://git.k8s.io/test-infra/jenkins/e2e-image/e2e-runner.sh) is
the authoritative source on how to run version-skewed tests, but below is a
quick-and-dirty tutorial.
@ -569,7 +569,7 @@ breaking changes, it does *not* block the merge-queue, and thus should run in
some separate test suites owned by the feature owner(s)
(see [Continuous Integration](#continuous-integration) below).
Every test should be owned by a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md),
Every test should be owned by a [SIG](/sig-list.md),
and have a corresponding `[sig-<name>]` label.
### Viper configuration and hierarchical test parameters.
@ -582,7 +582,7 @@ To use viper, rather than flags, to configure your tests:
- Just add "e2e.json" to the current directory you are in, and define parameters in it... i.e. `"kubeconfig":"/tmp/x"`.
Note that advanced testing parameters, and hierarchically defined parameters, are only defined in viper; to see what they are, you can dive into [TestContextType](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/test_context.go).
Note that advanced testing parameters, and hierarchically defined parameters, are only defined in viper; to see what they are, you can dive into [TestContextType](https://git.k8s.io/kubernetes/test/e2e/framework/test_context.go).
In time, it is our intent to add or autogenerate a sample viper configuration that includes all e2e parameters, to ship with kubernetes.
@ -656,7 +656,7 @@ A quick overview of how we run e2e CI on Kubernetes.
We run a battery of `e2e` tests against `HEAD` of the master branch on a
continuous basis, and block merges via the [submit
queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the
subset is defined in the [munger config](https://github.com/kubernetes/test-infra/tree/master/mungegithub/mungers/submit-queue.go)
subset is defined in the [munger config](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and
`kubernetes-test-go` jobs for build and unit and integration tests).
@ -732,7 +732,7 @@ label, and will be incorporated into our core suites. If tests are not expected
to pass by default, (e.g. they require a special environment such as added
quota,) they should remain with the `[Feature:.+]` label, and the suites that
run them should be incorporated into the
[munger config](https://github.com/kubernetes/test-infra/tree/master/mungegithub/mungers/submit-queue.go)
[munger config](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
via the `jenkins-jobs` flag.
Occasionally, we'll want to add tests to better exercise features that are

View File

@ -14,10 +14,10 @@ The vendor and driver names must match flexVolume.driver in the volume spec, wit
## Dynamic Plugin Discovery
Beginning in v1.8, Flexvolume supports the ability to detect drivers on the fly. Instead of requiring drivers to exist at system initialization time or having to restart kubelet or controller manager, drivers can be installed, upgraded/downgraded, and uninstalled while the system is running.
For more information, please refer to the [design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md).
For more information, please refer to the [design document](/contributors/design-proposals/storage/flexvolume-deployment.md).
## Automated Plugin Installation/Upgrade
One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details.
One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details.
## Plugin details
The plugin expects the following call-outs to be implemented by the backend drivers. Some call-outs are optional. Call-outs are invoked from the Kubelet & the Controller manager nodes.
@ -50,7 +50,7 @@ Detach the volume from the Kubelet node. Nodename param is only valid/relevant i
```
#### Wait for attach:
Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from both Kubelet & Controller manager. The timeout should be 10m (based on https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go#L88 )
Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from both Kubelet & Controller manager. The timeout should be 10m (based on https://git.k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go#L88 )
```
<driver executable> waitforattach <mount device> <json options>
@ -132,7 +132,7 @@ Note: Secrets are passed only to "mount/unmount" call-outs.
See [nginx.yaml] & [nginx-nfs.yaml] for a quick example on how to use Flexvolume in a pod.
[lvm]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/lvm
[nfs]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nfs
[nginx.yaml]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nginx.yaml
[nginx-nfs.yaml]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nginx-nfs.yaml
[lvm]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/lvm
[nfs]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nfs
[nginx.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx.yaml
[nginx-nfs.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx-nfs.yaml

View File

@ -33,7 +33,7 @@ In addition, the following optional tags influence the client generation:
$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
```
**3.** ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](https://github.com/kubernetes/kubernetes/blob/master/pkg/client/clientset_generated/internalclientset/typed/core/internalversion/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
**3.** ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](https://git.k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/typed/core/internalversion/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
## Output of client-gen
@ -43,7 +43,7 @@ $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
## Released clientsets
If you are contributing code to k8s.io/kubernetes, try to use the generated clientset [here](https://github.com/kubernetes/kubernetes/tree/master/pkg/client/clientset_generated/internalclientset).
If you are contributing code to k8s.io/kubernetes, try to use the generated clientset [here](https://git.k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset).
If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go).

View File

@ -113,7 +113,7 @@ k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp
Gubernator provides a framework for debugging failures and introduces useful features.
There is still a lot of room for more features and growth to make the debugging process more efficient.
How to contribute (see https://github.com/kubernetes/test-infra/blob/master/gubernator/README.md)
How to contribute (see https://git.k8s.io/test-infra/gubernator/README.md)
* Extend GUBERNATOR flag to all local tests

View File

@ -33,7 +33,7 @@ for other github repositories related to Kubernetes is TBD.
Most people can leave comments and open issues. They don't have the ability to
set labels, change milestones and close other people's issues. For that we use
a bot to manage labelling and triaging. The bot has a set of
[commands and permissions](https://github.com/kubernetes/test-infra/blob/master/commands.md)
[commands and permissions](https://git.k8s.io/test-infra/commands.md)
and this document will cover the basic ones.
## Determine if it's a support request
@ -93,7 +93,7 @@ The Kubernetes Team
```
## Find the right SIG(s)
Components are divided among [Special Interest Groups (SIGs)](https://github.com/kubernetes/community/blob/master/sig-list.md). Find a proper SIG for the ownership of the issue using the bot:
Components are divided among [Special Interest Groups (SIGs)](/sig-list.md). Find a proper SIG for the ownership of the issue using the bot:
* Typing `/sig network` in a comment should add the sig/network label, for
example.

View File

@ -372,7 +372,7 @@ and as noted in [command conventions](#command-conventions), ideally that logic
should exist server-side so any client could take advantage of it. Notice that
this is not a mandatory structure and not every command is implemented this way,
but this is a nice convention so try to be compliant with it. As an example,
have a look at how [kubectl logs](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/logs.go) is implemented.
have a look at how [kubectl logs](https://git.k8s.io/kubernetes/pkg/kubectl/cmd/logs.go) is implemented.
## Exit code conventions

View File

@ -26,7 +26,7 @@ Heapster will hide the performance cost of serving those stats in the Kubelet.
Disabling addons is simple. Just ssh into the Kubernetes master and move the
addon from `/etc/kubernetes/addons/` to a backup location. More details
[here](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/).
[here](https://git.k8s.io/kubernetes/cluster/addons/).
### Which / how many pods?
@ -57,7 +57,7 @@ sampling.
## E2E Performance Test
There is an end-to-end test for collecting overall resource usage of node
components: [kubelet_perf.go](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/node/kubelet_perf.go). To
components: [kubelet_perf.go](https://git.k8s.io/kubernetes/test/e2e/node/kubelet_perf.go). To
run the test, simply make sure you have an e2e cluster running (`go run
hack/e2e.go -- -up`) and [set up](#cluster-set-up) correctly.
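To run only this test, a Ginkgo focus regex can be passed through the test args; the exact regex depends on the current test names, so treat the one below as a sketch:
```sh
go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=resource\susage\stracking"
```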

View File

@ -24,10 +24,10 @@ Federation CI e2e job names are as below:
Search for the above job names in various configuration files as below:
* Prow config: https://github.com/kubernetes/test-infra/blob/master/prow/config.yaml
* Test job/bootstrap config: https://github.com/kubernetes/test-infra/blob/master/jobs/config.json
* Test grid config: https://github.com/kubernetes/test-infra/blob/master/testgrid/config/config.yaml
* Job specific config: https://github.com/kubernetes/test-infra/tree/master/jobs/env
* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
* Job specific config: https://git.k8s.io/test-infra/jobs/env
### Results
@ -73,10 +73,10 @@ Federation pre-submit jobs have following names.
Search for the above job names in various configuration files as below:
* Prow config: https://github.com/kubernetes/test-infra/blob/master/prow/config.yaml
* Test job/bootstrap config: https://github.com/kubernetes/test-infra/blob/master/jobs/config.json
* Test grid config: https://github.com/kubernetes/test-infra/blob/master/testgrid/config/config.yaml
* Job specific config: https://github.com/kubernetes/test-infra/tree/master/jobs/env
* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
* Job specific config: https://git.k8s.io/test-infra/jobs/env
### Results
@ -91,7 +91,7 @@ We track the flakiness metrics of all the pre-submit jobs and
individual tests that run against PRs in
[kubernetes/federation](https://github.com/kubernetes/federation).
* The metrics that we track are documented in https://github.com/kubernetes/test-infra/blob/master/metrics/README.md#metrics.
* The metrics that we track are documented in https://git.k8s.io/test-infra/metrics/README.md#metrics.
* Job-level metrics are available in http://storage.googleapis.com/k8s-metrics/job-flakes-latest.json.
### Playbook

View File

@ -16,7 +16,7 @@ of OWNERS files
## OWNERS spec
The [mungegithub gitrepos
feature](https://github.com/kubernetes/test-infra/blob/master/mungegithub/features/repo-updates.go)
feature](https://git.k8s.io/test-infra/mungegithub/features/repo-updates.go)
is the main consumer of OWNERS files. If this page is out of date, look there.
Each directory that contains a unit of independent code or content may also contain an OWNERS file.
@ -72,7 +72,7 @@ GitHub usernames and aliases listed in OWNERS files are case-insensitive.
## Code Review Process
This is a simplified description of our [full PR testing and merge
workflow](https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#the-testing-and-merge-workflow)
workflow](/contributors/devel/pull-requests.md#the-testing-and-merge-workflow)
that conveniently forgets about the existence of tests, to focus solely on the roles driven by
OWNERS files.
@ -158,13 +158,13 @@ is the state of today.
## Implementation
### [`mungegithub`](https://github.com/kubernetes/test-infra/tree/master/mungegithub)
### [`mungegithub`](https://git.k8s.io/test-infra/mungegithub)
Mungegithub polls GitHub, and "munges" things it finds, including issues and pull requests. It is
stateful, in that restarting it means it loses track of which things it has munged at what time.
- [feature:
gitrepos](https://github.com/kubernetes/test-infra/blob/master/mungegithub/features/repo-updates.go)
gitrepos](https://git.k8s.io/test-infra/mungegithub/features/repo-updates.go)
- responsible for parsing OWNERS and OWNERS_ALIAS files
- if its `use-reviewers` flag is set to false, **approvers** will also be **reviewers**
- if its `enable-md-yaml` flag is set, `.md` files will also be parsed to see if they have
@ -172,14 +172,14 @@ stateful, in that restarting it means it loses track of which things it has mung
[kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io/))
- used by other mungers to get the set of **reviewers** or **approvers** for a given path
- [munger:
blunderbuss](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/blunderbuss.go)
blunderbuss](https://git.k8s.io/test-infra/mungegithub/mungers/blunderbuss.go)
- responsible for determining **reviewers** and assigning to them
- chooses from people in the deepest/closest OWNERS files to the code being changed
- weights its choice based on the magnitude of lines changed for each file
- randomly chooses to ensure the same people aren't chosen every time
- if its `blunderbuss-number-assignees` flag is unset, it will default to 2 assignees
- [munger:
approval-handler](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/approval-handler.go)
approval-handler](https://git.k8s.io/test-infra/mungegithub/mungers/approval-handler.go)
- responsible for adding the `approved` label once an **approver** for each of the required
OWNERS files has `/approve`'d
- responsible for commenting as required OWNERS files are satisfied
@ -187,19 +187,19 @@ stateful, in that restarting it means it loses track of which things it has mung
- [full description of the
algorithm](https://github.com/kubernetes/test-infra/blob/6f5df70c29528db89d07106a8156411068518cbc/mungegithub/mungers/approval-handler.go#L99-L111)
- [munger:
submit-queue](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/submit-queue.go)
submit-queue](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
- responsible for merging PRs
- responsible for updating a GitHub status check explaining why a PR can't be merged (eg: a
missing `lgtm` or `approved` label)
### [`prow`](https://github.com/kubernetes/test-infra/tree/master/prow)
### [`prow`](https://git.k8s.io/test-infra/prow)
Prow receives events from GitHub, and reacts to them. It is effectively stateless.
- [plugin: lgtm](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/lgtm)
- [plugin: lgtm](https://git.k8s.io/test-infra/prow/plugins/lgtm)
- responsible for adding the `lgtm` label when a **reviewer** comments `/lgtm` on a PR
- the **PR author** may not `/lgtm` their own PR
- [plugin: assign](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/assign)
- [plugin: assign](https://git.k8s.io/test-infra/prow/plugins/assign)
- responsible for assigning GitHub users in response to `/assign` comments on a PR
- responsible for unassigning GitHub users in response to `/unassign` comments on a PR

View File

@ -44,7 +44,7 @@ pass or fail of continuous integration.
## Sign the CLA
You must sign the CLA before your first contribution. [Read more about the CLA.](https://github.com/kubernetes/community/blob/master/CLA.md)
You must sign the CLA before your first contribution. [Read more about the CLA.](/CLA.md)
If you haven't signed the Contributor License Agreement (CLA) before making a PR,
the `@k8s-ci-robot` will leave a comment with instructions on how to sign the CLA.
@ -92,7 +92,7 @@ For PRs that don't need to be mentioned at release time, just write "NONE" (case
The `/release-note-none` comment command can still be used as an alternative to writing "NONE" in the release-note block if it is left empty.
To see how to format your release notes, view the [PR template](https://github.com/kubernetes/kubernetes/blob/master/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. PR titles and body comments can be modified at any time prior to the release to make them friendly for release notes.
To see how to format your release notes, view the [PR template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. PR titles and body comments can be modified at any time prior to the release to make them friendly for release notes.
Release notes apply to PRs on the master branch. For cherry-pick PRs, see the [cherry-pick instructions](cherry-picks.md). The only exception to these rules is when a PR is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master PR.
@ -127,7 +127,7 @@ If you are a member, or a member comments `/ok-to-test`, the PR will be consider
Once the tests pass (or all failures are commented as flakes) and the reviewer adds the labels `lgtm` and `approved`, the PR enters the final merge queue. The merge queue is needed to make sure no incompatible changes have been introduced by other PRs since the tests were last run on your PR.
Either the [on call contributor](on-call-rotations.md) will manage the merge queue manually, or the [GitHub "munger"](https://github.com/kubernetes/test-infra/tree/master/mungegithub) submit-queue plugin will manage the merge queue automatically.
Either the [on call contributor](on-call-rotations.md) will manage the merge queue manually, or the [GitHub "munger"](https://git.k8s.io/test-infra/mungegithub) submit-queue plugin will manage the merge queue automatically.
1. The PR enters the merge queue ([http://submit-queue.k8s.io](http://submit-queue.k8s.io))
1. The merge queue triggers a test re-run with the comment `/test all [submit-queue is verifying that this PR is safe to merge]`
@ -151,7 +151,7 @@ The GitHub robots will add and remove the `do-not-merge/hold` label as you use t
## Comment Commands Reference
[The commands doc](https://github.com/kubernetes/test-infra/blob/master/commands.md) contains a reference for all comment commands.
[The commands doc](https://git.k8s.io/test-infra/commands.md) contains a reference for all comment commands.
## Automation
@ -220,8 +220,8 @@ Are you sure Feature-X is something the Kubernetes team wants or will accept? Is
It's better to get confirmation beforehand. There are two ways to do this:
- Make a proposal doc (in docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)), or reach out to the affected special interest group (SIG). Here's a [list of SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md)
- Coordinate your effort with [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) ahead of time
- Make a proposal doc (in docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)), or reach out to the affected special interest group (SIG). Here's a [list of SIGs](/sig-list.md)
- Coordinate your effort with [SIG Docs](/sig-docs) ahead of time
- Make a sketch PR (e.g., just the API or Go interface). Write or code up just enough to express the idea and the design and why you made those choices
Or, do all of the above.

View File

@ -108,7 +108,7 @@ This looks fine-ish if you don't know that LIST are very expensive calls. Object
`Informer` is our library that provides a read interface over the store - it's a read-only cache that gives you a local copy of the store containing only the objects you're interested in (matching a given selector). From it you can GET, LIST, or do whatever read operations you want. `Informer` also allows you to register functions that will be called when an object is created, modified, or deleted, which is what most people want.
The magic behind `Informers` is that they are populated by the WATCH, so they don't stress API server too much. Code for Informer is [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/shared_informer.go).
The magic behind `Informers` is that they are populated by the WATCH, so they don't stress API server too much. Code for Informer is [here](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go).
In general: use `Informers` - if we were able to rewrite most vanilla controllers to use them, you'll be able to do it as well. If you don't, you may dramatically increase the CPU requirements of the API server, which will starve it and make it too slow to meet our SLOs.
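
To make that recommendation concrete, here is a minimal sketch of consuming a shared informer with client-go. It is illustrative only: the kubeconfig location, resync period, and handler bodies are assumptions, not something specified in this document.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location; error handling is trimmed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// The factory maintains a WATCH-backed local cache instead of issuing repeated LISTs.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	// Register callbacks fired when objects appear in or disappear from the cache.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted")
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Block until the initial LIST+WATCH has populated the cache.
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	select {} // a real controller would run its worker loops here
}
```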

View File

@ -216,7 +216,7 @@ item that has duplicates will delete all matching items.
The `setElementOrder` directive provides a way to specify the order of a list.
The relative order specified in this directive will be retained.
Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cli/preserve-order-in-strategic-merge-patch.md) for more information.
Please refer to [proposal](/contributors/design-proposals/cli/preserve-order-in-strategic-merge-patch.md) for more information.
### Syntax
@ -295,7 +295,7 @@ containers:
The `retainKeys` directive provides a mechanism for union types to clear mutually exclusive fields.
When this directive is present in the patch, all the fields not in this directive will be cleared.
Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
Please refer to [proposal](/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
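
For orientation only: on the API side this strategy is declared through the `patchStrategy` Go struct tag. The sketch below uses stub types modeled loosely on the apps `DeploymentSpec`; it is an illustration, not an excerpt from the real API types or the proposal.

```go
package sketch

// RollingUpdateSpec is a stub for one member of the union.
type RollingUpdateSpec struct {
	MaxUnavailable string `json:"maxUnavailable,omitempty"`
}

// DeploymentStrategy is a stub union type: either Recreate or RollingUpdate applies.
type DeploymentStrategy struct {
	Type          string             `json:"type,omitempty"`
	RollingUpdate *RollingUpdateSpec `json:"rollingUpdate,omitempty"`
}

// DeploymentSpec shows how a field opts into the retainKeys strategy: when a
// patch carries the retainKeys directive for this field, union members not
// listed in the patch are cleared (e.g. rollingUpdate is dropped when the
// patch switches the type to Recreate).
type DeploymentSpec struct {
	Strategy DeploymentStrategy `json:"strategy,omitempty" patchStrategy:"retainKeys"`
}
```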
### Syntax

View File

@ -159,7 +159,7 @@ See `go help test` and `go help testflag` for additional info.
is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests)
- Example: [TestNamespaceAuthorization](https://git.k8s.io/kubernetes/test/integration/auth/auth_test.go)
* Each test should create its own master, httpserver and config.
- Example: [TestPodUpdateActiveDeadlineSeconds](https://github.com/kubernetes/kubernetes/blob/master/test/integration/pods/pods_test.go)
- Example: [TestPodUpdateActiveDeadlineSeconds](https://git.k8s.io/kubernetes/test/integration/pods/pods_test.go)
* See [coding conventions](coding-conventions.md).
### Install etcd dependency
@ -201,7 +201,7 @@ make test-integration # Run all integration tests.
```
This script runs the golang tests in package
[`test/integration`](https://github.com/kubernetes/kubernetes/tree/master/test/integration).
[`test/integration`](https://git.k8s.io/kubernetes/test/integration).
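
As a reminder of the table-driven convention pointed to above, here is a minimal sketch of that style. The test name, cases, and the stand-in check are hypothetical and only illustrate the shape; a real integration test would also start its own master and build its own config.

```go
package integration

import "testing"

// TestPodNameValidationTable is a hypothetical example of the table-driven
// style; a real integration test would spin up its own apiserver and use a
// real client instead of the stand-in check below.
func TestPodNameValidationTable(t *testing.T) {
	cases := []struct {
		name      string
		podName   string
		wantValid bool
	}{
		{name: "valid name", podName: "web-0", wantValid: true},
		{name: "empty name", podName: "", wantValid: false},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := tc.podName != "" // stand-in for a real validation/API call
			if got != tc.wantValid {
				t.Errorf("valid(%q) = %v, want %v", tc.podName, got, tc.wantValid)
			}
		})
	}
}
```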
### Run a specific integration test

View File

@ -227,7 +227,7 @@ my-nginx 3 3 3 3 1m
We did not start any Services, hence there are none listed. But we see three
replicas displayed properly. Check the
[guestbook](https://github.com/kubernetes/examples/tree/master/guestbook)
[guestbook](https://git.k8s.io/examples/guestbook)
application to learn how to create a Service. You can already play with scaling
the replicas with:

View File

@ -146,7 +146,7 @@ right thing.
Here are a few pointers:
+ [E2e Framework](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/framework.go):
+ [E2e Framework](https://git.k8s.io/kubernetes/test/e2e/framework/framework.go):
Familiarise yourself with this test framework and how to use it.
Amongst others, it automatically creates uniquely named namespaces
within which your tests can run to avoid name clashes, and reliably
@ -160,7 +160,7 @@ Here are a few pointers:
should always use this framework. Trying other home-grown
approaches to avoiding name clashes and resource leaks has proven
to be a very bad idea.
+ [E2e utils library](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/util.go):
+ [E2e utils library](https://git.k8s.io/kubernetes/test/e2e/framework/util.go):
This handy library provides tons of reusable code for a host of
commonly needed test functionality, including waiting for resources
to enter specified states, safely and consistently retrying failed
@ -178,9 +178,9 @@ Here are a few pointers:
+ **Follow the examples of stable, well-written tests:** Some of our
existing end-to-end tests are better written and more reliable than
others. A few examples of well-written tests include:
[Replication Controllers](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/apps/rc.go),
[Services](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/service.go),
[Reboot](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/lifecycle/reboot.go).
[Replication Controllers](https://git.k8s.io/kubernetes/test/e2e/apps/rc.go),
[Services](https://git.k8s.io/kubernetes/test/e2e/network/service.go),
[Reboot](https://git.k8s.io/kubernetes/test/e2e/lifecycle/reboot.go).
+ [Ginkgo Test Framework](https://github.com/onsi/ginkgo): This is the
test library and runner upon which our e2e tests are built. Before
you write or refactor a test, read the docs and make sure that you

View File

@ -14,9 +14,9 @@ The Kubernetes community abides by the CNCF [code of conduct](https://github.com
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
As a member of the Kubernetes project, you represent the project and your fellow contributors.
As a member of the Kubernetes project, you represent the project and your fellow contributors.
We value our community tremendously and we'd like to keep cultivating a friendly and collaborative
environment for our contributors and users. We want everyone in the community to have
environment for our contributors and users. We want everyone in the community to have
[positive experiences](https://www.cncf.io/blog/2016/12/14/diversity-scholarship-series-one-software-engineers-unexpected-cloudnativecon-kubecon-experience).
# Community membership
@ -44,7 +44,7 @@ and code ownership, as well as providing focused forums for getting
work done, making decisions, and onboarding new contributors. Every
identifiable subpart of the project (e.g., github org, repository,
subdirectory, API, test, issue, PR) is intended to be owned by some
SIG.
SIG.
Areas covered by SIGs may be vertically focused on particular
components or functions, cross-cutting/horizontal, spanning many/all
@ -73,7 +73,7 @@ relatively free to customize or change how they operate, within some
broad guidelines and constraints imposed by cross-SIG processes (e.g.,
the release process) and assets (e.g., the kubernetes repo).
A primary reason that SIGs exist is as forums for collaboration.
A primary reason that SIGs exist is as forums for collaboration.
Much work in a SIG should stay local within that SIG. However, SIGs
must communicate in the open, ensure other SIGs and community members
can find notes of meetings, discussions, designs, and decisions, and
@ -137,7 +137,7 @@ community meeting.
All repositories under Kubernetes github orgs, such as kubernetes and kubernetes-incubator,
should follow the procedures outlined in the [incubator document](incubator.md). All code projects
use the [Apache License version 2.0](LICENSE). Documentation repositories should use the
[Creative Commons License version 4.0](https://github.com/kubernetes/kubernetes.github.io/blob/master/LICENSE).
[Creative Commons License version 4.0](https://git.k8s.io/website/LICENSE).
# Incubator process

View File

@ -30,13 +30,13 @@ To create a new project for incubation you must follow these steps: write a prop
Your proposal should include two items. First, a README which outlines the problem to be solved, an example use case as-if the project existed, and a rough roadmap with timelines. Second, an OWNERS file that outlines the makeup of the initial team developing the project. Initially this can be one person but ideally has 3 or more initial developers representing a few different companies or groups. You can use whatever tool you want to host and revise the proposal until the project is accepted to the Incubator: copy/paste from your text editor, Google Docs, GitHub gist, etc.
Once the proposal is written you should identify a champion; this person must be listed as either a reviewer or approver in an [OWNERS file](https://github.com/kubernetes/kubernetes/blob/master/OWNERS) in the Kubernetes project. Next, reach out to your potential champion via email to ask if they are interested in helping you through the Incubation process. Ideally some significant follow-up discussion happens via email, calls, or chat to improve the proposal before announcing it to the wider community.
Once the proposal is written you should identify a champion; this person must be listed as either a reviewer or approver in an [OWNERS file](https://git.k8s.io/kubernetes/OWNERS) in the Kubernetes project. Next, reach out to your potential champion via email to ask if they are interested in helping you through the Incubation process. Ideally some significant follow-up discussion happens via email, calls, or chat to improve the proposal before announcing it to the wider community.
The next discussion should be on a relevant Special Interest Group mailing list. You should post the proposal to the SIG mailing list and wait for discussion for a few days. Iterate on the proposal as needed and if there is rough consensus that the project belongs in the chosen SIG then list that SIG in the proposal README. If consensus isn't reached then identify another SIG and try again; repeat until a SIG is identified.
The final process is to email kubernetes-dev@googlegroups.com to announce your intention to form a new Incubator project. Include your entire proposal in the body of the email and prefix the Subject with [Incubator]. Include links to your discussion on the accepted SIG mailing list to guide the discussion.
Acceptance of the project into the Kubernetes Incubator happens once a Sponsor approves. Anyone listed as an approver in the top-level pkg [OWNERS file](https://github.com/kubernetes/kubernetes/blob/master/pkg/OWNERS) can sponsor a project by replying to the kubernetes-dev discussion with LGTM.
Acceptance of the project into the Kubernetes Incubator happens once a Sponsor approves. Anyone listed as an approver in the top-level pkg [OWNERS file](https://git.k8s.io/kubernetes/pkg/OWNERS) can sponsor a project by replying to the kubernetes-dev discussion with LGTM.
## Creation of the Incubator Project

View File

@ -47,7 +47,7 @@ To get started with this template:
If you disagree with what is already in a document, open a new PR with suggested changes.
* As a KEP is approved, rename the file yet again with the final KEP number.
The canonical place for the latest set of instructions (and the likely source of this file) is [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/0000-kep-template.md).
The canonical place for the latest set of instructions (and the likely source of this file) is [here](/contributors/design-proposals/architecture/0000-kep-template.md).
The `Metadata` section above is intended to support the creation of tooling around the KEP process.
This will be a YAML section that is fenced as a code block.

View File

@ -120,7 +120,7 @@ for SIGs to deliberate.
[tell a story]: https://blog.rust-lang.org/2017/08/31/Rust-1.20.html
[road to Go 2]: https://blog.golang.org/toward-go2
[survey data]: http://opensourcesurvey.org/2017/
[design proposals]: https://github.com/kubernetes/community/tree/master/contributors/design-proposals
[design proposals]: /contributors/design-proposals
## Reference-level explanation
@ -397,7 +397,7 @@ required in the [features issue template][] may be a heavy burden for non native
English speakers and here the role of the KEP editor combined with kindness and
empathy will be crucial to making the process successful.
[features issue template]: https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md
[features issue template]: https://git.k8s.io/features/ISSUE_TEMPLATE.md
## Alternatives

View File

@ -73,11 +73,11 @@ _Note, the [minutes and agenda have moved to Google Docs](https://docs.google.co
## May 31, 2016
* Canceled in honor of a short week
## May 25, 2016 [[notes & video](https://github.com/kubernetes/community/blob/master/sig-apps/minutes/2016-05-25.md)]
## May 25, 2016 [[notes & video](/sig-apps/minutes/2016-05-25.md)]
* Intro
* Mike Metral of Rackspace will demo how to recursively process configuration files with the -R flag
## May 18, 2016 [[notes](https://github.com/kubernetes/community/blob/master/sig-apps/minutes/2016-05-18.md)]
## May 18, 2016 [[notes](/sig-apps/minutes/2016-05-18.md)]
* Intro
* Discussion on the future of SIG-Apps
* Pranshanth B. of Google will demo PetSet

View File

@ -11,7 +11,7 @@
- Michelle Noorali gave an update on where you can find information and examples on PetSets.
- Here are some links provided by Pranshanth from Google.
- [github issue](https://github.com/kubernetes/kubernetes/issues/260#issuecomment-220395798)
- [example pets](https://github.com/kubernetes/contrib/tree/master/pets)
- [example pets](https://git.k8s.io/contrib/pets)
- Feel free to get your hands dirty. We will be discussing the provided examples in the upcoming weeks.
Watch the [recording](https://youtu.be/wXZAXemhGb0).

View File

@ -10,7 +10,7 @@
A: _(Clayton)_ Yes. Handling deployment failures at a high level, a generic idea for a trigger controller which watches another system for changes and makes updates to a Deployment, and hooks.
* Ryan showed off OC which is a command line tool which is a wrapper for kubectl
* Comment: One of the challenges Kubernetes faces today is that there is not a great way to extensibly pull in new chunks of APIs.
* This is something that is actively being worked on today. This work is being discussed and worked on in [SIG-API-Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
* This is something that is actively being worked on today. This work is being discussed and worked on in [SIG-API-Machinery](/sig-api-machinery)
* Free O'Reilly EBooks can be found [here](http://gist-reveal.it/4ca683dff6cdb9601c495e27d4bb5289#/oreilly-ebooks) courtesy of Red Hat.

View File

@ -21,7 +21,7 @@ Specific areas of focus include:
* Establishing and documenting design principles
* [Design principles](../contributors/design-proposals/architecture/principles.md)
* Establishing and documenting conventions for system and user-facing APIs
* [API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md)
* [API conventions](/contributors/devel/api-conventions.md)
* Developing necessary technical review processes, such as the proposal and API review processes
* Driving improvement of overall code organization, including github orgs and repositories
* Educating approvers/owners of other SIGs (e.g., by holding office hours)
@ -29,7 +29,7 @@ Specific areas of focus include:
Out of scope:
* Issues specific to a particular component or functional area, which would be the purview
of some other SIG, except where they deviate from project-wide principles and conventions.
* [Release support policy](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md)
* [Release support policy](/contributors/design-proposals/release/versioning.md)
is owned by SIG Release
TODO:

View File

@ -31,7 +31,7 @@ This is important.
- Play around with Kubernetes [Kubernetes Basics Tutorial].
- Get to know the options for setting up Kubernetes on AWS: https://kubernetes.io/docs/getting-started-guides/aws/
- Understand how Kubernetes on aws differs from other installations of Kubernetes [https://github.com/kubernetes/community/blob/master/contributors/design-proposals/aws/aws_under_the_hood.md]
- Understand how Kubernetes on aws differs from other installations of Kubernetes [/contributors/design-proposals/aws/aws_under_the_hood.md]
## Adopt an issue
@ -71,15 +71,15 @@ and group [meeting] times.
[Kubernetes Basics Tutorial]: https://kubernetes.io/docs/tutorials/kubernetes-basics
[PR]: https://help.github.com/articles/creating-a-pull-request
[agenda]: https://docs.google.com/document/d/1-i0xQidlXnFEP9fXHWkBxqySkXwJnrGJP9OGyP2_P14/edit
[communication]: https://github.com/kubernetes/community/tree/master/sig-aws#contact
[community page]: https://github.com/kubernetes/community/tree/master/sig-aws
[design repo]: https://github.com/kubernetes/community/tree/master/contributors/design-proposals/aws
[development guide]: https://github.com/kubernetes/community/blob/master/contributors/devel/development.md
[communication]: /sig-aws#contact
[community page]: /sig-aws
[design repo]: /contributors/design-proposals/aws
[development guide]: /contributors/devel/development.md
[group]: https://groups.google.com/forum/#!forum/kubernetes-sig-aws
[kops]: https://github.com/kubernetes/kops/tree/master/
[leads]: https://github.com/kubernetes/community/tree/master/sig-aws#leads
[kops]: https://git.k8s.io/kops/
[leads]: /sig-aws#leads
[management overview]: https://kubernetes.io/docs/concepts/tools/kubectl/object-management-overview
[meeting]: https://github.com/kubernetes/community/tree/master/sig-aws#meetings
[meeting]: /sig-aws#meetings
[slack-messages]: https://kubernetes.slack.com/messages/sig-aws
[slack-signup]: http://slack.k8s.io/
[kube-aws-tools]: kubernetes-on-aws.md

View File

@ -413,12 +413,12 @@ See the sig-cli [community page] for points of contact and meeting times:
[`PTAL`]: https://en.wiktionary.org/wiki/PTAL
[agenda]: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit
[bug]: #bug-lifecycle
[communication]: https://github.com/kubernetes/community/tree/master/sig-cli#contact
[community page]: https://github.com/kubernetes/community/tree/master/sig-cli
[communication]: /sig-cli#contact
[community page]: /sig-cli
[design proposal]: #design-proposals
[design repo]: https://github.com/kubernetes/community/tree/master/contributors/design-proposals/cli
[design template]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/Design_Proposal_TEMPLATE.md
[development guide]: https://github.com/kubernetes/community/blob/master/contributors/devel/development.md
[design repo]: /contributors/design-proposals/cli
[design template]: /contributors/design-proposals/Design_Proposal_TEMPLATE.md
[development guide]: /contributors/devel/development.md
[existing issue]: #adopt-an-issue
[feature repo]: https://github.com/kubernetes/features
[feature request]: #feature-requests
@ -426,14 +426,14 @@ See the sig-cli [community page] for points of contact and meeting times:
[group]: https://groups.google.com/forum/#!forum/kubernetes-sig-cli
[issue]: https://github.com/kubernetes/kubectl/projects/3
[kubectl docs]: https://kubernetes.io/docs/tutorials/object-management-kubectl/object-management/
[kubernetes/cmd/kubectl]: https://github.com/kubernetes/kubernetes/tree/master/cmd/kubectl
[kubernetes/pkg/kubectl]: https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl
[leads]: https://github.com/kubernetes/community/tree/master/sig-cli#leads
[kubernetes/cmd/kubectl]: https://git.k8s.io/kubernetes/cmd/kubectl
[kubernetes/pkg/kubectl]: https://git.k8s.io/kubernetes/pkg/kubectl
[leads]: /sig-cli#leads
[management overview]: https://kubernetes.io/docs/concepts/tools/kubectl/object-management-overview
[meeting]: https://github.com/kubernetes/community/tree/master/sig-cli#meetings
[meeting]: /sig-cli#meetings
[release]: #release
[slack-messages]: https://kubernetes.slack.com/messages/sig-cli
[slack-signup]: http://slack.k8s.io/
[tests]: https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md
[tests]: /contributors/devel/testing.md
[cli mentors]: https://groups.google.com/a/google.com/forum/#!forum/kubernetes-sig-cli-mentors
[about me form]: https://docs.google.com/forms/d/1ID6DX1abiDr9Z9_sXXC0DsMwuyHb_NeFdB3xeRa4Vf0

View File

@ -2,7 +2,7 @@
`kubectl` is the Kubernetes CLI.
If you'd like to contribute, please read the [conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/kubectl-conventions.md) and familiarize yourself with [existing commands](http://kubernetes.io/docs/user-guide/kubectl-overview/).
If you'd like to contribute, please read the [conventions](/contributors/devel/kubectl-conventions.md) and familiarize yourself with [existing commands](http://kubernetes.io/docs/user-guide/kubectl-overview/).
**Owner:** @kubernetes/kubectl
@ -30,28 +30,28 @@ If you'd like to contribute, please read the [conventions](https://github.com/ku
* [Make `kubectl run --restart=Never` creates Pods (instead of Jobs)](https://github.com/kubernetes/kubernetes/issues/24533)
* Create commands/flags for common get + template patterns (e.g. getting service IP address)
* [Implement `kubectl cp`](https://github.com/kubernetes/kubernetes/issues/13776) to copy files between containers and local for debugging
* `kubectl rollout`
* `kubectl rollout`
* [Add `kubectl rollout start` to show how to start a rollout](https://github.com/kubernetes/kubernetes/issues/25142)
* [Add `kubectl rollout status`](https://github.com/kubernetes/kubernetes/issues/25235)
* Scripting support
* [wait](https://github.com/kubernetes/kubernetes/issues/1899)
* [watch / IFTTT](https://github.com/kubernetes/kubernetes/issues/5164)
* [Add `kubectl top`](https://github.com/kubernetes/kubernetes/issues/11382) which lists resource metrics.
* [Add `kubectl top`](https://github.com/kubernetes/kubernetes/issues/11382) which lists resource metrics.
### Alternative interfaces
* Create a terminal based console, ref [docker console](https://github.com/dustinlacewell/console) ([video](https://www.youtube.com/watch?v=wSzZxbDYgtY))
* [Add `kubectl sh`, an interactive shell](https://github.com/kubernetes/kubernetes/issues/25385), or make a kubectlshell in contrib and make bash completion part of it (ref [pythonshell](https://gist.github.com/bprashanth/9a3c8dfbba443698ddd960b8087107bf))
* Think about how/whether to invoke generation commands such as `kubectl run` or `kubectl create configmap` in bulk, declaratively, such as part of the `apply` flow.
* Think about how/whether to invoke generation commands such as `kubectl run` or `kubectl create configmap` in bulk, declaratively, such as part of the `apply` flow.
* [ChatOps](https://www.pagerduty.com/blog/what-is-chatops/) bot -- such as [kubebot](https://github.com/harbur/kubebot) (add to tools documentation)
### Improve help / error messages / output
### Improve help / error messages / output
* Make kubectl functionality more discoverable
* [Overhaul kubectl help](https://github.com/kubernetes/kubernetes/issues/16089)
* ~~[Print "Usage" at the bottom](https://github.com/kubernetes/kubernetes/issues/7496)~~
* Add keywords (critical words) to help
* List valid resources for each command
* Make short description of each command more concrete; use the same language for each command
* Add keywords (critical words) to help
* List valid resources for each command
* Make short description of each command more concrete; use the same language for each command
* Link to docs ([kubernetes.io/docs](http://kubernetes.io/docs))
* [Update `kubectl help` descriptions and examples from docs](https://github.com/kubernetes/kubernetes/issues/25290)
* Embed formatting and post-process for different media (terminal, man, github, etc.)
@ -96,7 +96,7 @@ If you'd like to contribute, please read the [conventions](https://github.com/ku
### Installation / Release
* `gcloud` should enable kubectl bash completion when installing `kubectl`
* [Pipe-to-sh to install kubectl](https://github.com/kubernetes/kubernetes/issues/25386)
* [Static build of kubectl for containers](https://github.com/kubernetes/kubernetes/issues/23708) ([we have it](https://github.com/kubernetes/kubernetes/tree/master/examples/kubectl-container), but it's not part of the release)
* [Static build of kubectl for containers](https://github.com/kubernetes/kubernetes/issues/23708) ([we have it](https://git.k8s.io/kubernetes/examples/kubectl-container), but it's not part of the release)
### Others
* [Move functionality to server](https://github.com/kubernetes/kubernetes/issues/12143)

View File

@ -18,7 +18,7 @@ It groups containers that make up an application into logical units for easy man
[kubernetes.io](https://kubernetes.io/)
## What are SIGs / What is SIG-CLI?
Kubernetes is a set of projects, each shepherded by a special interest group (SIG). To get a grasp of the projects that we work on, check out the complete [list of SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md).
Kubernetes is a set of projects, each shepherded by a special interest group (SIG). To get a grasp of the projects that we work on, check out the complete [list of SIGs](/sig-list.md).
SIG-CLI covers kubectl and related tools. We focus on the development and standardization of the CLI framework and its dependencies, the establishment of conventions for writing CLI commands, POSIX compliance, and improving the command line tools from a developer and devops user experience and usability perspective.
@ -38,7 +38,7 @@ As part of the application process, the Outreachy program recommends that candid
To start working on the project, make sure to fill out the CLA and check if you have the right environment with this guide. The README in the [community repo](https://github.com/kubernetes/community) details these things and more.
Check out these specific resources for how to contribute to CLI:
* SIG-CLI - [How to Contribute](https://github.com/kubernetes/community/blob/master/sig-cli/CONTRIBUTING.md)
* SIG-CLI - [How to Contribute](/sig-cli/CONTRIBUTING.md)
* Filter issue search for: `is:open is:issue label:sig/cli label:"help wanted"`
* Hand picked issues for outreachy applications: https://github.com/kubernetes/kubectl/projects/3

View File

@ -56,9 +56,9 @@ For simplicity, users shouldn't need to install/launch more than one component o
Once we have this, we should delete out-of-date, untested "getting-started guides" ([example broken cluster debugging thread](https://github.com/kubernetes/dashboard/issues/971)).
See also:
* [Summary proposal](https://github.com/kubernetes/kubernetes-anywhere/blob/master/PROPOSAL.md)
* [Summary proposal](https://git.k8s.io/kubernetes-anywhere/PROPOSAL.md)
* [kubernetes-anywhere umbrella issue](https://github.com/kubernetes/kubernetes-anywhere/issues/127)
* https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/cluster-deployment.md
* https://git.k8s.io/kubernetes/docs/proposals/cluster-deployment.md
* [Bootstrap API](https://github.com/kubernetes/kubernetes/issues/5754)
* [jbeda's simple setup sketch](https://gist.github.com/jbeda/7e66965a23c40a91521cf6bbc3ebf007)
@ -154,7 +154,7 @@ Examples:
* [Kube-AWS](https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/aws)
* [kops](https://github.com/kubernetes/kops)
* [Kargo](https://github.com/kubespray/kargo)
* (is https://github.com/kubernetes/contrib/tree/master/ansible still needed?)
* (is https://git.k8s.io/contrib/ansible still needed?)
* [kompose8](https://github.com/digitalrebar/kompos8)
* [Tectonic](https://tectonic.com/)
* [Kraken](https://github.com/samsung-cnct/kraken)

View File

@ -4,7 +4,7 @@ This is an autogenerated file!
Please do not edit this file directly, but instead make changes to the
sigs.yaml file in the project root.
To understand how this file is generated, see https://git.k8s.io/community/generator/README.md
To understand how this file is generated, see /generator/README.md
-->
# CLI SIG

View File

@ -1,8 +1,8 @@
# Docs and examples roadmap
If you'd like to help with documentation, please read the [kubernetes.io site instructions](https://github.com/kubernetes/kubernetes.github.io/blob/master/README.md).
If you'd like to help with documentation, please read the [kubernetes.io site instructions](https://git.k8s.io/kubernetes.github.io/README.md).
If you'd like to contribute an example, please read the [guidelines](https://github.com/kubernetes/kubernetes/blob/master/examples/guidelines.md).
If you'd like to contribute an example, please read the [guidelines](https://git.k8s.io/kubernetes/examples/guidelines.md).
Owners: @kubernetes/docs, @kubernetes/examples

View File

@ -11,7 +11,7 @@ In order to standardize Special Interest Group efforts, create maximum transpare
* Participate in release planning meetings and retrospectives, and burndown meetings, as needed
* Ensure related work happens in a project-owned github org and repository, with code and tests explicitly owned and supported by the SIG, including issue triage, PR reviews, test-failure response, bug fixes, etc.
* Use the above forums as the primary means of working, communicating, and collaborating, as opposed to private emails and meetings
* Represent the SIG for the PM group - see [PM SIG representatives](https://github.com/kubernetes/community/blob/master/sig-product-management/SIG%20PM%20representatives.md).
* Represent the SIG for the PM group - see [PM SIG representatives](/sig-product-management/SIG%20PM%20representatives.md).
## SIG roles
- **SIG Participant**: active in one or more areas of the project; wide
@ -28,7 +28,7 @@ In order to standardize Special Interest Group efforts, create maximum transpare
* Slack activity is archived at [kubernetes.slackarchive.io](http://kubernetes.slackarchive.io). To start archiving a new channel invite the slackarchive bot to the channel via `/invite @slackarchive`
* Organize video meetings as needed. No need to wait for the [Weekly Community Video Conference](community/README.md) to discuss. Please report a summary of SIG activities there.
* Request a Zoom account by emailing Paris Pittman(`parispittman@google.com`) and Jorge Castro(`jorge@heptio.com`). You must set up a google group (see below) for the SIG leads so that all the SIG leads have the ability to reset the password if necessary.
* Read [how to use YouTube](https://github.com/kubernetes/community/blob/master/community/K8sYoutubeCollaboration.md) for publishing your videos to the Kubernetes channel.
* Read [how to use YouTube](/community/K8sYoutubeCollaboration.md) for publishing your videos to the Kubernetes channel.
* Calendars
1. Create a calendar on your own account. Make it public.
2. Share it with all SIG leads with full ownership of the calendar - they can edit, rename, or even delete it.

View File

@ -269,11 +269,11 @@ See the sig-multicluster [community page] for points of contact and meeting time
[`PTAL`]: https://en.wiktionary.org/wiki/PTAL
[agenda]: https://docs.google.com/document/d/18mk62nOXE_MCSSnb4yJD_8UadtzJrYyJxFwbrgabHe8/edit
[bug]: #bug-lifecycle
[community page]: https://github.com/kubernetes/community/tree/master/sig-multicluster
[community page]: /sig-multicluster
[design proposal]: #design-proposals
[design repo]: https://github.com/kubernetes/community/tree/master/contributors/design-proposals/sig-multicluster
[design template]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/sig-multicluster/template.md
[development guide]: https://github.com/kubernetes/community/blob/master/contributors/devel/development.md
[design repo]: /contributors/design-proposals/sig-multicluster
[design template]: /contributors/design-proposals/sig-multicluster/template.md
[development guide]: /contributors/devel/development.md
[existing issue]: #adopt-an-issue
[feature repo]: https://github.com/kubernetes/features
[feature request]: #feature-requests
@ -281,14 +281,14 @@ See the sig-multicluster [community page] for points of contact and meeting time
[group]: https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster
[issue]: https://github.com/kubernetes/kubernetes/issues
[multicluster_help_wanted_issues]: https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A"help+wanted"+label%3Asig%2Fmulticluster
[kubectl concept docs]: https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/concepts/tools/kubectl
[kubectl concept docs]: https://git.k8s.io/kubernetes.github.io/docs/concepts/tools/kubectl
[kubectl docs]: https://kubernetes.io/docs/user-guide/kubectl-overview
[kubernetes/cmd/kubectl]: https://github.com/kubernetes/kubernetes/tree/master/cmd/kubectl
[kubernetes/pkg/kubectl]: https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl
[leads]: https://github.com/kubernetes/community/tree/master/sig-multicluster#leads
[kubernetes/cmd/kubectl]: https://git.k8s.io/kubernetes/cmd/kubectl
[kubernetes/pkg/kubectl]: https://git.k8s.io/kubernetes/pkg/kubectl
[leads]: /sig-multicluster#leads
[management overview]: https://kubernetes.io/docs/concepts/tools/kubectl/object-management-overview
[meeting]: https://github.com/kubernetes/community/tree/master/sig-multicluster#meetings
[meeting]: /sig-multicluster#meetings
[release]: #release
[slack-messages]: https://kubernetes.slack.com/messages/sig-multicluster
[slack-signup]: http://slack.k8s.io/
[tests]: https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md
[tests]: /contributors/devel/testing.md

View File

@ -10,8 +10,8 @@ follows:
most common failure scenarios and suggest improvements. It's up to the SIG or
individuals to prioritize and take up those tasks.
Oncall playbook:
https://github.com/kubernetes/community/blob/master/contributors/devel/on-call-federation-build-cop.md
The on-call playbook is available
[here](/contributors/devel/on-call-federation-build-cop.md)
# Joining the rotation

View File

@ -10,7 +10,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener
Responsible for answering scalability related questions such as:
What size clusters do we think that we should support with Kubernetes in the short to medium term? How performant do we think that the control system should be at scale? What resource overhead should the Kubernetes control system reasonably consume?
For more details about our objectives please review our [Scaling And Performance Goals](https://github.com/kubernetes/community/blob/master/sig-scalability/goals.md)
For more details about our objectives please review our [Scaling And Performance Goals](https://git.k8s.io/community/sig-scalability/goals.md)
## Meetings
* [Thursdays at 16:00 UTC](https://zoom.us/j/989573207) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:00&tz=UTC).

View File

@ -120,7 +120,7 @@ proposed</td>
* Leader election results are non-deterministic on a typical cluster, and a config would be best served to be configured as worst-case. It is not presently known whether there are performance impacts from leader election resulting in either co-location or distribution of those components.
* Improving the cluster performance loading to match production deployment scenarios is critical on-going work, especially clusterloader: [https://github.com/kubernetes/perf-tests/tree/master/clusterloader](https://github.com/kubernetes/perf-tests/tree/master/clusterloader)
* Improving the cluster performance loading to match production deployment scenarios is critical on-going work, especially clusterloader: [https://git.k8s.io/perf-tests/clusterloader](https://git.k8s.io/perf-tests/clusterloader)
* Multi-zone / multi-AZ deployments are often used to manage large clusters, but for testing/scalability efforts the target is intentionally a single Availability Zone. This keeps greater consistency between environments that do and don't support AZ-based deployments. Failures during scalability testing are outside the SIG charter. Protecting against network partitioning and improving total cluster availability (one of the key benefits of a multi-AZ strategy) are currently out of scope for the Scalability SIG efforts.

Some files were not shown because too many files have changed in this diff