Merge remote-tracking branch 'upstream/master' into keps

commit 92728d654f

CLA.md

@@ -13,6 +13,8 @@ It is important to read and understand this legal agreement.
## How do I sign?

If your work is done as an employee of your company, contact your company's legal department and ask to be put on the list of approved contributors for the Kubernetes CLA. Below, we have included steps for "Corporation signup" in case your company does not have a company agreement and would like to have one.

#### 1. Log in to the Linux Foundation ID Portal with GitHub

Click one of:
@@ -38,7 +40,9 @@ person@organization.domain email address in the CNCF account registration page.
#### 3. Complete signing process

After creating your account, follow the instructions to complete the
signing process through HelloSign.

If you did not receive an email from HelloSign, [then request it here](https://identity.linuxfoundation.org/projects/cncf).

#### 4. Ensure your GitHub e-mail address matches the address used to sign the CLA
@@ -50,7 +54,7 @@ You must also set your [git e-mail](https://help.github.com/articles/setting-you
to match this e-mail address as well.

If you already submitted a PR, you can correct your user.name and user.email
and then use `git commit --amend --reset-author` and then `git push --force` to
correct the PR.
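For a single-commit PR, the fix described above looks roughly like this (the name and address are placeholders; `--no-edit` keeps the existing commit message):

```shell
# Point git at the name and address that match your CLA signup
# ("Jane Developer" and the address below are placeholders).
git config user.name "Jane Developer"
git config user.email "jane@example.com"

# Re-stamp the branch tip with the corrected author...
git commit --amend --reset-author --no-edit

# ...and rewrite the PR branch (a force push is required after amending).
git push --force
```

For a PR with several commits, an interactive rebase with `--reset-author` on each commit is needed instead of a single amend.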
#### 5. Look for an email indicating successful signup.
@@ -67,6 +71,20 @@ Once you have this, the CLA authorizer bot will authorize your PRs.
## Changing your Affiliation

If you've changed employers and still contribute to Kubernetes, your affiliation
needs to be updated. The Cloud Native Computing Foundation uses [gitdm](https://github.com/cncf/gitdm)
to track who is contributing and from where. Create a pull request to the gitdm
repository with a change to [developers_affiliations.txt](https://github.com/cncf/gitdm/blob/master/developers_affiliations.txt).
Your entry should look similar to this:

```
Jorge O. Castro*: jorge!heptio.com, jorge!ubuntu.com, jorge.castro!gmail.com
	Heptio
	Canonical until 2017-03-31
```

## Troubleshooting

If you have problems signing the CLA, send an email message to: `helpdesk@rt.linuxfoundation.org`.
@@ -9,9 +9,10 @@ aliases:
sig-architecture-leads:
- bgrant0607
- jdumars
- mattfarina
sig-auth-leads:
- liggitt
- mikedanese
- enj
- tallclair
sig-autoscaling-leads:
- mwielgus
@@ -22,8 +23,9 @@ aliases:
- d-nishi
sig-azure-leads:
- justaugustus
- shubheksha
- dstrebel
- khenidak
- feiskyer
sig-big-data-leads:
- foxish
- erikerlandson
@@ -50,9 +52,9 @@ aliases:
- grodrigues3
- cblecker
sig-docs-leads:
- zacharysarah
- chenopis
- jaredbhatti
- bradamant3
sig-gcp-leads:
- abgworrall
sig-ibmcloud-leads:
@@ -81,8 +83,9 @@ aliases:
- idvoretskyi
- calebamiles
sig-release-leads:
- jdumars
- calebamiles
- justaugustus
- tpepper
sig-scalability-leads:
- wojtek-t
- countspongebob
@@ -90,10 +93,10 @@ aliases:
- bsalamat
- k82cn
sig-service-catalog-leads:
- pmorie
- carolynvs
- kibbles-n-bytes
- duglin
- jboyd01
sig-storage-leads:
- saad-ali
- childsb
@@ -121,7 +124,10 @@ aliases:
- smarterclayton
- destijl
wg-iot-edge-leads:
- cindyxing
- dejanb
- ptone
- cantbewong
wg-kubeadm-adoption-leads:
- luxas
- justinsb
@@ -142,20 +148,23 @@ aliases:
wg-resource-management-leads:
- vishh
- derekwaynecarr
wg-security-audit-leads:
- aasmall
- joelsmith
- cji
## BEGIN CUSTOM CONTENT
steering-committee:
- bgrant0607
- brendanburns
- derekwaynecarr
- dims
- jbeda
- michelleN
- philips
- pwittrock
- quinton-hoole
- sarahnovotny
- smarterclayton
- spiffxp
- thockin
- timothysc
code-of-conduct-committee:
- jdumars
README.md

@@ -13,16 +13,27 @@ issues, mailing lists, conferences, etc.
For more specific topics, try a SIG.

## Governance

Kubernetes has three types of groups that are officially supported:

* **Committees** are named sets of people that are chartered to take on sensitive topics.
This group is encouraged to be as open as possible while achieving its mission but, because of the nature of the topics discussed, private communications are allowed.
Examples of committees include the steering committee and things like security or code of conduct.
* **Special Interest Groups (SIGs)** are persistent open groups that focus on a part of the project.
SIGs must have open and transparent proceedings.
Anyone is welcome to participate and contribute provided they follow the Kubernetes Code of Conduct.
The purpose of a SIG is to own and develop a set of **subprojects**.
* **Subprojects**: each SIG can have a set of subprojects.
These are smaller groups that can work independently.
Some subprojects will be part of the main Kubernetes deliverables while others will be more speculative and live in the `kubernetes-sigs` GitHub org.
* **Working Groups** are temporary groups that are formed to address issues that cross SIG boundaries.
Working groups do not own any code or other long-term artifacts.
Working groups can report back and act through involved SIGs.

See the [full governance doc](governance.md) for more details on these groups.

A SIG can have its own policy for contribution, described in a `README` or `CONTRIBUTING` file in the SIG folder in this repo (e.g. [sig-cli/CONTRIBUTING.md](sig-cli/CONTRIBUTING.md)), and its own mailing list, slack channel, etc.

If you want to edit details about a SIG (e.g. its weekly meeting time or its leads),
please follow [these instructions](./generator) that detail how our docs are auto-generated.
@@ -34,7 +45,11 @@ lead to many relevant technical topics.
## Contribute

A first step to contributing is to pick from the [list of kubernetes SIGs](sig-list.md).
Start attending SIG meetings, join the slack channel and subscribe to the mailing list.
SIGs will often have a set of "help wanted" issues that can help new contributors get involved.

The [Contributor Guide](contributors/guide/README.md) provides detailed instructions on how to get your ideas and bug fixes seen and accepted, including:

1. How to [file an issue]
1. How to [find something to work on]
1. How to [open a pull request]
@@ -11,4 +11,4 @@ The members and their terms are as follows:
Please see the [bootstrapping document](./bootstrapping-process.md) for more information on how members are picked, their responsibilities, and how the committee will initially function.

_More information on how to contact this committee and learn about its process to come in the near future. For now, any Code of Conduct or Code of Conduct Committee concerns can be directed to <conduct@kubernetes.io>_.
@@ -18,7 +18,7 @@ All Kubernetes SIGs must define a charter defining the scope and governance of t
6. Send the SIG Charter out for review to steering@kubernetes.io. Include the subject "SIG Charter Proposal: YOURSIG"
and a link to the PR in the body.
7. Typically expect feedback within a week of sending your draft. Expect a longer time if it falls over an
event such as KubeCon/CloudNativeCon or holidays. Make any necessary changes.
8. Once accepted, the steering committee will ratify the PR by merging it.

## Steps to update an existing SIG charter
@@ -28,6 +28,25 @@ All Kubernetes SIGs must define a charter defining the scope and governance of t
- For minor updates that only impact issues or areas within the scope of the SIG, the SIG Chairs should
facilitate the change.

## SIG Charter approval process

When introducing a SIG charter or modification of a charter, the following process should be used.
As part of this we will define roles for the [OARP] process (Owners, Approvers, Reviewers, Participants).

- Identify a small set of Owners from the SIG to drive the changes.
Most typically this will be the SIG chairs.
- Work with the rest of the SIG in question (Reviewers) to craft the changes.
Make sure to keep the SIG in the loop as discussions progress with the Steering Committee (next step).
Including the SIG mailing list in communications with the steering committee would work for this.
- Work with the steering committee (Approvers) to gain approval.
This can simply be submitting a PR and sending mail to [steering@kubernetes.io].
If more substantial changes are desired it is advisable to socialize those before drafting a PR.
- The steering committee will be looking to ensure the scope of the SIG as represented in the charter is reasonable (and within the scope of Kubernetes) and that processes are fair.
- For large changes, alert the rest of the Kubernetes community (Participants) as the scope of the changes becomes clear.
Sending mail to [kubernetes-dev@googlegroups.com] and/or announcing at the community meeting are good ways to do this.

If there are questions about this process please reach out to the steering committee at [steering@kubernetes.io].

## How to use the templates

SIGs should use [the template][Short Template] as a starting point. This document links to the recommended [SIG Governance][sig-governance] but SIGs may optionally record deviations from these defaults in their charter.
@@ -41,9 +60,12 @@ The primary goal of the charters is to define the scope of the SIG within Kubern
See [frequently asked questions]

[OARP]: https://stumblingabout.com/tag/oarp/
[Recommendations and requirements]: sig-governance-requirements.md
[sig-governance]: sig-governance.md
[Short Template]: sig-charter-template.md
[frequently asked questions]: FAQ.md
[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml
[sig-architecture example]: ../../sig-architecture/charter.md
[steering@kubernetes.io]: mailto:steering@kubernetes.io
[kubernetes-dev@googlegroups.com]: mailto:kubernetes-dev@googlegroups.com
@@ -72,7 +72,7 @@ All technical assets *MUST* be owned by exactly 1 SIG subproject. The following
- *SHOULD* define target metrics for health signal (e.g. broken tests fixed within N days)
- *SHOULD* define process for meeting target metrics (e.g. all tests run as presubmit, build cop, etc)

[lazy-consensus]: http://en.osswiki.info/concepts/lazy_consensus
[super-majority]: https://en.wikipedia.org/wiki/Supermajority#Two-thirds_vote
[warnocks-dilemma]: http://communitymgt.wikia.com/wiki/Warnock%27s_Dilemma
[slo]: https://en.wikipedia.org/wiki/Service_level_objective
@@ -84,7 +84,7 @@ Subproject Owner Role. (this different from a SIG or Organization Member).
- SIG meets bi-weekly on Zoom with agenda in meeting notes
  - *SHOULD* be facilitated by chairs unless delegated to specific Members
- SIG overview and deep-dive sessions organized for KubeCon/CloudNativeCon
  - *SHOULD* be organized by chairs unless delegated to specific Members
- SIG updates to Kubernetes community meeting on a regular basis
  - *SHOULD* be presented by chairs unless delegated to specific Members
@@ -155,7 +155,7 @@ Issues impacting multiple subprojects in the SIG should be resolved by either:
- after 3 or more months it *SHOULD* be retired
- after 6 or more months it *MUST* be retired

[lazy-consensus]: http://en.osswiki.info/concepts/lazy_consensus
[super-majority]: https://en.wikipedia.org/wiki/Supermajority#Two-thirds_vote
[KEP]: https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md
[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L1454
@@ -0,0 +1,121 @@
# Kubernetes Working Group Formation and Disbandment

## Process Overview and Motivations

Working Groups provide a formal avenue for disparate groups to collaborate around a common problem, craft a balanced
position, and disband. Because they represent the interests of multiple groups, they are a vehicle for consensus
building. If code is developed as part of collaboration within the Working Group, that code will be housed in an
appropriate repository as described in the [repositories document]. The merging of this code into the repository
will be governed by the standard policies regarding submitting code to that repository (e.g. developed within one or
more Subprojects owned by SIGs).

Because a working group is an official part of the Kubernetes project, it is subject to steering committee oversight
over its formation and disbanding.

## Goals of the process

- An easy-to-navigate process for those wishing to establish and eventually disband a new Working Group
- Simple guidance and differentiation on where a Working Group makes sense, and where it does not
- Clear understanding that no authority is vested in a Working Group
- Ensure all future Working Groups conform with this process

## Non-goals of the process

- Documenting other governance bodies such as sub-projects or SIGs
- Changing the status of existing Working Groups/SIGs/Sub-projects

## Working Group Relationship To SIGs

Assets owned by the Kubernetes project (e.g. code, docs, blogs, processes, etc) are owned and
managed by [SIGs](sig-governance.md). The exception to this is specific assets that may be owned
by Working Groups, as outlined below.

Working Groups provide structure for governance and communication channels, and as such may
own the following types of assets:

- Calendar Events
- Slack Channels
- Discussion Forum Groups
Working Groups are distinct from SIGs in that they are intended to:

- facilitate collaboration across SIGs
- facilitate an exploration of a problem / solution through a group with minimal governmental overhead

Working Groups will typically have stakeholders whose participation is in the
context of one or more SIGs. These SIGs should be documented as stakeholders of the Working Group
(see Creation Process).

## Is it a Working Group? Yes, if...

- It does not own any code
- It has a clear goal measured through a specific deliverable or deliverables
- It is temporary in nature and will be disbanded after reaching its stated goal(s)
## Creation Process Description

Working Group formation is less tightly controlled than SIG formation since Working Groups:

- Do not own code
- Have clear entry and exit criteria
- Do not have any organizational authority, only influence

Therefore, Working Group formation begins with the organizers asking themselves some important questions that
should eventually be reflected in a pull request on sigs.yaml:

1. What is the exact problem this group is trying to solve?
1. What is the artifact that this group will deliver, and to whom?
1. How does the group know when the problem-solving process is completed, and it is time for the Working Group to
dissolve?
1. Who are all of the stakeholders involved in this problem this group is trying to solve (SIGs, steering committee,
other Working Groups)?
1. What are the meeting mechanics (frequency, duration, roles)?
1. Does the goal of the Working Group represent the needs of the project as a whole, or is it focused on the interests
of a narrow set of contributors or companies?
1. Who will chair the group, and ensure it continues to meet these requirements?
1. Is diversity well-represented in the Working Group?
Once the above questions have been answered, the pull request against sigs.yaml can be created. Once the generator
is run, this will in turn create the OWNERS_ALIASES file, readme files, and the main SIGs list. The minimum
requirements for that are:

- name
- directory
- mission statement
- chair information
- meeting information
- contact methods
- any [SIG](sig-governance.md) stakeholders
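As a rough illustration only, a working group entry covering those minimum requirements might look like the sketch below. Every name, channel, and time here is a placeholder, and the exact field names should be copied from existing entries in sigs.yaml rather than from this sketch:

```yaml
workinggroups:
  - name: Example Working Group            # placeholder name
    dir: wg-example                        # directory created for the group
    mission_statement: >
      Explore a cross-SIG approach to the example problem, deliver a
      recommendation document, then disband.
    leadership:
      chairs:
        - name: Jane Developer             # placeholder chair
          github: jdeveloper
    meetings:
      - description: Regular WG Meeting
        day: Wednesday
        time: "10:00"
        tz: PT (Pacific Time)
        frequency: biweekly
    contact:
      slack: wg-example
      mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-example
    stakeholder_sigs:                      # SIGs whose chairs must LGTM the PR
      - Architecture
```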
The pull request should be labeled with any SIG stakeholders and committee/steering. And since GitHub notifications
are not a reliable means to contact people, an email should be sent to the mailing lists for the stakeholder SIGs
and the steering committee with a link to the PR. A member of the community admin team will place a /hold on it
until it has an LGTM from at least one chair from each of the stakeholder SIGs, and a simple majority of the steering
committee.

Once merged, the Working Group is officially chartered until it either completes its stated goal, or disbands
voluntarily (e.g. due to new facts, member attrition, change in direction, etc). Working groups should strive to
make regular reports to stakeholder SIGs in order to ensure the mission is still aligned with the current state.

Deliverables (documents, diagrams) for the group should be stored in the directory created for the Working Group.
If the deliverable is a KEP, it would be helpful to link it in the closed formation/charter PR for future reference.
## Disbandment Process Description

Working Groups will be disbanded if either of the following is true:

- There is no longer a Chair
  - (with a 4-week grace period)
- None of the communication channels for the Working Group have been in use for the goals outlined at the founding of
the Working Group
  - (with a 3-month grace period)
  - Slack
  - Email Discussion Forum
  - Zoom

The current Chair may step down at any time. When they do so, a new Chair may be selected through lazy consensus
within the Working Group, and [sigs.yaml](/sigs.yaml) should be updated.

References

- [1] https://github.com/kubernetes/community/pull/1994
- [2] https://groups.google.com/a/kubernetes.io/d/msg/steering/zEY93Swa_Ss/C0ziwjkGCQAJ

[repositories document]: https://github.com/kubernetes/community/blob/master/github-management/kubernetes-repositories.md
@@ -22,17 +22,12 @@ and meetings devoted to Kubernetes.
* [Twitter]
* [Blog]
* Pose questions and help answer them on [Stack Overflow].

## Slack

[Join Slack] - sign up and join channels on topics that interest you, but please read our [Slack Guidelines] before participating.

If you want to add a new channel, contact one of the admins in the #slack-admins channel.

## Mailing lists
@@ -47,6 +42,14 @@ relevant to you, as you would any other Google Group.
* [Discuss Kubernetes] is where Kubernetes users trade notes
* Additional Google groups exist and can be joined for discussion related to each SIG and Working Group. These are linked from the [SIG list](/sig-list.md).

## Issues

If you have a question about Kubernetes or have a problem using it,
please start with the [troubleshooting guide].

If that doesn't answer your questions, or if you think you found a bug,
please [file an issue].

## Accessing community documents

In order to foster real time collaboration there are many working documents
@@ -63,9 +66,9 @@ Office hours are held once a month. Please refer to [this document](/events/offi
## Weekly Meeting

We have a public and recorded [weekly meeting] every Thursday at 10am US Pacific Time over Zoom.

Convert it to your local time using the [timezone table].

See it on the web at [calendar.google.com], or paste this [iCal url] into any iCal client.
@@ -78,7 +81,7 @@ please propose a specific date on the [Kubernetes Community Meeting Agenda].
## Conferences

Kubernetes is the main focus of KubeCon + CloudNativeCon, held every spring in Europe, summer in China, and winter in North America. Information about these and other community events is available on the CNCF [events] pages.

[Blog]: https://kubernetes.io/blog/
@@ -96,7 +99,8 @@ Kubernetes is the main focus of CloudNativeCon/KubeCon, held every spring in Eur
[kubernetes-dev]: https://groups.google.com/forum/#!forum/kubernetes-dev
[Discuss Kubernetes]: https://discuss.kubernetes.io
[kubernetes.slack.com]: https://kubernetes.slack.com
[Join Slack]: http://slack.k8s.io
[Slack Guidelines]: /communication/slack-guidelines.md
[Special Interest Group]: /README.md#SIGs
[Stack Overflow]: https://stackoverflow.com/questions/tagged/kubernetes
[timezone table]: https://www.google.com/search?q=1000+am+in+pst
@@ -326,7 +326,7 @@
* Rolling out new contributor workshop + playground
* Will have smaller summit in Shanghai (contact @jberkus)
* Started planning for Seattle, will have an extra ½ day.
* Registration will be going through the KubeCon/CloudNativeCon site
* Manage à la carte events at other people's conferences
* Communication pipelines & moderation
* Clean up spam
@@ -380,7 +380,7 @@
* GitHub Groups [Jorge Castro]
* [https://github.com/kubernetes/community/issues/2323](https://github.com/kubernetes/community/issues/2323) working to make the current 303 groups in the org easier to manage
* Shoutouts this week (Check in #shoutouts on slack)
* jberkus: To Jordan Liggitt for diagnosing & fixing the controller performance issue that has haunted us since last August, and to Julia Evans for reporting the original issue.
* Maulion: And another to @liggitt for always helping anyone with an auth question in all the channels with kindness
* jdumars: @paris - thank you for all of your work helping to keep our community safe and inclusive! I know that you've spent countless hours refining our Zoom usage, documenting, testing, and generally being super proactive on this.
* Nikhita: shoutout to @cblecker for excellent meme skills!
@@ -612,7 +612,7 @@
* GitHub: [https://github.com/YugaByte/yugabyte-db](https://github.com/YugaByte/yugabyte-db)
* Docs: [https://docs.yugabyte.com/](https://docs.yugabyte.com/)
* Slides: https://www.slideshare.net/YugaByte
* Yugabyte is a database focusing on planet scale, transactions, and high availability. It implements many common database APIs, making it a drop-in replacement for those DBs. Can run as a StatefulSet on k8s. Multiple DB API paradigms can be used for one database.
* No Kubernetes operator yet, but it's in progress.
* Answers from Q&A:
* @jberkus - For q1 - YB is optimized for small reads and writes, but can also perform batch reads and writes efficiently - mostly oriented towards modern OLTP/user-facing applications. Example is using Spark or Presto on top for use-cases like IoT, fraud detection, alerting, user personalization, etc.
@@ -778,7 +778,7 @@
* Aish Sundar - Shoutout to Benjamin Elder for adding Conformance test results to all Sig-release dashboards - master-blocking and all release branches.
* Josh Berkus and Stephen Augustus - To Misty Stanley-Jones for aggressively and doggedly pursuing 1.11 documentation deadlines, which both gives folks earlier warning about docs needs and lets us bounce incomplete features earlier
* Help Wanted
* Looking for Mandarin speakers to help with the new contributor workshop and other events at KubeCon/CloudNativeCon Shanghai. If you can help, please contact @jberkus / [jberkus@redhat.com](mailto:jberkus@redhat.com)
* [KEP-005](https://github.com/kubernetes/community/blob/master/keps/sig-contributor-experience/0005-contributor-site.md) - Contributor Site - ping [jorge@heptio.com](mailto:jorge@heptio.com) if you can help!
* Meet Our Contributors (mentors on demand)
* June 6th at 2:30pm and 8pm **UTC** [https://git.k8s.io/community/mentoring/meet-our-contributors.md](https://git.k8s.io/community/mentoring/meet-our-contributors.md)
@@ -852,7 +852,7 @@
* External projects: SIG has something like 20 projects and is breaking them apart, looking for owners and out-of-tree locations for them to better live. Projects should move to CSI, a kubernetes-sigs/* repo, a utility library, or EOL
* [ 0:00 ] **Announcements**
* <span style="text-decoration:underline;">Shoutouts this week</span> (Check in #shoutouts on slack)
* Big shoutout to @carolynvs for being welcoming and encouraging to newcomers, to @paris for all the community energy and dedication, and to all the panelists from the recent KubeCon/CloudNativeCon diversity lunch for sharing their experiences.
* Big shoutout to @mike.splain for running the Boston Kubernetes meetup (9 so far!)
* Everyone at svcat is awesome and patient, especially @carolynvs, @Jeremy Rickard & @jpeeler, who all took time to help me when I hit some bumps on my first PR.
* <span style="text-decoration:underline;">Help Wanted</span>
@@ -962,7 +962,7 @@
* SIG Scalability is looking for contributors!
* We need more contributor mentors! [Fill this out.](https://goo.gl/forms/17Fzwdm5V2TVWiwy2)
* The next Meet Our Contributors (mentors on demand!) will be on June 6th. Check out kubernetes.io/community for time slots and to copy to your calendar.
* **KubeCon/CloudNativeCon Follow Ups**
* Videos and slides: [https://github.com/cloudyuga/kubecon18-eu](https://github.com/cloudyuga/kubecon18-eu) Thanks CloudYuga for this!
* **Other**
* Don't forget to check out [discuss.kubernetes.io](https://discuss.kubernetes.io/)!
@@ -1014,7 +1014,7 @@
* Communication platform
* Flow in GitHub
* [Developers Guide underway](https://github.com/kubernetes/community/issues/1919) under Contributor Docs subproject
* Contributor Experience Update [slide deck](https://docs.google.com/presentation/d/1KUbnP_Bl7ulLJ1evo-X_TdXhlvQWUyru4GuZm51YfjY/edit?usp=sharing) from KubeCon/CloudNativeCon EU (if you are on the k-dev mailing list, you'll have access)
* **Announcements:**
* **Shoutouts!**
* See someone doing something great in the community? Mention them in #shoutouts on slack and we'll mention them during the community meeting:
@@ -1023,7 +1023,7 @@
* Tim Pepper to Aaron Crickenberger for being such a great leader on the project during recent months
* Chuck Ha shouts out to the doc team - "Working on the website is such a good experience now that it's on Hugo. Page rebuild time went from ~20 seconds to 60ms" :heart emoji:
* Jason de Tiber would like to thank Leigh Capili (@stealthybox) for the hard work and long hours helping to fix kubeadm upgrade issues. (2nd shoutout in a row for Leigh! -ed)
* Jorge Castro and Paris Pittman would like to thank Vanessa Heric and the rest of the CNCF/Linux Foundation personnel that helped us pull off another great Contributor Summit and KubeCon/CloudNativeCon
* [Top Stack Overflow Users](https://stackoverflow.com/tags/kubernetes/topusers) in the Kubernetes Tag for the month
* Anton Kostenko, Nicola Ben, Maruf Tuhin, Jonah Benton, Const
* Message from the docs team re: Hugo transition:
@ -1042,7 +1042,7 @@
|
|||
**Help Wanted?**

* [SIG UI](https://github.com/kubernetes/community/blob/master/sig-ui/README.md) is looking for additional contributors (with JavaScript and/or Go knowledge) and maintainers
* [Piotr](https://github.com/bryk) and [Konrad](https://github.com/konryd) from Google have offered to bring folks up to speed.
* Take a look at open issues to get started, or reach out via their Slack channel, mailing list, or next meeting.
* SIG UI mailing list: [https://groups.google.com/forum/#!forum/kubernetes-sig-ui](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
* 35k users with 5k weekly active users
* Produced quarterly
* **SIG Updates:**
* **Thanks to the test-infra folks for labels**
* **Cluster Lifecycle [Tim St. Clair]**
* kubeadm
* Steadily burning down against 1.11
* Slight changes to the structure of the object (unify metrics sources)
* Better e2e tests on all HPA functionality
* Movement along the path to blocking HPA custom metrics e2e tests
* VPA work coming along, alpha soon (demo at KubeCon/CloudNativeCon)
* Come say hi at KubeCon/CloudNativeCon (Intro and Deep Dive, talks on HPA)
* **PM [Jaice Singer DuMars]**
* Working on mechanisms to get feedback from the user community (playing with something like [http://kubernetes.report](http://kubernetes.report) -- in development, not ready for distro yet)
* Presenting at KubeCon/CloudNativeCon 16:35 on Thursday ~ Ihor and Aparna
* Working on a charter draft
* We actually represent three 'P' areas: product, project, and program
* Help the SIG focus on implementations
* We're trying to look a
* **Announcements:**
* **KubeCon/CloudNativeCon next week, no community meeting!** \o/
* **Last Chance to Register for the Contributor Summit**
* Registration ends Fri, Apr 7th @ 7pm UTC
* Tuesday, May 1, the day before KubeCon/CloudNativeCon
* You must [register here](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) even if you've registered for KubeCon/CloudNativeCon
* SIGs, remember to [put yourself down on the SIG Update sheet](https://docs.google.com/spreadsheets/d/1adztrJ05mQ_cjatYSnvyiy85KjuI6-GuXsRsP-T2R3k/edit#gid=1543199895) to give your 5-minute update that afternoon.
* **Shoutouts!**
* See someone doing something great in the community? Mention them in #shoutouts on Slack and we'll mention them during the community meeting:
* @cblecker for fielding so many issues and PRs.
* <span style="text-decoration:underline;">Help Wanted?</span>
* SIG UI is looking for more active contributors to revitalize the dashboard. Please join their [communication channels](https://github.com/kubernetes/community/blob/master/sig-ui/README.md) and attend the next meeting to announce your interest.
* <span style="text-decoration:underline;">KubeCon/CloudNativeCon EU Update</span>
* Current contributor track session voting will be emailed to attendees today!
* RSVP for the Contributor Summit [here](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit)
* SIG leads, please prepare your 5-minute updates
* Support for kubeadm and minikube
* Create issues on the CRI-O project on GitHub
* SIG Node does not have plans to choose one yet
* Working on conformance to address implementations, which should lead to choosing a default implementation
* Choice is important since it would be used in scalability testing
* Test data? Plan to publish results to testgrid; will supply results ASAP
* Previously blocked on a dashboard issue
* 6 charters in flight; working on charters, then going to other SIGs
* [r/kubernetes: Ask Me Anything](https://www.reddit.com/r/kubernetes/comments/8b7f0x/we_are_kubernetes_developers_ask_us_anything/) - thanks everyone for participating; lots of user feedback, please have a look.
* We'll likely do more of these in the future.
* [Kubernetes Contributor Summit @ KubeCon/CloudNativeCon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 (jb)
* You need to register for this even if you already registered for KubeCon/CloudNativeCon! Link to the form in the link above.
* New-contributor and ongoing-contributor sessions in the morning, general tracks in the afternoon
* New CNCF Interactive Landscape: [https://landscape.cncf.io/](https://landscape.cncf.io/) (Dan Kohn)
* Creating a Docker registry and Helm repos, pushing Helm charts
* CLI and web UI
* Caching upstream repositories
* Walkthrough and example: [https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/](https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/) & [https://github.com/jfrogtraining/kubernetes_example](https://github.com/jfrogtraining/kubernetes_example)
* Questions
* Difference between commercial and free (and what's the cost)?
* Free only has Maven support and is open source; commercial supports everything (including Kubernetes-related technologies, like Helm)
* They will be migrated, with the blog manager opening PRs as needed
* SIG Service Catalog - bumped to 5/24
* **Announcements**
* [Kubernetes Contributor Summit @ KubeCon/CloudNativeCon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 [Jorge Castro]
* You need to register for this even if you already registered for KubeCon/CloudNativeCon! Link to the form in the link above.
* Current contributor track voting on topics will be emailed to attendees Monday
* Reddit r/kubernetes AMA [Jorge Castro]
* This next Tuesday: [https://www.reddit.com/r/kubernetes/comments/89gdv0/kubernetes_ama_will_be_on_10_april_tuesday/](https://www.reddit.com/r/kubernetes/comments/89gdv0/kubernetes_ama_will_be_on_10_april_tuesday/)
* Generates join keys for kubeadm
* Sends information like master election, cluster admin config file, etc. back to the shared data set
* Resources:
* KubeCon/CloudNativeCon presentation: [https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes](https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes)
* Longer demo video: [https://www.youtube.com/watch?v=OMm6Oz1NF6I](https://www.youtube.com/watch?v=OMm6Oz1NF6I)
* Digital Rebar: [https://github.com/digitalrebar/provision](https://github.com/digitalrebar/provision)
* Project site: [http://rebar.digital](http://rebar.digital)
* Looking for contributors to answer questions, 2 slots
* Reach out to @paris on Slack if you're interested in participating
* Contributor Summit in Copenhagen May 1 - [registration](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/co-located-events/kubernetes-contributor-summit/) is live
* KubeCon/CloudNativeCon Copenhagen (May 2-4) is **on track to sell out**. [Register](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/)
* Shoutouts this week (from #shoutouts in Slack):
* @nabrahams, who picked the 1.10 release notes as his first contribution. We literally could not have done this without him!
* [ 0:15 ] **Kubernetes 1.10 Release Retrospective**
* Registration for the Contributor Summit is now live:
* See [this page](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/co-located-events/kubernetes-contributor-summit/) for details
* Please register if you're planning on attending; we need this so we have the correct amount of food!
* Just registering for KubeCon/CloudNativeCon is not enough!
* [Office Hours Next Week!](https://github.com/kubernetes/community/blob/master/events/office-hours.md)
* Volunteer developers needed to answer questions
* [Helm Summit videos](https://www.youtube.com/playlist?list=PL69nYSiGNLP3PlhEKrGA0oN4eY8c4oaAH&disable_polymer=true) are up.
* [ 0:00 ] **Graph o' the Week** - Zach Corleissen, SIG Docs
* Weekly update on data from devstats.k8s.io
* [https://k8s.devstats.cncf.io/d/44/time-metrics?orgId=1&var-period=w&var-repogroup_name=Docs&var-repogroup=docs&var-apichange=All&var-size_name=All&var-size=all&var-full_name=Kubernetes](https://k8s.devstats.cncf.io/d/44/time-metrics?orgId=1&var-period=w&var-repogroup_name=Docs&var-repogroup=docs&var-apichange=All&var-size_name=All&var-size=all&var-full_name=Kubernetes)
* Docs folks had vague anxiety (without concrete data) about their response times for issues and PRs. Devstats shows initial response times of less than approximately 4 days during the last year, outside of a few spikes associated with calendar holidays and KubeCon/CloudNativeCon.
* Introduction of Prow into kubernetes/website led to a demonstrable improvement in early 2018
* [ 0:00 ] **SIG Updates**
* SIG Apps [Adnan Abdulhussein] (confirmed)
* [Governance.md updated with subprojects](https://github.com/kubernetes/community/blob/master/governance.md#subprojects)
* [WIP: Subproject Meta](https://docs.google.com/document/d/1FHauGII5LNVM-dZcNfzYZ-6WRs9RoPctQ4bw5dczrkk/edit#heading=h.2nslsje41be1)
* [WIP: Charter FAQ (the "Why"s)](https://github.com/kubernetes/community/pull/1908)
* Reminder: [Contributor Summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit), 1 May, the day before KubeCon/CloudNativeCon
* CNCF would like feedback on the draft blog post for the 1.10 beta:
* [https://kubernetes.io/blog/2018/03/first-beta-version-of-kubernetes-1-10/](https://kubernetes.io/blog/2018/03/first-beta-version-of-kubernetes-1-10/)
* Please contact [Natasha Woods](mailto:nwoods@linuxfoundation.org) with your feedback
* Shoutouts this week
* See someone doing something great for the community? Mention them in #shoutouts on Slack.
* [ 0:00 ] <strong>Announcements</strong>
* [Owner/Maintainer](https://github.com/kubernetes/community/pull/1861/files) [pwittrock]
* Maintainer is folding into Owner
* Reminder: the Contributor Summit happens 1 May, the day before KubeCon/CloudNativeCon
* [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit)
* KubeCon/CloudNativeCon price increase March 9
* [https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/)
* Copenhagen May 2-4, 2018
* [Meet Our Contributors is next Wednesday!](https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md)
* Ask current contributors anything on Slack in #meet-our-contributors - testing infra, how to make a first-time contribution, how they got involved in k8s
* Shoutouts!
* None on Slack this week - thank someone in #shoutouts!
* Top 5 in the Kubernetes StackOverflow tag for the week: Radek "Goblin" Pieczonka, aerokite, Vikram Hosakote, Jonah Benton, and fiunchinho

## February 22, 2018 - ([recording](https://www.youtube.com/watch?v=7pN0xdiFqPE))
* SIG Cluster Lifecycle [First Last]
* Not happening
* [ 0:00 ] **Announcements**
* Reminder: the Contributor Summit happens 1 May, the day before KubeCon/CloudNativeCon
* [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit)
* Shoutouts this week
* Zhonghu Xu - @hzxuzhonghu for many high-quality apiserver API PRs
* Roadshow!
* F2F this Tuesday @ INDEX
* Contributor Summit in Copenhagen
* May 1; registration will be on the KubeCon/CloudNativeCon site this week
* New weekly meeting (from bi-weekly), same day/time (Weds @ 5pm UTC)
* SIG API Machinery [Daniel Smith] (c)
* Reminder: SIG API Machinery doesn't own the API (that's SIG Architecture), but rather the mechanics in the API server, registry, and discovery
* [ 0:00 ] **Announcements**
* Office hours next week!
* [https://github.com/kubernetes/community/blob/master/events/office-hours.md](https://github.com/kubernetes/community/blob/master/events/office-hours.md)
* Reminder: the Contributor Summit will be 1 May, the day before KubeCon/CloudNativeCon EU: [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit)
* /lgtm, /approve, and the principle of least surprise
* [https://github.com/kubernetes/test-infra/issues/6589](https://github.com/kubernetes/test-infra/issues/6589)
* Do we all need to use [the exact same code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process)?
* Contributing tests
* Cleaning up tests
* What things are tested
* e2e framework
* Conformance
* Please come participate
* Kubernetes Documentation [User Journeys MVP](https://kubernetes.io/docs/home/) launched [Andrew Chen]
* Please give SIG Docs feedback; still adding things later
* Can contribute normally (join SIG Docs for more information)
* New landing page incorporating personas (users, contributors, operators)
* Levels of knowledge (foundational, advanced, etc.)
* Feel free to comment offline or on the issue if you have comments
* TL;DR: call it the "control plane"
* Issue: [https://github.com/kubernetes/website/issues/6525](https://github.com/kubernetes/website/issues/6525)
* Contributor Summit for KubeCon/CloudNativeCon EU [Jorge and Paris]
* SAVE THE DATE: May 1, 2018
* [https://github.com/kubernetes/community/pull/1718](https://github.com/kubernetes/community/pull/1718)
* #shoutouts - [Jorge Castro]
* Breaking up the monolithic kubectl.
* [ 0:00 ] **Announcements**
* SIG leads: register to offer intros and deep dives in the SIG track at KubeCon/CloudNativeCon Copenhagen (May 2-4): [overview](https://groups.google.com/forum/#!searchin/kubernetes-dev/kohn%7Csort:date/kubernetes-dev/5U-eNRBav2Q/g71MW47ZAgAJ), [signup](https://docs.google.com/forms/d/e/1FAIpQLSedSif6MwGfdI1-Rb33NRjTYwotQtIhNL7-ebtYQoDARPB2Tw/viewform) (1/31 deadline)
* [SIG Contributor Experience news: new lead, new meeting](https://groups.google.com/forum/#!topic/kubernetes-dev/65S1Y3IK8PQ)
* [Meet Our Contributors](https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md) - Feb 7th [Paris]
* 7:30am PST / 3:30pm UTC & 1pm PST / 9pm UTC
* GSoC [Ihor D]
* [https://github.com/cncf/soc](https://github.com/cncf/soc); [k8s gh](https://github.com/kubernetes/community/blob/master/mentoring/google-summer-of-code.md)
* nikhita has volunteered to drive this program for Kubernetes
* SIG Intros & Deep Dives session registration at KubeCon/CloudNativeCon will be announced shortly (stay tuned!)
* Changes to this meeting's format [Jorge Castro]
* SIGs scheduled per cycle instead of ad hoc
* Demo changes
Moderators _MUST_:

- Take care of spam as soon as possible, which may mean taking action by removing a member from that resource.
- Foster a safe and productive environment by being aware of potential cultural differences between Kubernetes community members.
- Understand that you might be contacted by moderators, community managers, and other users via private email or a direct message.
- Report violations of the Code of Conduct to <conduct@kubernetes.io>.

Moderators _SHOULD_:
## Violations

The Kubernetes [Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) will have the final authority regarding escalated moderation matters. Violations of the Code of Conduct will be handled on a case-by-case basis. Depending on severity, this can range up to and including removal of the person from the community, though this is extremely rare.

## Specific Guidelines

These guidelines are for tool-specific policies that don't fit under a general umbrella.
### Mailing Lists

### Moderating a SIG/WG list

- Each SIG and Working Group mailing list should have parispittman@google.com and jorge@heptio.com as a co-owner so that administrative functions can be managed centrally across the project.
- Moderation of the SIG/WG lists is up to the individual SIG/WG; these admins are there to help facilitate leadership changes, reset lost passwords, etc.

- Users who violate the Code of Conduct or engage in other negative activities (like spamming) should be moderated.
  - [Lock the thread immediately](https://support.google.com/groups/answer/2466386?hl=en#) so that people cannot reply to the thread.
  - [Delete the post](https://support.google.com/groups/answer/1046523?hl=en).
  - In some cases you might need to ban a user from the group; follow [these instructions](https://support.google.com/groups/answer/2646833?hl=en&ref_topic=2458761#) to stop a member from being able to post to the group.

For more technical help on how to use Google Groups, check the [Groups Help](https://support.google.com/groups/answer/2466386?hl=en&ref_topic=2458761) page.

### New users posting to a SIG/WG list

New members who post to a group will automatically have their messages put in a queue and be sent the following message automatically: "Since you're a new subscriber you're in a moderation queue, sorry for the inconvenience, a moderator will check your message shortly."

Moderators will receive emails when messages are in this queue and will process them accordingly.

### Slack
# Community Moderators

The following people are responsible for moderating/administrating Kubernetes communication channels, listed with their home time zones.
See our [moderation guidelines](./moderating.md) for policies and recommendations.

## Mailing Lists
# Kubernetes Resources

> A collection of resources organized by medium (e.g. audio, text, video)

## Table of Contents

<!-- vim-markdown-toc GFM -->

- [Contributions](#contributions)
- [Resources](#resources)
  - [Audio](#audio)
  - [Text](#text)
  - [Video](#video)
  - [Learning Resources](#learning-resources)

<!-- vim-markdown-toc -->

## Contributions

If you would like to contribute to this list, please submit a PR and add `/sig contributor-experience` and `/assign @petermbenjamin`.

The criteria for contributions are simple:

- The resource must be related to Kubernetes.
- The resource must be free.
- Avoid undifferentiated search links (e.g. `https://example.com/search?q=kubernetes`), unless you can ensure the most relevant results (e.g. `https://example.com/search?q=kubernetes&category=technology`)

## Resources

### Audio

- [PodCTL](https://twitter.com/PodCTL)
- [Kubernetes Podcast](https://kubernetespodcast.com)
- [The New Stack Podcasts](https://thenewstack.io/podcasts/)

### Text

- [Awesome Kubernetes](https://github.com/ramitsurana/awesome-kubernetes)
- [CNCF Blog](https://www.cncf.io/newsroom/blog/)
- [Dev.To](https://dev.to/t/kubernetes)
- [Heptio Blog](https://blog.heptio.com)
- [KubeTips](http://kubetips.com)
- [KubeWeekly](https://twitter.com/kubeweekly)
- [Kubedex](https://kubedex.com/category/blog/)
- [Kubernetes Blog](https://kubernetes.io/blog/)
- [Kubernetes Enhancements Repo](https://github.com/kubernetes/enhancements)
- [Kubernetes Forum](https://discuss.kubernetes.io)
- [Last Week in Kubernetes Development](http://lwkd.info)
- [Medium](https://medium.com/tag/kubernetes)
- [Reddit](https://www.reddit.com/r/kubernetes)
- [The New Stack: CI/CD With Kubernetes](https://thenewstack.io/ebooks/kubernetes/ci-cd-with-kubernetes/)
- [The New Stack: Kubernetes Deployment & Security Patterns](https://thenewstack.io/ebooks/kubernetes/kubernetes-deployment-and-security-patterns/)
- [The New Stack: Kubernetes Solutions Directory](https://thenewstack.io/ebooks/kubernetes/kubernetes-solutions-directory/)
- [The New Stack: State of Kubernetes Ecosystem](https://thenewstack.io/ebooks/kubernetes/state-of-kubernetes-ecosystem/)
- [The New Stack: Use-Cases for Kubernetes](https://thenewstack.io/ebooks/use-cases/use-cases-for-kubernetes/)
- [Weaveworks Blog](https://www.weave.works/blog/category/kubernetes/)

### Video

- [BrightTALK Webinars](https://www.brighttalk.com/search/?q=kubernetes)
- [Ceph YouTube Channel](https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw)
- [CNCF YouTube Channel](https://www.youtube.com/channel/UCvqbFHwN-nwalWPjPUKpvTA)
- [Heptio YouTube Channel](https://www.youtube.com/channel/UCjQU5ZI2mHswy7OOsii_URg)
- [Joe Hobot YouTube Channel](https://www.youtube.com/channel/UCdxEoi9hB617EDLEf8NWzkA)
- [Kubernetes YouTube Channel](https://www.youtube.com/channel/UCZ2bu0qutTOM0tHYa_jkIwg)
- [Lachlan Evenson YouTube Channel](https://www.youtube.com/channel/UCC5NsnXM2lE6kKfJKdQgsRQ)
- [Rancher YouTube Channel](https://www.youtube.com/channel/UCh5Xtp82q8wjijP8npkVTBA)
- [Rook YouTube Channel](https://www.youtube.com/channel/UCa7kFUSGO4NNSJV8MJVlJAA)
- [Tigera YouTube Channel](https://www.youtube.com/channel/UC8uN3yhpeBeerGNwDiQbcgw)
- [Weaveworks YouTube Channel](https://www.youtube.com/channel/UCmIz9ew1lA3-XDy5FqY-mrA/featured)

### Learning Resources

- [edx Courses](https://www.edx.org/course?search_query=kubernetes)
- [Katacoda Interactive Tutorials](https://www.katacoda.com)
- [Udacity Course](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
- [Udemy Courses](https://www.udemy.com/courses/search/?courseLabel=&sort=relevance&q=kubernetes&price=price-free)
# SLACK GUIDELINES

Slack is the main communication platform for Kubernetes outside of our mailing lists. It’s important that conversation stays on topic in each channel and that everyone abides by the Code of Conduct. We have over 50,000 members, all of whom should expect to have a positive experience.

Chat is searchable and public. Do not make comments that you would not say on a video recording or in another public space. Please be courteous to others.

Please reach out to the #slack-admins group with your request to create a new channel.
Channels are dedicated to [SIGs, WGs](/sig-list.md), sub-projects, community topics, and related Kubernetes programs/projects.

Channels are not:
* company-specific; cloud providers are OK with product names as the channel name. Discourse will be about Kubernetes-related topics, not proprietary information of the provider.
* private unless there is an exception: code of conduct matters, mentoring, security/vulnerabilities, GitHub management, or steering committee.

Typical naming conventions:
#kubernetes-foo #sig-foo #meetup-foo #location-users #projectname

Join the #slack-admins channel or contact one of the admins in the closest timezone.
What if you have a problem with an admin?
Send a DM to another listed admin and describe the situation, OR
if it’s a code of conduct issue, please send an email to conduct@kubernetes.io and describe the situation.

## BOTS, TOKENS, WEBHOOKS, OH MY

Bots, tokens, and webhooks are reviewed on a case-by-case basis, with most requests being rejected due to security, privacy, and usability concerns. Bots and the like tend to make a lot of noise in channels. Our Slack instance has over 50,000 people and we want everyone to have a great experience. Please join #slack-admins and have a discussion about your request before requesting the access.

Typically OK: GitHub, CNCF requests, and tools/platforms that we use to contribute to Kubernetes

## ADMIN MODERATION

For reasons listed below, admins may inactivate individual Slack accounts.
In the case that certain channels have rules or guidelines, they will be listed in the purpose or pinned docs of that channel.

#kubernetes-dev = questions and discourse around upstream contributions and development to Kubernetes
#kubernetes-careers = job openings for positions working with/on/around Kubernetes. Post the job once and pin it. Pins expire after 30 days. Postings must include:
- A link to the posting or job description
- The business name that will employ the Kubernetes hire
- The location of the role, or whether remote is OK

## DM (Direct Message) Conversations

Please do not engage in proprietary, company-specific conversations in the Kubernetes Slack instance. This is meant for conversations around related Kubernetes open source topics and community. Proprietary conversations should occur in your company's Slack and/or communication platforms. As with all communication, please be mindful of appropriateness, professionalism, and applicability to the Kubernetes community.

Note:
We archive the entire workspace's Slack data in zip files when we have time. [The latest archive is from June 2016-November 2018.](https://drive.google.com/drive/folders/1idJkWcDuSfs8nFUm-1BgvzZxCqPMpDCb?usp=sharing)
@@ -68,7 +68,9 @@ Kubernetes organization to any related orgs automatically, but such is not the
case currently. If you are a Kubernetes org member, you are implicitly eligible
for membership in related orgs, and can request membership when it becomes
relevant, by [opening an issue][membership request] against the kubernetes/org
repo, as above.
repo, as above. However, if you are a member of any of the related
[Kubernetes GitHub organizations] but not of the [Kubernetes org],
you will need explicit sponsorship for your membership request.

### Responsibilities and privileges

@@ -226,6 +228,7 @@ The Maintainer role has been removed and replaced with a greater focus on [OWNER
[contributor guide]: /contributors/guide/README.md
[Kubernetes GitHub Admin team]: /github-management/README.md#github-administration-team
[Kubernetes GitHub organizations]: /github-management#actively-used-github-organizations
[Kubernetes org]: https://github.com/kubernetes
[kubernetes-dev@googlegroups.com]: https://groups.google.com/forum/#!forum/kubernetes-dev
[kubernetes-sigs]: https://github.com/kubernetes-sigs
[membership request]: https://github.com/kubernetes/org/issues/new?template=membership.md&title=REQUEST%3A%20New%20membership%20for%20%3Cyour-GH-handle%3E

@@ -89,7 +89,7 @@ Implementations that cannot offer consistent ranging (returning a set of results

#### etcd3

For etcd3 the continue token would contain a resource version (the snapshot that we are reading that is consistent across the entire LIST) and the start key for the next set of results. Upon receiving a valid continue token the apiserver would instruct etcd3 to retrieve the set of results at a given resource version, beginning at the provided start key, limited by the maximum number of requests provided by the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that would begin at the next possible key, and the continue token set on the returned list.
For etcd3 the continue token would contain a resource version (the snapshot that we are reading that is consistent across the entire LIST) and the start key for the next set of results. Upon receiving a valid continue token the apiserver would instruct etcd3 to retrieve the set of results at a given resource version, beginning at the provided start key, limited by the maximum number of requests provided by the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that would begin at the next possible key, and the continue token set on the returned list.

The storage layer in the apiserver must apply consistency checking to the provided continue token to ensure that malicious users cannot trick the server into serving results outside of its range. The storage layer must perform defensive checking on the provided value, check for path traversal attacks, and have stable versioning for the continue token.

@@ -35,7 +35,7 @@ while

## Constraints and Assumptions

* it is not the goal to implement all output formats one can imagine. The main goal is to be extensible with a clear golang interface. Implementations of e.g. CADF must be possible, but won't be discussed here.
* it is not the goal to implement all output formats one can imagine. The main goal is to be extensible with a clear golang interface. Implementations of e.g. CADF must be possible, but won't be discussed here.
* dynamic loading of backends for new output formats are out of scope.

## Use Cases

@@ -243,7 +243,7 @@ type PolicyRule struct {
    // An empty list implies every user.
    Users []string
    // The user groups this rule applies to. A user is considered matching
    // if the are a member of any of these groups
    // if they are a member of any of these groups
    // An empty list implies every user group.
    UserGroups []string

@@ -12,7 +12,7 @@ Thanks: @dbsmith, @deads2k, @sttts, @liggit, @enisoc

### Summary

This document proposes a detailed plan for adding support for version-conversion of Kubernetes resources defined via Custom Resource Definitions (CRD). The API Server is extended to call out to a webhook at appropriate parts of the handler stack for CRDs.
This document proposes a detailed plan for adding support for version-conversion of Kubernetes resources defined via Custom Resource Definitions (CRD). The API Server is extended to call out to a webhook at appropriate parts of the handler stack for CRDs.

No new resources are added; the [CRD resource](https://github.com/kubernetes/kubernetes/blob/34383aa0a49ab916d74ea897cebc79ce0acfc9dd/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types.go#L187) is extended to include conversion information as well as multiple schema definitions, one for each apiVersion that is to be served.

@@ -89,12 +89,12 @@ type CustomResourceDefinitionSpec struct {
    Version string
    Names CustomResourceDefinitionNames
    Scope ResourceScope
    // This optional and correspond to the first version in the versions list
    // Optional, can only be provided if per-version schema is not provided.
    Validation *CustomResourceValidation
    // Optional, correspond to the first version in the versions list
    // Optional, can only be provided if per-version subresource is not provided.
    Subresources *CustomResourceSubresources
    Versions []CustomResourceDefinitionVersion
    // Optional, and correspond to the first version in the versions list
    // Optional, can only be provided if per-version additionalPrinterColumns is not provided.
    AdditionalPrinterColumns []CustomResourceColumnDefinition

    Conversion *CustomResourceConversion
@@ -104,9 +104,11 @@ type CustomResourceDefinitionVersion struct {
    Name string
    Served Boolean
    Storage Boolean
    // These three fields should not be set for first item in Versions list
    // Optional, can only be provided if top level validation is not provided.
    Schema *JSONSchemaProp
    // Optional, can only be provided if top level subresource is not provided.
    Subresources *CustomResourceSubresources
    // Optional, can only be provided if top level additionalPrinterColumns is not provided.
    AdditionalPrinterColumns []CustomResourceColumnDefinition
}

@@ -125,21 +127,49 @@ type CustomResourceConversionWebhook {
}
```

### Defaulting
### Top level fields to Per-Version fields

In case that there is no versions list, a single version with values defaulted to top level version will be created. That means a single version with a name set to spec.version.
All newly added per version fields (schema, additionalPrinterColumns or subresources) will be defaulted to the coresponding top level field except for the first version in the list that will remain empty.
In *CRD v1beta1* (apiextensions.k8s.io/v1beta1) there are per-version schema, additionalPrinterColumns or subresources (called X in this section) defined and these validation rules will be applied to them:

* Either top level X or per-version X can be set, but not both. This rule applies to individual X’s not the whole set. E.g. top level schema can be set while per-version subresources are set.
* per-version X cannot be the same. E.g. if all per-version schema are the same, the CRD object will be rejected with an error message asking the user to use the top level schema.

### Validation
in *CRD v1* (apiextensions.k8s.io/v1), there will be only version list with no top level X. The second validation guarantees a clean moving to v1. These are conversion rules:

To keep backward compatibility, the top level fields (schema, additionalPrinterColumns or subresources) stay the same and source of truth for first (top) version. The first item in the versions list must not set any of those fields. The plan is to use unified version list for v1.
*v1beta1->v1:*

* If top level X is set in v1beta1, then it will be copied to all versions in v1.
* If per-version X are set in v1beta1, then they will be used for per-version X in v1.

*v1->v1beta1:*

* If all per-version X are the same in v1, they will be copied to top level X in v1beta1
* Otherwise, they will be used as per-version X in v1beta1

#### Alternative approaches considered

First, a defaulting approach was considered, in which per-version fields would be defaulted to top level fields. But that is a backward-incompatible change; quoting from the API [guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#backward-compatibility-gotchas):

> A single feature/property cannot be represented using multiple spec fields in the same API version simultaneously

Hence the defaulting, whether implicit or explicit, has the potential to break backward compatibility, as we would have two sets of fields representing the same feature.

Other solutions were considered that do not involve defaulting:

* Field Discriminator: Use `Spec.Conversion.Strategy` as a discriminator to decide which set of fields to use. This approach would work, but the proposed solution keeps the mutual exclusivity in a broader sense and is preferred.
* Per-version override: If a per-version X is specified, use it; otherwise use the top level X if provided. While, with careful validation and feature gating, this solution is also backward compatible, the overriding behaviour would need to be kept in CRD v1, which looks too complicated and not clean enough to keep for a v1 API.

Refer to [this document](http://bit.ly/k8s-crd-per-version-defaulting) for more details and discussions on those solutions.

### Support Level

The feature will be alpha in the first implementation and will have a feature gate that is defaulted to false. The roll-back story with a feature gate is much clearer: if we have the feature as alpha in kubernetes release Y (>X, where the feature is missing) and we make it beta in kubernetes release Z, it is not safe to use the feature and downgrade from Y to X, but the feature is alpha in Y, which is fine. It is safe to downgrade from Z to Y (given that we enable the feature gate in Y), and that is desirable as the feature is beta in Z.
On downgrading from Z to Y, stored CRDs can have per-version fields set. While the feature gate can be off on Y (alpha cluster), it is dangerous to disable per-version Schema Validation or Status subresources, as that makes the status field mutable and disables validation on CRs. Thus the feature gate in Y only protects adding per-version fields, not the actual behaviour. So if the feature gate is off in Y:

* Per-version X cannot be set on CRD create (per-version fields are auto-cleared).
* Per-version X can only be set/changed on CRD update *if* the existing CRD object already has per-version X set.

This way even if we downgrade from Z to Y, per-version validations and subresources will be honored. This will not be the case for webhook conversion itself. The feature gate will also protect the implementation of webhook conversion, and an alpha cluster with the feature gate disabled will return an error for CRDs with webhook conversion (that were created with a future version of the cluster).

### Rollback

@@ -153,7 +183,7 @@ Users that need to rollback to version X (but may currently be running version Y

4. If the user rolls forward again, then custom resources will be served again.

If a user does not use the webhook feature but uses the versioned schema, additionalPrinterColumns, and/or subresources and rollback to a version that does not support them per version, any value set per version will be ignored and only values in top level spec.* will be honor.
If a user does not use the webhook feature but uses the versioned schema, additionalPrinterColumns, and/or subresources and rollback to a version that does not support them per-version, any value set per-version will be ignored and only values in top level spec.* will be honor.

Please note that any of the fields added in this design that are not supported in previous kubernetes releases can be removed on an update operation (e.g. status update). The kubernetes releases that define the types but gate them with an alpha feature gate, however, can keep these fields but ignore their value.

@@ -233,10 +263,10 @@ For operations that need more than one conversion (e.g. LIST), no partial result
No new caching is planned as part of this work, but the API Server may in the future cache webhook POST responses.

Most API operations are reads. The most common kind of read is a watch. All watched objects are cached in memory. For CRDs, the cache
is per version. That is the result of having one [REST store object](https://github.com/kubernetes/kubernetes/blob/3cb771a8662ae7d1f79580e0ea9861fd6ab4ecc0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go#L72) per version which
is per-version. That is the result of having one [REST store object](https://github.com/kubernetes/kubernetes/blob/3cb771a8662ae7d1f79580e0ea9861fd6ab4ecc0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go#L72) per-version which
was an arbitrary design choice but would be required for better caching with webhook conversion. In this model, each GVK is cached, regardless of whether some GVKs share storage. Thus, watches do not cause conversion. So, conversion webhooks will not add overhead to the watch path. Watch cache is per api server and eventually consistent.

Non-watch reads are also cached (if requested resourceVersion is 0 which is true for generated informers by default, but not for calls like `kubectl get ...`, namespace cleanup, etc). The cached objects are converted and per version (TODO: fact check). So, conversion webhooks will not add overhead here too.
Non-watch reads are also cached (if requested resourceVersion is 0 which is true for generated informers by default, but not for calls like `kubectl get ...`, namespace cleanup, etc). The cached objects are converted and per-version (TODO: fact check). So, conversion webhooks will not add overhead here too.

If in the future this proves to be a performance problem, we might need to add caching later. The Authorization and Authentication webhooks already use a simple scheme with APIserver-side caching and a single TTL for expiration. This has worked fine, so we can repeat this process. It does not require Webhook hosts to be aware of the caching.

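The v1beta1↔v1 round-trip rules for the top-level versus per-version fields in the CRD proposal above can be sketched as follows. This is a hypothetical, schema-only illustration, not the apiextensions-apiserver code: plain strings stand in for the real `JSONSchemaProps` values, and the `v1beta1Spec`/`v1Spec` types are invented for the sketch.

```go
package main

import "fmt"

// v1beta1Spec models a CRD spec that may carry a top-level field X or
// per-version X values; "" means unset. Both set at once is invalid per
// the validation rules quoted above, so we don't model that case.
type v1beta1Spec struct {
	TopLevelSchema string
	PerVersion     []string // one entry per version
}

// v1Spec models the v1 shape: only a per-version list, no top-level field.
type v1Spec struct {
	PerVersion []string
}

// toV1: if top-level X is set in v1beta1, copy it to every version;
// if per-version X are set, carry them through unchanged.
func toV1(in v1beta1Spec) v1Spec {
	out := v1Spec{PerVersion: make([]string, len(in.PerVersion))}
	copy(out.PerVersion, in.PerVersion)
	if in.TopLevelSchema != "" {
		for i := range out.PerVersion {
			out.PerVersion[i] = in.TopLevelSchema
		}
	}
	return out
}

// toV1beta1: if all per-version X are the same, hoist them to the
// top-level field; otherwise keep them per-version.
func toV1beta1(in v1Spec) v1beta1Spec {
	allSame := len(in.PerVersion) > 0
	for _, s := range in.PerVersion {
		if s != in.PerVersion[0] {
			allSame = false
		}
	}
	if allSame {
		return v1beta1Spec{TopLevelSchema: in.PerVersion[0], PerVersion: make([]string, len(in.PerVersion))}
	}
	return v1beta1Spec{PerVersion: in.PerVersion}
}

func main() {
	fmt.Println(toV1(v1beta1Spec{TopLevelSchema: "s", PerVersion: []string{"", ""}}))
	fmt.Println(toV1beta1(v1Spec{PerVersion: []string{"a", "b"}}))
}
```

Note how the "per-version X cannot all be the same" validation rule is exactly what makes this round-trip lossless: identical per-version values always collapse back to the top-level field.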
@@ -20,7 +20,7 @@ admission controller that uses code, rather than configuration, to map the
resource requests and limits of a pod to QoS, and attaches the corresponding
annotation.)

We anticipate a number of other uses for `MetadataPolicy`, such as defaulting
We anticipate a number of other uses for `MetadataPolicy`, such as defaulting
for labels and annotations, prohibiting/requiring particular labels or
annotations, or choosing a scheduling policy within a scheduler. We do not
discuss them in this doc.

@@ -267,7 +267,7 @@ ControllerRevisions, this approach is reasonable.
  - A revision is considered to be live while any generated Object labeled
    with its `.Name` is live.
  - This method has the benefit of providing visibility, via the label, to
    users with respect to the historical provenance of a generated Object.
    users with respect to the historical provenance of a generated Object.
  - The primary drawback is the lack of support for using garbage collection
    to ensure that only non-live version snapshots are collected.
1. Controllers may also use the `OwnerReferences` field of the

@@ -143,7 +143,7 @@ For each creation or update for a Deployment, it will:
  is the one that the new RS uses and collisionCount is a counter in the DeploymentStatus
  that increments every time a [hash collision](#hashing-collisions) happens (hash
  collisions should be rare with fnv).
- If the RSs and pods dont already have this label and selector:
- If the RSs and pods don't already have this label and selector:
  - We will first add this to RS.PodTemplateSpec.Metadata.Labels for all RSs to
    ensure that all new pods that they create will have this label.
  - Then we will add this label to their existing pods

@@ -197,7 +197,7 @@ For example, consider the following case:
Users can pause/cancel a rollout by doing a non-cascading deletion of the Deployment
before it is complete. Recreating the same Deployment will resume it.
For example, consider the following case:
- User creats a Deployment to perform a rolling-update for 10 pods from image:v1 to
- User creates a Deployment to perform a rolling-update for 10 pods from image:v1 to
  image:v2.
- User then deletes the Deployment while the old and new RSs are at 5 replicas each.
  User will end up with 2 RSs with 5 replicas each.

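The hash-plus-collisionCount scheme in the Deployment hunk above can be sketched in Go. This is a hypothetical illustration of the idea, not the controller's actual code: the real implementation hashes the serialized `PodTemplateSpec`, while here a plain string stands in for it, and `podTemplateHash` is an invented helper name.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// podTemplateHash computes an fnv hash over the (serialized) pod template.
// When collisionCount is non-zero, it is mixed into the hash input so that
// retrying after a hash collision yields a different pod-template-hash.
func podTemplateHash(serializedTemplate string, collisionCount int32) string {
	h := fnv.New32a()
	h.Write([]byte(serializedTemplate))
	if collisionCount > 0 {
		fmt.Fprintf(h, "%d", collisionCount)
	}
	return fmt.Sprintf("%d", h.Sum32())
}

func main() {
	fmt.Println(podTemplateHash(`{"image":"v2"}`, 0))
	fmt.Println(podTemplateHash(`{"image":"v2"}`, 1)) // collision retry yields a new value
}
```

Since the hash is deterministic over the template, two Deployments with identical templates produce the same hash; the collisionCount in DeploymentStatus is what breaks ties when two *different* templates happen to collide.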
@@ -61,7 +61,7 @@ think about it.
  about uniqueness, just labeling for user's own reasons.
- Defaulting logic sets `job.spec.selector` to
  `matchLabels["controller-uid"]="$UIDOFJOB"`
- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`.
- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`.
  - The first label is controller-uid=$UIDOFJOB.
  - The second label is "job-name=$NAMEOFJOB".

@@ -304,7 +304,7 @@ as follows.
   should be consistent with the version indicated by `Status.UpdateRevision`.
1. If the Pod does not meet either of the prior two conditions, and if
   ordinal is in the sequence `[0, .Spec.UpdateStrategy.Partition.Ordinal)`,
   it should be consistent with the version indicated by
   it should be consistent with the version indicated by
   `Status.CurrentRevision`.
1. Otherwise, the Pod should be consistent with the version indicated
   by `Status.UpdateRevision`.

@@ -446,7 +446,7 @@ object if any of the following conditions are true.
1. `.Status.UpdateReplicas` is negative or greater than `.Status.Replicas`.

## Kubectl
Kubectl will use the `rollout` command to control and provide the status of
Kubectl will use the `rollout` command to control and provide the status of
StatefulSet updates.

- `kubectl rollout status statefulset <StatefulSet-Name>`: displays the status

@@ -648,7 +648,7 @@ spec:
### Phased Roll Outs
Users can create a canary using `kubectl apply`. The only difference between a
[canary](#canaries) and a phased roll out is that the
`.Spec.UpdateStrategy.Partition.Ordinal` is set to a value less than
`.Spec.UpdateStrategy.Partition.Ordinal` is set to a value less than
`.Spec.Replicas-1`.

```yaml

@@ -810,7 +810,7 @@ intermittent compaction as a form of garbage collection. Applications that use
log structured merge trees with size tiered compaction (e.g Cassandra) or append
only B(+/*) Trees (e.g Couchbase) can temporarily double their storage requirement
during compaction. If there is insufficient space for compaction
to progress, these applications will either fail or degrade until
to progress, these applications will either fail or degrade until
additional capacity is added. While, if the user is using AWS EBS or GCE PD,
there are valid manual workarounds to expand the size of a PD, it would be
useful to automate the resize via updates to the StatefulSet's

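The partition rule in the StatefulSet hunks above (ordinals below `.Spec.UpdateStrategy.Partition.Ordinal` stay at `Status.CurrentRevision`, the rest move to `Status.UpdateRevision`) can be sketched as a tiny decision function. This is an illustrative sketch only; `targetRevision` is an invented name, and the real controller also handles the terminated/annotation cases the proposal lists first.

```go
package main

import "fmt"

// targetRevision returns the revision a Pod with the given ordinal should
// be consistent with, per the partition rule: ordinals in [0, partition)
// keep the current revision, all others get the update revision.
func targetRevision(ordinal, partition int, current, update string) string {
	if ordinal < partition {
		return current
	}
	return update
}

func main() {
	// A canary roll out: with 5 replicas and partition 4, only pod 4 updates.
	for ordinal := 0; ordinal < 5; ordinal++ {
		fmt.Println(ordinal, targetRevision(ordinal, 4, "rev-1", "rev-2"))
	}
}
```

Decreasing the partition then phases the roll out: each decrement moves one more ordinal from the current revision to the update revision.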
@@ -65,7 +65,7 @@ The project is committed to the following (aspirational) [design ideals](princip
  approach is key to the system’s self-healing and autonomic capabilities.
* _Advance the state of the art_. While Kubernetes intends to support non-cloud-native
  applications, it also aspires to advance the cloud-native and DevOps state of the art, such as
  in the [participation of applications in their own management](http://blog.kubernetes.io/2016/09/cloud-native-application-interfaces.html).
  in the [participation of applications in their own management](https://kubernetes.io/blog/2016/09/cloud-native-application-interfaces/).
  However, in doing
  so, we strive not to force applications to lock themselves into Kubernetes APIs, which is, for
  example, why we prefer configuration over convention in the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api).

@@ -30,7 +30,7 @@ What form should this configuration take in Kubernetes? The requirements are as

* In particular, it should be straightforward (but not required) to manage declarative intent under **version control**, which is [standard industry best practice](http://martinfowler.com/bliki/InfrastructureAsCode.html) and what Google does internally. Version control facilitates reproducibility, reversibility, and an audit trail. Unlike generated build artifacts, configuration is primarily human-authored, or at least it is desirable for it to be human-readable, and it is typically changed with a human in the loop, as opposed to fully automated processes, such as autoscaling. Version control enables the use of familiar tools and processes for change control, review, and conflict resolution.

* Users need the ability to **customize** off-the-shelf configurations and to instantiate multiple **variants**, without crossing the [line into the ecosystem](https://docs.google.com/presentation/d/1oPZ4rznkBe86O4rPwD2CWgqgMuaSXguIBHIE7Y0TKVc/edit#slide=id.g21b1f16809_5_86) of [configuration domain-specific languages, platform as a service, functions as a service](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not), and so on, though users should be able to [layer such tools/systems on top](http://blog.kubernetes.io/2017/02/caas-the-foundation-for-next-gen-paas.html) of the mechanism, should they choose to do so.
* Users need the ability to **customize** off-the-shelf configurations and to instantiate multiple **variants**, without crossing the [line into the ecosystem](https://docs.google.com/presentation/d/1oPZ4rznkBe86O4rPwD2CWgqgMuaSXguIBHIE7Y0TKVc/edit#slide=id.g21b1f16809_5_86) of [configuration domain-specific languages, platform as a service, functions as a service](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not), and so on, though users should be able to [layer such tools/systems on top](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/) of the mechanism, should they choose to do so.

* We need to develop clear **conventions**, **examples**, and mechanisms that foster **structure**, to help users understand how to combine Kubernetes’s flexible mechanisms in an effective manner.

@@ -384,7 +384,7 @@ Consider more automation, such as autoscaling, self-configuration, etc. to reduc

#### What about providing an intentionally restrictive simplified, tailored developer experience to streamline a specific use case, environment, workflow, etc.?

This is essentially a [DIY PaaS](http://blog.kubernetes.io/2017/02/caas-the-foundation-for-next-gen-paas.html). Write a configuration generator, either client-side or using CRDs ([example](https://github.com/pearsontechnology/environment-operator/blob/dev/User_Guide.md)). The effort involved to document the format, validate it, test it, etc. is similar to building a new API, but I could imagine someone eventually building a SDK to make that easier.
This is essentially a [DIY PaaS](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/). Write a configuration generator, either client-side or using CRDs ([example](https://github.com/pearsontechnology/environment-operator/blob/dev/User_Guide.md)). The effort involved to document the format, validate it, test it, etc. is similar to building a new API, but I could imagine someone eventually building a SDK to make that easier.

#### What about more sophisticated deployment orchestration?

@@ -87,7 +87,7 @@ status:

API groups may be exposed as a unified API surface while being served by distinct [servers](https://kubernetes.io/docs/tasks/access-kubernetes-api/setup-extension-api-server/) using [**aggregation**](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/), which is particularly useful for APIs with special storage needs. However, Kubernetes also supports [**custom resources**](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) (CRDs), which enables users to define new types that fit the standard API conventions without needing to build and run another server. CRDs can be used to make systems declaratively and dynamically configurable in a Kubernetes-compatible manner, without needing another storage system.

Each API server supports a custom [discovery API](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go) to enable clients to discover available API groups, versions, and types, and also [OpenAPI](http://blog.kubernetes.io/2016/12/kubernetes-supports-openapi.html), which can be used to extract documentation and validation information about the resource types.
Each API server supports a custom [discovery API](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go) to enable clients to discover available API groups, versions, and types, and also [OpenAPI](https://kubernetes.io/blog/2016/12/kubernetes-supports-openapi/), which can be used to extract documentation and validation information about the resource types.

See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md) for more details.

@@ -49,7 +49,7 @@ while creating containers, for example
`docker run --security-opt=no_new_privs busybox`.

Docker provides via their Go api an object named `ContainerCreateConfig` to
configure container creation parameters. In this object, there is a string
configure container creation parameters. In this object, there is a string
array `HostConfig.SecurityOpt` to specify the security options. Client can
utilize this field to specify the arguments for security options while
creating new containers.

@@ -42,7 +42,7 @@ containers.

In order to support external integration with shared storage, processes running
in a Kubernetes cluster should be able to be uniquely identified by their Unix
UID, such that a chain of ownership can be established. Processes in pods will
UID, such that a chain of ownership can be established. Processes in pods will
need to have consistent UID/GID/SELinux category labels in order to access
shared disks.

@@ -211,6 +211,8 @@ the ReplicationController being autoscaled.
```yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2alpha1
metadata:
  name: WebFrontend
spec:
  scaleTargetRef:
    kind: ReplicationController

@ -7,7 +7,7 @@ Author: @mengqiy
|
|||
Background of the Strategic Merge Patch is covered [here](../devel/strategic-merge-patch.md).
|
||||
|
||||
The Kubernetes API may apply semantic meaning to the ordering of items within a list,
|
||||
however the strategic merge patch does not keeping the ordering of elements.
|
||||
however the strategic merge patch does not keep the ordering of elements.
|
||||
Ordering has semantic meaning for Environment variables,
|
||||
as later environment variables may reference earlier environment variables,
|
||||
but not the other way around.
|
||||
|
@ -30,7 +30,7 @@ Add to the current patch, a directive ($setElementOrder) containing a list of el
|
|||
either the patch merge key, or for primitives the value. When applying the patch,
|
||||
the server ensures that the relative ordering of elements matches the directive.
|
||||
|
||||
The server will reject the patch if it doesn't satisfy the following 2 requirement.
|
||||
The server will reject the patch if it doesn't satisfy the following 2 requirements.
|
||||
- the relative order of any two items in the `$setElementOrder` list
|
||||
matches that in the patch list if they present.
|
||||
- the items in the patch list must be a subset or the same as the `$setElementOrder` list if the directive presents.
|
||||
|
@ -45,7 +45,7 @@ The relative order of two items are determined by the following order:
|
|||
If the relative order of the live config in the server is different from the order of the parallel list,
|
||||
the user's patch will always override the order in the server.
|
||||
|
||||
Here is an simple example of the patch format:
|
||||
Here is a simple example of the patch format:
|
||||
|
||||
Suppose we have a type called list. The patch will look like below.
|
||||
The order from the parallel list ($setElementOrder/list) will be respected.
|
||||
|
@ -60,7 +60,7 @@ list:
|
|||
- C
|
||||
```
|
||||
|
||||
All the items in the server's live list but not in the parallel list will be come before the parallel list.
|
||||
All the items in the server's live list but not in the parallel list will come before the parallel list.
|
||||
The relative order between these appended items are kept.
|
||||
|
||||
The patched list will look like:
|
||||
|
@ -114,7 +114,7 @@ list:
|
|||
### `$setElementOrder` may contain elements not present in the patch list
|
||||
|
||||
The $setElementOrder value may contain elements that are not present in the patch
|
||||
but present in the list to be merge to reorder the elements as part of the merge.
|
||||
but present in the list to be merged to reorder the elements as part of the merge.
|
||||
|
||||
Example where A & B have not changed:
|
||||
|
||||
|
@ -481,15 +481,15 @@ we send a whole list from user's config.
|
|||
It is NOT backward compatible in terms of list of primitives.
|
||||
|
||||
When patching a list of maps:
|
||||
- An old client sends a old patch to a new server, the server just merges the change and no reordering.
|
||||
- An old client sends an old patch to a new server, the server just merges the change and no reordering.
|
||||
The server behaves the same as before.
|
||||
- An new client sends a new patch to an old server, the server doesn't understand the new directive.
|
||||
- A new client sends a new patch to an old server, the server doesn't understand the new directive.
|
||||
So it simply does the merge.
|
||||
|
||||
When patching a list of primitives:
|
||||
- An old client sends a old patch to a new server, the server will reorder the patch list which is sublist of user's.
|
||||
- An old client sends an old patch to a new server; the server will reorder the patch list, which is a sublist of the user's.
|
||||
The server has the WRONG behavior.
|
||||
- An new client sends a new patch to an old server, the server will deduplicate after merging.
|
||||
- A new client sends a new patch to an old server, the server will deduplicate after merging.
|
||||
The server behaves the same as before.
|
||||
|
||||
## Example
|
||||
|
|
|
@ -0,0 +1,87 @@
|
|||
# **External Metrics API**
|
||||
|
||||
# Overview
|
||||
|
||||
[HPA v2 API extension proposal](https://github.com/kubernetes/community/blob/hpa_external/contributors/design-proposals/autoscaling/hpa-external-metrics.md) introduces a new External metric type for autoscaling based on metrics coming from outside of the Kubernetes cluster. This document proposes a new External Metrics API that will be used by the HPA controller to get those metrics.
|
||||
|
||||
This API performs a similar role to, and is based on, the existing [Custom Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md). Unless explicitly specified otherwise, all sections related to semantics, implementation and design decisions in the [Custom Metrics API design](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) apply to the External Metrics API as well. It is generally expected that a Custom Metrics Adapter will provide both the Custom Metrics API and the External Metrics API; however, this is not a requirement, and both APIs can be implemented and used separately.
|
||||
|
||||
|
||||
# API
|
||||
|
||||
The API will consist of a single path:
|
||||
|
||||
|
||||
```
|
||||
/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace_name>/<metric_name>?labelSelector=<selector>
|
||||
```
|
||||
|
||||
Similar to endpoints in the Custom Metrics API, it would only support GET requests.
|
||||
|
||||
The query would return the `ExternalMetricValueList` type described below:
|
||||
|
||||
```go
|
||||
// a list of values for a given metric for some set of labels
|
||||
type ExternalMetricValueList struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
metav1.ListMeta `json:"metadata,omitempty"`
|
||||
|
||||
// value of the metric matching a given set of labels
|
||||
Items []ExternalMetricValue `json:"items"`
|
||||
}
|
||||
|
||||
// a metric value for external metric
|
||||
type ExternalMetricValue struct {
|
||||
metav1.TypeMeta `json:",inline"`
|
||||
|
||||
// the name of the metric
|
||||
MetricName string `json:"metricName"`
|
||||
|
||||
// label set identifying the value within metric
|
||||
MetricLabels map[string]string `json:"metricLabels"`
|
||||
|
||||
// indicates the time at which the metrics were produced
|
||||
Timestamp unversioned.Time `json:"timestamp"`
|
||||
|
||||
// indicates the window ([Timestamp-Window, Timestamp]) from
|
||||
// which these metrics were calculated, when returning rate
|
||||
// metrics calculated from cumulative metrics (or zero for
|
||||
// non-calculated instantaneous metrics).
|
||||
WindowSeconds *int64 `json:"window,omitempty"`
|
||||
|
||||
// the value of the metric
|
||||
Value resource.Quantity
|
||||
}
|
||||
```
|
||||
|
||||
# Semantics
|
||||
|
||||
## Namespaces
|
||||
|
||||
Kubernetes namespaces don't have a natural 1-1 mapping to metrics coming from outside of Kubernetes. It is up to the adapter implementing the API to decide which metric is available in which namespace. In particular, a single metric may be available through many different namespaces.
|
||||
|
||||
## Metric Values
|
||||
|
||||
A request for a given metric may return multiple values if MetricSelector matches multiple time series. Each value should include a complete set of labels, which is sufficient to uniquely identify a timeseries.
|
||||
|
||||
A single value should always be returned if MetricSelector specifies a single value for every label defined for a given metric.
|
||||
|
||||
## Metric names
|
||||
|
||||
Custom Metrics API [doesn't allow using certain characters in metric names](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md#metric-names). The reason for that is a technical limitation in GO libraries. This list of forbidden characters includes slash (`/`). This is problematic as many systems use slashes in their metric naming convention.
|
||||
|
||||
Rather than expecting metric adapters to come up with their own custom ways of handling this, this document proposes introducing `\|` as a custom escape sequence for slash. The HPA controller will automatically replace any slashes in the MetricName field for an External metric with this escape sequence.
|
||||
|
||||
Otherwise the allowed metric names are the same as in the Custom Metrics API.
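The proposed escaping can be sketched as a small helper (an illustration of the proposal, not the actual HPA controller code):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeMetricName applies the proposed escape sequence: every slash in
// an external metric name is replaced with `\|` before the name is used
// in the External Metrics API path.
func escapeMetricName(name string) string {
	return strings.Replace(name, "/", `\|`, -1)
}

func main() {
	fmt.Println(escapeMetricName("queue/messages_ready")) // queue\|messages_ready
}
```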
|
||||
|
||||
## Access Control
|
||||
|
||||
Access can be controlled with per-metric granularity, same as in Custom Metrics API. The API has been designed to allow adapters to implement more granular access control if required. Possible future extension of API supporting label level access control is described in [ExternalMetricsPolicy](#externalmetricspolicy) section.
|
||||
|
||||
# Future considerations
|
||||
|
||||
## ExternalMetricsPolicy
|
||||
|
||||
If a more granular access control turns out to be a common requirement, an ExternalMetricPolicy object could be added to the API. This object could be defined at the cluster level, per namespace, or per user, and would consist of a list of rules. Each rule would consist of a mandatory regexp and either a label selector or a 'deny' statement. For each metric the rules would be applied top to bottom, with the first matching rule being used. A query that hit a deny rule, or specified a selector that is not a subset of the selector specified by the policy, would be rejected with a 403 error.
|
||||
|
||||
Additionally, an admission controller could be used to check the policy when creating an HPA object.
|
|
@ -209,7 +209,7 @@ Go 1.5 introduced many changes. To name a few that are relevant to Kubernetes:
|
|||
- The garbage collector became more efficient (but also [confused our latency test](https://github.com/golang/go/issues/14396)).
|
||||
- `linux/arm64` and `linux/ppc64le` were added as new ports.
|
||||
- The `GO15VENDOREXPERIMENT` was started. We switched from `Godeps/_workspace` to the native `vendor/` in [this PR](https://github.com/kubernetes/kubernetes/pull/24242).
|
||||
- It's not required to pre-build the whole standard library `std` when cross-compliling. [Details](#prebuilding-the-standard-library-std)
|
||||
- It's not required to pre-build the whole standard library `std` when cross-compiling. [Details](#prebuilding-the-standard-library-std)
|
||||
- Builds are approximately twice as slow as earlier. That affects the CI. [Details](#releasing)
|
||||
- The native Go DNS resolver will suffice in most situations. This makes static linking much easier.
|
||||
|
||||
|
|
|
@ -286,7 +286,7 @@ enumerated the key idea elements:
|
|||
+ [E1] Master rejects LRS creation (for known or unknown
|
||||
reason). In this case another attempt to create a LRS should be
|
||||
attempted in 1m or so. This action can be tied with
|
||||
[[I5]](#heading=h.ififs95k9rng). Until the the LRS is created
|
||||
[[I5]](#heading=h.ififs95k9rng). Until the LRS is created
|
||||
the situation is the same as [E5]. If this happens multiple
|
||||
times, all due replicas should be moved elsewhere and later moved
|
||||
back once the LRS is created.
|
||||
|
@ -348,7 +348,7 @@ to that LRS along with their current status and status change timestamp.
|
|||
+ [I6] If a cluster is removed from the federation then the situation
|
||||
is equal to multiple [E4]. It is assumed that if a connection with
|
||||
a cluster is lost completely then the cluster is removed from the
|
||||
the cluster list (or marked accordingly) so
|
||||
cluster list (or marked accordingly) so
|
||||
[[E6]](#heading=h.in6ove1c1s8f) and [[E7]](#heading=h.37bnbvwjxeda)
|
||||
don't need to be handled.
|
||||
|
||||
|
@ -383,7 +383,7 @@ To calculate the (re)scheduling moves for a given FRS:
|
|||
1. For each cluster FRSC calculates the number of replicas that are placed
|
||||
(not necessarily up and running) in the cluster and the number of replicas that
|
||||
failed to be scheduled. Cluster capacity is the difference between the
|
||||
the placed and failed to be scheduled.
|
||||
placed and failed to be scheduled.
|
||||
|
||||
2. Order all clusters by their weight and hash of the name so that every time
|
||||
we process the same replica-set we process the clusters in the same order.
|
||||
|
|
|
@ -81,7 +81,7 @@ Kubelet would then populate the `runtimeConfig` section of the config when calli
|
|||
|
||||
### Pod Teardown
|
||||
|
||||
When we delete a pod, kubelet will bulid the runtime config for calling cni plugin `DelNetwork/DelNetworkList` API, which will remove this pod's bandwidth configuration.
|
||||
When we delete a pod, kubelet will build the runtime config for calling cni plugin `DelNetwork/DelNetworkList` API, which will remove this pod's bandwidth configuration.
|
||||
|
||||
## Next step
|
||||
|
||||
|
|
|
@ -53,7 +53,7 @@ type AcceleratorStats struct {
|
|||
// ID of the accelerator. device minor number? Or UUID?
|
||||
ID string `json:"id"`
|
||||
|
||||
// Total acclerator memory.
|
||||
// Total accelerator memory.
|
||||
// unit: bytes
|
||||
MemoryTotal uint64 `json:"memory_total"`
|
||||
|
||||
|
@ -75,7 +75,7 @@ From the summary API, they will flow to heapster and stackdriver.
|
|||
|
||||
## Caveats
|
||||
- As mentioned before, this would add a requirement that cAdvisor and kubelet are dynamically linked.
|
||||
- We would need to make sure that kubelet is able to access the nvml libraries. Some existing container based nvidia driver installers install drivers in a special directory. We would need to make sure that that directory is in kubelet’s `LD_LIBRARY_PATH`.
|
||||
- We would need to make sure that kubelet is able to access the nvml libraries. Some existing container based nvidia driver installers install drivers in a special directory. We would need to make sure that directory is in kubelet’s `LD_LIBRARY_PATH`.
|
||||
|
||||
## Testing Plan
|
||||
- Adding unit tests and e2e tests to cAdvisor for this code.
|
||||
|
|
|
@ -20,7 +20,7 @@ On the Windows platform, processes may be assigned to a job object, which can ha
|
|||
[#547](https://github.com/kubernetes/features/issues/547)
|
||||
|
||||
## Motivation
|
||||
The goal is to start filling the gap of platform support in CRI, specifically for Windows platform. For example, currrently in dockershim Windows containers are scheduled using the default resource constraints and does not respect the resource requests and limits specified in POD. With this proposal, Windows containers will be able to leverage POD spec and CRI to allocate compute resource and respect restriction.
|
||||
The goal is to start filling the gap of platform support in CRI, specifically for the Windows platform. For example, currently in dockershim Windows containers are scheduled using the default resource constraints and do not respect the resource requests and limits specified in the POD. With this proposal, Windows containers will be able to leverage the POD spec and CRI to allocate compute resources and respect restrictions.
|
||||
|
||||
## Proposed design
|
||||
|
||||
|
|
|
@ -133,7 +133,7 @@ PodSandboxConfig.LogDirectory: /var/log/pods/<podUID>/
|
|||
ContainerConfig.LogPath: <containerName>_<instance#>.log
|
||||
```
|
||||
|
||||
Because kubelet determines where the logs are stores and can access them
|
||||
Because kubelet determines where the logs are stored and can access them
|
||||
directly, this meets requirement (1). As for requirement (2), the log collector
|
||||
can easily extract basic pod metadata (e.g., pod UID, container name) from
|
||||
the paths, and watch the directory for any changes. In the future, we can
|
||||
|
@ -150,7 +150,7 @@ one tag is defined in CRI to support multi-line log entries: partial or full.
|
|||
Partial (`P`) is used when a log entry is split into multiple lines by the
|
||||
runtime, and the entry has not ended yet. Full (`F`) indicates that the log
|
||||
entry is completed -- it is either a single-line entry, or this is the last
|
||||
line of the muiltple-line entry.
|
||||
line of the multiple-line entry.
|
||||
|
||||
For example,
|
||||
```
|
||||
|
@ -160,7 +160,7 @@ For example,
|
|||
2016-10-06T00:17:10.113242941Z stderr F Last line of the log entry 2
|
||||
```
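A minimal sketch of parsing a line in this format (illustrative only, not kubelet's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

// logEntry holds the four space-delimited fields of the log format:
// RFC3339Nano timestamp, stream name, partial/full tag, and the content.
type logEntry struct {
	Timestamp, Stream, Tag, Log string
}

// parseCRILogLine splits one log line into its fields.
func parseCRILogLine(line string) (logEntry, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return logEntry{}, fmt.Errorf("malformed log line: %q", line)
	}
	return logEntry{parts[0], parts[1], parts[2], parts[3]}, nil
}

func main() {
	e, _ := parseCRILogLine("2016-10-06T00:17:10.113242941Z stderr F Last line of the log entry 2")
	fmt.Println(e.Stream, e.Tag, e.Log)
	// stderr F Last line of the log entry 2
}
```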
|
||||
|
||||
With the knowledge, kubelet can parses the logs and serve them for `kubectl
|
||||
With the knowledge, kubelet can parse the logs and serve them for `kubectl
|
||||
logs` requests. This meets requirement (3). Note that the format is kept
|
||||
deliberately simple, to provide only the information necessary to serve the requests.
|
||||
We do not intend for kubelet to host various logging plugins. It is also worth
|
||||
|
@ -176,7 +176,7 @@ to rotate the logs periodically, similar to today's implementation.
|
|||
We do not rule out the possibility of letting kubelet or a per-node daemon
|
||||
(`DaemonSet`) to take up the responsibility, or even declare rotation policy
|
||||
in the kubernetes API as part of the `PodSpec`, but it is beyond the scope of
|
||||
the this proposal.
|
||||
this proposal.
|
||||
|
||||
**What about non-supported log formats?**
|
||||
|
||||
|
|
|
@ -190,7 +190,7 @@ Docker API does not provide user-namespace mapping. Therefore to handle `GetRunt
|
|||
## Future Work
|
||||
### Namespace-Level/Pod-Level user-namespace support
|
||||
There is no runtime today which supports creating containers with a specified user namespace configuration. For example, here is the discussion related to this support in Docker: https://github.com/moby/moby/issues/28593
|
||||
Once user-namespace feature in the runtimes has evolved to support container’s request for a specific user-namespace mapping(UID and GID range), we can extend current Node-Level user-namespace support in Kubernetes to support Namespace-level isolation(or if desired even pod-level isolation) by dividing and allocating learned mapping from runtime among Kubernetes namespaces (or pods, if desired). From end-user UI perspective, we dont expect any change in the UI related to user namespaces support.
|
||||
Once the user-namespace feature in the runtimes has evolved to support a container's request for a specific user-namespace mapping (UID and GID range), we can extend the current Node-Level user-namespace support in Kubernetes to support Namespace-level isolation (or, if desired, even pod-level isolation) by dividing and allocating the mapping learned from the runtime among Kubernetes namespaces (or pods, if desired). From the end-user UI perspective, we don't expect any change in the UI related to user namespaces support.
|
||||
### Remote Volumes
|
||||
Remote Volumes support should be investigated and should be targeted in the future once support exists at the lower infra layers.
|
||||
|
||||
|
|
|
@ -169,7 +169,7 @@ Adding it there allows the user to change the mode bits of every file in the
|
|||
object, so it achieves the goal, while having the option to have a default and
|
||||
not specify all files in the object.
|
||||
|
||||
The are two downside:
|
||||
There are two downsides:
|
||||
|
||||
* The files are symlinks pointing to the real file, and only the real file's
|
||||
permissions are set. The symlink has the classic symlink permissions.
|
||||
|
|
Binary file not shown.
After Width: | Height: | Size: 50 KiB |
Binary file not shown.
After Width: | Height: | Size: 43 KiB |
|
@ -177,7 +177,7 @@ topic that is outside the scope of this document. For example, resource fragment
|
|||
RequiredDuringScheduling node and pod affinity and anti-affinity means that even if the
|
||||
sum of the quotas at the top priority level is less than or equal to the total aggregate
|
||||
capacity of the cluster, some pods at the top priority level might still go pending. In
|
||||
general, priority provdes a *probabilistic* guarantees of pod schedulability in the face
|
||||
general, priority provides *probabilistic* guarantees of pod schedulability in the face
|
||||
of overcommitment, by allowing prioritization of which pods should be allowed to run
|
||||
when demand for cluster resources exceeds supply.
|
||||
|
||||
|
|
|
@ -190,7 +190,7 @@ Please note with the change of predicates in subsequent development, this doc wi
|
|||
|
||||
- **Invalid predicates:**
|
||||
|
||||
- `MaxPDVolumeCountPredicate` (only if the added/deleted PVC as a binded volume so it drops to the PV change case, otherwise it should not affect scheduler).
|
||||
- `MaxPDVolumeCountPredicate` (only if the added/deleted PVC has a bound volume, so it drops to the PV change case; otherwise it should not affect the scheduler).
|
||||
|
||||
- **Scope:**
|
||||
- All nodes (we don't know which node this PV will be attached to).
|
||||
|
@ -229,14 +229,14 @@ Please note with the change of predicates in subsequent development, this doc wi
|
|||
- **Invalid predicates:**
|
||||
- `GeneralPredicates`. This invalidation should be done during `scheduler.assume(...)` because binding can be asynchronous. So we just optimistically invalidate the cached predicate result there, and if this pod later fails to bind, the following pods will go through normal predicate functions and nothing breaks.
|
||||
|
||||
- No `MatchInterPodAffinity`: the scheduler will make sure newly binded pod will not break the existing inter pod affinity. So we does not need to invalidate MatchInterPodAffinity when pod added. But when a pod is deleted, existing inter pod affinity may become invalid. (e.g. this pod was preferred by some else, or vice versa).
|
||||
- No `MatchInterPodAffinity`: the scheduler will make sure the newly bound pod will not break the existing inter-pod affinity. So we do not need to invalidate MatchInterPodAffinity when a pod is added. But when a pod is deleted, existing inter-pod affinity may become invalid (e.g. this pod was preferred by someone else, or vice versa).
|
||||
|
||||
- NOTE: the assumptions above **will not** stand when we implement features like `RequiredDuringSchedulingRequiredDuringExecution`.
|
||||
|
||||
- No `NoDiskConflict`: if the newly scheduled pod fits with the existing pods on this node, it will also fit the equivalence class of existing pods.
|
||||
|
||||
- **Scope:**
|
||||
- The node which the pod was binded with.
|
||||
- The node where the pod is bound.
|
||||
|
||||
|
||||
|
||||
|
@ -252,7 +252,7 @@ Please note with the change of predicates in subsequent development, this doc wi
|
|||
- `MatchInterPodAffinity` if the pod's labels are updated.
|
||||
|
||||
- **Scope:**
|
||||
- The node which the pod was binded with
|
||||
- The node where the pod is bound.
|
||||
|
||||
|
||||
|
||||
|
@ -270,7 +270,7 @@ Please note with the change of predicates in subsequent development, this doc wi
|
|||
- `NoDiskConflict` if the pod has special volume like `RBD`, `ISCSI`, `GCEPersistentDisk` etc.
|
||||
|
||||
- **Scope:**
|
||||
- The node which the pod was binded with.
|
||||
- The node where the pod is bound.
|
||||
|
||||
|
||||
### 3.5 Node
|
||||
|
|
|
@ -0,0 +1,436 @@
|
|||
|
||||
Status: Draft
|
||||
Created: 2018-04-09 / Last updated: 2018-08-15
|
||||
Author: bsalamat
|
||||
Contributors: misterikkit
|
||||
|
||||
---
|
||||
|
||||
#
|
||||
- [SUMMARY ](#summary-)
|
||||
- [OBJECTIVE](#objective)
|
||||
- [Terminology](#terminology)
|
||||
- [BACKGROUND](#background)
|
||||
- [OVERVIEW](#overview)
|
||||
- [Non-goals](#non-goals)
|
||||
- [DETAILED DESIGN](#detailed-design)
|
||||
- [Bare bones of scheduling](#bare-bones-of-scheduling)
|
||||
- [Communication and statefulness of plugins](#communication-and-statefulness-of-plugins)
|
||||
- [Plugin registration](#plugin-registration)
|
||||
- [Extension points](#extension-points)
|
||||
- [Scheduling queue sort](#scheduling-queue-sort)
|
||||
- [Pre-filter](#pre-filter)
|
||||
- [Filter](#filter)
|
||||
- [Post-filter](#post-filter)
|
||||
- [Scoring](#scoring)
|
||||
- [Post-scoring/pre-reservation](#post-scoringpre-reservation)
|
||||
- [Reserve](#reserve)
|
||||
- [Permit](#permit)
|
||||
- [Approving a Pod binding](#approving-a-pod-binding)
|
||||
- [Reject](#reject)
|
||||
- [Pre-Bind](#pre-bind)
|
||||
- [Bind](#bind)
|
||||
- [Post Bind](#post-bind)
|
||||
- [USE-CASES](#use-cases)
|
||||
- [Dynamic binding of cluster-level resources](#dynamic-binding-of-cluster-level-resources)
|
||||
- [Gang Scheduling](#gang-scheduling)
|
||||
- [OUT OF PROCESS PLUGINS](#out-of-process-plugins)
|
||||
- [CONFIGURING THE SCHEDULING FRAMEWORK](#configuring-the-scheduling-framework)
|
||||
- [BACKWARD COMPATIBILITY WITH SCHEDULER v1](#backward-compatibility-with-scheduler-v1)
|
||||
- [DEVELOPMENT PLAN](#development-plan)
|
||||
- [TESTING PLAN](#testing-plan)
|
||||
- [WORK ESTIMATES ](#work-estimates)
|
||||
|
||||
# SUMMARY
|
||||
|
||||
This document describes the Kubernetes Scheduling Framework. The scheduling
|
||||
framework implements only basic functionality, but exposes many extension points
|
||||
for plugins to expand its functionality. The plan is that this framework (with
|
||||
its plugins) will eventually replace the current Kubernetes scheduler.
|
||||
|
||||
# OBJECTIVE
|
||||
|
||||
- Make the scheduler more extensible.
|
||||
- Make the scheduler core simpler by moving some of its features to plugins.
|
||||
- Propose extension points in the framework.
|
||||
- Propose a mechanism to receive plugin results and continue or abort based
|
||||
on the received results.
|
||||
- Propose a mechanism to handle errors and communicate them to plugins.
|
||||
|
||||
## Terminology
|
||||
|
||||
Scheduler v1, current scheduler: refers to the existing scheduler of Kubernetes.
|
||||
Scheduler v2, scheduling framework: refers to the new scheduler proposed in this
|
||||
doc.
|
||||
|
||||
# BACKGROUND
|
||||
|
||||
Many features are being added to the Kubernetes default scheduler. They keep
|
||||
making the code larger and logic more complex. A more complex scheduler is
|
||||
harder to maintain, its bugs are harder to find and fix, and those users running
|
||||
a custom scheduler have a hard time catching up and integrating new changes.
|
||||
The current Kubernetes scheduler provides
|
||||
[webhooks to extend](./scheduler_extender.md)
|
||||
its functionality. However, these are limited in a few ways:
|
||||
|
||||
1. The number of extension points is limited: "Filter" extenders are called
|
||||
after default predicate functions. "Prioritize" extenders are called after
|
||||
default priority functions. "Preempt" extenders are called after running
|
||||
default preemption mechanism. The "Bind" verb of the extenders is used to bind
|
||||
a Pod. Only one of the extenders can be a binding extender, and that
|
||||
extender performs binding instead of the scheduler. Extenders cannot be
|
||||
invoked at other points, for example, they cannot be called before running
|
||||
predicate functions.
|
||||
1. Every call to the extenders involves marshalling and unmarshalling JSON.
|
||||
Calling a webhook (HTTP request) is also slower than calling native functions.
|
||||
1. It is hard to inform an extender that the scheduler has aborted scheduling of
|
||||
a Pod. For example, if an extender provisions a cluster resource and
|
||||
scheduler contacts the extender and asks it to provision an instance of the
|
||||
resource for the Pod being scheduled and then scheduler faces errors
|
||||
scheduling the Pod and decides to abort the scheduling, it will be hard to
|
||||
communicate the error with the extender and ask it to undo the provisioning
|
||||
of the resource.
|
||||
1. Since current extenders run as a separate process, they cannot use
|
||||
scheduler's cache. They must either build their own cache from the API
|
||||
server or process only the information they receive from the default scheduler.
|
||||
|
||||
The above limitations hinder building high performance and versatile scheduler
|
||||
extensions. We would ideally like to have an extension mechanism that is fast
|
||||
enough to allow keeping a bare minimum logic in the scheduler core and convert
|
||||
many of the existing features of default scheduler, such as predicate and
|
||||
priority functions and preemption into plugins. Such plugins will be compiled
|
||||
with the scheduler. We would also like to provide an extension mechanism that does
|
||||
not need recompilation of scheduler. The expected performance of such plugins is
|
||||
lower than in-process plugins. Such out-of-process plugins should be used in
|
||||
cases where quick invocation of the plugin is not a constraint.
|
||||
|
||||
# OVERVIEW
|
||||
|
||||
Scheduler v2 allows both built-in and out-of-process extenders. This new
|
||||
architecture is a scheduling framework that exposes several extension points
|
||||
during a scheduling cycle. Scheduler plugins can register to run at one or more
|
||||
extension points.
|
||||
|
||||
#### Non-goals
|
||||
|
||||
- We will keep Kubernetes API backward compatibility, but keeping scheduler
|
||||
v1 backward compatibility is a non-goal. Particularly, scheduling policy
|
||||
config and v1 extenders won't work in this new framework.
|
||||
- Solve all the scheduler v1 limitations, although we would like to ensure
|
||||
that the new framework allows us to address known limitations in the future.
|
||||
- Provide implementation details of plugins and call-back functions, such as
|
||||
all of their arguments and return values.
|
||||
|
||||
# DETAILED DESIGN
|
||||
|
||||
## Bare bones of scheduling
|
||||
|
||||
Pods that are not assigned to any node go to a scheduling queue and are sorted
|
||||
in the order specified by plugins (described [here](#scheduling-queue-sort)). The
|
||||
scheduling framework picks the head of the queue and starts a **scheduling
|
||||
cycle** to schedule the pod. At the end of the cycle the scheduler determines
|
||||
whether the pod is schedulable or not. If the pod is not schedulable, its status
|
||||
is updated and it goes back to the scheduling queue. If the pod is schedulable (one
|
||||
or more nodes are found that can run the Pod), the scoring process is started.
|
||||
The scoring process finds the best node to run the Pod. Once the best node is
|
||||
picked, the scheduler updates its cache and then a bind goroutine is started to
|
||||
bind the pod.
|
||||
The above process is the same as what Kubernetes scheduler v1 does. Some of the
|
||||
essential features of scheduler v1, such as leader election, will also be
|
||||
transferred to the scheduling framework.
|
||||
In the rest of this section we describe how various plugins are used to enrich
|
||||
this basic workflow. This document focuses on in-process plugins.
|
||||
Out-of-process plugins are discussed later in a separate doc.
|
||||
|
||||
## Communication and statefulness of plugins
|
||||
|
||||
The scheduling framework provides a library that plugins can use to pass
|
||||
information to other plugins. This library keeps a map from keys of type string
|
||||
to opaque pointers of type interface{}. A write operation takes a key and a
|
||||
pointer and stores the opaque pointer in the map with the given key. Other
|
||||
plugins can provide the key and receive the opaque pointer. Multiple plugins can
|
||||
share the state or communicate via this mechanism.
|
||||
The saved state is preserved only during a single scheduling cycle. At the end
|
||||
of a scheduling cycle, this map is destroyed. So, plugins cannot keep shared
|
||||
state across multiple scheduling cycles. They can, however, update the scheduler
|
||||
cache via the provided interface of the cache. The cache interface allows
|
||||
limited state preservation across multiple scheduling cycles.
|
||||
It is worth noting that plugins are assumed to be **trusted**. Scheduler does
|
||||
not prevent one plugin from accessing or modifying another plugin's state.
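A minimal sketch of such a shared-state library (types and names are illustrative, not the framework's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// cycleState sketches the per-scheduling-cycle store the framework
// provides: opaque values keyed by string, shared by trusting plugins
// and destroyed when the cycle ends.
type cycleState struct {
	mu   sync.RWMutex
	data map[string]interface{}
}

func newCycleState() *cycleState {
	return &cycleState{data: map[string]interface{}{}}
}

// Write stores an opaque pointer/value under the given key.
func (s *cycleState) Write(key string, val interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = val
}

// Read returns the value stored under key, if any.
func (s *cycleState) Read(key string) (interface{}, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	st := newCycleState()
	st.Write("pre-filter.foo/nodes-seen", 3) // one plugin writes...
	v, ok := st.Read("pre-filter.foo/nodes-seen")
	fmt.Println(v, ok) // ...another plugin reads: 3 true
}
```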
|
||||
|
||||
## Plugin registration
|
||||
|
||||
Plugin registration is done by providing an extension point and a function that
|
||||
should be called at that extension point. This step will be something like:
|
||||
|
||||
```go
|
||||
register("pre-filter", plugin.foo)
|
||||
```
|
||||
|
||||
The details of the function signature will be provided later.
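Assuming a placeholder signature, registration and invocation might be sketched as follows (illustrative only, since the actual signatures are deferred):

```go
package main

import "fmt"

// pluginFunc is a placeholder signature; the design doc defers the real one.
type pluginFunc func(podName string) error

// registry maps extension points to the plugin functions registered there,
// kept in registration order.
var registry = map[string][]pluginFunc{}

func register(extensionPoint string, fn pluginFunc) {
	registry[extensionPoint] = append(registry[extensionPoint], fn)
}

// runExtensionPoint calls every plugin registered at the point in order,
// aborting on the first error, mirroring the serial pre-filter behavior.
func runExtensionPoint(point, podName string) error {
	for _, fn := range registry[point] {
		if err := fn(podName); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	register("pre-filter", func(pod string) error {
		fmt.Println("checking", pod)
		return nil
	})
	fmt.Println(runExtensionPoint("pre-filter", "nginx-1"))
}
```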
|
||||
|
||||
## Extension points
|
||||
|
||||
The following picture shows the scheduling cycle of a Pod and the extension
|
||||
points that the scheduling framework exposes. In this picture "Filter" is
|
||||
equivalent to "Predicate" in scheduler v1 and "Scoring" is equivalent to
|
||||
"Priority function". Plugins are go functions. They are registered to be called
|
||||
at one of these extension points. They are called by the framework in the same
|
||||
order they are registered for each extension point.
|
||||
In the following sections we describe each extension point in the same order
|
||||
they are called in a schedule cycle.
|
||||
|
||||

|
||||
|
||||
### Scheduling queue sort
|
||||
|
||||
These plugins indicate how Pods should be sorted in the scheduling queue. A
|
||||
plugin registered at this point only returns greater, smaller, or equal to
|
||||
indicate an ordering between two Pods. In other words, a plugin at this
|
||||
extension point returns the answer to "less(pod1, pod2)". Multiple plugins may
|
||||
be registered at this point. Plugins registered at this point are called in
|
||||
order and the invocation continues as long as plugins return "equal". Once a
|
||||
plugin returns "greater" or "smaller" the invocation of these plugins are
|
||||
stopped.
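This comparator chaining can be sketched as follows (types and return conventions are illustrative):

```go
package main

import "fmt"

// queueSortPlugin answers less(pod1, pod2) as -1 (smaller), 0 (equal),
// or +1 (greater). Pods are plain strings here for illustration.
type queueSortPlugin func(p1, p2 string) int

// less runs the registered plugins in order; invocation continues while
// plugins return "equal" (0) and stops at the first non-zero answer.
func less(plugins []queueSortPlugin, p1, p2 string) int {
	for _, plugin := range plugins {
		if r := plugin(p1, p2); r != 0 {
			return r
		}
	}
	return 0
}

func main() {
	byPriority := func(p1, p2 string) int { return 0 } // ties on priority
	byName := func(p1, p2 string) int {
		if p1 < p2 {
			return -1
		}
		if p1 > p2 {
			return 1
		}
		return 0
	}
	fmt.Println(less([]queueSortPlugin{byPriority, byName}, "a", "b")) // -1
}
```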
|
||||
|
||||
### Pre-filter
|
||||
|
||||
These plugins are generally useful to check certain conditions that the cluster
|
||||
or the Pod must meet. These are also useful to perform pre-processing on the pod
|
||||
and store some information about the pod that can be used by other plugins.
|
||||
The pod pointer is passed as an argument to these plugins. If any of these
|
||||
plugins return an error, the scheduling cycle is aborted.
|
||||
These plugins are called serially in the same order registered.
|
||||
|
||||
### Filter
|
||||
|
||||
Filter plugins filter out nodes that cannot run the Pod. Scheduler runs these
|
||||
plugins per node in the same order that they are registered, but scheduler may
|
||||
run these filter functions for multiple nodes in parallel. So, these plugins must
|
||||
use synchronization when they modify state.
|
||||
Scheduler stops running the remaining filter functions for a node once one of
|
||||
these filters fails for the node.
|
||||
|
||||
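A minimal sketch of the point about parallelism, under assumed types (`Pod`, `Node`, and the `countingFilter` plugin are illustrative, not the framework API): the filter may be invoked concurrently for different nodes, so any mutable plugin state needs a lock.

```go
package main

import (
	"fmt"
	"sync"
)

// Node and Pod are minimal stand-ins for the real API types.
type Node struct {
	Name    string
	FreeCPU int
}

type Pod struct {
	Name string
	CPU  int
}

// countingFilter is a hypothetical filter plugin with internal state. Because
// the scheduler may run the filter for many nodes in parallel, the state is
// guarded by a mutex, as the text above requires.
type countingFilter struct {
	mu       sync.Mutex
	rejected int
}

// Filter returns false for nodes that cannot run the Pod.
func (f *countingFilter) Filter(pod Pod, node Node) bool {
	if node.FreeCPU >= pod.CPU {
		return true
	}
	f.mu.Lock()
	f.rejected++
	f.mu.Unlock()
	return false
}

func main() {
	pod := Pod{Name: "web", CPU: 2}
	nodes := []Node{{"n1", 4}, {"n2", 1}, {"n3", 8}}

	f := &countingFilter{}
	feasible := make([]bool, len(nodes))
	var wg sync.WaitGroup
	for i, n := range nodes {
		wg.Add(1)
		go func(i int, n Node) { // one goroutine per node, as the scheduler may do
			defer wg.Done()
			feasible[i] = f.Filter(pod, n)
		}(i, n)
	}
	wg.Wait()
	fmt.Println(feasible, f.rejected) // [true false true] 1
}
```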
### Post-filter

The Pod and the set of nodes that can run the Pod are passed to these plugins.
They are called whether or not the Pod is schedulable (that is, whether the set
of nodes is empty or non-empty). If any of these plugins returns an error, or
if the Pod is determined to be unschedulable, the scheduling cycle is aborted.
These plugins are called serially.
### Scoring

These plugins are similar to priority functions in scheduler v1. They are used
to rank the nodes that have passed the filtering stage. Similar to Filter
plugins, they are called per node, serially, in the same order they are
registered, but the scheduler may run them for multiple nodes in parallel.
Each of these functions returns a score for the given node. The score is
multiplied by the weight of the function and aggregated with the results of the
other scoring functions to yield a total score for the node. These functions
must never block scheduling; in case of an error they should return zero for
the node being ranked.
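The weighted aggregation can be sketched as follows (a hypothetical shape, not the framework's actual types; the two example scoring closures stand in for real plugins such as spreading or image locality):

```go
package main

import "fmt"

// scorePlugin pairs a scoring function with its weight; illustrative only.
type scorePlugin struct {
	weight int
	score  func(node string) int
}

// totalScore aggregates node scores as described above: each plugin's score
// is multiplied by its weight and summed into the node's total. A plugin that
// hits an error is expected to have returned zero rather than block.
func totalScore(node string, plugins []scorePlugin) int {
	total := 0
	for _, p := range plugins {
		total += p.weight * p.score(node)
	}
	return total
}

func main() {
	plugins := []scorePlugin{
		{weight: 2, score: func(node string) int { return 10 }}, // e.g. a spreading score
		{weight: 1, score: func(node string) int { return 5 }},  // e.g. an image-locality score
	}
	fmt.Println(totalScore("node-1", plugins)) // 2*10 + 1*5 = 25
}
```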
### Post-scoring/pre-reservation

After all scoring plugins have been invoked and the scores of the nodes have
been determined, the framework picks the node with the highest score and then
calls the post-scoring plugins. The Pod and the chosen Node are passed to
these plugins. They have one more chance to check any conditions about the
assignment of the Pod to this Node and to reject the node if needed.

![image]()
### Reserve

At this point the scheduler updates its cache by "reserving" a Node (partially
or fully) for the Pod. In scheduler v1 this stage is called "assume". Only the
scheduler cache is updated to reflect that the Node is (partially) reserved for
the Pod. The scheduling framework calls the plugins registered at this
extension point so that they get a chance to perform cache updates or other
accounting activities. These plugins do not return any value (except errors).

The actual assignment of the Node to the Pod happens during the "Bind" phase.
That is when the API server updates the Pod object with the Node information.
### Permit

Permit plugins run in a separate goroutine (in parallel). Each plugin can
return one of three possible values: 1) "permit", 2) "deny", or 3) "wait". If
all plugins registered at this extension point return "permit", the Pod is sent
to the next step for binding. If any of the plugins returns "deny", the Pod is
rejected and sent back to the scheduling queue. If any of the plugins returns
"wait", the Pod is kept in the reserved state until it is explicitly approved
for binding. A plugin that returns "wait" must return a timeout as well; if the
timeout expires, the Pod is rejected and goes back to the scheduling queue.
#### Approving a Pod binding

While any plugin can receive the list of reserved Pods from the cache and
approve them, we expect only the "Permit" plugins to approve the binding of
reserved Pods that are in the "waiting" state. Once a Pod is approved, it is
sent to the Bind stage.
### Reject

Plugins called at "Permit" may perform operations that should be undone if the
Pod reservation fails. The "Reject" extension point allows such clean-up
operations to happen. Plugins registered at this point are called if the
reservation of the Pod is cancelled. The reservation is cancelled if any of the
"Permit" plugins returns "deny" or if a Pod reservation that is in the "wait"
state times out.
### Pre-Bind

When a Pod is approved for binding, it reaches this stage. These plugins run
before the actual binding of the Pod to a Node happens. The binding starts
only if all of these plugins return true. If any returns false, the Pod is
rejected and sent back to the scheduling queue. These plugins run in a
separate goroutine. The same goroutine runs "Bind" after these plugins, once
all of them return true.
### Bind

Once all Pre-Bind plugins return true, the Bind plugins are executed. Multiple
plugins may be registered at this extension point. Each plugin may return true
or false (or an error). If a plugin returns false, the next plugin is called,
and so on, until a plugin returns true. Once true is returned, **the remaining
plugins are skipped**. If any of the plugins returns an error, or all of them
return false, the Pod is rejected and sent back to the scheduling queue.
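The first-true-wins chain can be sketched as below (the `bindFunc` type and `runBind` driver are illustrative stand-ins, not the framework's actual signatures):

```go
package main

import "fmt"

// bindFunc models a Bind plugin: true means "I bound the pod", false means
// "not mine, try the next plugin", and an error aborts binding.
type bindFunc func(pod, node string) (bool, error)

// runBind calls the Bind plugins in order. Once one returns true, the
// remaining plugins are skipped; if any returns an error, or all return
// false, the Pod is rejected (modeled here by the boolean result).
func runBind(pod, node string, plugins []bindFunc) (bool, error) {
	for _, bind := range plugins {
		ok, err := bind(pod, node)
		if err != nil {
			return false, err
		}
		if ok {
			return true, nil // skip the remaining plugins
		}
	}
	return false, nil // no plugin bound the Pod; requeue it
}

func main() {
	declines := func(pod, node string) (bool, error) { return false, nil }
	binds := func(pod, node string) (bool, error) { return true, nil }
	skipped := func(pod, node string) (bool, error) { panic("must be skipped") }

	ok, err := runBind("web-0", "node-1", []bindFunc{declines, binds, skipped})
	fmt.Println(ok, err) // true <nil>: the third plugin is never called
}
```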
### Post Bind

The Post Bind plugins can be useful for housekeeping after a Pod is scheduled.
These plugins do not return any value and are not expected to influence the
scheduling decision made in the scheduling cycle.
### Informer Events

The scheduling framework, similar to scheduler v1, will have informers that let
the framework keep its copy of the state of the cluster up to date. The
informers generate events, such as "PodAdd", "PodUpdate", and "PodDelete". The
framework allows plugins to register their own handlers for any of these
events. The handlers allow plugins with internal state or caches to keep that
state updated.
# USE-CASES

In this section we provide a couple of examples of how the scheduling framework
can be used to solve common scheduling scenarios.
### Dynamic binding of cluster-level resources

Cluster-level resources are resources that are not immediately available on
nodes at the time of scheduling Pods. The scheduler needs to ensure that such
cluster-level resources are bound to a chosen Node before it can schedule a Pod
that requires them to that Node. We refer to this type of binding of resources
to Nodes at the time of scheduling Pods as dynamic resource binding.

Dynamic resource binding has proven to be a challenge in scheduler v1, because
scheduler v1 is not flexible enough to support various types of plugins at
different phases of scheduling. As a result, the binding of storage volumes is
integrated into the scheduler code, and some non-trivial changes were made to
the scheduler extender to support dynamic binding of network GPUs.

The scheduling framework allows such dynamic bindings in a cleaner way. The
main thread of the scheduling framework processes a pending Pod that requests a
network resource, finds a Node for the Pod, and reserves the Pod. A dynamic
resource binder plugin installed at the "Pre-Bind" stage is then invoked (in a
separate thread). It analyzes the Pod and, when it detects that the Pod needs
dynamic binding of the resource, it tries to attach the cluster resource to the
chosen Node and then returns true so that the Pod can be bound. If the resource
attachment fails, it returns false and the Pod will be retried.

When there are multiple such network resources, each of them installs its own
"Pre-Bind" plugin. Each plugin looks at the Pod and, if the Pod is not
requesting the resource that the plugin is interested in, simply returns true
for the Pod.
### Gang Scheduling

Gang scheduling allows a certain number of Pods to be scheduled simultaneously.
If all the members of the gang cannot be scheduled at the same time, none of
them should be scheduled. Gang scheduling may have various other features as
well, but in this context we are interested in the simultaneous scheduling of
Pods.

Gang scheduling in the scheduling framework can be done with a "Permit" plugin.
The main scheduling thread processes Pods one by one and reserves Nodes for
them. The gang scheduling plugin at the Permit stage is invoked for each Pod.
When it finds that the Pod belongs to a gang, it checks the properties of the
gang. If there are not enough members of the gang that are scheduled or in the
"wait" state, the plugin returns "wait". When the number reaches the desired
value, all the Pods in the wait state are approved and sent for binding.
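The per-Pod decision described above can be sketched like this (a toy model under assumed names; a real plugin would track waiting Pods in the framework's cache rather than a counter):

```go
package main

import "fmt"

// gangPermit models the Permit-stage decision for one Pod of a gang: hold the
// Pod in "wait" until enough members of the gang are reserved and waiting,
// then approve all of them at once.
func gangPermit(alreadyWaiting, gangSize int) (verdict string, approveAll bool) {
	if alreadyWaiting+1 < gangSize {
		return "wait", false // keep this Pod reserved; more members are needed
	}
	return "permit", true // the last member arrived: approve all waiting Pods
}

func main() {
	waiting := 0
	for i := 0; i < 3; i++ { // a gang of 3 Pods processed one by one
		verdict, approveAll := gangPermit(waiting, 3)
		fmt.Println(verdict, approveAll)
		if verdict == "wait" {
			waiting++
		}
	}
	// prints: wait false / wait false / permit true
}
```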
# OUT OF PROCESS PLUGINS

Out-of-process plugins (OOPPs) are called via JSON over an HTTP interface. In
other words, the scheduler will support webhooks at most (maybe all) of the
extension points. Data sent to an OOPP must be marshalled to JSON and data
received must be unmarshalled, so calling an OOPP is significantly slower than
calling in-process plugins.

We do not plan to build OOPPs in the first version of the scheduling framework,
so further details on them are to be determined.
# DEVELOPMENT PLAN

Earlier, we wanted to develop the scheduling framework as a project independent
of scheduler v1. However, that would require significant engineering
resources. It would also be more difficult to roll out a new and not fully
backward-compatible scheduler in Kubernetes, where tens of thousands of users
depend on the behavior of the scheduler. After revisiting the ideas and
challenges, we changed our plan and decided to build some of the ideas of the
scheduling framework into scheduler v1 to make it more extensible.

As the first step, we would like to build:

1. The [Pre-bind](#pre-bind) and [Reserve](#reserve) plugin points. These will
help us move our existing cluster resource binding code, such as persistent
volume binding, to plugins.
1. [The plugin communication mechanism](#communication-and-statefulness-of-plugins).
This will allow us to build more sophisticated plugins that require
communication, and will also help us clean up the existing scheduler code by
removing existing transient cache data.

More features of the framework can be added to the scheduler in the future,
based on requirements.
<s>
# CONFIGURING THE SCHEDULING FRAMEWORK

TBD

# BACKWARD COMPATIBILITY WITH SCHEDULER v1

We will build a new set of plugins for scheduler v2 to ensure that the existing
behavior of scheduler v1 in placing Pods on nodes is preserved. This includes
building plugins that replicate the default predicate and priority functions of
scheduler v1 and its binding mechanism. However, scheduler extenders built for
scheduler v1 won't be compatible with scheduler v2. Also, predicate and
priority functions that are not enabled by default (such as service affinity)
are not guaranteed to exist in scheduler v2.

# DEVELOPMENT PLAN

We will develop the scheduling framework as an incubator project in SIG
Scheduling. It will be built in a separate code base, independently from
scheduler v1, but we will probably reuse a lot of code from scheduler v1.

# TESTING PLAN

We will add unit tests as we build the functionality of the scheduling
framework. The scheduling framework should eventually be able to pass the
integration and e2e tests of scheduler v1, excluding those tests that involve
scheduler extensions. The e2e and integration tests may need to be modified
slightly, as the initialization and configuration of the scheduling framework
will be different from scheduler v1.

# WORK ESTIMATES

We expect to see an early version of the scheduling framework in two release
cycles (end of 2018). If things go well, we will start offering it as an
alternative to scheduler v1 by the end of Q1 2019 and start the deprecation of
scheduler v1. We will make it the default scheduler of Kubernetes in Q2 2019,
but we will keep the option of using scheduler v1 for at least two more release
cycles.
</s>
@@ -87,7 +87,7 @@ allowed to use that new dedicated node group.

```go
// The node this Taint is attached to has the effect "effect" on
// any pod that that does not tolerate the Taint.
// any pod that does not tolerate the Taint.
type Taint struct {
	Key string `json:"key" patchStrategy:"merge" patchMergeKey:"key"`
	Value string `json:"value,omitempty"`
@@ -201,7 +201,7 @@ Once the following conditions are true, the external-attacher should call `Contr
Before starting the `ControllerPublishVolume` operation, the external-attacher should add these finalizers to these Kubernetes API objects:

* To the `VolumeAttachment` so that when the object is deleted, the external-attacher has an opportunity to detach the volume first. External attacher removes this finalizer once the volume is fully detached from the node.
* To the `PersistentVolume` referenced by `VolumeAttachment` so the the PV cannot be deleted while the volume is attached. External attacher needs information from the PV to perform detach operation. The attacher will remove the finalizer once all `VolumeAttachment` objects that refer to the PV are deleted, i.e. the volume is detached from all nodes.
* To the `PersistentVolume` referenced by `VolumeAttachment` so the PV cannot be deleted while the volume is attached. External attacher needs information from the PV to perform detach operation. The attacher will remove the finalizer once all `VolumeAttachment` objects that refer to the PV are deleted, i.e. the volume is detached from all nodes.

If the operation completes successfully, the external-attacher will:
@@ -587,7 +587,7 @@ In order to upgrade drivers using the recommended driver deployment mechanism, t
#### Deleting Volumes

1. A user deletes a `PersistentVolumeClaim` object bound to a CSI volume.
2. The external-provisioner for the CSI driver sees the the `PersistentVolumeClaim` was deleted and triggers the retention policy:
2. The external-provisioner for the CSI driver sees the `PersistentVolumeClaim` was deleted and triggers the retention policy:
   1. If the retention policy is `delete`
      1. The external-provisioner triggers volume deletion by issuing a `DeleteVolume` call against the CSI volume plugin container.
      2. Once the volume is successfully deleted, the external-provisioner deletes the corresponding `PersistentVolume` object.
@@ -243,7 +243,7 @@ whether to use the CSI or the in-tree plugin for attach based on 3 criterea:
2. Plugin Migratable (Implements MigratablePlugin interface)
3. Node to Attach to has requisite Annotation

Note: All 3 criterea must be satisfied for A/D controller to Attach/Detach with
Note: All 3 criteria must be satisfied for A/D controller to Attach/Detach with
CSI instead of in-tree plugin. For example if a Kubelet has feature on and marks
the annotation, but the A/D Controller does not have the feature gate flipped,
we consider this user error and will throw some errors.
@@ -3,7 +3,7 @@
Note: this proposal is part of [Volume Snapshot](https://github.com/kubernetes/community/pull/2335) feature design, and also relevant to recently proposed [Volume Clone](https://github.com/kubernetes/community/pull/2533) feature.

## Goal
Currently in Kuberentes, volume plugin only supports to provision an empty volume. With the new storage features (including [Volume Snapshot](https://github.com/kubernetes/community/pull/2335) and [volume clone](https://github.com/kubernetes/community/pull/2533)) being proposed, there is a need to support data population for volume provisioning. For example, volume can be created from a snapshot source, or volume could be cloned from another volume source. Depending on the sources for creating the volume, there are two scenarios
Currently in Kubernetes, volume plugin only supports to provision an empty volume. With the new storage features (including [Volume Snapshot](https://github.com/kubernetes/community/pull/2335) and [volume clone](https://github.com/kubernetes/community/pull/2533)) being proposed, there is a need to support data population for volume provisioning. For example, volume can be created from a snapshot source, or volume could be cloned from another volume source. Depending on the sources for creating the volume, there are two scenarios
1. Volume provisioner can recognize the source and be able to create the volume from the source directly (e.g., restore snapshot to a volume or clone volume).
2. Volume provisioner does not recognize the volume source, and create an empty volume. Another external component (data populator) could watch the volume creation and implement the logic to populate/import the data to the volume provisioned. Only after data is populated to the volume, the PVC is ready for use.
@@ -86,7 +86,7 @@ We propose that:

### Controller workflow for provisioning volumes

0. Kubernetes administator can configure name of a default StorageClass. This
0. Kubernetes administrator can configure name of a default StorageClass. This
   StorageClass instance is then used when user requests a dynamically
   provisioned volume, but does not specify a StorageClass. In other words,
   `claim.Spec.Class == ""`
@@ -196,7 +196,7 @@ Open questions:

* Do we call them snapshots or backups?

  * From the SIG email: "The snapshot should not be suggested to be a backup in any documentation, because in practice is is necessary, but not sufficient, when conducting a backup of a stateful application."
  * From the SIG email: "The snapshot should not be suggested to be a backup in any documentation, because in practice is necessary, but not sufficient, when conducting a backup of a stateful application."

* At what minimum granularity should snapshots be allowed?
@@ -1079,7 +1079,7 @@ scenarios to keep equivalence class cache up to date:
- on PV add/delete

  When PVs are created or deleted, available PVs to choose from for volume
  scheduling will change, we need to to invalidate CheckVolumeBinding
  scheduling will change, we need to invalidate CheckVolumeBinding
  predicate.

- on PV update
@@ -1,14 +1,16 @@
reviewers:
- grodrigues3
- Phillels
- idvoretskyi
- calebamiles
- cblecker
- grodrigues3
- idvoretskyi
- Phillels
- spiffxp
approvers:
- grodrigues3
- Phillels
- idvoretskyi
- calebamiles
- cblecker
- grodrigues3
- idvoretskyi
- lavalamp
- Phillels
- spiffxp
- thockin
@@ -306,34 +306,57 @@ response reduces the complexity of these clients.
##### Typical status properties

**Conditions** represent the latest available observations of an object's
current state. Objects may report multiple conditions, and new types of
conditions may be added in the future. Therefore, conditions are represented
using a list/slice, where all have similar structure.
state. They are an extension mechanism intended to be used when the details of
an observation are not a priori known or would not apply to all instances of a
given Kind. For observations that are well known and apply to all instances, a
regular field is preferred. An example of a Condition that probably should
have been a regular field is Pod's "Ready" condition - it is managed by core
controllers, it is well understood, and it applies to all Pods.

Objects may report multiple conditions, and new types of conditions may be
added in the future or by 3rd party controllers. Therefore, conditions are
represented using a list/slice, where all have similar structure.

The `FooCondition` type for some resource type `Foo` may include a subset of the
following fields, but must contain at least `type` and `status` fields:

```go
Type FooConditionType `json:"type" description:"type of Foo condition"`
Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"`
Type FooConditionType `json:"type" description:"type of Foo condition"`
Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"`

// +optional
LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"`
Reason *string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
// +optional
LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"`
Message *string `json:"message,omitempty" description:"human-readable message indicating details about last transition"`

// +optional
Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
LastHeartbeatTime *unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"`
// +optional
Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"`
LastTransitionTime *unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"`
```

Additional fields may be added in the future.

Do not use fields that you don't need - simpler is better.

Use of the `Reason` field is encouraged.

Use the `LastHeartbeatTime` with great caution - frequent changes to this field
can cause a large fan-out effect for some resources.

Conditions should be added to explicitly convey properties that users and
components care about rather than requiring those properties to be inferred from
other observations.
other observations. Once defined, the meaning of a Condition can not be
changed arbitrarily - it becomes part of the API, and has the same backwards-
and forwards-compatibility concerns of any other part of the API.

Condition status values may be `True`, `False`, or `Unknown`. The absence of a
condition should be interpreted the same as `Unknown`.
condition should be interpreted the same as `Unknown`. How controllers handle
`Unknown` depends on the Condition in question.

Condition types should indicate state in the "abnormal-true" polarity. For
example, if the condition indicates when a policy is invalid, the "is valid"
case is probably the norm, so the condition should be called "Invalid".

In general, condition values may change back and forth, but some condition
transitions may be monotonic, depending on the resource and condition type.
@@ -95,9 +95,11 @@ backward-compatibly.

Before talking about how to make API changes, it is worthwhile to clarify what
we mean by API compatibility. Kubernetes considers forwards and backwards
compatibility of its APIs a top priority.
compatibility of its APIs a top priority. Compatibility is *hard*, especially
handling issues around rollback-safety. This is something every API change
must consider.

An API change is considered forward and backward-compatible if it:
An API change is considered compatible if it:

* adds new functionality that is not required for correct behavior (e.g.,
does not add a new required field)
@@ -107,24 +109,35 @@ does not add a new required field)
* which fields are required and which are not
* mutable fields do not become immutable
* valid values do not become invalid
* explicitly invalid values do not become valid

Put another way:

1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before
your change must work the same after your change.
2. Any API call that uses your change must not cause problems (e.g. crash or
degrade behavior) when issued against servers that do not include your change.
3. It must be possible to round-trip your change (convert to different API
1. Any API call (e.g. a structure POSTed to a REST endpoint) that succeeded
before your change must succeed after your change.
2. Any API call that does not use your change must behave the same as it did
before your change.
3. Any API call that uses your change must not cause problems (e.g. crash or
degrade behavior) when issued against an API servers that do not include your
change.
4. It must be possible to round-trip your change (convert to different API
versions and back) with no loss of information.
4. Existing clients need not be aware of your change in order for them to
continue to function as they did previously, even when your change is utilized.
5. Existing clients need not be aware of your change in order for them to
continue to function as they did previously, even when your change is in use.
6. It must be possible to rollback to a previous version of API server that
does not include your change and have no impact on API objects which do not use
your change. API objects that use your change will be impacted in case of a
rollback.

If your change does not meet these criteria, it is not considered strictly
compatible, and may break older clients, or result in newer clients causing
undefined behavior.
If your change does not meet these criteria, it is not considered compatible,
and may break older clients, or result in newer clients causing undefined
behavior. Such changes are generally disallowed, though exceptions have been
made in extreme cases (e.g. security or obvious bugs).

Let's consider some examples. In a hypothetical API (assume we're at version
v6), the `Frobber` struct looks something like this:
Let's consider some examples.

In a hypothetical API (assume we're at version v6), the `Frobber` struct looks
something like this:

```go
// API v6.
@@ -134,7 +147,7 @@ type Frobber struct {
}
```

You want to add a new `Width` field. It is generally safe to add new fields
You want to add a new `Width` field. It is generally allowed to add new fields
without changing the API version, so you can simply change it to:

```go
@@ -146,29 +159,55 @@ type Frobber struct {
}
```

The onus is on you to define a sane default value for `Width` such that rule #1
above is true - API calls and stored objects that used to work must continue to
work.
The onus is on you to define a sane default value for `Width` such that rules
#1 and #2 above are true - API calls and stored objects that used to work must
continue to work.

For your next change you want to allow multiple `Param` values. You can not
simply change `Param string` to `Params []string` (without creating a whole new
API version) - that fails rules #1 and #2. You can instead do something like:
simply remove `Param string` and add `Params []string` (without creating a
whole new API version) - that fails rules #1, #2, #3, and #6. Nor can you
simply add `Params []string` and use it instead - that fails #2 and #6.

You must instead define a new field and the relationship between that field and
the existing field(s). Start by adding the new plural field:

```go
// Still API v6, but kind of clumsy.
// Still API v6.
type Frobber struct {
	Height int `json:"height"`
	Width int `json:"width"`
	Param string `json:"param"` // the first param
	ExtraParams []string `json:"extraParams"` // additional params
	Params []string `json:"params"` // all of the params
}
```

Now you can satisfy the rules: API calls that provide the old style `Param`
will still work, while servers that don't understand `ExtraParams` can ignore
it. This is somewhat unsatisfying as an API, but it is strictly compatible.
This new field must be inclusive of the singular field. In order to satisfy
the compatibility rules you must handle all the cases of version skew, multiple
clients, and rollbacks. This can be handled by defaulting or admission control
logic linking the fields together with context from the API operation to get as
close as possible to the user's intentions.

Part of the reason for versioning APIs and for using internal structs that are
Upon any mutating API operation:
* If only the singular field is specified (e.g. an older client), API logic
must populate plural[0] from the singular value, and de-dup the plural
field.
* If only the plural field is specified (e.g. a newer client), API logic must
populate the singular value from plural[0].
* If both the singular and plural fields are specified, API logic must
validate that the singular value matches plural[0].
* Any other case is an error and must be rejected.

For this purpose "is specified" means the following:
* On a create or patch operation: the field is present in the user-provided input
* On an update operation: the field is present and has changed from the
current value

Older clients that only know the singular field will continue to succeed and
produce the same results as before the change. Newer clients can use your
change without impacting older clients. The API server can be rolled back and
only objects that use your change will be impacted.

Part of the reason for versioning APIs and for using internal types that are
distinct from any one version is to handle growth like this. The internal
representation can be implemented as:
@ -181,24 +220,26 @@ type Frobber struct {
|
|||
}
|
||||
```
|
||||
|
||||
The code that converts to/from versioned APIs can decode this into the somewhat
|
||||
uglier (but compatible!) structures. Eventually, a new API version, let's call
|
||||
it v7beta1, will be forked and it can use the clean internal structure.
|
||||
The code that converts to/from versioned APIs can decode this into the
|
||||
compatible structure. Eventually, a new API version, e.g. v7beta1,
|
||||
will be forked and it can drop the singular field entirely.
|
||||
|
||||
We've seen how to satisfy rules #1, #2, and #3. Rule #4 means that you cannot
extend one versioned API without also extending the others. For example, an
API call might POST an object in API v7beta1 format, which uses the cleaner
`Params` field, but the API server might store that object in trusty old v6
form (since v7beta1 is "beta"). When the user reads the object back in the
v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This
means that, even though it is ugly, a compatible change must be made to the v6
API, as above.

For some changes, this can be challenging to do correctly. It may require multiple
representations of the same information in the same API resource, which need to
be kept in sync should either be changed.

For example, let's say you decide to rename a field within the same API
version. In this case, you add units to `height` and `width`. You implement
this by adding new fields:

```go
type Frobber struct {
	// ...
}
```

You convert all of the fields to pointers in order to distinguish between unset
and set to 0, and then set each corresponding field from the other in the
defaulting logic (e.g. `heightInInches` from `height`, and vice versa). That
works fine when the user sends a hand-written configuration -- clients can
write either field and read either field.

But what about creation or update from the output of a GET, or update via PATCH
(see [In-place updates](https://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources))?
In these cases, the two fields will conflict, because only one field would be
updated in the case of an old client that was only aware of the old field
(e.g. `height`).

Suppose the client creates:

```json
{
  ...
}
```

then PUTs back:

```json
{
  ...
}
```

As per the compatibility rules, the update must not fail, because it would have
worked before the change.

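A minimal Go sketch of the bidirectional defaulting described above, assuming
pointer-based fields on a hypothetical `Frobber` type (names are illustrative,
not the real Kubernetes defaulting code):

```go
package main

import "fmt"

// Frobber uses pointers so "unset" is distinguishable from zero.
// Field names mirror the example above; this is an illustrative sketch.
type Frobber struct {
	Height         *int // legacy field
	HeightInInches *int // renamed field
}

// defaultHeight mirrors whichever field the client set into the other.
// Note it cannot resolve the PUT-back conflict described above: if an old
// client writes only Height on an object that already had both fields set,
// the two stored values silently disagree.
func defaultHeight(f *Frobber) {
	if f.Height == nil && f.HeightInInches != nil {
		v := *f.HeightInInches
		f.Height = &v
	}
	if f.HeightInInches == nil && f.Height != nil {
		v := *f.Height
		f.HeightInInches = &v
	}
}

func main() {
	h := 10
	f := &Frobber{Height: &h} // an old client sets only the legacy field
	defaultHeight(f)
	fmt.Println(*f.HeightInInches) // prints 10
}
```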
## Backward compatibility gotchas

* A single feature/property cannot be represented using multiple spec fields
  simultaneously within an API version. Only one representation can be
  populated at a time, and the client needs to be able to specify which field
  they expect to use (typically via API version), on both mutation and read. As
  above, older clients must continue to function properly.

* A new representation, even in a new API version, that is more expressive than an
  old one breaks backward compatibility, since clients that only understood the
  ...
  be set, it is acceptable to add a new option to the union if the [appropriate
  conventions](api-conventions.md#objects) were followed in the original object.
  Removing an option requires following the [deprecation process](https://kubernetes.io/docs/reference/deprecation-policy/).

* Changing any validation rules always has the potential of breaking some client, since it changes the
  assumptions about part of the API, similar to adding new enum values. Validation rules on spec fields can
  neither be relaxed nor strengthened. Strengthening cannot be permitted because any requests that previously
  ...
  of the API resource. Status fields whose writers are under our control (e.g., written by non-pluggable
  controllers), may potentially tighten validation, since that would cause a subset of previously valid
  values to be observable by clients.

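To make the validation gotcha concrete, here is a hedged Go sketch using a
hypothetical `replicas` field and a hypothetical cap; neither rule is from the
real Kubernetes API:

```go
package main

import (
	"errors"
	"fmt"
)

// validateV1 is the original (hypothetical) rule: any non-negative
// replica count is accepted and may be persisted.
func validateV1(replicas int) error {
	if replicas < 0 {
		return errors.New("replicas must be non-negative")
	}
	return nil
}

// validateV2 "strengthens" the rule by adding a hypothetical cap.
// Objects that were valid and stored under validateV1 now fail on update,
// which is exactly the breakage the bullet above forbids for spec fields.
func validateV2(replicas int) error {
	if err := validateV1(replicas); err != nil {
		return err
	}
	if replicas > 100 {
		return errors.New("replicas must be <= 100")
	}
	return nil
}

func main() {
	stored := 500 // previously accepted and persisted
	fmt.Println(validateV1(stored) == nil) // prints true
	fmt.Println(validateV2(stored) == nil) // prints false
}
```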
* Do not add a new API version of an existing resource and make it the preferred version in the same
  release, and do not make it the storage version. The latter is necessary so that a rollback of the
  apiserver doesn't render resources in etcd undecodable after rollback.

## Incompatible API changes
There are times when incompatible changes might be OK, but mostly we want
changes that meet the above definitions. If you think you need to break
compatibility, you should talk to the Kubernetes API reviewers first.

Breaking compatibility of a beta or stable API version, such as v1, is
unacceptable. Compatibility for experimental or alpha APIs is not strictly
required, but breaking compatibility should not be done lightly, as it disrupts
all users of the feature. Alpha and beta API versions may be deprecated and
eventually removed wholesale, as described in the [deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/).

If your change is going to be backward incompatible or might be a breaking
change for API consumers, please send an announcement to
applications, but not for specific applications.

* Platform as a Service: Kubernetes [provides a
  foundation](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/)
  for a multitude of focused, opinionated PaaSes, including DIY
  ones.

Kubernetes uses a variety of automated tools in an attempt to relieve developers
of repetitive, low brain power work. This document attempts to describe these
processes.

## Tide

This project formerly used a Submit Queue; it has since been replaced by
[Tide](https://git.k8s.io/test-infra/prow/cmd/tide).

#### Ready to merge status

A PR is considered "ready for merging" by Tide if it matches the set
of conditions listed in the [Tide dashboard](https://prow.k8s.io/tide).
Please visit that page for more details.

### Merge process

If the PR has the `retest-not-required` label, it is simply merged. If the PR
does not have this label, the aforementioned required tests are re-run.
If these tests pass a second time, the PR is merged once retesting finishes.

### Closing stale pull-requests

Prow will close pull-requests that don't have human activity in the
last 90 days. It will warn about this process 60 days before closing the
pull-request, and warn again 30 days later. One way to prevent this from
happening is to add the `lifecycle/frozen` label on the pull-request.

Feel free to re-open and maybe add the `lifecycle/frozen` label if this happens to a
valid pull-request. It may also be a good opportunity to get more attention by
verifying that it is properly assigned and/or mention people that might be
interested. Commenting on the pull-request will also keep it open for another 90
The simplest way is to comment `/retest`.

Any pushes of new code to the PR will automatically trigger a new test. No human
interaction is required. Note that if the PR has a `lgtm` label, it will be removed after the pushes.

Building and testing Kubernetes with Bazel is supported but not yet default.

Bazel is used to run all Kubernetes PRs on [Prow](https://prow.k8s.io),
as remote caching enables significantly reduced build and test times.

Some repositories (such as kubernetes/test-infra) have switched to using Bazel
exclusively for all build, test, and release workflows.

Go rules are managed by the [`gazelle`](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle)
tool, with some additional rules managed by the [`kazel`](https://git.k8s.io/repo-infra/kazel) tool.
These tools are called via the `hack/update-bazel.sh` script.

Instructions for installing Bazel
can be found [here](https://www.bazel.io/versions/master/docs/install.html).

Several convenience `make` rules have been created for common operations:

* `make bazel-build`: builds all binaries in tree (`bazel build -- //...
  -//vendor/...`)
* `make bazel-test`: runs all unit tests (`bazel test --config=unit -- //...
  //hack:verify-all -//build/... -//vendor/...`)
* `make bazel-test-integration`: runs all integration tests (`bazel test
  --config integration //test/integration/...`)
* `make bazel-release`: builds release tarballs, Docker images (for server
  components), and Debian images (`bazel build //build/release-tars`)

You can also interact with Bazel directly; for example, to run all `kubectl` unit
tests, run

...

There are several bazel CI jobs:

...

Similar jobs are run on all PRs; additionally, several of the e2e jobs use
Bazel-built binaries when launching and testing Kubernetes clusters.

## Updating `BUILD` files

To update `BUILD` files, run `./hack/update-bazel.sh`.

To prevent Go rules from being updated, consult the [gazelle
documentation](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle).

Note that much like Go files and `gofmt`, `BUILD` files have standardized,
opinionated style rules, and running `hack/update-bazel.sh` will format them for you.

If you want to auto-format `BUILD` files in your editor, use of
[Buildifier](https://github.com/bazelbuild/buildtools/blob/master/buildifier/README.md)
is recommended.

Updating the `BUILD` file for a package will be required when:

* ...
* A `BUILD` file has been updated and needs to be reformatted
* A new `BUILD` file has been added (parent `BUILD` files will be updated)

## Known issues and limitations

### [Cross-compilation of cgo is not currently natively supported](https://github.com/bazelbuild/rules_go/issues/1020)

All binaries are currently built for the host OS and architecture running Bazel.
(For example, you can't currently target linux/amd64 from macOS or linux/s390x
from an amd64 machine.)

The Go rules support cross-compilation of pure Go code using the `--platforms`
flag, and this is being used successfully in the kubernetes/test-infra repo.

It may already be possible to cross-compile cgo code if a custom CC toolchain is
set up, possibly reusing the kube-cross Docker image, but this area needs
further exploration.

### The CC toolchain is not fully hermetic

Bazel requires several tools and development packages to be installed on the
system, including `gcc`, `g++`, glibc and libstdc++ development headers, and
glibc static development libraries. Please check your distribution for the
exact names of the packages. Examples for some commonly used distributions are
below:

| Dependency            | Debian/Ubuntu                 | CentOS                         | OpenSuSE                                 |
|:---------------------:|-------------------------------|--------------------------------|------------------------------------------|
| Build essentials      | `apt install build-essential` | `yum groupinstall development` | `zypper install -t pattern devel_C_C++`  |
| GCC C++               | `apt install g++`             | `yum install gcc-c++`          | `zypper install gcc-c++`                 |
| GNU Libc static files | `apt install libc6-dev`       | `yum install glibc-static`     | `zypper install glibc-devel-static`      |

If any of these packages change, they may also cause spurious build failures
as described in [this issue](https://github.com/bazelbuild/bazel/issues/4907).

An example error might look something like

```
ERROR: undeclared inclusion(s) in rule '//vendor/golang.org/x/text/cases:go_default_library.cgo_c_lib':
this rule is missing dependency declarations for the following files included by 'vendor/golang.org/x/text/cases/linux_amd64_stripped/go_default_library.cgo_codegen~/_cgo_export.c':
  '/usr/lib/gcc/x86_64-linux-gnu/7/include/stddef.h'
```

The only way to recover from this error is to force Bazel to regenerate its
automatically-generated CC toolchain configuration by running `bazel clean
--expunge`.

Improving cgo cross-compilation may help with all of this.

### Changes to Go imports require updating BUILD files

The Go rules in `BUILD` and `BUILD.bazel` files must be updated any time files
are added or removed or Go imports are changed. These rules are automatically
maintained by `gazelle`, which is run via `hack/update-bazel.sh`, but this is
still a source of friction.

[Autogazelle](https://github.com/bazelbuild/bazel-gazelle/tree/master/cmd/autogazelle)
is a new experimental tool which may reduce or remove the need for developers
to run `hack/update-bazel.sh`, but no work has yet been done to support it in
kubernetes/kubernetes.

### Code coverage support is incomplete for Go

Bazel and the Go rules have limited support for code coverage. Running something
like `bazel coverage -- //... -//vendor/...` will run tests in coverage mode,
but no report summary is currently generated. It may be possible to combine
`bazel coverage` with
[Gopherage](https://github.com/kubernetes/test-infra/tree/master/gopherage),
however.

### Kubernetes code generators are not fully supported

The make-based build system in kubernetes/kubernetes runs several code
generators at build time:
* [conversion-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/conversion-gen)
* [deepcopy-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/deepcopy-gen)
* [defaulter-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/defaulter-gen)
* [openapi-gen](https://github.com/kubernetes/kube-openapi/tree/master/cmd/openapi-gen)
* [go-bindata](https://github.com/jteeuwen/go-bindata/tree/master/go-bindata)

Of these, only `openapi-gen` and `go-bindata` are currently supported when
building Kubernetes with Bazel.

The `go-bindata` generated code is produced by hand-written genrules.

The other code generators use special build tags of the form
`// +k8s:generator-name=arg`; for example, input files to the openapi-gen tool
are specified with `// +k8s:openapi-gen=true`.

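For illustration only (the real package-discovery logic lives in `kazel` and
does considerably more), a minimal sketch of detecting the openapi-gen opt-in
tag in a Go source file:

```go
package main

import (
	"fmt"
	"strings"
)

// hasOpenAPITag reports whether a Go source file opts in to openapi-gen
// via the `// +k8s:openapi-gen=true` comment convention described above.
// This is a simplified illustration, not the actual kazel implementation.
func hasOpenAPITag(src string) bool {
	for _, line := range strings.Split(src, "\n") {
		if strings.TrimSpace(line) == "// +k8s:openapi-gen=true" {
			return true
		}
	}
	return false
}

func main() {
	src := "// +k8s:openapi-gen=true\npackage example\n"
	fmt.Println(hasOpenAPITag(src)) // prints true
}
```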
`kazel` is used to find all packages that require OpenAPI generation, and then a
handwritten genrule consumes this list of packages to run `openapi-gen`.

For `openapi-gen`, a single output file is produced in a single Go package, which
makes this fairly compatible with Bazel.
All other Kubernetes code generators generally produce one output file per input
package, which is less compatible with the Bazel workflow.

The make-based build system batches up all input packages into one call to the
code generator binary, but this is inefficient for Bazel's incrementality, as a
change in one package may result in unnecessarily recompiling many other
packages.
On the other hand, calling the code generator binary multiple times is less
efficient than calling it once, since many of the generators parse the tree for
Go type information and other metadata.

One additional challenge is that many of the code generators add additional
Go imports which `gazelle` (and `autogazelle`) cannot infer, and so they must be
explicitly added as dependencies in the `BUILD` files.

Kubernetes has even more code generators than this limited list, but the rest
are generally run as `hack/update-*.sh` scripts and checked into the repository,
and so are not immediately needed for Bazel parity.

## Contacts

For help or discussion, join the [#bazel](https://kubernetes.slack.com/messages/bazel)
channel on Kubernetes Slack.

# Overview

This document explains how cherry-picks are managed on release branches within
the kubernetes/kubernetes repository.
A common use case for this task is backporting PRs from master to release
branches.

## Prerequisites

* [Contributor License Agreement](http://git.k8s.io/community/CLA.md) is
  considered implicit for all code within cherry-pick pull requests,
  **unless there is a large conflict**.
* A pull request merged against the master branch.
* [Release branch](https://git.k8s.io/release/docs/branching.md) exists.
* The normal git and GitHub configured shell environment for pushing to your
  kubernetes `origin` fork on GitHub and making a pull request against a
  configured remote `upstream` that tracks
  "https://github.com/kubernetes/kubernetes.git", including `GITHUB_USER`.
* Have `hub` installed, which is most easily installed via `go get
  github.com/github/hub` assuming you have a standard golang development
  environment.

## Initiate a Cherry-pick

* Run the [cherry-pick
  script](https://git.k8s.io/kubernetes/hack/cherry_pick_pull.sh).
  This example applies a master branch PR #98765 to the remote branch
  `upstream/release-3.14`: `hack/cherry_pick_pull.sh upstream/release-3.14
  98765`
* Be aware the cherry-pick script assumes you have a git remote called
  `upstream` that points at the Kubernetes github org.
  Please see our [recommended Git workflow](https://git.k8s.io/community/contributors/guide/github-workflow.md#workflow).
* You will need to run the cherry-pick script separately for each patch release you want to cherry-pick to.
* Your cherry-pick PR will immediately get the `do-not-merge/cherry-pick-not-approved` label.
  The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager)
  will triage PRs targeted to the next .0 minor release branch up until the
  release, while the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager)
  will handle all cherry-picks to patch releases.
  Normal rules apply for code merge.
* Reviewers `/lgtm` and owners `/approve` as they deem appropriate.
* Milestones on cherry-pick PRs should be the milestone for the target
  release branch (for example, milestone 1.11 for a cherry-pick onto
  release-1.11).
* You can find the current release team members in the
  [appropriate release folder](https://git.k8s.io/sig-release/releases) for the target release.
  You may cc them with `<@githubusername>` on your cherry-pick PR.

## Cherry-pick Review

Cherry-pick pull requests have an additional requirement compared to normal
pull requests.
They must be approved specifically for cherry-pick by Approvers.
The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager)
or the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager)
are the final authority on removing the `do-not-merge/cherry-pick-not-approved`
label and triggering a merge into the target branch.

## Searching for Cherry-picks

- [A sample search on kubernetes/kubernetes pull requests that are labeled as `cherry-pick-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Acherry-pick-approved)

- [A sample search on kubernetes/kubernetes pull requests that are labeled as `do-not-merge/cherry-pick-not-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Ado-not-merge%2Fcherry-pick-not-approved)

## Troubleshooting Cherry-picks

Contributors may encounter some of the following difficulties when initiating a cherry-pick.

- A cherry-pick PR does not apply cleanly against an old release branch.
  In that case, you will need to manually fix conflicts.

- The cherry-pick PR includes code that does not pass CI tests.
  In such a case you will have to fetch the auto-generated branch from your fork,
  amend the problematic commit and force push to the auto-generated branch.
  Alternatively, you can create a new PR, which is noisier.

# Conformance Testing in Kubernetes

The Kubernetes Conformance test suite is a subset of e2e tests that SIG
Architecture has approved to define the core set of interoperable features that
all conformant Kubernetes clusters must support. The tests verify that the
expected behavior works as a user might encounter it in the wild.

The process to add new conformance tests is intended to decouple the development
of useful tests from their promotion to conformance:
- Contributors write and submit e2e tests, to be approved by owning SIGs
- Tests are proven to meet the [conformance test requirements] by review
  and by accumulation of data on flakiness and reliability
- A follow up PR is submitted to [promote the test to conformance](#promoting-tests-to-conformance)

NB: This should be viewed as a living document in a few key areas:
- The desired set of conformant behaviors is not adequately expressed by the
  current set of e2e tests, as such this document is currently intended to
  guide us in the addition of new e2e tests that can fill this gap
- This document currently focuses solely on the requirements for GA,
  non-optional features or APIs. The list of requirements will be refined over
  time to the point where it is as concrete and complete as possible.
- There are currently conformance tests that violate some of the requirements
  (e.g., require privileged access); we will be categorizing these tests and
  deciding what to do once we have a better understanding of the situation
- Once we resolve the above issues, we plan on identifying the appropriate areas
  to relax requirements to allow for the concept of conformance Profiles that
  cover optional or additional behaviors

## Conformance Test Requirements

Conformance tests currently test only GA, non-optional features or APIs. More
specifically, a test is eligible for promotion to conformance if:

- it tests only GA, non-optional features or APIs (e.g., no alpha or beta
  endpoints, no feature flags required, no deprecated features)
- it works for all providers (e.g., no `SkipIfProviderIs`/`SkipUnlessProviderIs`
  calls)
- it is non-privileged (e.g., does not require root on nodes, access to raw
  network interfaces, or cluster admin permissions)
- it works without access to the public internet (short of whatever is required
  to pre-pull images for conformance tests)
- it works without non-standard filesystem permissions granted to pods
- it does not rely on any binaries that would not be required for the linux
  kernel or kubelet to run (e.g., can't rely on git)
- any container images used within the test support all architectures for which
  kubernetes releases are built
- it passes against the appropriate versions of kubernetes as spelled out in
  the [conformance test version skew policy]
- it is stable and runs consistently (e.g., no flakes)
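The provider-portability requirement can be spot-checked mechanically before opening a promotion PR. The helper below is an ad-hoc sketch, not an official tool: `check_skips` is a hypothetical name, and it only greps for the two provider-skip helpers named above.

```sh
# check_skips is a hypothetical helper: it flags the provider-skip calls that
# make a test ineligible for conformance promotion.
check_skips() {
  if grep -nE 'SkipIfProviderIs|SkipUnlessProviderIs' "$1"; then
    echo "ineligible: provider skips found"
  else
    echo "no provider skips"
  fi
}

# Demonstrate on a throwaway file containing a provider skip.
tmp=$(mktemp)
printf 'framework.SkipUnlessProviderIs("gce")\n' > "$tmp"
check_skips "$tmp"
rm -f "$tmp"
```

In a real repository you would point it at the test source file under `test/e2e/` instead of a temporary file.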
Examples of features which are not currently eligible for conformance tests:

- node/platform-reliant features, e.g., multiple disk mounts, GPUs, high
  density, etc.
- optional features, e.g., policy enforcement
- cloud-provider-specific features, e.g., GCE monitoring, S3 Bucketing, etc.
- anything that requires a non-default admission plugin

Examples of tests which are not eligible for promotion to conformance:

- anything that checks that specific Events are generated, as we make no guarantees
  about the contents of events, nor their delivery
- anything that checks optional Condition fields, such as Reason or Message, as
  these may change over time (however it is reasonable to verify these fields
  exist or are non-empty)

Examples of areas where we may want to relax these requirements once we have a
sufficient corpus of tests that define out-of-the-box functionality in all
reasonable production-worthy environments:

- tests may need to create or set objects or fields that are alpha or beta that
  bypass policies that are not yet GA, but which may reasonably be enabled on a
  conformant cluster (e.g., pod security policy, non-GA scheduler annotations)

## Conformance Test Version Skew Policy

As each new release of Kubernetes provides new functionality, the subset of
tests necessary to demonstrate conformance grows with each release. Conformance
is thus considered versioned, with the same backwards compatibility guarantees
as laid out in the [kubernetes versioning policy].

To quote:

> For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and
> should work with v1.2, v1.3, and v1.4 clients.

Conformance tests for a given version should be run off of the release branch
that corresponds to that version. Thus `v1.2` conformance tests would be run
from the head of the `release-1.2` branch.

For example, suppose we're in the midst of developing kubernetes v1.3. Clusters
with the following versions must pass conformance tests built from the
following branches:

| cluster version | master | release-1.3 | release-1.2 | release-1.1 |
| --------------- | ------ | ----------- | ----------- | ----------- |
| v1.3.0-alpha    | yes    | yes         | yes         | no          |
| v1.2.x          | no     | no          | yes         | yes         |
| v1.1.x          | no     | no          | no          | yes         |

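As an informal sketch of the released-version rows of that table (not an official script — `conformance_branches` is a made-up helper, and it encodes only the "own branch plus previous branch" rule for released v1.N clusters, not the extra master/development columns):

```sh
# conformance_branches prints the release branches whose conformance tests a
# released v1.N cluster is expected to pass: the previous branch and its own.
conformance_branches() {
  minor="${1#v1.}"      # "v1.2.x" -> "2.x"
  minor="${minor%%.*}"  # "2.x"    -> "2"
  echo "release-1.$((minor - 1))"
  echo "release-1.${minor}"
}

conformance_branches v1.2.x
```

For `v1.2.x` this prints `release-1.1` and `release-1.2`, matching the table row above.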
## Running Conformance Tests

Conformance tests are designed to be run with no cloud provider configured.
Conformance tests can be run against clusters that have not been created with
`hack/e2e.go`; just provide a kubeconfig with the appropriate endpoint and
credentials.

These commands are intended to be run within a kubernetes directory, either
cloned from source, or extracted from release artifacts such as
`kubernetes.tar.gz`. They assume you have a valid golang installation.

```sh
# ensure kubetest is installed
go get -u k8s.io/test-infra/kubetest

# build test binaries, ginkgo, and kubectl first:
make WHAT="test/e2e/e2e.test vendor/github.com/onsi/ginkgo/ginkgo cmd/kubectl"

export KUBECONFIG=/path/to/kubeconfig
export KUBERNETES_CONFORMANCE_TEST=y

# Option A: run all conformance tests serially
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]"

# Option B: run parallel conformance tests first, then serial conformance tests serially
GINKGO_PARALLEL=y kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Serial\].*\[Conformance\]"
```

## Kubernetes Conformance Document

For each Kubernetes release, a Conformance Document will be generated that lists
all of the tests that comprise the conformance test suite, along with the formal
specification of each test. For an example, see the [v1.9 conformance doc].
This document will help people understand what features are being tested without
having to look through the testcase's code directly.

## Promoting Tests to Conformance

To promote a test to the conformance test suite, open a PR that:
- is titled "Promote xxx e2e test to Conformance"
- includes information and metadata in the description as follows:
  - "/area conformance" on a newline
  - "@kubernetes/sig-architecture-pr-reviews @kubernetes/sig-foo-pr-reviews
    @kubernetes/cncf-conformance-wg" on a new line, where sig-foo is whichever
    sig owns this test
  - any necessary information in the description to verify that the test meets
    [conformance test requirements], such as links to reports or dashboards that
    prove lack of flakiness
- contains no modifications to test source code other than the following:
  - modifies the testcase to use the `framework.ConformanceIt()` function rather
    than the `framework.It()` function
  - adds a comment immediately before the `ConformanceIt()` call that includes
    all of the required [conformance test comment metadata]
- is added to SIG Architecture's [Conformance Test Review board]

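Pulling those steps together, a hypothetical promotion PR description (with `sig-node` standing in for the owning SIG, and the evidence link left as a placeholder) might look like:

```
This test has been running without flakes; see <link to testgrid dashboard>.

/area conformance
@kubernetes/sig-architecture-pr-reviews @kubernetes/sig-node-pr-reviews @kubernetes/cncf-conformance-wg
```

The PR itself would be titled "Promote xxx e2e test to Conformance" per the first step above.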
### Conformance Test Comment Metadata

Each conformance test must include the following piece of metadata
within its associated comment:

  then those releases should be included as well (comma separated)
- `Testname`: a human readable short name of the test
- `Description`: a detailed description of the test. This field must describe
  the required behaviour of the Kubernetes components being tested using
  [RFC2119](https://tools.ietf.org/html/rfc2119) keywords. This field
  is meant to be a "specification" of the tested Kubernetes features; as
  such, it must be detailed enough so that readers can fully understand
```go
framework.ConformanceIt("it should print the output to logs", func() {
})
```

The corresponding portion of the Kubernetes Conformance Document for this test
would then look like this:

> ## [Kubelet: log output](https://github.com/kubernetes/kubernetes/tree/release-1.9/test/e2e_node/kubelet_test.go#L47)
>
> Release : v1.9
>
> By default the stdout and stderr from the process being executed in a pod MUST be sent to the pod's logs.

### Reporting Conformance Test Results

Conformance test results, by provider and releases, can be viewed in the
[testgrid conformance dashboard]. If you wish to contribute test results
for your provider, please see the [testgrid conformance README].

[kubernetes versioning policy]: /contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew
[Conformance Test Review board]: https://github.com/kubernetes-sigs/architecture-tracking/projects/1
[conformance test requirements]: #conformance-test-requirements
[conformance test comment metadata]: #conformance-test-comment-metadata
[conformance test version skew policy]: #conformance-test-version-skew-policy
[testgrid conformance dashboard]: https://testgrid.k8s.io/conformance-all
[testgrid conformance README]: https://github.com/kubernetes/test-infra/blob/master/testgrid/conformance/README.md
[v1.9 conformance doc]: https://github.com/cncf/k8s-conformance/blob/master/docs/KubeConformance-1.9.md

The old, pre-CRI Docker integration was removed in 1.7.

## Specifications, design documents and proposals

The Kubernetes 1.5 [blog post on CRI](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
serves as a general introduction.

Recent Linux distros should work out-of-the-box.

macOS ships with outdated BSD-based tools. We recommend installing [macOS GNU
tools].

### rsync

The Kubernetes build system requires the `rsync` command to be present on the
development platform.

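A quick preflight check before building — an optional convenience, not part of the official build scripts, and `check_tool` is a made-up helper name:

```sh
# check_tool is a hypothetical helper: report whether a required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

check_tool rsync
```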
### etcd

Kubernetes maintains state in [`etcd`][etcd-latest], a distributed key store.

- [Building Kubernetes and Running the Tests](#building-kubernetes-and-running-the-tests)
- [Cleaning up](#cleaning-up)
- [Advanced testing](#advanced-testing)
- [Extracting a specific version of kubernetes](#extracting-a-specific-version-of-kubernetes)
- [Bringing up a cluster for testing](#bringing-up-a-cluster-for-testing)
- [Federation e2e tests](#federation-e2e-tests)

## Building Kubernetes and Running the Tests

There are a variety of ways to run e2e tests, but we aim to decrease the number
of ways to run e2e tests to a canonical way: `kubetest`.

You can install `kubetest` as follows:

```sh
go get -u k8s.io/test-infra/kubetest
```

You can run an end-to-end test which will bring up a master and nodes, perform
some tests, and then tear everything down. Make sure you have followed the
To build Kubernetes, up a cluster, run tests, and tear everything down, use:

```sh
kubetest --build --up --test --down
```

If you'd like to just perform one of these steps, here are some examples:

```sh
# Build binaries for testing
kubetest --build

# Create a fresh cluster. Deletes a cluster first, if it exists
kubetest --up

# Run all tests
kubetest --test

# Run tests matching the regex "\[Feature:Performance\]" against a local cluster
# Specify "--provider=local" flag when running the tests locally
kubetest --test --test_args="--ginkgo.focus=\[Feature:Performance\]" --provider=local

# Conversely, exclude tests that match the regex "Pods.*env"
kubetest --test --test_args="--ginkgo.skip=Pods.*env"

# Run tests in parallel, skip any that must be run serially
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Serial\]"

# Run tests in parallel, skip any that must be run serially and keep the test namespace if test failed
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false"

# Flags can be combined, and their actions will take place in this order:
# --build, --up, --test, --down
# You can also specify an alternative provider, such as 'aws'
#
# e.g.:
kubetest --provider=aws --build --up --test --down

# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
# cleaning up after a failed test or viewing logs.
# kubectl output is default on, you can use --verbose-commands=false to suppress output.
kubetest -ctl='get events'
kubetest -ctl='delete pod foobar'
```

The tests are built into a single binary which can be used to deploy a
Kubernetes system or run tests against an already-deployed Kubernetes system.
See `kubetest --help` (or the flag definitions in `hack/e2e.go`) for
more options, such as reusing an existing cluster.

### Cleaning up

something goes wrong and you still have some VMs running you can force a cleanup
with this command:

```sh
kubetest --down
```

## Advanced testing

### Extracting a specific version of kubernetes

The `kubetest` binary can download and extract a specific version of kubernetes,
There are a variety of values to pass this flag:

```sh
# Official builds: <ci|release>/<latest|stable>[-N.N]
kubetest --extract=ci/latest --up           # Deploy the latest ci build.
kubetest --extract=ci/latest-1.5 --up       # Deploy the latest 1.5 CI build.
kubetest --extract=release/latest --up      # Deploy the latest RC.
kubetest --extract=release/stable-1.5 --up  # Deploy the 1.5 release.

# A specific version:
kubetest --extract=v1.5.1 --up         # Deploy 1.5.1
kubetest --extract=v1.5.2-beta.0 --up  # Deploy 1.5.2-beta.0
kubetest --extract=gs://foo/bar --up   # --stage=gs://foo/bar

# Whatever GKE is using (gke, gke-staging, gke-test):
kubetest --extract=gke --up  # Deploy whatever GKE prod uses

# Using a GCI version:
kubetest --extract=gci/gci-canary --up   # Deploy the version for next gci release
kubetest --extract=gci/gci-57            # Deploy the version bound to gci m57
kubetest --extract=gci/gci-57/ci/latest  # Deploy the latest CI build using gci m57 for the VM image

# Reuse whatever is already built
kubetest --up          # Most common. Note, no extract flag
kubetest --build --up  # Most common. Note, no extract flag
kubetest --build --stage=gs://foo/bar --extract=local --up  # Extract the staged version
```

### Bringing up a cluster for testing

Next, specify the docker repository where your ci images will be pushed.

* Compile the binaries and build container images:

```sh
$ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true kubetest -build
```

* Push the federation container images

The following command will create the underlying Kubernetes clusters in each of
federation control plane in the cluster occupying the last zone in the `E2E_ZONES` list.

```sh
$ kubetest --up
```

#### Run the Tests

This will run only the `Feature:Federation` e2e tests. You can omit the `ginkgo.focus` argument to run the entire e2e suite.

```sh
$ kubetest --test --test_args="--ginkgo.focus=\[Feature:Federation\]"
```

#### Teardown

```sh
$ kubetest --down
```

#### Shortcuts for test developers

In order to run an E2E test against a locally running cluster, first make sure
to have a local build of the tests:

```sh
kubetest --build
```

Then point the tests at a custom host directly:

```sh
export KUBECONFIG=/path/to/kubeconfig
kubetest --provider=local --test
```

To control the tests that are run:

```sh
kubetest --provider=local --test --test_args="--ginkgo.focus=Secrets"
```

You will also likely need to specify `minStartupPods` to match the number of
nodes in your cluster. If you're testing against a cluster set up by
`local-up-cluster.sh`, you will need to do the following:

```sh
kubetest --provider=local --test --test_args="--minStartupPods=1 --ginkgo.focus=Secrets"
```

### Version-skewed and upgrade testing

```sh
export CLUSTER_API_VERSION=${OLD_VERSION}

# Deploy a cluster at the old version; see above for more details
cd ./kubernetes_old
kubetest --up

# Upgrade the cluster to the new version
#
#
# You can target Feature:MasterUpgrade or Feature:ClusterUpgrade
cd ../kubernetes
kubetest --provider=gke --test --check-version-skew=false --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\]"

# Run old tests with new kubectl
cd ../kubernetes_old
kubetest --provider=gke --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh"
```

If you are just testing version-skew, you may want to just deploy at one
upgrade process:

```sh
# Deploy a cluster at the new version
cd ./kubernetes
kubetest --up

# Run new tests with old kubectl
kubetest --test --test_args="--kubectl-path=$(pwd)/../kubernetes_old/cluster/kubectl.sh"

# Run old tests with new kubectl
cd ../kubernetes_old
kubetest --test --test_args="--kubectl-path=$(pwd)/../kubernetes/cluster/kubectl.sh"
```

#### Test jobs naming convention

suite, it receives a `[Feature:.+]` label, e.g. `[Feature:Performance]` or
`[Feature:Ingress]`. `[Feature:.+]` tests are not run in our core suites,
instead running in custom suites. If a feature is experimental or alpha and is
not enabled by default due to being incomplete or potentially subject to
breaking changes, it does *not* block PR merges, and thus should run in
some separate test suites owned by the feature owner(s)
(see [Continuous Integration](#continuous-integration) below).

- `[Conformance]`: Designate that this test is included in the Conformance
  test suite for [Conformance Testing](conformance-tests.md). This test must
  meet a number of [requirements](conformance-tests.md#conformance-test-requirements)
  to be eligible for this tag. This tag does not supersede any other labels.

- The following tags are not considered to be exhaustively applied, but are
  intended to further categorize existing `[Conformance]` tests, or tests that are
  being considered as candidates for promotion to `[Conformance]` as we work to
  refine requirements:
  - `[Privileged]`: This is a test that requires privileged access
  - `[Internet]`: This is a test that assumes access to the public internet
  - `[Deprecated]`: This is a test that exercises a deprecated feature
  - `[Alpha]`: This is a test that exercises an alpha feature
  - `[Beta]`: This is a test that exercises a beta feature

Every test should be owned by a [SIG](/sig-list.md),
and have a corresponding `[sig-<name>]` label.
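Because these bracketed tags are plain substrings in a ginkgo test's name, focus and skip regexes compose with simple alternation. The sketch below (a standalone illustration with a fabricated test name and a made-up `selects` helper — real selection happens via `--ginkgo.focus`/`--ginkgo.skip`) mimics a run that takes `[Conformance]` tests while excluding `[Privileged]` and `[Internet]` candidates:

```sh
# selects prints "selected" if a test name carries [Conformance] and none of
# the skip tags; "skipped" otherwise.
selects() {
  if echo "$1" | grep -qE '\[Conformance\]' &&
     ! echo "$1" | grep -qE '\[Privileged\]|\[Internet\]'; then
    echo "selected"
  else
    echo "skipped"
  fi
}

selects '[sig-node] Kubelet [Conformance] [Privileged] writes to host /tmp'  # skipped
selects '[sig-apps] Deployment [Conformance] rolling update'                 # selected
```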
### Conformance tests

For more information on Conformance tests, please see [Conformance Testing](conformance-tests.md).

## Continuous Integration

A quick overview of how we run e2e CI on Kubernetes.

### What is CI?

We run a battery of [release-blocking jobs](https://k8s-testgrid.appspot.com/sig-release-master-blocking)
against `HEAD` of the master branch on a continuous basis, and block merges
via [Tide](https://git.k8s.io/test-infra/prow/cmd/tide) on a subset of those
tests if they fail.

CI results can be found at [ci-test.k8s.io](http://ci-test.k8s.io), e.g.
[ci-test.k8s.io/kubernetes-e2e-gce/10594](http://ci-test.k8s.io/kubernetes-e2e-gce/10594).

If a behavior does not currently have coverage and a developer wishes to add a
new e2e test, navigate to the ./test/e2e directory and create a new test using
the existing suite as a guide.

**NOTE:** To build/run with tests in a new directory within ./test/e2e, add the
directory to the import list in ./test/e2e/e2e_test.go

TODO(#20357): Create a self-documented example which has been disabled, but can
be copied to create new tests and outlines the capabilities and libraries used.

contend for resources; see above about [kinds of tests](#kinds_of_tests).

Generally, a feature starts as `experimental`, and will be run in some suite
|
||||
owned by the team developing the feature. If a feature is in beta or GA, it
|
||||
*should* block the merge-queue. In moving from experimental to beta or GA, tests
|
||||
*should* block PR merges and releases. In moving from experimental to beta or GA, tests
|
||||
that are expected to pass by default should simply remove the `[Feature:.+]`
|
||||
label, and will be incorporated into our core suites. If tests are not expected
|
||||
to pass by default, (e.g. they require a special environment such as added
|
||||
quota,) they should remain with the `[Feature:.+]` label, and the suites that
|
||||
run them should be incorporated into the
|
||||
[munger config](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
|
||||
via the `jenkins-jobs` flag.
|
||||
quota,) they should remain with the `[Feature:.+]` label.
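Since the `[Feature:.+]` label is simply a bracketed tag embedded in a test's name, suites include or skip such tests by matching the name against a regular expression. A minimal Go sketch of that matching (the test names below are illustrative, not taken from the real suite):

```go
package main

import (
	"fmt"
	"regexp"
)

// featureRe matches the bracketed [Feature:Name] tags carried in e2e test names.
var featureRe = regexp.MustCompile(`\[Feature:([^\]]+)\]`)

// featureLabels returns every Feature tag embedded in a test name.
func featureLabels(testName string) []string {
	var labels []string
	for _, m := range featureRe.FindAllStringSubmatch(testName, -1) {
		labels = append(labels, m[1])
	}
	return labels
}

func main() {
	// Tagged test: stays out of the core suites until the label is removed.
	fmt.Println(featureLabels("[Feature:ExampleQuota] pods should respect added quota"))
	// Untagged test: runs in the core suites by default.
	fmt.Println(featureLabels("pods should run to completion"))
}
```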

Occasionally, we'll want to add tests to better exercise features that are
already GA. These tests also shouldn't go straight to CI. They should begin by
@@ -715,7 +713,7 @@ system to 30,50,100 pods per node and measures the different characteristics of
the system, such as throughput, API latency, etc.

For a good overview of how we analyze performance data, please read the
following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html)
following [post](https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/)

For developers who are interested in doing their own performance analysis, we
recommend setting up [prometheus](http://prometheus.io/) for data collection,
@@ -76,18 +76,16 @@ we have the following guidelines:
3. If you can reproduce it (or it's obvious from the logs what happened), you
   should then be able to fix it, or in the case where someone is clearly more
   qualified to fix it, reassign it with very clear instructions.
4. PRs that fix or help debug flakes may have the P0 priority set to get them
   through the merge queue as fast as possible.
5. Once you have made a change that you believe fixes a flake, it is conservative
4. Once you have made a change that you believe fixes a flake, it is conservative
   to keep the issue for the flake open and see if it manifests again after the
   change is merged.
6. If you can't reproduce a flake: __don't just close it!__ Every time a flake comes
5. If you can't reproduce a flake: __don't just close it!__ Every time a flake comes
   back, at least 2 hours of merge time is wasted. So we need to make monotonic
   progress towards narrowing it down every time a flake occurs. If you can't
   figure it out from the logs, add log messages that would have helped you figure
   it out. If you make changes to make a flake more reproducible, please link
   your pull request to the flake you're working on.
7. If a flake has been open, could not be reproduced, and has not manifested in
6. If a flake has been open, could not be reproduced, and has not manifested in
   3 months, it is reasonable to close the flake issue with a note saying
   why.
@@ -15,6 +15,19 @@ the tools.

This doc will focus on predictability and reproducibility.

## Justifications for an update

Before you update a dependency, take a moment to consider why it should be
updated. Valid reasons include:
1. We need new functionality that is in a later version.
2. New or improved APIs in the dependency significantly improve Kubernetes code.
3. Bugs were fixed that impact Kubernetes.
4. Security issues were fixed, even if they don't impact Kubernetes yet.
5. Performance, scale, or efficiency was meaningfully improved.
6. We need dependency A, and it has a transitive dependency B.
7. Kubernetes has an older level of a dependency that is precluding being able
   to work with other projects in the ecosystem.

## Theory of operation

The `go` toolchain assumes a global workspace that hosts all of your Go code.
@@ -23,8 +23,11 @@ The following conventions for the glog levels to use.
  * Scheduler log messages
* glog.V(3) - Extended information about changes
  * More info about system state changes
* glog.V(4) - Debug level verbosity (for now)
* glog.V(4) - Debug level verbosity
  * Logging in particularly thorny parts of code where you may want to come back later and check it
* glog.V(5) - Trace level verbosity
  * Context to understand the steps leading up to errors and warnings
  * More information for troubleshooting reported issues

As per the comments, the practical default level is V(2). Developers and QE
environments may wish to run at V(3) or V(4). If you wish to change the log

@@ -26,7 +26,7 @@ Search for the above job names in various configuration files as below:

* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml
* Job specific config: https://git.k8s.io/test-infra/jobs/env

### Results
@@ -75,7 +75,7 @@ Search for the above job names in various configuration files as below:

* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml
* Job specific config: https://git.k8s.io/test-infra/jobs/env

### Results
@@ -4,7 +4,7 @@ This document is focused on Kubernetes developers and contributors
who need to create a feature, issue, or pull request which targets a specific
release milestone.

- [TL;DR](#tl-dr)
- [TL;DR](#tldr)
- [Definitions](#definitions)
- [The Release Cycle](#the-release-cycle)
- [Removal Of Items From The Milestone](#removal-of-items-from-the-milestone)
@@ -56,7 +56,7 @@ If you want your PR to get merged, it needs the following required labels and mi
<td>Required Labels</td>
<td>
<ul>
<li>/milestone {v1.y}</li>
<!--Weeks 1-8-->
<li>/sig {name}</li>
<li>/kind {type}</li>
<li>/lgtm</li>

@@ -65,6 +65,7 @@ If you want your PR to get merged, it needs the following required labels and mi
</td>
<td>
<ul>
<!--Week 9-->
<li>/milestone {v1.y}</li>
<li>/sig {name}</li>
<li>/kind {type}</li>

@@ -75,6 +76,7 @@ If you want your PR to get merged, it needs the following required labels and mi
</td>
<td>
<ul>
<!--Weeks 10-12-->
<li>/milestone {v1.y}</li>
<li>/sig {name}</li>
<li>/kind {bug, failing-test}</li>

@@ -84,15 +86,19 @@ If you want your PR to get merged, it needs the following required labels and mi
</ul>
</td>
<td>
<!--Weeks 12+-->
Return to 'Normal Dev' phase requirements:
<ul>
<li>/milestone {v1.y}</li>
<li>/sig {name}</li>
<li>/kind {bug, failing-test}</li>
<li>/priority critical-urgent</li>
<li>/kind {type}</li>
<li>/lgtm</li>
<li>/approved</li>
<li>cherry-pick labels set by release branch patch manager</li>
</ul>

Merges into the 1.y branch are now [via cherrypicks](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md), approved by release branch manager.
</td>
<td>
<ul>
</td>
</tr>
</table>
@@ -138,11 +144,11 @@ project is always stable so that individual commits can be
flagged as having broken something.

With ongoing feature definition through the year, some set of items
will bubble up as targeting a given release. The **feature freeze**
will bubble up as targeting a given release. The **enhancement freeze**
starts ~4 weeks into the release cycle. By this point all intended
feature work for the given release has been defined in suitable
planning artifacts in conjunction with the Release Team's [features
lead](https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks/features).
planning artifacts in conjunction with the Release Team's [enhancements
lead](https://git.k8s.io/sig-release/release-team/role-handbooks/enhancements/README.md).

Implementation and bugfixing is ongoing across the cycle, but
culminates in a code slush and code freeze period:
@@ -229,11 +235,11 @@ milestone by creating GitHub issues and marking them with the Prow "/milestone"
command.

For the first ~4 weeks into the release cycle, the release team's
Features Lead will interact with SIGs and feature owners via GitHub,
Enhancements Lead will interact with SIGs and feature owners via GitHub,
Slack, and SIG meetings to capture all required planning artifacts.

If you have a feature to target for an upcoming release milestone, begin a
conversation with your SIG leadership and with that release's Features
conversation with your SIG leadership and with that release's Enhancements
Lead.

### Issue additions
@@ -266,6 +272,8 @@ described above.

## Other Required Labels

*Note:* [Here is the list of labels and their use and purpose.](https://git.k8s.io/test-infra/label_sync/labels.md#labels-that-apply-to-all-repos-for-both-issues-and-prs)

### SIG Owner Label

The SIG owner label defines the SIG to which we escalate if a
@@ -297,20 +305,21 @@ of the issue.
  - even less urgent / critical than `priority/important-soon`
  - moved out of milestone more aggressively than `priority/important-soon`

### Issue Kind Label
### Issue/PR Kind Label

The issue kind is used to help identify the types of changes going
into the release over time. This may allow the release team to
develop a better understanding of what sorts of issues we would
miss with a faster release cadence.

This may also be used to escalate to the correct SIG GitHub team.
For release targeted issues, one of the following issue types must be
set (additional kinds may also be set):
For release targeted issues, including pull requests, one of the following
issue kind labels must be set:

- `kind/api-change`: Adds, removes, or changes an API
- `kind/bug`: Fixes a newly discovered bug.
  - were not known issues at the start of the development period.
- `kind/feature`: New functionality.
- `kind/cleanup`: Adding tests, refactoring, fixing old bugs.
- `kind/design`: Related to design
- `kind/documentation`: Adds documentation
- `kind/failing-test`: CI test case is failing consistently.
- `kind/feature`: New functionality.
- `kind/flake`: CI test case is showing intermittent failures.
@@ -2,8 +2,12 @@

Running Kubernetes with Vagrant is an easy way to run/test/develop on your
local machine in an environment using the same setup procedures when running on
GCE or AWS cloud providers. This provider is not tested on a per PR basis; if
you experience bugs when testing from HEAD, please open an issue.
GCE or AWS cloud providers.

Note: Support for Vagrant has been removed in 1.10. Check
[#58118](https://github.com/kubernetes/kubernetes/pull/58118) and
[#64561](https://github.com/kubernetes/kubernetes/issues/64561#issuecomment-394366611).
You might run into issues with Kubernetes versions >= 1.10.

### Prerequisites
@@ -10,7 +10,7 @@ designing, writing and debugging your end-to-end tests. In
particular, "flaky" tests, which pass most of the time but fail
intermittently for difficult-to-diagnose reasons, are extremely costly
in terms of blurring our regression signals and slowing down our
automated merge queue. Up-front time and effort designing your test
automated merge velocity. Up-front time and effort designing your test
to be reliable is very well spent. Bear in mind that we have hundreds
of tests, each running in dozens of different environments, and if any
test in any test environment fails, we have to assume that we
@@ -61,7 +61,7 @@ making the assumption that your test can run a pod on every node in a
cluster is not a safe assumption, as some other tests, running at the
same time as yours, might have saturated one or more nodes in the
cluster. Similarly, running a pod in the system namespace, and
assuming that that will increase the count of pods in the system
assuming that will increase the count of pods in the system
namespace by one is not safe, as some other test might be creating or
deleting pods in the system namespace at the same time as your test.
If you do legitimately need to write a test like that, make sure to
@@ -247,7 +247,7 @@ If you're looking to run e2e tests on your own infrastructure, [kubetest](https:

## Issues Management or Triage

Have you ever noticed the total number of [open issues](https://issues.k8s.io)?
Helping to manage or triage these open issues can be a great contributionand a great opportunity to learn about the various areas of the project.
Helping to manage or triage these open issues can be a great contribution and a great opportunity to learn about the various areas of the project. Triaging is the word we use to describe the process of adding multiple types of descriptive labels to GitHub issues, in order to speed up routing issues to the right folks.
Refer to the [Issue Triage Guidelines](/contributors/guide/issue-triage.md) for more information.

# Community
@@ -273,7 +273,7 @@ You may also contact Paris Pittman via direct message on Kubernetes Slack (@pari

## Mentorship

Please learn about our mentoring initiatives [here](http://git.k8s.io/community/mentoring/README.md).

Feel free to ask us anything during one of our [Meet Our Contributors](https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md) sessions to connect with us.

# Advanced Topics

This section includes things that need to be documented, but typical contributors do not need to interact with regularly.
@@ -15,12 +15,12 @@ A list of common resources when contributing to Kubernetes.

## Workflow

- [Gubernator Dashboard - k8s.reviews](https://k8s-gubernator.appspot.com/pr)
- [Submit Queue](https://submit-queue.k8s.io)
- [Tide](https://prow.k8s.io/tide)
- [Bot commands](https://go.k8s.io/bot-commands)
- [GitHub labels](https://go.k8s.io/github-labels)
- [Release Buckets](https://gcsweb.k8s.io/gcs/kubernetes-release/)
- Developer Guide
  - [Cherry Picking Guide](/contributors/devel/cherry-picks.md) - [Queue](https://cherrypick.k8s.io/#/queue)
  - [Cherry Picking Guide](/contributors/devel/cherry-picks.md)
  - [Kubernetes Code Search](https://cs.k8s.io/), maintained by [@dims](https://github.com/dims)
@@ -52,6 +52,7 @@ A list of common resources when contributing to Kubernetes.
- steering@kubernetes.io - Mail the steering committee. Public address with public archive.
- steering-private@kubernetes.io - Mail the steering committee privately, for sensitive items.
- helpdesk@rt.linuxfoundation.org - Mail the LF helpdesk for help with CLA issues.
- conduct@kubernetes.io - Contact the Code of Conduct committee, private mailing list.

## Other
@@ -14,7 +14,7 @@ and in that case simply add a comment in the issue with your findings.

Following are a few predetermined searches on issues, for convenience:
* [Longest untriaged issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Acreated-asc) (sorted by age)
* [Needs to be assigned to a SIG](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Acreated-asc)
* [Needs to be assigned to a SIG](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+label%3Aneeds-sig)
* [Newest incoming issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue)
* [Busy untriaged issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Acomments-desc) (sorted by number of comments)
* [Issues that need more attention](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Acomments-asc)

@@ -233,3 +233,20 @@ not removed with the `/remove-lifecycle stale` label or prevented with the
for more details. It is fine to add any of the `triage/*` labels described in
these issue triage guidelines to issues triaged by `fejta-bot`, to aid
understanding and the eventual closing of the issue.

## Help Wanted issues

We use two labels [help wanted](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
and [good first issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
to identify issues that have been specially groomed for new contributors.

We have specific [guidelines](/contributors/devel/help-wanted.md)
for how to use these labels. If you see an issue that satisfies these
guidelines, you can add the `help wanted` label with the `/help` command
and the `good first issue` label with the `/good-first-issue` command.
Please note that adding the `good first issue` label will also automatically
add the `help wanted` label.

If an issue has these labels but does not satisfy the guidelines, please
ask for more details to be added to the issue or remove the labels using
the `/remove-help` or `/remove-good-first-issue` commands.
@@ -73,7 +73,7 @@ These are roles that are important to each and every SIG within the Kubernetes p
- Editing PR text: release note, statement
- Events
  - Organizing/helping run Face-to-Face meetings for SIGs/WGs/subprojects
  - Putting together SIG Intros & Deep-dives for Kubecon
  - Putting together SIG Intros & Deep-dives for KubeCon/CloudNativeCon

#### Non-Code Tasks in Primarily-Code roles
These are roles that are not code-based, but require knowledge of either general coding, or specific domain knowledge of the Kubernetes code base.
@@ -84,7 +84,7 @@ filters:

Instead, set a `.*` key inside `filters` (as shown in the previous example).

**WARNING**: The `approve` plugin [does not currently respect `filters`][test-infra-7690].
Until that is fixed, `filters` should only be used for the the `labels` key (as shown in the above example).
Until that is fixed, `filters` should only be used for the `labels` key (as shown in the above example).
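As a concrete illustration, a hypothetical `OWNERS` fragment (the label name is illustrative) restricting `filters` to a `.*` key that carries only `labels`, per the warning above, would look like:

```yaml
# Hypothetical OWNERS fragment: a ".*" key inside `filters`, used here
# only for the `labels` key as the warning advises.
filters:
  ".*":
    labels:
    - sig/example
```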

### OWNERS_ALIASES
@@ -209,21 +209,15 @@ is the state of today.

## Automation using OWNERS files

### ~[`mungegithub`](https://git.k8s.io/test-infra/mungegithub)~ is deprecated
Kubernetes uses the Prow Blunderbuss plugin and Tide.
Tide uses GitHub queries to select PRs into “tide pools”, runs as many in a
batch as it can (“tide comes in”), and merges them (“tide goes out”).

Mungegithub's blunderbuss and submit-queue mungers are currently used for kubernetes/kubernetes. Their
equivalents are the prow blunderbuss plugin, and prow's tide cmd. These docs will be removed once
kubernetes/kubernetes has transitioned over to tide.

~Mungegithub polls GitHub, and "munges" things it finds, including issues and pull requests. It is
stateful, in that restarting it means it loses track of which things it has munged at what time.~

- ~[munger:
  blunderbuss](https://git.k8s.io/test-infra/mungegithub/mungers/blunderbuss.go)~
  - ~responsible for determining **reviewers** and assigning to them~
- [munger:
  submit-queue](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
  - responsible for merging PR's
- [Blunderbuss plugin](https://git.k8s.io/test-infra/prow/plugins/blunderbuss):
  - responsible for determining **reviewers**
- [Tide](https://git.k8s.io/test-infra/prow/cmd/tide):
  - responsible for automatically running batch tests and merging multiple PRs together whenever possible.
  - responsible for retriggering stale PR tests.
  - responsible for updating a GitHub status check explaining why a PR can't be merged (e.g. a
    missing `lgtm` or `approved` label)
@@ -247,7 +241,7 @@ pieces of prow are used to implement the code review process above.
- [plugin: assign](https://git.k8s.io/test-infra/prow/plugins/assign)
  - assigns GitHub users in response to `/assign` comments on a PR
  - unassigns GitHub users in response to `/unassign` comments on a PR
- [plugin: approve](https://git.k8s.io/test-infra/prow/plugins/assign)
- [plugin: approve](https://git.k8s.io/test-infra/prow/plugins/approve)
  - per-repo configuration:
    - `issue_required`: defaults to `false`; when `true`, require that the PR description link to
      an issue, or that at least one **approver** issues a `/approve no-issue`

@@ -257,7 +251,7 @@ pieces of prow are used to implement the code review process above.
      OWNERS files has `/approve`'d
  - comments as required OWNERS files are satisfied
  - removes outdated approval status comments
- [plugin: blunderbuss](https://git.k8s.io/test-infra/prow/plugins/assign)
- [plugin: blunderbuss](https://git.k8s.io/test-infra/prow/plugins/blunderbuss)
  - determines **reviewers** and requests their reviews on PR's
- [plugin: lgtm](https://git.k8s.io/test-infra/prow/plugins/lgtm)
  - adds the `lgtm` label when a **reviewer** comments `/lgtm` on a PR
@@ -72,34 +72,41 @@ Here's the process the pull request goes through on its way from submission to m

1. If you're **not** a member of the Kubernetes organization, a Reviewer/Kubernetes Member checks that the pull request is safe to test. If so, they comment `/ok-to-test`. Pull requests by Kubernetes organization [members](/community-membership.md) do not need this step. Now the pull request is considered to be trusted, and the pre-submit tests will run:

    1. Automatic tests run. See the current list of tests on the [MERGE REQUIREMENTS tab, at this link](https://submit-queue.k8s.io/#/info)
    1. Automatic tests run. See the current list of tests at this [link](https://prow.k8s.io/?repo=kubernetes%2Fkubernetes&type=presubmit)
    1. If tests fail, resolve issues by pushing edits to your pull request branch
    1. If the failure is a flake, anyone on trusted pull requests can comment `/retest` to rerun failed tests

1. Reviewer suggests edits
1. Push edits to your pull request branch
1. Repeat the prior two steps as needed until reviewer(s) add `/lgtm` label
1. Repeat the prior two steps as needed until reviewer(s) add the `/lgtm` label. The `/lgtm` label, when applied by someone listed as a `reviewer` in the corresponding project `OWNERS` file, is a signal that the code has passed review from one or more trusted reviewers for that project.
1. (Optional) Some reviewers prefer that you squash commits at this step
1. Follow the bot suggestions to assign an OWNER who will add the `/approve` label to the pull request
1. Follow the bot suggestions to assign an OWNER who will add the `/approve` label to the pull request. The `/approve` label, when applied by someone listed as an `approver` in the corresponding project `OWNERS` file, is a signal that the code has passed final review and is ready to be automatically merged.

Once the tests pass, all failures are commented as flakes, or the reviewer adds the labels `/lgtm` and `/approved`, the pull request enters the final merge queue. The merge queue is needed to make sure no incompatible changes have been introduced by other pull requests since the tests were last run on your pull request.
The behavior of Prow is configurable across projects. You should be aware of the following configurable behaviors.

* If you are listed as an `approver` in the `OWNERS` file, an implicit `/approve` can be applied to your pull request. This can result in a merge being triggered by a `/lgtm` label. This is the configured behavior in many projects, including `kubernetes/kubernetes`. You can remove the implicit `/approve` with `/approve cancel`.
* `/lgtm` can be configured so that a `/lgtm` from someone listed as both a `reviewer` and an `approver` will cause both labels to be applied. For `kubernetes/kubernetes` and many other projects this is _not_ the default behavior, and `/lgtm` is decoupled from `/approve`.

Once the tests pass and the reviewer adds the `lgtm` and `approved` labels, the pull request enters the final merge pool. The merge pool is needed to make sure no incompatible changes have been introduced by other pull requests since the tests were last run on your pull request.
<!-- TODO: create parallel instructions for reviewers -->

The [GitHub "munger"](https://git.k8s.io/test-infra/mungegithub) submit-queue plugin will manage the merge queue automatically.
[Tide](https://git.k8s.io/test-infra/prow/cmd/tide) will manage the merge pool
automatically. It uses GitHub queries to select PRs into “tide pools”,
runs as many in a batch as it can (“tide comes in”), and merges them (“tide goes out”).

1. The pull request enters the merge queue ([http://submit-queue.k8s.io](http://submit-queue.k8s.io))
1. The merge queue triggers a test re-run with the comment `/test all [submit-queue is verifying that this pull request is safe to merge]`
1. Author has signed the CLA (`cncf-cla: yes` label added to pull request)
1. No changes made since last `lgtm` label applied
1. The pull request enters the [merge pool](https://prow.k8s.io/tide)
if the merge criteria are met. The [PR dashboard](https://prow.k8s.io/pr) shows
the difference between your PR's state and the merge criteria so that you can
easily see all criteria that are not being met and address them.
1. If tests fail, resolve issues by pushing edits to your pull request branch
1. If the failure is a flake, anyone can comment `/retest` if the pull request is trusted
1. If tests pass, the merge queue automatically merges the pull request
1. If tests pass, Tide automatically merges the pull request

That's the last step. Your pull request is now merged.

## Marking Unfinished Pull Requests

If you want to solicit reviews before the implementation of your pull request is complete, you should hold your pull request to ensure that the merge queue does not pick it up and attempt to merge it. There are two methods to achieve this:
If you want to solicit reviews before the implementation of your pull request is complete, you should hold your pull request to ensure that Tide does not pick it up and attempt to merge it. There are two methods to achieve this:

1. You may add the `/hold` or `/hold cancel` comment commands
2. You may add or remove a `WIP` or `[WIP]` prefix to your pull request title
@@ -22,14 +22,12 @@ For pull requests that require additional action from users switching to the new
action required: your release note here
```

For pull requests that don't need to be mentioned at release time, just write "NONE" (case insensitive):
For pull requests that don't need to be mentioned at release time, use the `/release-note-none` Prow command to add the `release-note-none` label to the PR. You can also write the string "NONE" as a release note in your PR description:

```release-note
NONE
```
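For comparison, a pull request that *should* appear in the changelog carries the note text itself in the block; a hypothetical example (the note wording below is illustrative only):

```release-note
Fixed an example bug in volume cleanup so pods terminate promptly (hypothetical note).
```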

The `/release-note-none` comment command can still be used as an alternative to writing "NONE" in the release-note block if it is left empty.
To see how to format your release notes, view the kubernetes/kubernetes [pull request template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. Pull Request titles and body comments can be modified at any time prior to the release to make them friendly for release notes.

To see how to format your release notes, view the kubernetes/kubernetes [pull request template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. Pull request titles and body comments can be modified at any time prior to the release to make them friendly for release notes.

Release notes apply to pull requests on the master branch. For cherry-pick pull requests, see the [cherry-pick instructions](contributors/devel/cherry-picks.md). The only exception to these rules is when a pull request is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master pull request.
@@ -1,6 +1,6 @@
This is an announcement for the 2017 Kubernetes Leadership Summit, which will occur on June 2nd, 2017 in San Jose, CA.
This event will be similar to the [Kubernetes Developer's Summit](/events/2016/developer-summit-2016/Kubernetes_Dev_Summit.md) in November
2016, but involving a smaller smaller audience comprised solely of leaders and influencers of the community. These leaders and
2016, but involving a smaller audience comprised solely of leaders and influencers of the community. These leaders and
influencers include the SIG leads, release managers, and representatives from several companies, including (but not limited to)
Google, Red Hat, CoreOS, WeaveWorks, Deis, and Mirantis.
@@ -61,7 +61,7 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
- jdumars: the vault provider thing was one of the better things that happened, it pushed us at MS to think about genericizing the solution, it pushed us to think about what's better for the community vs. what's better for the provider
- jdumars: flipside is we need to have a process where people can come up with a well accepted / adopted solution, the vault provider thing was one way of doing that
- lavalamp: I tend to think that most extension points are special snowflakes and you can't have a generic process for adding a new extension point
- thockin: wandering back to kubernetes/kubrnetes "main point", looking at staging as "already broken out", are there other ones that we want to break out?
- thockin: wandering back to kubernetes/kubernetes "main point", looking at staging as "already broken out", are there other ones that we want to break out?
- dims: kubeadm could move out if needed, could move it to staging for sure
- thockin: so what about the rest? e.g. kubelet, kube-proxy... do we think that people will concretely get benefits from that? or will that cause more pain
- thockin: we recognize this will slow down things
@@ -1,4 +1,4 @@
|
|||
Contributor summit - Kubecon 2017
|
||||
Contributor summit - KubeCon/CloudNativeCon 2017
|
||||
|
||||
**@AUTHORS - CONNOR DOYLE**
|
||||
|
||||
|
|
|
@@ -10,7 +10,7 @@ We had a meeting, and the two big items we'd been pushing on hard were:
|
|||
* voting in the proposals in the bootstrap committee
|
||||
* how we're going to handle incubator and contrib etc.
|
||||
|
||||
Incubator/contrib: one of our big concerns are what the the consequences for projects and ecosystems.
|
||||
Incubator/contrib: one of our big concerns are what the consequences for projects and ecosystems.
|
||||
We're still discussing it, please be patient. In the process of solving the incubator process, we have to answer
|
||||
what is kubernetes, which is probably SIGs, but what's a SIG, and who decides, and ... we end up having to examine
|
||||
everything. In terms of deciding what is and isn't kubernetes, we want to have that discussion in the open.
|
||||
|
|
|
@@ -17,7 +17,7 @@ In some sense, the summit is a real-life extension of the community meetings and
|
|||
|
||||
## When and Where
|
||||
|
||||
- Tuesday, May 1, 2018 (before Kubecon EU)
|
||||
- Tuesday, May 1, 2018 (before KubeCon/CloudNativeCon EU)
|
||||
- Bella Center, Copenhagen, Denmark
|
||||
- Registration and breakfast start at 8am in Room C1-M0
|
||||
- Happy hour reception onsite to close at 5:30pm
|
||||
|
@@ -58,7 +58,7 @@ There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-sum
|
|||
| 7:00 | EmpowerHER event (offsite) |
|
||||
|
||||
- SIG Updates (~5 minutes per SIG)
|
||||
- 2 slides per SIG, focused on cross-SIG issues, not internal SIG discussions (those are for Kubecon)
|
||||
- 2 slides per SIG, focused on cross-SIG issues, not internal SIG discussions (those are for KubeCon/CloudNativeCon)
|
||||
- Identify potential issues that might affect multiple SIGs across the project
|
||||
- One-to-many announcements about changes a SIG expects that might affect others
|
||||
- Track Leads
|
||||
|
@@ -68,6 +68,6 @@ There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-sum
|
|||
|
||||
## Misc:
|
||||
|
||||
A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.
|
||||
A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon/CloudNativeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.
|
||||
|
||||
Further details to be updated on this doc. Please check back for a complete guide.
|
||||
|
|
|
@@ -1,4 +1,4 @@
|
|||
# Kubernetes New Contributor Workshop - KubeCon EU 2018 - Notes
|
||||
# Kubernetes New Contributor Workshop - KubeCon/CloudNativeCon EU 2018 - Notes
|
||||
|
||||
Joining in the beginning was like onboarding onto a yacht.
|
||||
Now it is more like onboarding onto a BIG cruise ship.
|
||||
|
@@ -110,7 +110,7 @@ Everything will be refactored (cleaning, move, merged,...)
|
|||
|
||||
### Project
|
||||
|
||||
- [kubernetes/Community](https://github.com/kubernetes/Community): Kubecon, proposition, Code of conduct and Contribution guideline, SIG-list
|
||||
- [kubernetes/Community](https://github.com/kubernetes/Community): KubeCon/CloudNativeCon, proposition, Code of conduct and Contribution guideline, SIG-list
|
||||
- [kubernetes/Features](https://github.com/kubernetes/Features): Features proposal for future release
|
||||
- [kubernetes/Steering](https://github.com/kubernetes/Steering)
|
||||
- [kubernetes/Test-Infra](https://github.com/kubernetes/Test-Infra): All related to test except Perf
|
||||
|
|
|
@@ -11,68 +11,152 @@ In some sense, the summit is a real-life extension of the community meetings and
|
|||
|
||||
## Registration
|
||||
|
||||
- [Form to pick tracks and RSVP for the Sunday evening event](https://goo.gl/X8YrRv)
|
||||
- If you are planning on attending the New Contributor Track, [Sign the CLA](/CLA.md) if you have not done so already.
|
||||
The event is now full and is accepting a wait list on the below form. If you are a SIG/WG Chair, Tech Lead, or Subproject Owner, please reach out to community@kubernetes.io after filling out the wait list form.
|
||||
|
||||
This is not your KubeCon/CloudNativeCon ticket. You will need to register for the conference separately.
|
||||
- [RSVP/Wait List Form](https://goo.gl/X8YrRv)
|
||||
- If you are planning on attending the New Contributor Track, [Sign the CLA](/CLA.md) if you have not done so already.
|
||||
|
||||
This is not your KubeCon/CloudNativeCon ticket. This is a co-located event. KC/CNC is currently sold out; however, you do not need a KC/CNC ticket to attend this event.
|
||||
|
||||
## When and Where
|
||||
|
||||
- Day 1: Optional pre-summit social
|
||||
- Sunday, Dec 9th from 5-8PM
|
||||
- Garage, 1130 Broadway Seattle, WA 98122
|
||||
- Day 2: Contributor Summit
|
||||
- Day 2: Contributor Summit
|
||||
- Monday, Dec 10th, 2018 from 8AM-530PM
|
||||
- 6th Floor, Washington State Convention Center, Seattle, WA
|
||||
(Signage will be present)
|
||||
|
||||
There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-summit) (#contributor-summit) for you to use before and during the summit. Look here for volunteer opportunities and content updates. Feel free to pass URLs, notes, reserve the hallway track room, and connect with the organizers.
|
||||
### Badge pick up
|
||||
If you are not attending KubeCon/CnC but attending this event, please reach out to community@kubernetes.io for a separate process.
|
||||
|
||||
You will need your KubeCon/CnC badge to get into Sunday and Monday events. Badge locations:
|
||||
- participating hotels (TBA)
|
||||
- atrium on the 4th floor of the Washington Convention Center
|
||||
- Garage on Sunday night (convenient!)
|
||||
|
||||
## Agenda
|
||||
|
||||
Day 1 - Garage
|
||||
**Day 1 - [Garage](https://www.garagebilliards.com/)**
|
||||
|
||||
Dinner will be served. We will update with menu as we get closer to the event. Beer, wine, and nonalcoholic beverages available.
|
||||
Attendees will have access to bowl, play pool/billiards, kubernetes trivia, and socializing with other contributors.
|
||||
We will publish the dinner menu two weeks before the event but will have options for vegetarian, vegan, and gluten free. Beer, wine, and nonalcoholic beverages available.
|
||||
|
||||
Day 2 - Washington Convention Center
|
||||
- New Contributor Track / Workshop - A half day workshop aimed at getting new and first time contributors on boarded and comfortable with working within the Kubernetes Community. Staying for the duration is required; this is not a workshop you can drop into.
|
||||
- Current Contributor Track - talks, workshops, birds of a feather, unconference sessions, steering committee updates, and more!
|
||||
- Docs Sprint - Working on a curated list of issues and challenges that SIG Docs is tackling at that time.
|
||||
What to expect: Attendees will be able to bowl, play pool/billiards, suggest unconference sessions for the next day, and socialize with other contributors.
|
||||
|
||||
### Schedule (Draft)
|
||||
**Day 2 - Washington Convention Center**
|
||||
- New Contributor Track / Workshop -
|
||||
- A half-day workshop aimed at getting new and first-time contributors onboarded and comfortable with working within the Kubernetes Community. Staying for the duration is required; this is not a workshop you can drop into. (Capacity: 100)
|
||||
- Current Contributor Track -
|
||||
- talks, workshops, birds of a feather, unconference sessions, steering committee updates, and more!
|
||||
- Docs Planning Session -
|
||||
- Working on a curated list of issues and challenges that SIG Docs is tackling at that time. (Current+Doc Capacity: 300)
|
||||
|
||||
| Time | Main Track | New Contributor Summit | Docs Sprint | Track #1 | Track #2 | Track #3 | Track #4 | Contributor Lounge |
|
||||
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
|
||||
| **Room** | 608/609 | 602/603/604 | 613 | 606 | 607 | 605 | 611 | 610 |
|
||||
| 8:00am | Breakfast and Registration, Unconference Voting Board Opens |
|
||||
| 9:00am | Welcome & Details |
|
||||
| 10:00am | Technical Vision for Kubernetes| Where to Contribute | Docs Sprint
|
||||
| 10:30am | State of Networking | Where to Communicate |
|
||||
| 10:55am | 10 minute break
|
||||
| 11:05am | State of KEPs | OWNERS Files |
|
||||
| 11:30am | State of Cluster Lifecycle | Github Workflow
|
||||
| 12:00pm | Lunch + Unconference Voting |
|
||||
| 1:00pm | | Pull Request Practice | Docs Sprint | State of Developer Experience | KEP BoF | Networking BoF | Unconference Slot |
|
||||
| 1:50pm | 10 Minute Break |
|
||||
| 2:00pm | API Codebase Tour - @sttts | Testrgid tour, docs, membership | | Cluster lifecycle BoF | Release Management | Unconference Slot | Unconference Slot |
|
||||
| 2:50pm | 10 Minute Break |
|
||||
| 3:00pm | Live API Code Review - @lavalamp | SIG Meet and Greet | | Chair/TL Training | State of Security - @tallclair | etcd Maintainers Ask Us Anything | Unconference Slot |
|
||||
| 3:50pm | 10 Minute Break |
|
||||
| 4:00pm | Steering Committee Q+A |
|
||||
| 4:45pm | Conlusion and Wrap-Up |
|
||||
### Monday Schedule
|
||||
**The schedule will be posted to the main KubeCon/CloudNativeCon site on Monday, November 26th with more details about each of the sessions, birds of a feather statements, and more.**
|
||||
|
||||
**Morning**
|
||||
|
||||
08:00am
|
||||
- Breakfast and Registration
|
||||
- Last chance to turn in unconference voting session card. [pick up a card at the summit registration tables at Garage or 6th floor of Convention Center]
|
||||
- Add your +1 to cards already on the unconference board - start voting!
|
||||
|
||||
| Time | Main Track | Planning & Training | Contributor Lounge | New Contributor Workshop |
|
||||
| --- | :---: | :---: | :---: | :---: |
|
||||
| **Room** | 608/609 | 613 | 610 | 602/603/604 |
|
||||
| 08:00am | - | - | Open Space** | - |
|
||||
| 09:00am | Welcome & Details (all tracks) | - | | - |
|
||||
| 09:30am | Technical Vision for Kubernetes w/ Architecture's @bgrant0607 | - | | - |
|
||||
| 10:00am | State of Security w/ Auth's @tallclair @cjcullen | Doc Planning (open to all) | | Where to Contribute |
|
||||
| 10:30am | State of Networking w/ @thockin | - | | Where to Communicate |
|
||||
| 10:55am | 15 minute break | - | | |
|
||||
| 11:10am | State of KEPs w/ PM's @justaugustus | - | | OWNERS Files |
|
||||
| 11:30am | State of Cluster Lifecycle w/ @timothysc | - | | Github Workflow |
|
||||
| 12:05pm | Lunch Break | + | Last call for Unconference Voting* | - |
|
||||
|
||||
|
||||
**Afternoon**
|
||||
|
||||
| Time | Main Track | Planning & Training | Track #1 | Track #2 | Track #3 | Track #4 | Contributor Lounge | New Contributor Workshop |
|
||||
| --- | :---: | :---: | :---: | :---: | :---: |:---: | :---: | :---: |
|
||||
| **Room** | 608/609 [theater] | 613 [pods] | 606 [fishbowl] | 607 [fishbowl] | 605 [fishbowl] | 611 [fishbowl] | 610 [pods] | 602/603/604 [round classroom] |
|
||||
| 1:00pm | Automation and CI w/Testing and ContribEx @spiffxp @cblecker [50 mins] | Docs Planning [2 hrs; open to all] | Networking BoF [50 mins] | KEP BoF [50 mins] | *Unconference Slot [50 mins] | - | Open Space | Pull Request Practice |
|
||||
| 1:50pm | 10 Minute Break | - | - | - | - | - | - | |
|
||||
| 2:00pm | API Codebase Tour - @sttts [50 mins] | " | Cluster lifecycle BoF [50 mins] | etcd Maintainers Ask Us Anything [50 mins] | - | *Unconference Slot [50 mins] | | Testgrid tour, docs, membership |
|
||||
| 2:30pm | " | SIG/WG Chair/TL Training [1.5 hrs] | " | " | - | " | | - |
|
||||
| 2:50pm | 10 Minute Break | " | - | - | - | - | - | |
|
||||
| 3:00pm | Live API Code Review - @lavalamp | " | Long Term Support (WG LTS) - @tpepper [50 mins] | UI Dashboard Planning Discussion [50 mins] | *Unconference Slot [50 mins] | *Unconference Slot [50 mins] | | SIG Meet and Greet |
|
||||
| 3:50pm | 10 Minute Break | - | - | - | - | - | - | |
|
||||
| 4:00pm | Steering Committee Q+A | - | - | - | - | - | - | |
|
||||
| 4:45pm | Wrap-Up and Group Pic! | - | - | - | - | - | - | - |
|
||||
|
||||
`*` = The unconference slots are filled by you: suggest a topic and/or vote on another. Instructions on how to participate will be posted here closer to the event.
|
||||
** = Want to have a quick SIG meeting? Pair program? Get some work done? Use this space! No reservation required.
|
||||
|
||||
|
||||
*Evening*
|
||||
|
||||
| Time | Speaker & Title | Room |
|
||||
| --- | :----------: | :----------: |
|
||||
| 5:10pm | CRDs aren't just for add-ons anymore, Tim Hockin | Ballroom 6ABC |
|
||||
| 5:20pm | Kubernetes Release Notes Tips and Tricks, Mike Arpaia | Ballroom 6ABC |
|
||||
| 6:40pm | Kubernetes Community, a Story told through emojis and slack data, Paris Pittman | Ballroom 6ABC |
|
||||
|
||||
### Tuesday-Thursday Schedule
|
||||
Contributor content flows in KubeCon
|
||||
|
||||
*Tuesday*
|
||||
|
||||
| Time | Speaker & Title | Room |
|
||||
| --- | :----------: | :----------: |
|
||||
| 1030a | SIG Intros: Apps, Auth, IBMCloud | TBA |
|
||||
| 1140a | SIG Intros: Cluster Lifecycle, Service Catalog, Storage | TBA |
|
||||
| 1140a | Behind Your PR: How Kubernetes Uses Kubernetes to Run Kubernetes CI, Sen Lu and Ben Elder | Ballroom 6C |
|
||||
| 1140a | The Future of Your CRDs - Evolving an API, Stefan Schimanski and Mehdy Bohlool | Ballroom 6E |
|
||||
| 140p | SIG Intros: Multicluster, Release | TBA |
|
||||
| 235p | SIG Intros: Contributor Experience, OpenStack | TBA |
|
||||
| 235p | CNCF TOC Live Committee Meeting | 606-609 |
|
||||
|
||||
*Wednesday*
|
||||
|
||||
| Time | Speaker & Title | Room |
|
||||
| --- | :----------: | :----------: |
|
||||
| 1050a | SIG Intros: CLI, PM, Scheduling | TBA |
|
||||
| 1140a | Deep Dive: Contributor Experience | TBA |
|
||||
| 1140a | SIG Intros: Autoscaling, AWS, Azure | TBA |
|
||||
| 145p | Deep Dive: Release | TBA |
|
||||
| 145p | SIG Intros: Cloud Provider, Testing | TBA |
|
||||
| 145p | Open Source, Open Community, Open Development, Craig McLuckie | Tahoma 1/2 @ TCC |
|
||||
| 235p | Deep Dive: PM | TBA |
|
||||
| 235p | SIG Intro: IoT WG | TBA |
|
||||
|
||||
|
||||
*Thursday*
|
||||
|
||||
| Time | Speaker & Title | Room |
|
||||
| --- | :----------: | :----------: |
|
||||
| 1050a | Deep Dives: Auth, CLI, Cloud Provider, Multicluster | TBA |
|
||||
| 1140a | Deep Dives: API Machinery, Apps, Policy WG | TBA |
|
||||
| 145p | Deep Dives: Autoscaling, Cluster Lifecycle (kubeadm), IBMCloud, Service Catalog | TBA |
|
||||
| 235p | Deep Dives: Azure, Cluster Lifecycle (Cluster API), IoT WG | TBA |
|
||||
| 340p | Deep Dives: Container Identity WG, Testing, VMWare | TBA |
|
||||
| 430p | Deep Dives: Big Data, Scheduling | TBA |
|
||||
|
||||
## Chat With Us
|
||||
There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-summit) (#contributor-summit) for you to use before and during the summit. Look here for volunteer opportunities and content updates. Feel free to pass URLs, notes, reserve the hallway track room, and connect with the organizers.
|
||||
|
||||
## Media Policy
|
||||
|
||||
A photographer and videographer will be onsite recording sessions, collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.
|
||||
|
||||
|
||||
### Code of Conduct
|
||||
## Code of Conduct
|
||||
|
||||
This event, like all Kubernetes events, has a [Code of Conduct](/code-of-conduct.md). We will have an onsite rep with contact information to be provided here and posted during the event.
|
||||
|
||||
|
||||
### Misc
|
||||
We want to remove as many barriers as possible for you to attend this event. Please contact community@kubernetes.io to see if we can accommodate a request.
|
||||
We want to remove as many barriers as possible for you to attend this event. Please contact community@kubernetes.io to see if we can accommodate a request.
|
||||
|
||||
Further details to be updated on this doc. Please check back for a complete guide.
|
||||
Further details to be updated on this doc. Please check back for a complete guide.
|
||||
|
|
|
@@ -1,6 +1,6 @@
|
|||
# Kubernetes Weekly Community Meeting
|
||||
|
||||
We have PUBLIC and RECORDED [weekly meeting](https://zoom.us/my/kubernetescommunity) every Thursday at [5pm UTC](https://www.google.com/search?q=5pm+UTC).
|
||||
We have PUBLIC and RECORDED [weekly meeting](https://zoom.us/my/kubernetescommunity) every Thursday at [6pm UTC](https://www.google.com/search?q=6pm+UTC).
|
||||
|
||||
See it on the web at [calendar.google.com](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles), or paste this [iCal url](https://calendar.google.com/calendar/ical/cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com/public/basic.ics) into any [iCal client](https://en.wikipedia.org/wiki/ICalendar). Do NOT copy the meetings over to your personal calendar; you will miss meeting updates. Instead, use your client's calendaring feature to say you are attending the meeting so that any changes made to meetings will be reflected on your personal calendar.
|
||||
|
||||
|
@@ -48,7 +48,7 @@ The first 10 minutes of a meeting is dedicated to demonstrations from the commun
|
|||
These demos are noted at the top of the community document.
|
||||
There is a hard stop of the demo at 10 minutes, with up to 5 more minutes for questions.
|
||||
Feel free to add your demo request to the bottom of the list, then one of the organizers will get back to you to schedule an exact date.
|
||||
Demo submissions MUST follow the the requirements listed below.
|
||||
Demo submissions MUST follow the requirements listed below.
|
||||
|
||||
### Requirements
|
||||
|
||||
|
|
|
@@ -37,7 +37,7 @@ If you believe you are a Member of Standing, please fill out [this form](https:/
|
|||
## DECISION
|
||||
The newly elected body will be announced in the weekly Kubernetes Community Meeting on October 5, 2017 at 10:00am US Pacific Time. [Please join us](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat).
|
||||
|
||||
Following the meeting, the raw voting results and winners will be published on the [Kubernetes Blog](http://blog.kubernetes.io/).
|
||||
Following the meeting, the raw voting results and winners will be published on the [Kubernetes Blog](https://kubernetes.io/blog/).
|
||||
|
||||
For more information, definitions, and/or detailed election process, see full [steering committee charter](https://github.com/kubernetes/steering/blob/master/charter.md).
|
||||
|
||||
|
|
|
@@ -0,0 +1,308 @@
|
|||
Aaron Crickenberger,Davanum Srinivas,Kris Nova,Nikhita Raghunath,Quinton Hoole,Stephen Augustus,Tim Pepper,Timothy St. Clair
|
||||
8,8,8,8,1,8,8,8
|
||||
1,3,2,5,8,7,6,4
|
||||
3,8,4,5,7,6,2,1
|
||||
1,4,No opinion,5,3,No opinion,No opinion,2
|
||||
1,8,3,8,8,8,8,2
|
||||
1,3,8,2,8,8,8,3
|
||||
No opinion,No opinion,2,1,No opinion,No opinion,No opinion,No opinion
|
||||
8,2,8,8,1,8,8,8
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,2,3
|
||||
8,2,8,8,1,8,8,8
|
||||
1,7,3,6,5,8,4,2
|
||||
1,1,3,2,3,2,1,3
|
||||
4,3,7,8,2,5,6,1
|
||||
4,2,8,1,3,8,8,5
|
||||
No opinion,No opinion,2,1,No opinion,No opinion,No opinion,No opinion
|
||||
1,3,8,2,8,8,8,4
|
||||
1,4,8,3,6,5,2,7
|
||||
1,2,6,7,8,1,8,8
|
||||
1,3,8,2,4,4,4,7
|
||||
1,2,7,4,6,8,3,5
|
||||
4,2,6,7,8,1,3,5
|
||||
1,3,8,2,8,8,3,8
|
||||
8,4,8,8,1,8,8,8
|
||||
1,8,8,8,8,8,8,8
|
||||
2,5,6,8,7,4,1,3
|
||||
1,8,3,8,3,8,8,2
|
||||
1,3,5,2,5,5,5,4
|
||||
2,1,8,7,5,3,6,4
|
||||
3,No opinion,No opinion,No opinion,No opinion,No opinion,2,1
|
||||
8,2,8,8,1,8,8,3
|
||||
No opinion,8,8,8,8,8,No opinion,No opinion
|
||||
5,5,7,5,6,3,1,8
|
||||
1,2,6,4,3,8,5,7
|
||||
1,2,2,7,8,8,8,7
|
||||
1,2,No opinion,3,No opinion,No opinion,No opinion,4
|
||||
1,8,8,8,2,8,8,3
|
||||
8,1,3,8,8,2,8,8
|
||||
1,3,5,7,8,6,4,2
|
||||
1,8,8,8,8,8,8,8
|
||||
8,2,8,8,1,8,8,8
|
||||
4,7,2,6,3,8,5,1
|
||||
3,2,5,1,8,4,7,6
|
||||
8,2,8,8,1,8,8,8
|
||||
2,3,7,1,5,6,4,8
|
||||
4,6,8,3,2,5,7,1
|
||||
2,6,1,3,7,5,8,4
|
||||
No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion
|
||||
3,8,8,8,2,8,8,1
|
||||
2,1,2,2,8,8,8,1
|
||||
5,7,6,8,2,4,3,1
|
||||
2,6,6,1,7,3,6,8
|
||||
8,3,4,8,8,8,1,2
|
||||
8,8,3,8,8,2,8,1
|
||||
1,No opinion,1,1,No opinion,3,1,2
|
||||
4,6,3,1,7,2,8,5
|
||||
8,1,7,8,8,8,8,8
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,2,No opinion
|
||||
7,5,1,2,6,8,4,3
|
||||
1,2,8,2,8,5,4,8
|
||||
1,3,8,1,7,4,4,2
|
||||
1,2,8,6,4,6,5,3
|
||||
1,2,8,3,8,8,5,4
|
||||
8,2,3,3,4,8,8,1
|
||||
3,1,6,5,8,7,4,2
|
||||
6,2,3,1,7,8,5,4
|
||||
4,3,5,2,6,No opinion,No opinion,1
|
||||
8,4,3,5,8,6,2,1
|
||||
1,2,No opinion,No opinion,No opinion,No opinion,3,No opinion
|
||||
2,8,8,4,8,1,3,8
|
||||
2,6,8,7,3,5,1,3
|
||||
1,2,3,4,5,6,7,8
|
||||
1,2,5,3,8,7,4,6
|
||||
1,8,4,8,5,8,8,2
|
||||
3,7,1,8,4,6,5,2
|
||||
1,8,8,8,2,8,8,3
|
||||
1,2,4,2,8,No opinion,4,3
|
||||
2,8,3,8,1,8,8,8
|
||||
1,8,8,2,8,8,8,3
|
||||
8,8,8,8,1,8,3,2
|
||||
8,8,8,8,1,8,8,8
|
||||
2,1,8,8,8,8,3,8
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,2,3
|
||||
3,2,1,No opinion,No opinion,No opinion,No opinion,No opinion
|
||||
8,8,8,8,1,8,3,2
|
||||
6,4,5,2,8,3,1,7
|
||||
2,8,8,8,8,8,1,3
|
||||
1,8,8,8,8,2,8,3
|
||||
1,5,2,6,4,6,6,3
|
||||
1,2,8,7,6,3,4,5
|
||||
2,8,8,8,3,8,8,1
|
||||
8,8,8,8,1,8,8,8
|
||||
1,2,8,8,8,8,8,3
|
||||
1,4,8,3,7,5,4,2
|
||||
2,6,8,3,7,6,4,1
|
||||
8,1,1,8,8,8,8,1
|
||||
2,4,1,3,8,4,4,4
|
||||
2,1,8,8,3,1,8,2
|
||||
1,2,5,8,4,6,7,4
|
||||
5,6,7,3,8,2,4,1
|
||||
No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion
|
||||
4,3,5,2,7,1,8,8
|
||||
1,8,8,8,8,2,8,3
|
||||
1,3,No opinion,No opinion,No opinion,2,No opinion,No opinion
|
||||
1,2,8,2,5,4,3,3
|
||||
1,8,8,3,8,2,8,8
|
||||
1,2,8,8,8,8,8,3
|
||||
1,8,2,7,3,4,5,6
|
||||
1,3,6,2,7,5,8,4
|
||||
5,6,4,8,2,7,1,3
|
||||
2,1,8,8,6,8,8,4
|
||||
1,8,8,7,5,4,8,2
|
||||
5,1,7,3,8,6,4,2
|
||||
1,3,4,7,8,2,6,5
|
||||
1,6,3,7,2,8,4,5
|
||||
8,8,8,8,1,8,8,2
|
||||
8,3,2,5,8,4,8,1
|
||||
1,8,8,8,8,8,8,8
|
||||
1,3,4,5,8,7,6,2
|
||||
2,6,8,5,8,1,3,4
|
||||
1,4,7,8,3,7,7,2
|
||||
4,8,3,2,8,1,8,8
|
||||
No opinion,No opinion,1,No opinion,No opinion,No opinion,No opinion,1
|
||||
1,6,5,4,3,8,7,2
|
||||
6,4,1,7,8,3,5,2
|
||||
2,6,7,3,4,1,8,5
|
||||
2,3,8,1,7,6,4,5
|
||||
7,3,2,5,8,6,4,1
|
||||
2,1,3,8,8,8,8,8
|
||||
3,6,8,2,4,5,7,1
|
||||
3,1,6,7,8,2,4,5
|
||||
2,No opinion,No opinion,No opinion,3,No opinion,No opinion,1
|
||||
1,2,8,4,8,8,5,3
|
||||
1,4,4,4,3,3,4,2
|
||||
2,3,8,5,4,7,1,6
|
||||
8,4,2,5,6,7,3,1
|
||||
8,5,2,3,7,6,4,1
|
||||
7,2,3,6,8,1,4,5
|
||||
1,8,8,8,8,8,8,1
|
||||
1,2,8,4,5,6,7,3
|
||||
1,7,7,3,4,6,2,5
|
||||
1,4,4,2,8,7,4,3
|
||||
No opinion,No opinion,No opinion,No opinion,No opinion,1,8,No opinion
|
||||
2,8,3,1,8,5,4,6
|
||||
1,8,8,8,8,2,3,8
|
||||
1,8,8,2,8,8,8,8
|
||||
8,2,8,8,1,8,8,8
|
||||
8,3,8,1,8,2,8,8
|
||||
1,7,2,8,5,6,4,3
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,2,3
|
||||
3,8,2,4,8,8,8,1
|
||||
2,7,8,1,3,4,6,5
|
||||
1,4,8,3,8,8,8,2
|
||||
3,6,5,4,8,2,1,7
|
||||
1,1,8,8,8,1,2,8
|
||||
1,2,6,4,8,5,3,7
|
||||
2,8,8,8,8,1,8,3
|
||||
1,2,3,8,5,4,7,6
|
||||
1,3,7,5,4,6,8,2
|
||||
3,1,4,5,6,7,8,2
|
||||
1,7,3,7,8,7,7,2
|
||||
5,2,6,3,1,7,8,4
|
||||
1,2,8,4,8,8,8,3
|
||||
No opinion,No opinion,2,No opinion,1,No opinion,1,No opinion
|
||||
4,3,8,8,1,8,2,8
|
||||
4,5,1,8,3,6,7,2
|
||||
1,8,2,7,5,6,4,3
|
||||
1,2,3,No opinion,5,6,7,4
|
||||
1,No opinion,3,No opinion,No opinion,No opinion,No opinion,2
|
||||
3,1,7,4,2,5,8,6
|
||||
4,8,7,2,3,5,6,1
|
||||
1,3,8,4,2,8,5,8
|
||||
1,2,5,3,7,4,6,8
|
||||
8,2,8,8,8,1,8,8
|
||||
1,4,6,7,2,8,5,3
|
||||
1,3,8,7,4,6,5,2
|
||||
2,3,4,1,5,6,8,7
|
||||
5,6,2,4,8,7,1,3
|
||||
3,4,8,5,1,6,7,2
|
||||
1,4,3,6,5,8,2,7
|
||||
4,2,1,5,6,7,8,3
|
||||
4,3,No opinion,No opinion,1,No opinion,No opinion,2
|
||||
5,7,8,1,8,7,8,8
|
||||
1,2,8,4,7,6,3,5
|
||||
4,8,1,8,3,8,8,2
|
||||
8,8,8,8,1,8,8,8
|
||||
3,2,8,8,1,8,8,3
|
||||
1,4,3,8,5,2,7,No opinion
|
||||
1,3,8,6,4,7,5,2
|
||||
4,8,2,8,8,8,3,1
|
||||
1,2,4,5,8,6,3,7
|
||||
3,2,3,8,8,8,8,1
|
||||
1,6,3,7,8,2,5,4
|
||||
1,3,5,2,No opinion,7,6,4
|
||||
2,6,1,2,8,3,8,4
|
||||
1,3,8,4,8,8,8,2
|
||||
1,2,8,1,8,1,1,8
|
||||
1,3,1,No opinion,8,No opinion,7,1
|
||||
2,8,1,1,8,1,8,1
|
||||
1,6,6,2,7,8,8,7
|
||||
1,8,7,3,6,4,2,5
|
||||
1,8,8,8,8,8,8,2
|
||||
1,4,6,2,7,8,3,5
|
||||
3,6,8,5,3,5,7,1
|
||||
8,8,8,8,1,8,8,8
|
||||
1,2,4,4,3,4,4,3
|
||||
1,8,8,8,8,8,3,2
|
||||
2,5,8,1,3,8,8,4
|
||||
8,8,8,3,1,8,8,2
|
||||
4,2,3,8,8,8,8,1
|
||||
1,8,8,8,8,8,2,4
|
||||
3,7,6,4,8,2,5,1
|
||||
8,8,3,1,8,8,8,2
|
||||
1,2,4,4,No opinion,5,4,2
|
||||
No opinion,No opinion,1,2,No opinion,No opinion,No opinion,No opinion
|
||||
1,4,No opinion,3,No opinion,5,2,8
|
||||
2,1,6,8,7,3,5,4
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,2
|
||||
1,No opinion,No opinion,3,No opinion,No opinion,No opinion,2
|
||||
1,3,8,4,7,5,5,2
|
||||
4,3,7,6,8,5,2,1
|
||||
8,1,8,2,8,3,8,8
|
||||
2,8,8,1,8,3,8,8
|
||||
1,4,2,8,5,8,8,3
|
||||
1,2,No opinion,No opinion,No opinion,No opinion,No opinion,3
|
||||
1,2,8,4,8,8,8,3
|
||||
8,3,8,8,1,8,8,2
|
||||
8,1,8,8,8,8,8,8
|
||||
5,6,2,3,8,7,4,1
|
||||
2,3,7,1,7,5,4,6
|
||||
2,8,8,8,8,8,3,1
|
||||
1,4,2,6,3,7,5,8
|
||||
2,4,5,6,3,8,7,1
|
||||
2,2,1,2,2,2,2,8
|
||||
8,3,8,8,8,1,8,2
|
||||
2,8,3,8,4,1,8,8
|
||||
5,4,2,1,8,7,6,3
|
||||
1,5,6,4,8,3,2,7
|
||||
1,8,3,8,8,8,8,2
|
||||
8,8,8,8,2,1,8,8
|
||||
5,No opinion,6,2,No opinion,4,3,1
|
||||
1,4,7,8,3,7,7,2
|
||||
5,2,6,3,1,6,4,4
|
||||
2,6,5,1,7,3,4,8
|
||||
4,6,3,6,8,1,7,2
|
||||
8,2,8,7,4,8,8,1
|
||||
1,2,6,1,6,6,8,2
|
||||
8,6,5,8,1,2,4,3
|
||||
1,2,8,8,8,8,3,8
|
||||
1,2,8,3,8,4,5,3
|
||||
1,2,4,8,8,8,5,3
|
||||
8,8,2,8,8,8,3,1
|
||||
1,2,5,4,8,5,5,3
|
||||
1,2,No opinion,3,No opinion,No opinion,No opinion,No opinion
|
||||
1,5,7,4,3,6,8,2
|
||||
3,No opinion,2,1,4,No opinion,No opinion,8
|
||||
8,2,3,4,8,8,1,8
|
||||
8,2,8,8,1,8,8,8
|
||||
2,6,8,3,5,7,4,1
|
||||
1,4,8,7,6,5,3,2
|
||||
No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion
|
||||
1,2,No opinion,3,5,7,6,4
|
||||
1,2,8,3,6,7,4,5
|
||||
3,No opinion,2,No opinion,No opinion,No opinion,No opinion,1
|
||||
3,8,2,8,8,1,8,8
|
||||
1,3,No opinion,5,4,No opinion,6,2
|
||||
1,1,8,6,4,2,6,1
|
||||
1,2,No opinion,No opinion,No opinion,No opinion,4,3
|
||||
3,7,1,2,8,4,6,5
|
||||
7,3,6,1,5,2,8,4
|
||||
1,8,8,3,4,8,8,2
|
||||
1,7,8,3,6,4,5,2
|
||||
8,2,8,8,1,8,8,8
|
||||
No opinion,7,No opinion,No opinion,No opinion,No opinion,No opinion,8
|
||||
1,8,8,3,2,8,8,8
|
||||
1,6,4,2,7,3,5,8
|
||||
8,2,8,8,1,8,8,8
|
||||
1,2,7,3,6,8,8,8
|
||||
1,2,8,8,8,3,8,8
|
||||
1,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion,No opinion
|
||||
2,3,8,8,8,8,8,1
|
||||
1,8,2,No opinion,3,5,6,4
|
||||
1,7,6,5,4,8,3,2
|
||||
3,6,1,2,8,5,4,8
|
||||
1,7,4,3,5,8,6,2
|
||||
1,8,3,2,6,5,7,4
|
||||
8,2,8,8,1,8,8,8
|
||||
1,7,6,5,2,8,3,4
|
||||
2,No opinion,No opinion,No opinion,No opinion,No opinion,3,1
|
||||
4,5,8,7,3,1,6,2
|
||||
1,No opinion,No opinion,1,No opinion,No opinion,No opinion,No opinion
|
||||
2,8,7,8,1,8,8,3
|
||||
8,8,8,8,8,8,8,8
|
||||
2,3,8,1,5,6,4,7
|
||||
1,6,6,7,3,2,6,8
|
||||
2,No opinion,4,3,No opinion,No opinion,5,1
|
||||
2,2,No opinion,4,3,4,1,8
|
||||
8,8,8,8,1,8,8,8
|
||||
2,No opinion,1,3,8,7,No opinion,8
|
||||
2,3,1,3,No opinion,No opinion,5,4
|
||||
4,8,7,1,5,6,3,2
|
||||
1,5,2,4,7,3,6,8
|
||||
8,2,8,8,1,8,8,8
|
||||
8,8,8,2,8,8,3,1
|
||||
3,8,1,2,8,8,8,8
|
||||
3,8,1,2,4,8,8,5
|
||||
1,4,8,2,8,6,7,3
|
||||
3,No opinion,5,4,No opinion,2,1,8
|
|
|
@@ -151,5 +151,5 @@ Name | Organization/Company | GitHub
|
|||
[2017 candidate bios]: https://github.com/kubernetes/community/tree/master/events/elections/2017
|
||||
[election officers]: https://github.com/kubernetes/community/tree/master/events/elections#election-officers
|
||||
[Kubernetes Community Meeting]: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
|
||||
[Kubernetes Blog]: http://blog.kubernetes.io/
|
||||
[Kubernetes Blog]: https://kubernetes.io/blog/
|
||||
[eligible voters]: https://github.com/kubernetes/community/blob/master/events/elections/2018/voters.md
|
||||
|
|
|
@@ -0,0 +1,31 @@
|
|||
# Results of the 2018 Steering Committee Election
|
||||
|
||||
- Number of seats open: 3 (2 year term)
|
||||
- Number of eligible voters: 692
|
||||
- Number of votes cast: 307
|
||||
- Turnout: 44%
|
||||
|
||||
[Raw ballot data](BALLOTS.csv)
|
||||
|
||||
## Results
|
||||
|
||||
The final ranking, using the "Schulze" Condorcet completion, is as follows:
|
||||
|
||||
1. Aaron Crickenberger
|
||||
2. Timothy St. Clair
|
||||
3. Davanum Srinivas
|
||||
4. Nikhita Raghunath
|
||||
5. Kris Nova
|
||||
6. Quinton Hoole
|
||||
7. Stephen Augustus
|
||||
8. Tim Pepper
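The "Schulze" Condorcet completion used for this ranking can be sketched as follows. This is an illustrative implementation only, not the project's actual election tooling, and the candidate names and ballots in `main` are made up for the example (they are not drawn from the real ballot data above):

```go
package main

import (
	"fmt"
	"sort"
)

// schulze ranks candidates from ranked ballots, where ballots[v][c] is
// voter v's rank for candidate c (lower = more preferred).
func schulze(names []string, ballots [][]int) []string {
	n := len(names)
	// d[i][j]: number of voters who strictly prefer candidate i to j.
	d := make([][]int, n)
	p := make([][]int, n)
	for i := range d {
		d[i] = make([]int, n)
		p[i] = make([]int, n)
	}
	for _, b := range ballots {
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				if b[i] < b[j] {
					d[i][j]++
				}
			}
		}
	}
	// p[i][j]: strength of the strongest path from i to j; seed with
	// direct pairwise victories only.
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			if i != j && d[i][j] > d[j][i] {
				p[i][j] = d[i][j]
			}
		}
	}
	// Widest-path computation, Floyd-Warshall style.
	for k := 0; k < n; k++ {
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				if i != j && i != k && j != k {
					if s := minInt(p[i][k], p[k][j]); s > p[i][j] {
						p[i][j] = s
					}
				}
			}
		}
	}
	// Order candidates by how many rivals they beat on strongest paths.
	wins := make([]int, n)
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			if i != j && p[i][j] > p[j][i] {
				wins[i]++
			}
		}
	}
	idx := make([]int, n)
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool { return wins[idx[a]] > wins[idx[b]] })
	out := make([]string, n)
	for r, i := range idx {
		out[r] = names[i]
	}
	return out
}

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	// Hypothetical three-candidate election with three ballots.
	names := []string{"A", "B", "C"}
	ballots := [][]int{{1, 2, 3}, {1, 2, 3}, {3, 1, 2}}
	fmt.Println(schulze(names, ballots)) // [A B C]
}
```

A real tally would also need a tie-break policy and a convention for "No opinion" ballots (here every candidate must carry a numeric rank).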
|
||||
|
||||
## Winners
|
||||
|
||||
The winners of the open seats are as follows:
|
||||
|
||||
Two year term:
|
||||
|
||||
1. Aaron Crickenberger
|
||||
2. Timothy St. Clair
|
||||
3. Davanum Srinivas
|
|
@@ -6,8 +6,8 @@ Office Hours is a live stream where we answer live questions about Kubernetes fr
|
|||
|
||||
Third Wednesday of every month, there are two sessions:
|
||||
|
||||
- European Edition: [1pm UTC](https://www.google.com/search?q=1pm+UTC)
|
||||
- Western Edition: [8pm UTC](https://www.google.com/search?q=8pm+UTC)
|
||||
- European Edition: [2pm UTC](https://www.google.com/search?q=2pm+UTC)
|
||||
- Western Edition: [9pm UTC](https://www.google.com/search?q=9pm+UTC)
|
||||
|
||||
Tune into the [Kubernetes YouTube Channel](https://www.youtube.com/c/KubernetesCommunity/live) to follow along.
|
||||
|
||||
|
|
|
@@ -178,6 +178,7 @@ func getExistingContent(path string, fileFormat string) (string, error) {
|
|||
|
||||
var funcMap = template.FuncMap{
|
||||
"tzUrlEncode": tzUrlEncode,
|
||||
"trimSpace": strings.TrimSpace,
|
||||
}
|
||||
|
||||
// tzUrlEncode returns a url encoded string without the + shortcut. This is
|
||||
|
|
|
@@ -62,7 +62,7 @@ The following subprojects are owned by sig-{{.Label}}:
|
|||
{{- range .Subprojects }}
|
||||
- **{{.Name}}**
|
||||
{{- if .Description }}
|
||||
- Description: {{ .Description }}
|
||||
- Description: {{ trimSpace .Description }}
|
||||
{{- end }}
|
||||
- Owners:
|
||||
{{- range .Owners }}
|
||||
|
|
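The `trimSpace` helper added to the FuncMap above is just `strings.TrimSpace` exposed under a template-friendly name. A minimal, self-contained sketch of how that registration and the `{{ trimSpace .Description }}` call fit together (the `renderSubproject` helper, the struct, and the sample description here are invented for illustration, not the generator's actual code):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderSubproject renders one subproject description line using a
// trimSpace helper registered the same way as in the generator's FuncMap.
func renderSubproject(desc string) string {
	funcMap := template.FuncMap{"trimSpace": strings.TrimSpace}
	// Funcs must be called before Parse so the template can resolve trimSpace.
	tmpl := template.Must(template.New("sub").Funcs(funcMap).Parse(
		"- Description: {{ trimSpace .Description }}"))
	var sb strings.Builder
	if err := tmpl.Execute(&sb, struct{ Description string }{desc}); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Multi-line YAML descriptions often carry stray whitespace.
	fmt.Println(renderSubproject("  Cluster lifecycle tooling \n"))
	// Prints: - Description: Cluster lifecycle tooling
}
```

Without the helper, the rendered markdown keeps the leading spaces and trailing newline from the source YAML, which is exactly what the `trimSpace` change in the template fixes.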
|
@@ -44,6 +44,19 @@ require confirmation by the Steering Committee before taking effect. Time zones
|
|||
and country of origin should be considered when selecting membership, to ensure
|
||||
sufficient coverage after North American business hours and during holidays.
|
||||
|
||||
### Other roles
|
||||
|
||||
#### New Membership Coordinator
|
||||
|
||||
New Membership Coordinators help serve as a friendly face to newer, prospective
|
||||
community members, guiding them through the
|
||||
[process](new-membership-procedure.md) to request membership to a Kubernetes
|
||||
GitHub organization.
|
||||
|
||||
Our current coordinators are:
|
||||
* Bob Killen (**[@mrbobbytables](https://github.com/mrbobbytables)**, US Eastern)
|
||||
* Stephen Augustus (**[@justaugustus](https://github.com/justaugustus)**, US Eastern)
|
||||
|
||||
## Project Owned Organizations
|
||||
|
||||
The following organizations are currently known to be part of the Kubernetes
|
||||
|
|
|
@ -1,11 +1,16 @@
# Kubernetes Repository Guidelines

This document attempts to outline a structure for creating and associating GitHub repositories with the Kubernetes project. It also describes how and when
This document attempts to outline a structure for creating and associating
GitHub repositories with the Kubernetes project. It also describes how and when
repositories are removed.

The document presents a tiered system of repositories with increasingly strict requirements in an attempt to provide the right level of oversight and flexibility for a variety of different projects.
The document presents a tiered system of repositories with increasingly strict
requirements in an attempt to provide the right level of oversight and
flexibility for a variety of different projects.

Requests for creating, transferring, modifying, or archiving repositories can be made by [opening a request](https://github.com/kubernetes/org/issues/new/choose) against the kubernetes/org repo.
Requests for creating, transferring, modifying, or archiving repositories can be
made by [opening a request](https://github.com/kubernetes/org/issues/new/choose)
against the kubernetes/org repo.

- [Associated Repositories](#associated-repositories)
  * [Goals](#goals)
@ -24,38 +29,55 @@ Requests for creating, transferring, modifying, or archiving repositories can be

## Associated Repositories

Associated repositories conform to the Kubernetes community standards for a repository, but otherwise have no restrictions. Associated repositories exist solely for the purpose of making it easier for the Kubernetes community to work together. There is no implication of support or endorsement of any kind by the Kubernetes project, the goals are purely logistical.
Associated repositories conform to the Kubernetes community standards for a
repository, but otherwise have no restrictions. Associated repositories exist
solely for the purpose of making it easier for the Kubernetes community to work
together. There is no implication of support or endorsement of any kind by the
Kubernetes project, the goals are purely logistical.

### Goals

To facilitate contributions and collaboration from the broader Kubernetes community. Contributions to random projects with random CLAs (or DCOs) can be logistically difficult, so associated repositories should be easier.
To facilitate contributions and collaboration from the broader Kubernetes
community. Contributions to random projects with random CLAs (or DCOs) can be
logistically difficult, so associated repositories should be easier.

### Rules

* Must adopt the Kubernetes Code of Conduct statement in their repo.
* All code projects use the Apache License version 2.0. Documentation repositories must use the Creative Commons License version 4.0.
* All code projects use the Apache License version 2.0. Documentation
  repositories must use the Creative Commons License version 4.0.
* Must adopt the CNCF CLA bot automation for pull requests.

## SIG repositories

SIG repositories serve as temporary homes for SIG-sponsored experimental projects or prototypes of new core functionality, or as permanent homes for SIG-specific projects and tools.
SIG repositories serve as temporary homes for SIG-sponsored experimental
projects or prototypes of new core functionality, or as permanent homes for
SIG-specific projects and tools.

### Goals

To provide a place for SIGs to collaborate on projects endorsed by and actively worked on by members of the SIG. SIGs should be able to approve and create new repositories for SIG-sponsored projects without requiring higher level approval from a central body (e.g. steering committee or sig-architecture)
To provide a place for SIGs to collaborate on projects endorsed by and actively
worked on by members of the SIG. SIGs should be able to approve and create new
repositories for SIG-sponsored projects without requiring higher level approval
from a central body (e.g. steering committee or sig-architecture)

### Rules for new repositories

* For now all repos will live in github.com/kubernetes-sigs/\<project-name\>.
* Must contain the topic for the sponsoring SIG - e.g. `k8s-sig-api-machinery`. (Added through the *Manage topics* link on the repo page.)
* Must contain the topic for the sponsoring SIG - e.g.
  `k8s-sig-api-machinery`. (Added through the *Manage topics* link on the
  repo page.)
* Must adopt the Kubernetes Code of Conduct
* All code projects use the Apache License version 2.0. Documentation repositories must use the Creative Commons License version 4.0.
* All code projects use the Apache License version 2.0. Documentation
  repositories must use the Creative Commons License version 4.0.
* Must adopt the CNCF CLA bot, merge bot and Kubernetes PR commands/bots.
* All OWNERS of the project must also be active SIG members.
* SIG membership must vote using lazy consensus to create a new repository
* SIG must already have identified all of their existing subprojects and code, with valid OWNERS files, in [`sigs.yaml`](https://github.com/kubernetes/community/blob/master/sigs.yaml)
* SIG must already have identified all of their existing subprojects and
  code, with valid OWNERS files, in
  [`sigs.yaml`](https://github.com/kubernetes/community/blob/master/sigs.yaml)

### Rules for donated repositories
@ -63,44 +85,57 @@ The `kubernetes-sigs` organization is primarily intended to house net-new
projects originally created in that organization. However, projects that a SIG
adopts may also be donated.

In addition to the requirements for new repositories, donated repositories must demonstrate that:
In addition to the requirements for new repositories, donated repositories must
demonstrate that:

* All contributors must have signed the [CNCF Individual CLA](https://github.com/cncf/cla/blob/master/individual-cla.pdf)
  or [CNCF Corporate CLA](https://github.com/cncf/cla/blob/master/corporate-cla.pdf)
* If (a) contributor(s) have not signed the CLA and could not be reached, a NOTICE
  file should be added referencing section 7 of the CLA with a list of the developers who could not be reached
* Licenses of dependencies are acceptable; project owners can ping [@caniszczyk](https://github.com/caniszczyk) for review of third party deps
* Boilerplate text across all files should attribute copyright as follows: `"Copyright <Project Authors>"` if no CLA was in place prior to donation
* All contributors must have signed the [CNCF Individual
  CLA](https://github.com/cncf/cla/blob/master/individual-cla.pdf) or [CNCF
  Corporate CLA](https://github.com/cncf/cla/blob/master/corporate-cla.pdf)
* If (a) contributor(s) have not signed the CLA and could not be reached, a
  NOTICE file should be added referencing section 7 of the CLA with a list of
  the developers who could not be reached
* Licenses of dependencies are acceptable; project owners can ping
  [@caniszczyk](https://github.com/caniszczyk) for review of third party deps
* Boilerplate text across all files should attribute copyright as follows:
  `"Copyright <Project Authors>"` if no CLA was in place prior to donation

## Core Repositories

Core repositories are considered core components of Kubernetes. They are utilities, tools, applications, or libraries that are expected to be present in every or nearly every Kubernetes cluster, such as components and tools included in official Kubernetes releases. Additionally, the kubernetes.io website, k8s.io machinery, and other project-wide infrastructure will remain in the kubernetes github organization.
Core repositories are considered core components of Kubernetes. They are
utilities, tools, applications, or libraries that are expected to be present in
every or nearly every Kubernetes cluster, such as components and tools included
in official Kubernetes releases. Additionally, the kubernetes.io website, k8s.io
machinery, and other project-wide infrastructure will remain in the kubernetes
github organization.

### Goals
Create a broader base of repositories than the existing gh/kubernetes/kubernetes so that the project can scale. Present expectations about the centrality and importance of the repository in the Kubernetes ecosystem. Carries the endorsement of the Kubernetes community.
### Goals
Create a broader base of repositories than the existing
gh/kubernetes/kubernetes so that the project can scale. Present expectations
about the centrality and importance of the repository in the Kubernetes
ecosystem. Carries the endorsement of the Kubernetes community.

### Rules

* Must live under `github.com/kubernetes/<project-name>`
* Must adopt the Kubernetes Code of Conduct
* All code projects use the Apache License version 2.0. Documentation repositories must use the Creative Commons License version 4.0.
* All code projects use the Apache License version 2.0. Documentation
  repositories must use the Creative Commons License version 4.0.
* Must adopt the CNCF CLA bot
* Must adopt all Kubernetes automation (e.g. /lgtm, etc)
* All OWNERS must be members in good standing in the Kubernetes community, as defined by the ability to vote in Kubernetes steering committee elections.
* All OWNERS must be members in good standing in the Kubernetes community, as
  defined by the ability to vote in Kubernetes steering committee elections.
* Repository must be approved by SIG-Architecture

## Removing Repositories

As important as it is to add new repositories, it is equally important to
prune old repositories that are no longer relevant or useful.
As important as it is to add new repositories, it is equally important to prune
old repositories that are no longer relevant or useful.

It is in the best interests of everyone involved in the Kubernetes community
that our various projects and repositories are active and healthy. This
ensures that repositories are kept up to date with the latest Kubernetes
wide processes, it ensures a rapid response to potential required fixes
(e.g. critical security problems) and (most importantly) it ensures that
contributors and users receive quick feedback on their issues and
contributions.
that our various projects and repositories are active and healthy. This ensures
that repositories are kept up to date with the latest Kubernetes wide processes,
it ensures a rapid response to potential required fixes (e.g. critical security
problems) and (most importantly) it ensures that contributors and users receive
quick feedback on their issues and contributions.

### Grounds for removal
@ -120,50 +155,69 @@ circumstances (e.g. a code of conduct violation).

### Procedure for removal

When a repository has been deemed eligible for removal, we take the following steps:
When a repository has been deemed eligible for removal, we take the following
steps:

* Ownership of the repo is transferred to the [kubernetes-retired] GitHub organization
* Ownership of the repo is transferred to the [kubernetes-retired] GitHub
  organization
* The repo description is edited to start with the phrase "[EOL]"
* All open issues and PRs are closed
* All external collaborators are removed
* All webhooks, apps, integrations or services are removed
* GitHub Pages are disabled
* The repo is marked as archived using [GitHub's archive feature]
* The removal is announced on the kubernetes-dev mailing list and community meeting
* The removal is announced on the kubernetes-dev mailing list and community
  meeting

This maintains the complete record of issues, PRs and other contributions,
leaves the repository read-only, and makes it clear that the repository
should be considered retired and unmaintained.
leaves the repository read-only, and makes it clear that the repository should
be considered retired and unmaintained.

## FAQ

**My project is currently in kubernetes-incubator, what is going to happen to it?**
**My project is currently in kubernetes-incubator, what is going to happen to
it?**

Nothing. We’ll grandfather existing projects and they can stay in the incubator org for as long as they want to. We expect/hope that most projects will either move out to ecosystem, or into SIG or Core repositories following the same approval process described below.
Nothing. We’ll grandfather existing projects and they can stay in the incubator
org for as long as they want to. We expect/hope that most projects will either
move out to ecosystem, or into SIG or Core repositories following the same
approval process described below.

**My project wants to graduate from incubator, how can it do that?**

Either approval from a SIG to graduate to a SIG repository, or approval from SIG-Architecture to graduate into the core repository.
Either approval from a SIG to graduate to a SIG repository, or approval from
SIG-Architecture to graduate into the core repository.

**My incubator project wants to go GA, how can it do that?**

For now, the project determines if and when it is GA. For the future, we may define a cross Kubernetes notion of GA for core and sig repositories, but that’s not in this proposal.
For now, the project determines if and when it is GA. For the future, we may
define a cross Kubernetes notion of GA for core and sig repositories, but that’s
not in this proposal.

**My project is currently in core, but doesn’t seem to fit these guidelines, what’s going to happen?**
**My project is currently in core, but doesn’t seem to fit these guidelines,
what’s going to happen?**

For now, nothing. Eventually, we may redistribute projects, but for now the goal is to adapt the process going forward, not re-legislate past decisions.
For now, nothing. Eventually, we may redistribute projects, but for now the goal
is to adapt the process going forward, not re-legislate past decisions.

**I’m starting a new project, what should I do?**

Is this a SIG-sponsored project? If so, convince some SIG to host it, take it to the SIG mailing list, meeting and get consensus, then the SIG can create a repo for you in the SIG organization.
Is this a SIG-sponsored project? If so, convince some SIG to host it, take it to
the SIG mailing list, meeting and get consensus, then the SIG can create a repo
for you in the SIG organization.

Is this a small-group or personal project? If so, create a repository wherever you’d like, and make it an associated project.
Is this a small-group or personal project? If so, create a repository wherever
you’d like, and make it an associated project.

We suggest starting with the kubernetes-template-project to ensure you have the correct code of conduct, license, etc.
We suggest starting with the kubernetes-template-project to ensure you have the
correct code of conduct, license, etc.

**Much of what is needed (e.g. CLA Bot integration) is missing to support associated projects. Many things seem vague. Help!**
**Much of what is needed (e.g. CLA Bot integration) is missing to support
associated projects. Many things seem vague. Help!**

True, we need to improve these things. For now, do the best you can to conform to the spirit of the proposal (e.g. post the code of conduct, etc)
True, we need to improve these things. For now, do the best you can to conform
to the spirit of the proposal (e.g. post the code of conduct, etc)

[GitHub's archive feature]: https://help.github.com/articles/archiving-a-github-repository/
[GitHub's archive feature]:
https://help.github.com/articles/archiving-a-github-repository/
[kubernetes-retired]: https://github.com/kubernetes-retired
@ -39,9 +39,9 @@ managed by the Kubernetes project or use a different name.

## Transferring Outside Code Into A Kubernetes Organization

Due to licensing and CLA issues, prior to transferring software into a Kubernetes
managed organization there is some due diligence that needs to occur. Please
contact the steering committee and CNCF prior to moving any code in.
Due to licensing and CLA issues, prior to transferring software into a
Kubernetes managed organization there is some due diligence that needs to occur.
Please contact the steering committee and CNCF prior to moving any code in.

It is easier to start new code in a Kubernetes organization than it is to
transfer in existing code.

@ -53,8 +53,9 @@ Each organization should have the following teams:
- teams for each repo `foo`
  - `foo-admins`: granted admin access to the `foo` repo
  - `foo-maintainers`: granted write access to the `foo` repo
  - `foo-reviewers`: granted read access to the `foo` repo; intended to be used as
    a notification mechanism for interested/active contributors for the `foo` repo
  - `foo-reviewers`: granted read access to the `foo` repo; intended to be used
    as a notification mechanism for interested/active contributors for the `foo`
    repo
- a `bots` team
  - should contain bots such as @k8s-ci-robot and @thelinuxfoundation that are
    necessary for org and repo automation

@ -69,18 +70,20 @@ for all orgs going forward. Notable discrepancies at the moment:

- `foo-reviewers` teams are considered a historical subset of
  `kubernetes-sig-foo-pr-reviews` teams and are intended mostly as a fallback
  notification mechanism when requested reviewers are being unresponsive. Ideally
  OWNERS files can be used in lieu of these teams.
  notification mechanism when requested reviewers are being unresponsive.
  Ideally OWNERS files can be used in lieu of these teams.
- `admins-foo` and `maintainers-foo` teams as used by the kubernetes-incubator
  org. This was a mistake that swapped the usual convention, and we would like
  to rename the team

## Repository Guidance

Repositories have additional guidelines and requirements, such as the use of
CLA checking on all contributions. For more details on those please see the
[Kubernetes Template Project](https://github.com/kubernetes/kubernetes-template-project), and the [Repository Guidelines](kubernetes-repositories.md)
Repositories have additional guidelines and requirements, such as the use of CLA
checking on all contributions. For more details on those please see the
[Kubernetes Template
Project](https://github.com/kubernetes/kubernetes-template-project), and the
[Repository Guidelines](kubernetes-repositories.md)

[GitHub Administration Team]: /github-management/README.md#github-administration-team
[GitHub Administration Team]:
/github-management/README.md#github-administration-team
[@kubernetes/owners]: https://github.com/orgs/kubernetes/teams/owners
@ -1,60 +1,76 @@
# Setting up the CNCF CLA check

If you are trying to sign the CLA so your PRs can be merged, please
[read the CLA docs](https://git.k8s.io/community/CLA.md)
If you are trying to sign the CLA so your PRs can be merged, please [read the
CLA docs](https://git.k8s.io/community/CLA.md)

If you are a Kubernetes GitHub organization or repo owner, and would like to set up
the Linux Foundation CNCF CLA check for your repositories, please read on.
If you are a Kubernetes GitHub organization or repo owner, and would like to
set up the Linux Foundation CNCF CLA check for your repositories, please read on.

## Set up the webhook

1. Go to the settings for your organization or repo, and choose Webhooks from the menu, then
   "Add webhook"
   - Payload URL: https://identity.linuxfoundation.org/lfcla/github/postreceive?group=284&comment=no&target=https://identity.linuxfoundation.org/projects/cncf
     - `group=284` specifies the ID of the CNCF project authorized committers group in our CLA system.
     - `comment=no` specifies that our system should not post help comments into the pull request (since the Kubernetes mungebot does this).
     - `target=https://identity.linuxfoundation.org/projects/cncf` specifies what will be used for the "Details" link in GitHub for this status check.
1. Go to the settings for your organization or repo, and choose Webhooks from
   the menu, then "Add webhook"
   - Payload URL:
     `https://identity.linuxfoundation.org/lfcla/github/postreceive?group=284&comment=no&target=https://identity.linuxfoundation.org/projects/cncf`
     - `group=284` specifies the ID of the CNCF project authorized committers
       group in our CLA system.
     - `comment=no` specifies that our system should not post help comments
       into the pull request (since the Kubernetes mungebot does this).
     - `target=https://identity.linuxfoundation.org/projects/cncf` specifies
       what will be used for the "Details" link in GitHub for this status
       check.
   - Content Type: 'application/json'
   - Secret: Please contact [@idvoretskyi](mailto:ihor@cncf.io), and [@caniszczyk](mailto:caniszczyk@linuxfoundation.org).
   - Secret: Please contact [@idvoretskyi](mailto:ihor@cncf.io), and
     [@caniszczyk](mailto:caniszczyk@linuxfoundation.org).
   - Events: Let me select individual events
     - Push: **unchecked**
     - Pull request: checked
     - Issue comment: checked
   - Active: checked
1. Add the [@thelinuxfoundation](https://github.com/thelinuxfoundation) GitHub user as an **Owner**
   to your organization or repo to ensure the CLA status can be applied on PRs
1. After you send an invite, contact the [Linux Foundation](mailto:helpdesk@rt.linuxfoundation.org); and cc [Chris Aniszczyk](mailto:caniszczyk@linuxfoundation.org), [Ihor Dvoretskyi](mailto:ihor@cncf.io), [Eric Searcy](mailto:eric@linuxfoundation.org) (to ensure that the invite gets accepted).
1. Add the [@thelinuxfoundation](https://github.com/thelinuxfoundation) GitHub
   user as an **Owner** to your organization or repo to ensure the CLA status can
   be applied on PRs
1. After you send an invite, contact the [Linux
   Foundation](mailto:helpdesk@rt.linuxfoundation.org); and cc [Chris
   Aniszczyk](mailto:caniszczyk@linuxfoundation.org), [Ihor
   Dvoretskyi](mailto:ihor@cncf.io), [Eric Searcy](mailto:eric@linuxfoundation.org)
   (to ensure that the invite gets accepted).
1. Finally, open up a test PR to check that:
   1. webhooks are delivered correctly, which can be monitored in the “settings” for your org
   1. webhooks are delivered correctly, which can be monitored in the
      “settings” for your org
   1. the PR gets the cla/linuxfoundation status
## Branch protection

It is recommended that the Linux Foundation CLA check be added as a strict requirement
for any change to be accepted to the master branch.
It is recommended that the Linux Foundation CLA check be added as a strict
requirement for any change to be accepted to the master branch.

To do this manually:

1. Go to the Settings for the repository, and choose Branches from the menu.
1. Under Protected Branches, choose "master".
1. Check "Protect this branch".
1. Check "Require status checks to pass before merging", "Require branches to be up to date before merging", and the "cla/linuxfoundation" status check.
1. Check "Require status checks to pass before merging", "Require branches to be
   up to date before merging", and the "cla/linuxfoundation" status check.

Given the Kubernetes project anticipates having "human reviewed" CLA acceptance, you may
not do the last step, but it is still recommended to enable branch protection to require all
changes to be done through pull requests, instead of direct pushing that will never kick off
a CLA check.
Given the Kubernetes project anticipates having "human reviewed" CLA
acceptance, you may not do the last step, but it is still recommended to enable
branch protection to require all changes to be done through pull requests,
instead of direct pushing that will never kick off a CLA check.
## Label automation

The label automation is done using the [CLA plugin in prow](https://git.k8s.io/test-infra/prow/plugins/cla).
In order to turn on the CLA labels on your repo, add it as appropriate within the
[plugins.yaml](https://git.k8s.io/test-infra/prow/plugins.yaml), and add the cla plugin to it.
The label automation is done using the [CLA plugin in
prow](https://git.k8s.io/test-infra/prow/plugins/cla). In order to turn on the
CLA labels on your repo, add it as appropriate within the
[plugins.yaml](https://git.k8s.io/test-infra/prow/plugins.yaml), and add the cla
plugin to it.

You also need to add [@k8s-ci-robot](https://github.com/k8s-ci-robot) as one of the owners in
the same org/repo, to ensure that it can add labels `cncf-cla: yes` and `cncf-cla: no` based
on the status published by the Linux Foundation webhook.
You also need to add [@k8s-ci-robot](https://github.com/k8s-ci-robot) as one of
the owners in the same org/repo, to ensure that it can add labels `cncf-cla:
yes` and `cncf-cla: no` based on the status published by the Linux Foundation
webhook.

The label automation may not be essential for your repository, if you’re not using merge
automation. For repos with maintainers doing manual merges, GitHub protected branches may
suffice.
The label automation may not be essential for your repository, if you’re not
using merge automation. For repos with maintainers doing manual merges, GitHub
protected branches may suffice.
@ -26,10 +26,12 @@ See [community membership]
# Community groups

The project has 4 main types of groups:
1. Special Interest Groups, SIGs
2. Subprojects
3. Working Groups, WGs
4. Committees
* Special Interest Groups, SIGs
* Subprojects
* Working Groups, WGs
* Committees

## SIGs
@ -52,8 +54,8 @@ itself. Examples:
* Horizontal: Scalability, Architecture
* Project: Testing, Release, Docs, PM, Contributor Experience

SIGs must have at least one and ideally two SIG leads at any given
time. SIG leads are intended to be organizers and facilitators,
SIGs must have at least one and ideally two SIG chairs at any given
time. SIG chairs are intended to be organizers and facilitators,
responsible for the operation of the SIG and for communication and
coordination with the other SIGs, the Steering Committee, and the
broader community.
@ -62,7 +64,8 @@ Each SIG must have a charter that specifies its scope (topics,
subsystems, code repos and directories), responsibilities, areas of
authority, how members and roles of authority/leadership are
selected/granted, how decisions are made, and how conflicts are
resolved. A [short template] for intra-SIG governance has been
resolved. See the [SIG charter process] for details on how charters are managed.
A [short template] for intra-SIG governance has been
developed in order to simplify SIG creation, and additional templates
are being developed, but SIGs should be relatively free to customize
or change how they operate, within some broad guidelines and
@ -79,7 +82,7 @@ community.
See [sig governance] for more details about current SIG operating
mechanics, such as mailing lists, meeting times, etc.

## Subprojects
### Subprojects

Specific work efforts within SIGs are divided into **subprojects**.
Every part of the Kubernetes code and documentation must be owned by
@ -102,19 +105,17 @@ Subprojects for each SIG are documented in [sigs.yaml](sigs.yaml).

We need community rallying points to facilitate discussions/work
regarding topics that are short-lived or that span multiple SIGs.
This is the purpose of Working Groups (WG). The intent is to make
Working Groups relatively easy to create and to deprecate, once
inactive.

To propose a new working group, first find a SIG to sponsor the group.
Next, send a proposal to kubernetes-dev@googlegroups.com and also include
any potentially interested SIGs. Wait for public comment. If there's
enough interest, a new Working Group should be formed.
Working groups are primarily used to facilitate topics of discussion that are in
scope for Kubernetes but that cross SIG lines. If a set of folks in the
community want to get together and discuss a topic, they can do so without
forming a Working Group.

See [working group governance] for more details about forming and disbanding
Working Groups.

Working groups are documented in [sigs.yaml](sigs.yaml).

Create a new mailing list in the form of kubernetes-wg-group-name. Working
groups typically have a Slack channel as well as regular meetings on Zoom.
It’s encouraged to keep a clear record of all accomplishments that is publicly
accessible.

## Committees
@ -124,7 +125,7 @@ open and anyone can join, Committees do not have open membership and do
not always operate in the open. The steering committee can form
committees as needed, for bounded or unbounded duration. Membership
of a committee is decided by the steering committee. Like a SIG, a
committee has a charter and a lead, and will report to the steering
committee has a charter and a chair, and will report to the steering
committee periodically, and to the community as makes sense, given the
charter.
@ -151,17 +152,15 @@ to its charter, will own the decision. In the case of extended debate
|
|||
or deadlock, decisions may be escalated to the Steering Committee,
|
||||
which is expected to be uncommon.
|
||||
|
||||
The exact processes and guidelines for such cross-project
|
||||
communication have yet to be formalized, but when in doubt, use
|
||||
kubernetes-dev@googlegroups.com and make an announcement at the
|
||||
community meeting.
|
||||
The [KEP process] is being developed as a way to facilitate definition, agreement and communication of efforts that cross SIG boundaries.
|
||||
SIGs are encouraged to use this process for larger efforts.
|
||||
This process is also available for smaller efforts within a SIG.
|
||||
|
||||
# Repository guidelines

-All repositories under Kubernetes github orgs, such as kubernetes and kubernetes-incubator,
-should follow the procedures outlined in the [incubator document](incubator.md). All code projects
-use the [Apache License version 2.0](LICENSE). Documentation repositories should use the
-[Creative Commons License version 4.0](https://git.k8s.io/website/LICENSE).
+All new repositories under Kubernetes github orgs should follow the process outlined in the [kubernetes repository guidelines].
+
+Note that the "Kubernetes incubator" process has been deprecated in favor of the new guidelines.

# CLA

@@ -170,6 +169,9 @@ All contributors must sign the CNCF CLA, as described [here](CLA.md).

[community membership]: /community-membership.md
[sig governance]: /sig-governance.md
[owners]: community-membership.md#subproject-owner
-[short template]: committee-steering/governance/sig-charter-template.md
[sig charter process]: committee-steering/governance/README.md
+[short template]: committee-steering/governance/sig-governance-template-short.md
+[kubernetes repository guidelines]: kubernetes-repositories.md
+[working group governance]: committee-steering/governance/wg-governance.md
@@ -1,2 +1,4 @@
events/elections/2017/
vendor/
sig-contributor-experience/contribex-survey-2018.csv
@@ -274,7 +274,7 @@ failure of the KEP process are

- distribution of time a KEP spends in each state
- KEP rejection rate
- PRs referencing a KEP merged per week
-- number of issued open which reference a KEP
+- number of issues open which reference a KEP
- number of contributors who authored a KEP
- number of contributors who authored a KEP for the first time
- number of orphaned KEPs

@@ -330,7 +330,7 @@ this proposal attempts to place these concerns within a general framework.

[accepted design and a proposal]: https://github.com/kubernetes/community/issues/914
[the organization of design proposals]: https://github.com/kubernetes/community/issues/918

-### Github issues vs. KEPs
+### GitHub issues vs. KEPs

The use of GitHub issues when proposing changes does not provide SIGs good
facilities for signaling approval or rejection of a proposed change to Kubernetes
@@ -1,17 +1,14 @@
reviewers:
- sig-architecture-leads
- jbeda
- bgrant0607
- jdumars
- calebamiles
- idvoretskyi
- jbeda
- justaugustus
approvers:
- sig-architecture-leads
- jbeda
- bgrant0607
- jdumars
- calebamiles
- idvoretskyi
- jbeda
labels:
- kind/kep
- sig/architecture
@@ -0,0 +1,289 @@
---
kep-number: 30
title: Migrating API objects to latest storage version
authors:
  - "@xuchao"
owning-sig: sig-api-machinery
reviewers:
  - "@deads2k"
  - "@yliaog"
  - "@lavalamp"
approvers:
  - "@deads2k"
  - "@lavalamp"
creation-date: 2018-08-06
last-updated: 2018-10-11
status: provisional
---

# Migrating API objects to latest storage version

## Table of Contents

* [Migrating API objects to latest storage version](#migrating-api-objects-to-latest-storage-version)
  * [Table of Contents](#table-of-contents)
  * [Summary](#summary)
  * [Motivation](#motivation)
    * [Goals](#goals)
  * [Proposal](#proposal)
    * [Alpha workflow](#alpha-workflow)
    * [Alpha API](#alpha-api)
    * [Failure recovery](#failure-recovery)
    * [Beta workflow - Automation](#beta-workflow---automation)
    * [Risks and Mitigations](#risks-and-mitigations)
  * [Graduation Criteria](#graduation-criteria)
  * [Alternatives](#alternatives)
    * [update-storage-objects.sh](#update-storage-objectssh)
## Summary

We propose a solution to migrate the stored API objects in Kubernetes clusters.
In 2018 Q4, we will deliver a tool of alpha quality. The tool builds on and
improves the [oc adm migrate storage][] command. We will integrate the
storage migration into the Kubernetes upgrade process in 2019 Q1. We will make
the migration automatically triggered in 2019.

[oc adm migrate storage]:https://www.mankier.com/1/oc-adm-migrate-storage

## Motivation

"Today it is possible to create API objects (e.g., HPAs) in one version of
Kubernetes, go through multiple upgrade cycles without touching those objects,
and eventually arrive at a version of Kubernetes that can’t interpret the stored
resource and crashes. See k8s.io/pr/52185."[1][] We propose a solution to this
problem.

[1]:https://docs.google.com/document/d/1eoS1K40HLMl4zUyw5pnC05dEF3mzFLp5TPEEt4PFvsM
### Goals

A successful storage version migration tool must:
* work for Kubernetes built-in APIs, custom resources (CR), and aggregated APIs.
* not add burden to cluster administrators or Kubernetes distributions.
* cause only insignificant load on apiservers. For example, if the master has
  10GB of memory, the migration tool should generate fewer than 10 qps of
  single-object operations (TODO: measure the memory consumption of PUT
  operations; study how well the default 10 Mbps bandwidth limit in the oc
  command works).
* work for big clusters that have ~10^6 instances of some resource types.
* make progress in flaky environments, e.g., with flaky apiservers, or when the
  migration process gets preempted.
* allow system administrators to track the migration progress.

As for the deliverables:
* in the short term, provide system administrators with a tool to migrate
  the Kubernetes built-in API objects to the proper storage versions.
* in the long term, automate the migration of Kubernetes built-in APIs, CRs,
  and aggregated APIs without further burdening system administrators or
  Kubernetes distributions.
## Proposal

### Alpha workflow

At the alpha stage, the migrator needs to be manually launched, and does not
handle custom resources or aggregated resources.

After all the kube-apiservers are at the desired version, the cluster
administrator runs `kubectl apply -f migrator-initializer-<k8s-version>.yaml`.
The apply command
* creates a *kube-storage-migration* namespace
* creates a *storage-migrator* service account
* creates a *system:storage-migrator* cluster role that can *get*, *list*, and
  *update* all resources, and in addition, *create* and *delete* CRDs
* creates a cluster role binding to bind the created service account with the
  cluster role
* creates a **migrator-initializer** job running with the
  *storage-migrator* service account.
The **migrator-initializer** job
* deletes any existing deployment of the **kube-migrator controller**
* creates a **kube-migrator controller** deployment running with the
  *storage-migrator* service account
* generates a comprehensive list of resource types via the discovery API
* discovers all custom resources via listing CRDs
* discovers all aggregated resources via listing all `apiservices` that have
  `.spec.service != null`
* removes the custom resources and aggregated resources from the comprehensive
  resource list, so that the list only contains Kubernetes built-in resources
* removes resources that share the same storage (at the alpha stage, this
  information is hard-coded, as in this [list][])
* creates the `migration` CRD (see the [API section][] for the schema) if it
  does not exist
* creates `migration` CRs for all remaining resources in the list. The
  `ownerReferences` of the `migration` objects are set to the **kube-migrator
  controller** deployment; thus, the old `migration`s are deleted with the old
  deployment in the first step.
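The resource-filtering steps above can be sketched in Go. This is a minimal, self-contained illustration, not the real initializer: `filterBuiltIn` and the plain string slices are stand-ins for the discovery client and the CRD/APIService listers.

```go
package main

import (
	"fmt"
	"sort"
)

// filterBuiltIn removes custom resources and aggregated resources from the
// comprehensive list returned by discovery, leaving only built-in resources.
// In the real initializer the inputs would come from the discovery API, the
// CRD list, and the APIService list; here they are plain string slices.
func filterBuiltIn(all, custom, aggregated []string) []string {
	skip := map[string]bool{}
	for _, r := range custom {
		skip[r] = true
	}
	for _, r := range aggregated {
		skip[r] = true
	}
	var builtIn []string
	for _, r := range all {
		if !skip[r] {
			builtIn = append(builtIn, r)
		}
	}
	sort.Strings(builtIn)
	return builtIn
}

func main() {
	all := []string{"deployments.apps", "pods", "foos.example.com", "metrics.custom.metrics.k8s.io"}
	custom := []string{"foos.example.com"}                  // from listing CRDs
	aggregated := []string{"metrics.custom.metrics.k8s.io"} // apiservices with .spec.service != null
	fmt.Println(filterBuiltIn(all, custom, aggregated))     // prints: [deployments.apps pods]
}
```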
The control loop of the **kube-migrator controller** does the following:
* runs a reflector to watch for instances of the `migration` CR. The list
  function used to construct the reflector sorts the `migration`s so that
  *Running* `migration`s are processed first.
* syncs one `migration` at a time to avoid overloading the apiserver.
* if `migration.status` is nil, or `migration.status.conditions` shows
  *Running*, creates a **migration worker** goroutine to migrate the
  resource type.
* adds the *Running* condition to `migration.status.conditions`.
* waits until the **migration worker** goroutine finishes, then adds either the
  *Succeeded* or *Failed* condition to `migration.status.conditions` and sets
  the *Running* condition to false.
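The control loop above can be sketched as a self-contained simulation. Everything here is a simplified stand-in (a `migration` struct with flags instead of the CR, a stubbed worker instead of a goroutine); the point is the Running-first ordering and the one-at-a-time condition handling.

```go
package main

import (
	"errors"
	"fmt"
	"sort"
)

// A trimmed-down stand-in for the migration CR: the real object carries full
// MigrationCondition structs; here a bool and a string are enough.
type migration struct {
	name    string
	running bool
	done    string // "", "Succeeded", or "Failed"
}

// sortRunningFirst mirrors the list function used to build the reflector:
// migrations already marked Running are processed first.
func sortRunningFirst(ms []*migration) {
	sort.SliceStable(ms, func(i, j int) bool {
		return ms[i].running && !ms[j].running
	})
}

// syncOne processes a single migration: set the Running condition, run the
// (stubbed) migration worker, then record the terminal condition and clear
// Running.
func syncOne(m *migration, worker func(string) error) {
	m.running = true
	if err := worker(m.name); err != nil {
		m.done = "Failed"
	} else {
		m.done = "Succeeded"
	}
	m.running = false
}

func main() {
	ms := []*migration{{name: "pods"}, {name: "deployments", running: true}}
	sortRunningFirst(ms) // "deployments" was already Running, so it goes first
	worker := func(name string) error {
		if name == "pods" {
			return errors.New("apiserver flake")
		}
		return nil
	}
	for _, m := range ms { // one migration at a time
		syncOne(m, worker)
	}
	for _, m := range ms {
		fmt.Println(m.name, m.done)
	}
}
```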
The **migration worker** runs the equivalent of `oc adm migrate storage
--include=<resource type>` to migrate a resource type. The **migration worker**
uses API chunking to retrieve partial lists of a resource type and thus can
migrate a small chunk at a time. It stores the [continue token] in the owning
`migration.spec.continueToken`. With the inconsistent continue token
introduced in [#67284][], the **migration worker** does not need to worry about
expired continue tokens.

[list]:https://github.com/openshift/origin/blob/2a8633598ef0dcfa4589d1e9e944447373ac00d7/pkg/oc/cli/admin/migrate/storage/storage.go#L120-L184
[#67284]:https://github.com/kubernetes/kubernetes/pull/67284
[API section]:#alpha-api
The cluster admin can run `kubectl wait --for=condition=Succeeded
migrations` to wait for all migrations to succeed.

Users can run `kubectl create` to create `migration`s to request migrating
custom resources and aggregated resources.

### Alpha API

We introduce the `storageVersionMigration` API to record the intention and the
progress of a migration. Throughout this doc, we abbreviate it as `migration`
for simplicity. The API will be a CRD defined in the `migration.k8s.io` group.

Read the [workflow section][] to understand how the API is used.
```golang
type StorageVersionMigration struct {
	metav1.TypeMeta
	// For readers of this KEP, metadata.generateName will be "<resource>.<group>"
	// of the resource being migrated.
	metav1.ObjectMeta
	Spec   StorageVersionMigrationSpec
	Status StorageVersionMigrationStatus
}

// Note that the spec only contains an immutable field in the alpha version. To
// request another round of migration for the resource, clients need to create
// another `migration` CR.
type StorageVersionMigrationSpec struct {
	// Resource is the resource that is being migrated. The migrator sends
	// requests to the endpoint tied to the Resource.
	// Immutable.
	Resource GroupVersionResource
	// ContinueToken is the token to use in the list options to get the next
	// chunk of objects to migrate. When the .status.conditions indicates the
	// migration is "Running", users can use this token to check the progress
	// of the migration.
	// +optional
	ContinueToken string
}

type MigrationConditionType string

const (
	// MigrationRunning indicates that a migrator job is running.
	MigrationRunning MigrationConditionType = "Running"
	// MigrationSucceeded indicates that the migration has completed successfully.
	MigrationSucceeded MigrationConditionType = "Succeeded"
	// MigrationFailed indicates that the migration has failed.
	MigrationFailed MigrationConditionType = "Failed"
)

type MigrationCondition struct {
	// Type of the condition.
	Type MigrationConditionType
	// Status of the condition, one of True, False, Unknown.
	Status corev1.ConditionStatus
	// The last time this condition was updated.
	LastUpdateTime metav1.Time
	// The reason for the condition's last transition.
	Reason string
	// A human readable message indicating details about the transition.
	Message string
}

type StorageVersionMigrationStatus struct {
	// Conditions represents the latest available observations of the
	// migration's current state.
	Conditions []MigrationCondition
}
```

[continue token]:https://github.com/kubernetes/kubernetes/blob/972e1549776955456d9808b619d136ee95ebb388/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L82
[workflow section]:#alpha-workflow
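As a sketch of how a controller might maintain these conditions: the `setCondition` helper below is an assumption of this example (the KEP does not define one), and `time.Time` plus plain strings stand in for `metav1.Time` and `corev1.ConditionStatus` so the sketch compiles on its own.

```go
package main

import (
	"fmt"
	"time"
)

// Simplified stand-in for the MigrationCondition type above.
type MigrationCondition struct {
	Type            string
	Status          string // "True", "False", or "Unknown"
	LastUpdateTime  time.Time
	Reason, Message string
}

type StorageVersionMigrationStatus struct {
	Conditions []MigrationCondition
}

// setCondition updates the condition of the given type in place, or appends
// it if absent, stamping LastUpdateTime on every change.
func setCondition(s *StorageVersionMigrationStatus, condType, condStatus, reason string) {
	now := time.Now()
	for i := range s.Conditions {
		if s.Conditions[i].Type == condType {
			s.Conditions[i].Status = condStatus
			s.Conditions[i].Reason = reason
			s.Conditions[i].LastUpdateTime = now
			return
		}
	}
	s.Conditions = append(s.Conditions, MigrationCondition{
		Type: condType, Status: condStatus, Reason: reason, LastUpdateTime: now,
	})
}

func main() {
	// Mirror the controller's sequence: mark Running, then on completion add
	// Succeeded and set Running to false.
	var st StorageVersionMigrationStatus
	setCondition(&st, "Running", "True", "MigrationStarted")
	setCondition(&st, "Succeeded", "True", "AllChunksMigrated")
	setCondition(&st, "Running", "False", "MigrationFinished")
	for _, c := range st.Conditions {
		fmt.Println(c.Type, c.Status, c.Reason)
	}
}
```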
### Failure recovery

As stated in the goals section, the migration has to make progress even if the
environment is flaky. This section describes how the migrator recovers from
failure.

The Kubernetes **replicaset controller** restarts the **migration controller**
`pod` if it fails. Because the migration states, including the continue token,
are stored in the `migration` object, the **migration controller** can resume
from where it left off.
### Beta workflow - Automation

It is a beta goal to automate the migration workflow. That is, migration does
not need to be triggered manually by cluster admins, or by custom control loops
of Kubernetes distributions.

The automated migration should work for Kubernetes built-in resource types,
custom resources, and aggregated resources.

The trigger can be implemented as a separate control loop. It watches for the
triggering signal, and creates `migration`s to notify the **kube-migrator
controller** to migrate a resource.

We haven't reached consensus on what signal would trigger storage migration. We
will revisit this section during beta design.
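A purely illustrative sketch of such a trigger loop: since no triggering signal has been chosen yet, the signal here is an assumption, modeled as a channel of resource names whose storage version changed.

```go
package main

import "fmt"

// runTrigger drains a stream of signals and creates a migration request for
// each one; the kube-migrator controller would then pick these up. The
// createMigration callback stands in for creating a `migration` CR.
func runTrigger(signals <-chan string, createMigration func(resource string)) {
	for r := range signals {
		createMigration(r)
	}
}

func main() {
	// Simulate two storage-version-change signals.
	signals := make(chan string, 2)
	signals <- "deployments.apps"
	signals <- "foos.example.com"
	close(signals)

	var created []string
	runTrigger(signals, func(r string) { created = append(created, r) })
	fmt.Println(created) // prints: [deployments.apps foos.example.com]
}
```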
### Risks and Mitigations

The migration process does not change the objects, so it will not pollute
existing data.

If the rate limiting is not tuned well, the migration can overload the
apiserver. Users can delete the migration controller and the migration
jobs to mitigate.

Before upgrading or downgrading the cluster, the cluster administrator must run
`kubectl wait --for=condition=Succeeded migrations` to make sure all
migrations have completed. Otherwise the apiserver can crash, because it cannot
interpret the serialized data in etcd. To mitigate, the cluster administrator
can roll back the apiserver to the old version and wait for the migration to
complete. Even if the apiserver does not crash after upgrading or downgrading,
the `migration` objects are no longer accurate, because the default storage
versions might have changed after upgrading or downgrading, but no one
increments the `migration.spec.generation`. The administrator needs to re-run
the `kubectl run migrate --image=migrator-initializer --restart=OnFailure`
command to recover.

TODO: it is safe to roll back an apiserver to the previous configuration
without waiting for the migration to complete. It is only unsafe to
roll forward, or to roll back twice. We need to design how to record the
previous configuration.
## Graduation Criteria

* alpha: delivering a tool that implements the "alpha workflow" and "failure
  recovery" sections. ETA is 2018 Q4.

* beta: implementing the "beta workflow" and integrating the storage migration
  into Kubernetes upgrade tests.

* GA: TBD.

We will revisit this section in 2018 Q4.
## Alternatives

### update-storage-objects.sh

The Kubernetes repo has an update-storage-objects.sh script. It is not
production-ready: no rate limiting, hard-coded resource types, no persisted
migration states. We will delete it, leaving a breadcrumb for any users to
follow to the new tool.