Merge remote-tracking branch 'upstream/master'

This commit is contained in:
Ihor Dvoretskyi 2018-06-12 16:29:42 +00:00
commit cdc99f5bb3
26 changed files with 306 additions and 49 deletions

View File

@ -28,6 +28,7 @@ aliases:
sig-big-data-leads:
- foxish
- erikerlandson
- liyinan926
sig-cli-leads:
- soltysh
- pwittrock

View File

@ -558,7 +558,7 @@ VPA controls the request (memory and CPU) of containers. In MVP it always sets
the limit to infinity. It is not yet clear whether there is a use-case for VPA
setting the limit.
The request is calculated based on analysis of the current and revious runs of
The request is calculated based on analysis of the current and previous runs of
the container and other containers with similar properties (name, image,
command, args).
The recommendation model (MVP) assumes that the memory and CPU consumption are

View File

@ -0,0 +1,51 @@
# Multicluster reserved namespaces
@perotinus
06/06/2018
## Background
sig-multicluster has identified the need for a canonical set of namespaces that
can be used for supporting multicluster applications and use cases. Initially,
an [issue](https://github.com/kubernetes/cluster-registry/issues/221) was filed
in the cluster-registry repository describing the need for a namespace that
would be used for public, global cluster records. This topic was further
discussed at the
[SIG meeting on June 5, 2018](https://www.youtube.com/watch?v=j6tHK8_mWz8&t=3012)
and in a
[thread](https://groups.google.com/forum/#!topic/kubernetes-sig-multicluster/8u-li_ZJpDI)
on the SIG mailing list.
## Reserved namespaces
We determined that there is currently a strong case for two reserved namespaces
for multicluster use:
- `kube-multicluster-public`: a global, public namespace for storing cluster
registry Cluster objects. If there are other custom resources that
correspond with the global, public Cluster objects, they can also be stored
here. For example, a custom resource that contains cloud-provider-specific
metadata about a cluster. Tools built against the cluster registry can
expect to find the canonical set of Cluster objects in this namespace[1].
- `kube-multicluster-system`: an administrator-accessible namespace that
contains components, such as multicluster controllers and their
dependencies, that are not meant to be seen by most users directly.
The definition of these namespaces is not intended to be exhaustive: in the
future, there may be reason to define more multicluster namespaces, and
potentially conventions for namespaces that are replicated between clusters (for
example, to support a global cluster list that is replicated to all clusters
that are contained in the list).
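For illustration only (this sketch is not part of the proposal), an
administrator could pre-create the two reserved namespaces with standard
kubectl commands; only the namespace names below come from this document:
```sh
# Sketch: create the proposed reserved multicluster namespaces.
kubectl create namespace kube-multicluster-public
kubectl create namespace kube-multicluster-system
```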
## Conventions for reserved namespaces
By convention, resources in these namespaces are local to the clusters in which
they exist and will not be replicated to other clusters. In other words, these
namespaces are private to the clusters they are in, and multicluster operations
must not replicate them or their resources into other clusters.
[1] Tools are by no means compelled to look in this namespace for clusters, and
can choose to reference Cluster objects from other namespaces as is suitable to
their design and environment.

View File

@ -742,7 +742,9 @@ APIs may return alternative representations of any resource in response to an
Accept header or under alternative endpoints, but the default serialization for
input and output of API responses MUST be JSON.
Protobuf serialization of API objects are currently **EXPERIMENTAL** and will change without notice.
A protobuf encoding is also accepted for built-in resources. As proto is not
self-describing, there is an envelope wrapper which describes the type of
the contents.
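As a hedged illustration (not part of the conventions themselves), a client
can request the protobuf encoding of a built-in resource by sending the
`application/vnd.kubernetes.protobuf` content type in the Accept header; the
pod name and the use of `kubectl proxy` below are assumptions made only to
keep the example self-contained:
```sh
# Sketch: fetch a Pod in the protobuf encoding through a local kubectl proxy.
kubectl proxy --port=8001 &
curl -H 'Accept: application/vnd.kubernetes.protobuf' \
  http://localhost:8001/api/v1/namespaces/default/pods/my-pod \
  -o my-pod.pb
# The output is the envelope-wrapped protobuf payload described above,
# not self-describing JSON, so it is not directly human-readable.
```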
All dates should be serialized as RFC3339 strings.

View File

@ -2,19 +2,26 @@
CRI validation testing provides a test framework and a suite of tests to validate that the Container Runtime Interface (CRI) server implementation meets all the requirements. This allows the CRI runtime developers to verify that their runtime conforms to CRI, without needing to set up Kubernetes components or run Kubernetes end-to-end tests.
CRI validation testing is currently Alpha and is hosted at the [cri-tools](https://github.com/kubernetes-incubator/cri-tools) repository. Performance benchmarking will be added in the future. We encourage the CRI developers to report bugs or help extend the test coverage by adding more tests.
CRI validation testing is GA since v1.11.0 and is hosted at the [cri-tools](https://github.com/kubernetes-incubator/cri-tools) repository. We encourage the CRI developers to report bugs or help extend the test coverage by adding more tests.
## Install
The test suites can be installed easily via `go get` command:
The test suites can be downloaded from cri-tools [release page](https://github.com/kubernetes-incubator/cri-tools/releases):
```sh
go get github.com/kubernetes-incubator/cri-tools/cmd/critest
VERSION="v1.11.0"
wget https://github.com/kubernetes-incubator/cri-tools/releases/download/$VERSION/critest-$VERSION-linux-amd64.tar.gz
sudo tar zxvf critest-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f critest-$VERSION-linux-amd64.tar.gz
```
Then `critest` binary can be found in `$GOPATH/bin`.
critest requires [ginkgo](https://github.com/onsi/ginkgo) to run parallel tests. It could be installed by
*Note: ensure GO is installed and GOPATH is set before installing critest.*
```sh
go get -u github.com/onsi/ginkgo/ginkgo
```
*Note: ensure GO is installed and GOPATH is set before installing ginkgo.*
## Running tests
@ -25,7 +32,7 @@ Before running the test, you need to _ensure that the CRI server under test is r
### Run
```sh
critest validation
critest
```
This will
@ -34,15 +41,13 @@ This will
- Run the tests using `ginkgo`
- Output the test results to STDOUT
critest connects to `/var/run/dockershim.sock` by default. For other runtimes, the endpoint can be set in two ways:
- By setting flags `--runtime-endpoint` and `--image-endpoint`
- By setting environment variables `CRI_RUNTIME_ENDPOINT` and `CRI_IMAGE_ENDPOINT`
critest connects to `unix:///var/run/dockershim.sock` by default. For other runtimes, the endpoint can be set by flags `-runtime-endpoint` and `-image-endpoint`.
## Additional options
- `--focus`, `-f`: Only run the tests that match the regular expression.
- `--ginkgo-flags`, `-g`: Space-separated list of arguments to pass to Ginkgo test runner.
- `--image-endpoint`, `-i`: Set the endpoint of image service. Same with runtime-endpoint if not specified.
- `--runtime-endpoint`, `-r`: Set the endpoint of runtime service. Default to `/var/run/dockershim.sock`.
- `--skip`, `-s`: Skip the tests that match the regular expression.
- `-ginkgo.focus`: Only run the tests that match the regular expression.
- `-image-endpoint`: Set the endpoint of image service. Same with runtime-endpoint if not specified.
- `-runtime-endpoint`: Set the endpoint of runtime service. Default to `unix:///var/run/dockershim.sock`.
- `-ginkgo.skip`: Skip the tests that match the regular expression.
- `-parallel`: The number of parallel test nodes to run (default 1). ginkgo must be installed to run parallel tests.
- `-h`: Show help and all supported options.
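For example, the flags above can be combined as follows; the containerd socket
path and the focus pattern are illustrative assumptions, not defaults taken
from this document:
```sh
# Sketch: run critest against containerd's CRI socket, focusing on a subset of
# tests and using 4 parallel test nodes (ginkgo must be installed for -parallel).
critest \
  -runtime-endpoint unix:///run/containerd/containerd.sock \
  -image-endpoint unix:///run/containerd/containerd.sock \
  -ginkgo.focus "PodSandbox" \
  -parallel 4
```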

View File

@ -414,8 +414,14 @@ end of aforementioned script.
#### Testing against local clusters
In order to run an E2E test against a locally running cluster, point the tests
at a custom host directly:
In order to run an E2E test against a locally running cluster, first make sure
to have a local build of the tests:
```sh
go run hack/e2e.go -- --build
```
Then point the tests at a custom host directly:
```sh
export KUBECONFIG=/path/to/kubeconfig
@ -430,6 +436,14 @@ To control the tests that are run:
go run hack/e2e.go -- --provider=local --test --test_args="--ginkgo.focus=Secrets"
```
You will also likely need to specify `minStartupPods` to match the number of
nodes in your cluster. If you're testing against a cluster set up by
`local-up-cluster.sh`, you will need to do the following:
```sh
go run hack/e2e.go -- --provider=local --test --test_args="--minStartupPods=1 --ginkgo.focus=Secrets"
```
### Version-skewed and upgrade testing
We run version-skewed tests to check that newer versions of Kubernetes work

View File

@ -1,13 +1,141 @@
# Help Wanted
# Overview
We use two labels, [help wanted](#help-wanted) and [good first
issue](#good-first-issue), to identify issues that have been specially groomed
for new contributors. The `good first issue` label is a subset of the `help
wanted` label, indicating that members have committed to providing extra
assistance for new contributors. All `good first issue` items also have the
`help wanted` label.
We also have some [suggestions](#suggestions) for using these labels to help
grow and improve our community.
## Help Wanted
Items marked with the `help wanted` label need to ensure that they are:
- Sufficiently actionable: clear description of what should be done
- Tractable for new/casual contributors: there is documentation how that type of change should be made
- Goldilocks priority: Not too high that a core contributor should do it, but not too low that it isn't useful enough for a core contributor to spend time to review it, answer questions, help get it into a release, etc.
- Up to date: Often these issues become obsolete and have already been done, are no longer desirable, no longer make sense, change in priority, change in difficulty, etc.
- **Low Barrier to Entry**
It should be tractable for new contributors. Documentation on how that type of
change should be made should already exist.
- **Clear Task**
The task is agreed upon and does not require further discussions in the
community. Call out if that area of code is untested and requires new
fixtures.
API / CLI behavior is decided and included in the OP issue, for example: _"The
new command syntax is `svcat unbind NAME [--orphan] [--timeout 5m]`"_, with
expected validations called out.
- **Goldilocks priority**
Not too high that a core contributor should do it, but not too low that it
isn't useful enough for a core contributor to spend time to review it, answer
questions, help get it into a release, etc.
- **Up-To-Date**
Often these issues become obsolete and have already been done, are no longer
desired, no longer make sense, have changed priority or difficulty, etc.
Related commands:
- `/help` : adds the `help wanted` label to an issue
- `/remove-help` : removes the `help wanted` label from an issue
- `/help` : Adds the `help wanted` label to an issue.
- `/remove-help` : Removes the `help wanted` label from an issue. If the
`good first issue` label is present, it is removed as well.
## Good First Issue
Items marked with the `good first issue` label are intended for _first-time
contributors_. The label indicates that members will keep an eye out for these
pull requests and shepherd them through our processes.
**New contributors should not be left to find an approver, ping for reviews,
decipher prow commands, or identify that their build failed due to a flake.**
This makes new contributors feel welcome, valued, and assures them that they
will have an extra level of help with their first contribution.
After a contributor has successfully completed one or two `good first issue`
items, they should be ready to move on to `help wanted` items, saving the
remaining `good first issue` items for other new contributors.
These items need to ensure that they follow the guidelines for `help wanted`
labels (above) in addition to meeting the following criteria:
- **No Barrier to Entry**
The task is something that a new contributor can tackle without advanced
setup, or domain knowledge.
- **Solution Explained**
The recommended solution is clearly described in the issue.
- **Provides Context**
If background knowledge is required, this should be explicitly mentioned and a
list of suggested readings included.
- **Gives Examples**
Link to examples of similar implementations so new contributors have a
reference guide for their changes.
- **Identifies Relevant Code**
The relevant code and tests to be changed should be linked in the issue.
- **Ready to Test**
There should be existing tests that can be modified, or existing test cases
fit to be copied. If the area of code doesn't have tests, before labeling the
issue, add a test fixture. This prep often makes a great `help wanted` task!
Related commands:
- `/good-first-issue` : Adds the `good first issue` label to an issue. Also adds
the `help wanted` label, if not already present.
- `/remove-good-first-issue` : Removes the `good first issue` label from an
issue.
# Suggestions
We encourage our more experienced members to help new contributors, so that the
Kubernetes community can continue to grow and maintain the kind, inclusive
community that we all enjoy today.
The following suggestions go a long way toward preventing "drive-by" PRs, and
ensure that our investment in new contributors is rewarded by them coming back
and becoming regulars.
Provide extra assistance during reviews on `good first issue` pull requests:
- Answer questions and identify useful docs.
- Offer advice such as _"One way to reproduce this in a cluster is to do X and
then you can use kubectl to poke around"_, or _"Did you know that you can
use fake clients to set up and test this more easily?"_.
- Help new contributors learn enough about the project (setting up their
environment, running tests, and navigating this area of the code) so that they
can tackle a related `help wanted` issue next time.
If you make someone feel like a part of our community, that it's safe to ask
questions, that people will let them know the rules/norms, that their
contributions are helpful and appreciated... they will stick around! 🌈
- Encourage new contributors to seek help on the appropriate slack channels,
introduce them, and include them in your conversations.
- Invite them to the SIG meetings.
- Give credit to new contributors so that others get to know them, _"Hey, would
someone help give a second LGTM on @newperson's first PR on chocolate
bunnies?"_. Mention them in the SIG channel/meeting, thank them on twitter or
#shoutouts.
- Use all the emoji in your approve or lgtm comment. 💖 🚀
- Let them know that their `good first issue` is getting extra attention to make
the first one easier and help them find a follow-up issue.
- Suggest a related `help wanted` issue so that they can build up experience in an area.
- People are more likely to continue contributing when they know what to expect:
the acceptable way to ask people to review a PR, and how to nudge things along
when a PR is stalled. Show them how we operate by helping move their first PR
along.
- If you have time, let the contributor know that they can DM you with questions
that they aren't yet comfortable asking the wider group.

View File

@ -72,7 +72,9 @@ You get the idea - if you ever see something you think should be fixed, you shou
### Find a good first topic
There are multiple repositories within the Kubernetes community and a full list of repositories can be found [here](https://github.com/kubernetes/).
Each repository in the Kubernetes organization has beginner-friendly issues that provide a good first issue. For example, [kubernetes/kubernetes](https://git.k8s.io/kubernetes) has [help wanted issues](https://go.k8s.io/help-wanted) that should not need deep knowledge of the system.
Each repository in the Kubernetes organization has beginner-friendly issues that provide a good first issue. For example, [kubernetes/kubernetes](https://git.k8s.io/kubernetes) has [help wanted](https://go.k8s.io/help-wanted) and [good first issue](https://github.com/kubernetes/kubernetes/labels/good%20first%20issue) labels for issues that should not need deep knowledge of the system.
The `good first issue` label indicates that members have committed to providing extra assistance for new contributors.
Another good strategy is to find a documentation improvement, such as a missing/broken link, which will give you exposure to the code submission/review process without the added complication of technical depth. Please see [Contributing](#contributing) below for the workflow.
### Learn about SIGs

View File

@ -33,7 +33,7 @@ for other github repositories related to Kubernetes is TBD.
Most people can leave comments and open issues. They don't have the ability to
set labels, change milestones and close other people's issues. For that we use
a bot to manage labelling and triaging. The bot has a set of
[commands and permissions](https://git.k8s.io/test-infra/commands.md)
[commands and permissions](https://go.k8s.io/bot-commands)
and this document will cover the basic ones.
## Determine if it's a support request

View File

@ -108,14 +108,14 @@ via CURL.)
Some highlights of things we intend to change:
* Apply will be moved to the control plane: [overall design](goo.gl/UbCRuf).
* Apply will be moved to the control plane: [overall design](https://goo.gl/UbCRuf).
* It will be invoked by sending a certain Content-Type with the verb PATCH.
* The last-applied annotation will be promoted to a first-class citizen under
metadata. Multiple appliers will be allowed.
* Apply will have user-targeted and controller-targeted variants.
* The Go IDL will be fixed: [design](goo.gl/EBGu2V). OpenAPI data models will be fixed. Result: 2-way and
* The Go IDL will be fixed: [design](https://goo.gl/EBGu2V). OpenAPI data models will be fixed. Result: 2-way and
3-way merges can be implemented correctly.
* 2-way and 3-way merges will be implemented correctly: [design](goo.gl/nRZVWL).
* 2-way and 3-way merges will be implemented correctly: [design](https://goo.gl/nRZVWL).
* Dry-run will be implemented on control plane verbs (POST and PUT).
* Admission webhooks will have their API appended accordingly.
* The defaulting and conversion stack will be solidified to allow converting

View File

@ -10,10 +10,10 @@ participating-sigs:
reviewers:
- "@droot"
approvers:
- "@maciej"
- "@soltysh"
editor: "@droot"
creation-date: 2018-05-5
last-updated: 2018-05-5
creation-date: 2018-05-05
last-updated: 2018-05-23
status: implemented
see-also:
- n/a
@ -48,9 +48,9 @@ superseded-by:
Declarative specification of Kubernetes objects is the recommended way to manage Kubernetes
production workloads; however, gaps in the kubectl tooling force users to write their own scripting and
tooling to augment the declarative tools with preprocessing transformations.
While most of theser transformations already exist as imperative kubectl commands, they are not natively accessible
While most of these transformations already exist as imperative kubectl commands, they are not natively accessible
from a declarative workflow.
This KEP describes how `kustomize` addresses this problem by providing a declarative format for users to access
the imperative kubectl commands they are already familiar with, natively from declarative workflows.
@ -59,7 +59,7 @@ the imperative kubectl commands they are already familiar natively from declarat
The kubectl command provides a cli for:
- accessing the Kubernetes apis through json or yaml configuration
- porcelain commands for generating and transforming configuration off of commandline flags.
- porcelain commands for generating and transforming configuration off of command line flags.
Examples:
@ -70,7 +70,7 @@ Examples:
- Create or update fields that cut across other fields and objects
- `kubectl label`, `kubectl annotate`
- Users can add and update labels for all objects composing an application
- Transform an existing declarative configuration without forking it
- `kubectl patch`
- Users may generate multiple variations of the same workload

View File

@ -15,7 +15,7 @@ approvers:
- "@yujuhong"
- "@roberthbailey"
editor:
name: @timothysc
name: "@timothysc"
creation-date: 2017-10-20
last-updated: 2018-01-23
---

View File

@ -280,7 +280,7 @@ existing `kubeadm join` flow:
in `kube-system` namespace.
> This requires granting access to the above configMap for the
`system:node-bootstrapper` group (or to provide the same information
`system:bootstrappers` group (or to provide the same information
provided in a file like in 1.).
2. Check if the cluster is ready for joining a new master node:

6
keps/sig-node/OWNERS Normal file
View File

@ -0,0 +1,6 @@
reviewers:
- sig-node-leads
approvers:
- sig-node-leads
labels:
- sig/node

View File

@ -1,11 +1,11 @@
## Kubernetes Repositories
This document attempts to outline a structure for creating and associating github repositories with the Kubernetes project.
This document attempts to outline a structure for creating and associating github repositories with the Kubernetes project. It also describes how and when
repositories are removed.
The document presents a tiered system of repositories with increasingly strict requirements in an attempt to provide the right level of oversight and flexibility for a variety of different projects.
### Associated Repositories
Associated repositories conform to the Kubernetes community standards for a repository, but otherwise have no restrictions. Associated repositories exist solely for the purpose of making it easier for the Kubernetes community to work together. There is no implication of support or endorsement of any kind by the Kubernetes project, the goals are purely logistical.
@ -73,6 +73,45 @@ Create a broader base of repositories than the existing gh/kubernetes/kubernetes
* All OWNERS must be members in good standing in the Kubernetes community, as defined by the ability to vote in Kubernetes steering committee elections.
* Repository must be approved by SIG-Architecture
### Removing Repositories
As important as it is to add new repositories, it is equally important to
prune old repositories that are no longer relevant or useful.
It is in the best interests of everyone involved in the Kubernetes community
that our various projects and repositories are active and healthy. This
ensures that repositories are kept up to date with the latest Kubernetes-wide
processes, that potential required fixes (e.g. critical security problems)
get a rapid response, and (most importantly) that contributors and users
receive quick feedback on their issues and contributions.
#### Grounds for removal
SIG repositories and core repositories may be removed from the project if they
are deemed _inactive_. Inactive repositories are those that meet any of the
following criteria:
* There are no longer any active maintainers for the project and no
replacements can be found.
* All PRs or Issues have gone unaddressed for longer than six months.
* There have been no new commits or other changes in more than a year.
Associated repositories are much more loosely associated with the Kubernetes
project and are generally not subject to removal, except under exceptional
circumstances (e.g. a code of conduct violation).
#### Procedure for removal
When a repository is set for removal, it is moved into the
[kubernetes-retired](https://github.com/kubernetes-retired) organization.
This maintains the
complete record of issues, PRs and other contributions, but makes it clear
that the repository should be considered archival, not active. We will also
use the [github archive feature](https://help.github.com/articles/archiving-a-github-repository/) to mark the repository as archival and read-only.
The decision to archive a repository will be made by SIG architecture and
announced on the Kubernetes dev mailing list and at the community meeting.
### FAQ
*My project is currently in kubernetes-incubator, what is going to happen to it?*

View File

@ -52,7 +52,7 @@ Mentees will be asked to take responsibility for their growth and improvement as
* Discuss their path so far and ask for input/feedback.
* Code base tour of a certain area
* Or a mix - it's up to the mentee how they'd like to spend the time and your skills/experience.
* You'll fill out a form once a month to list your availibility.
* You'll fill out a form once a month to list your availability.
*[Outreachy](https://www.outreachy.org)*
* Being a mentor can take anywhere from 2-5 hours a week depending on the scope of the project that the intern(s) work on.

View File

@ -67,6 +67,8 @@ The following subprojects are owned by sig-api-machinery:
- https://raw.githubusercontent.com/kubernetes/sample-controller/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-controller/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/apiserver-builder/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-sigs/kubebuilder/master/OWNERS
- **idl-schema-client-pipeline**
- Owners:
- https://raw.githubusercontent.com/kubernetes/gengo/master/OWNERS

View File

@ -20,8 +20,9 @@ Covers deploying and operating big data applications (Spark, Kafka, Hadoop, Flin
### Chairs
The Chairs of the SIG run operations and processes governing the SIG.
* Anirudh Ramanathan (**[@foxish](https://github.com/foxish)**), Google
* Anirudh Ramanathan (**[@foxish](https://github.com/foxish)**), Rockset
* Erik Erlandson (**[@erikerlandson](https://github.com/erikerlandson)**), Red Hat
* Yinan Li (**[@liyinan926](https://github.com/liyinan926)**), Google
## Contact
* [Slack](https://kubernetes.slack.com/messages/sig-big-data)

View File

@ -37,7 +37,7 @@ In order to standardize Special Interest Group efforts, create maximum transpare
5. Share individual events with `cgnt364vd8s86hr2phapfjc6uk@group.calendar.google.com` to publish on the universal calendar.
* Use existing proposal and PR process (to be documented)
* Announce new SIG on kubernetes-dev@googlegroups.com
* Leads should [subcribe to the kubernetes-sig-leads mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-leads)
* Leads should [subscribe to the kubernetes-sig-leads mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-leads)
* Submit a PR to add a row for the SIG to the table in the kubernetes/community README.md file, to create a kubernetes/community directory, and to add any SIG-related docs, schedules, roadmaps, etc. to your new kubernetes/community/SIG-foo directory.
### **Creating service accounts for the SIG**

View File

@ -29,7 +29,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|[Autoscaling](sig-autoscaling/README.md)|autoscaling|* [Marcin Wielgus](https://github.com/mwielgus), Google<br>* [Solly Ross](https://github.com/directxman12), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-autoscaling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-autoscaling)|* Regular SIG Meeting: [Mondays at 14:00 UTC (biweekly/triweekly)](https://zoom.us/my/k8s.sig.autoscaling)<br>
|[AWS](sig-aws/README.md)|aws|* [Justin Santa Barbara](https://github.com/justinsb)<br>* [Kris Nova](https://github.com/kris-nova), Heptio<br>* [Bob Wise](https://github.com/countspongebob), AWS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-aws)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)|* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8ssigaws)<br>
|[Azure](sig-azure/README.md)|azure|* [Stephen Augustus](https://github.com/justaugustus), Red Hat<br>* [Shubheksha Jalan](https://github.com/shubheksha), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-azure)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* Regular SIG Meeting: [Wednesdays at 16:00 UTC (biweekly)](https://zoom.us/j/2015551212)<br>
|[Big Data](sig-big-data/README.md)|big-data|* [Anirudh Ramanathan](https://github.com/foxish), Google<br>* [Erik Erlandson](https://github.com/erikerlandson), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-big-data)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)|* Regular SIG Meeting: [Wednesdays at 17:00 UTC (weekly)](https://zoom.us/my/sig.big.data)<br>
|[Big Data](sig-big-data/README.md)|big-data|* [Anirudh Ramanathan](https://github.com/foxish), Rockset<br>* [Erik Erlandson](https://github.com/erikerlandson), Red Hat<br>* [Yinan Li](https://github.com/liyinan926), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-big-data)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)|* Regular SIG Meeting: [Wednesdays at 17:00 UTC (weekly)](https://zoom.us/my/sig.big.data)<br>
|[CLI](sig-cli/README.md)|cli|* [Maciej Szulik](https://github.com/soltysh), Red Hat<br>* [Phillip Wittrock](https://github.com/pwittrock), Google<br>* [Tony Ado](https://github.com/AdoHe), Alibaba<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cli)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cli)|* Regular SIG Meeting: [Wednesdays at 09:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/sigcli)<br>
|[Cloud Provider](sig-cloud-provider/README.md)|cloud-provider|* [Andrew Sy Kim](https://github.com/andrewsykim), DigitalOcean<br>* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [Jago Macleod](https://github.com/jagosan), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)|* Regular SIG Meeting: [Wednesdays at 10:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/sigcloudprovider)<br>
|[Cluster Lifecycle](sig-cluster-lifecycle/README.md)|cluster-lifecycle|* [Luke Marsden](https://github.com/lukemarsden), Weave<br>* [Robert Bailey](https://github.com/roberthbailey), Google<br>* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular SIG Meeting: [Tuesdays at 09:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* kubeadm Office Hours: [Wednesdays at 09:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* Cluster API working group: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* kops Office Hours: [Fridays at 09:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8ssigaws)<br>

View File

@ -45,11 +45,12 @@ Important notes about the numbers:
| Number of pods per node<sup>1</sup> | 110 | | 500 |
| Number of pods per core<sup>1</sup> | 10 | | 10 |
| Number of namespaces (ns) | 10000 | | 100000 |
| Number of pods per ns | 15000 | | 50000 |
| Number of pods per ns | 3000 | | 50000 |
| Number of services | 10000 | | 100000 |
| Number of services per ns | 5000 | | 5000 |
| Number of all services backends | TBD | | 500000 |
| Number of backends per service | 5000 | | 5000 |
| Number of deployments per ns | 20000 | | 10000 |
| Number of deployments per ns | 2000 | | 10000 |
| Number of pods per deployment | TBD | | 10000 |
| Number of jobs per ns | TBD | | 1000 |
| Number of daemon sets per ns | TBD | | 100 |

View File

@ -79,6 +79,8 @@ sigs:
- https://raw.githubusercontent.com/kubernetes/sample-controller/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-controller/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/apiserver-builder/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-sigs/kubebuilder/master/OWNERS
- name: idl-schema-client-pipeline
owners:
- https://raw.githubusercontent.com/kubernetes/gengo/master/OWNERS # possibly should be totally separate
@ -468,10 +470,13 @@ sigs:
chairs:
- name: Anirudh Ramanathan
github: foxish
company: Google
company: Rockset
- name: Erik Erlandson
github: erikerlandson
company: Red Hat
- name: Yinan Li
github: liyinan926
company: Google
meetings:
- description: Regular SIG Meeting
day: Wednesday