Merge pull request #1467 from castrojo/summit-notes
Automatic merge from submit-queue. Initial commit of summit notes. This will be an ongoing PRfest! Fixes to formatting and general copyediting would be most appreciated.
This commit is contained in: commit 49dee77583
Container identity

Lead: Greg Castle

How are people using k8s service accounts?

* Default service accounts?
* Users:
    * (Zalando)
        * 55 clusters
        * Only allow pre-set service accounts, e.g. Postgres "operator service accounts"
        * Don't enable RBAC
        * Namespaces: usually use the default namespace; more sophisticated clients can use namespaces
        * Cluster per product
        * CRD that defines a request for an OAuth2 token
            * Q: Would you want to tie it to a Kubernetes service account? E.g. an annotation on a service account to get an OAuth2 token
    * (IBM)
        * Strict way of "cutting up the cluster"
        * No team has access to the Kubernetes API
        * Humans do, pods don't
        * Admins create service accounts and role bindings
        * Issue: pod authors in the namespace can run as any service account
            * Is there a missing binding to control the right to use a service account?
        * Namespace boundary
            * Application
    * (VMware)
        * 1 cluster
        * Projects are per-namespace (200 namespaces)
        * Wrote an admission controller that checks which pod is using which service account; it denies certain kinds of changes
        * Ownership model: "I created this object, only I can change it"
        * Namespaces, but wanted sub-namespace primitives
        * Service accounts have labels, which need to match the pod
        * Q: Why not just different namespaces?
            * Don't want to give users the ability to create namespaces
    * (Jetstack)
        * Dev, staging, and prod clusters
        * Namespace per team
        * Their developers don't have permissions to create workloads
        * The CI system is the thing that creates workloads; that controls stuff
    * (CoreOS)
        * Customers create namespaces that are controlled by CI/CD
        * Users have read access to the cluster
        * Central catalog controls the service account
            * Authoritative on RBAC
            * Catalog is currently controlled by CoreOS, can be used by users later
        * Namespaces
            * Catalog is enabled on a namespace
        * Also use service accounts to authenticate CI/CD systems
* Container identity
    * Authenticate workloads against each other
    * Does this fit into the current way you use service accounts?
* What about Envoy/Istio?
    * Complementary efforts
    * Services can do service-to-service auth
    * Maybe not solving "I want to talk to S3, I want to talk to GitHub"
    * Give them a better way to provision x509 certs
        * Istio will be dependent on container identity
    * Istio and SPIFFE?
        * Istio is using SPIFFE-conformant x509, not the workloads API
* Q: How does Zalando handle RBAC for CRDs?
    * Trust two engineers who have the permissions to create the CRDs
    * Emergency procedure with a time-limited API
* Q: Multiple clusters working together?
    * IBM: going to face it
    * CoreOS: thinking about users authenticating across multiple clusters through sharing public keys
* Liggitt: namespaces are the boundaries of trust
    * Originally thought about using pod templates so admins could create specific templates that pods MUST use
    * Reference the pod template in an RC or deployment instead of inlining it
        * Never bubbled up to be the most important thing yet
* Liggitt: the Kubernetes API isn't really designed for things that hold credentials
    * Moving the mounting and creation of the service account token to the kubelet through flex volumes
    * Helps you leverage node-level integrations
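The kubelet-driven token mounting discussed above later shipped in Kubernetes as projected service account token volumes (the TokenRequest mechanism) rather than flex volumes. A sketch of the pod-side configuration in today's API; the pod, service account, image, and audience names are all invented for illustration:

```yaml
# Sketch: a kubelet-requested, audience-scoped, expiring service account token
# mounted via a projected volume. Names are placeholders, not from the session.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: build-bot        # illustrative service account
  containers:
    - name: app
      image: example.com/app:latest    # placeholder image
      volumeMounts:
        - name: scoped-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: scoped-token
      projected:
        sources:
          - serviceAccountToken:
              audience: vault          # token is only valid for this audience
              expirationSeconds: 3600  # kubelet re-requests before expiry
              path: sa-token
```

Because the kubelet requests and rotates the token, the credential never has to live as a long-lived Secret object in the API, which is exactly the node-level integration point described here.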
* Q: What about the secrets proposal?
    * KMS is being added
    * No change for encrypting the secret as you read it
* Slides
    * Authorization should apply to the identity, rather than the namespace
    * Are multiple SAs per namespace an anti-pattern?
        * Useful for bounding processes within pods to certain permissions
        * Problem when an SA comes from cluster-scoped manifests
* Q: Kerberos and Vault bindings implemented as flex volumes
    * Is it important to have standard identity mechanisms? (Yes)
        * Can we add it to conformance?
        * Avoid lock-in for IAM
    * Cluster identity also becomes an issue
        * Pod/UID/Namespace/Cluster
* Potentially use the OpenID Connect format so others can use that spec
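Several patterns above (notably IBM's) hinge on admins pre-creating service accounts and role bindings while pod authors have no direct API access. A minimal sketch of that wiring with the standard RBAC API; all names are invented:

```yaml
# Admin-managed identity for one workload: a dedicated service account plus a
# namespace-scoped Role and RoleBinding. All names are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: billing-worker
  namespace: team-billing
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: team-billing
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: billing-worker-reads-config
  namespace: team-billing
subjects:
  - kind: ServiceAccount
    name: billing-worker
    namespace: team-billing
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io
```

Note that this grants permissions *to* the service account; as raised in the discussion, there is no analogous binding that controls which pod authors may *use* a given service account within the namespace.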
# Kubernetes Ecosystem

Notes by @directxman12

* How do we, e.g., have an object validator?
    * Went looking, found 2; the second exists because the first didn't work well enough
* How do we enable people to build tools that consume and assist with working with Kubernetes?
    * Kube-fuse, validators, etc.
    * How do we make those discoverable?
        * Difficult to find via GitHub search
        * Difficult to find via search engines (you find blog posts, etc., and not the tools)
* Proposal: structured content (tags, categories, search) for registration and discoverability
* Are there examples of ecosystems that we can follow?
    * Package managers (PyPI, crates.io, NPM)
        * Doesn't quite fit one-to-one because some stuff is best practices, etc.
            * Docs, stuff like CNI plugins, etc. don't actually work well in the same view as things that run on Kubernetes
            * At the most basic point, you just need to find what's there
        * Traditional package managers focus on consuming the packages
            * If you don't have packaging, it's hard to approach (see also: Freshmeat's problems)
    * Hadoop: classes that map the Hadoop ecosystem, infographics
    * WordPress and Drupal are good examples
    * Ansible Galaxy
    * Chrome extensions/Eclipse plugins
        * Look for things tagged; if comments say "broken" and the last update is old, don't use it
    * Packagist (PHP package repository)
        * Integrates with GitHub; pulls in details about updates, README, extensibility, etc. from GitHub
        * Just helps with discoverability
    * Steam
        * Has curated lists by users
* Opinion: end users need focused, distributable bundles
    * Most people don't need to do all of everything
    * Different systems for apps vs. critical infrastructure
        * Critical infrastructure doesn't change much
        * Still need to discover it when initially building out your system
* Issue: people get overwhelmed with choice
    * We don't want to endorse things -- users should choose
    * We could let people rank/vote/etc.
        * For example, what's wrong with an "awesome list"?
    * Need to look at human-consumable media, not necessarily machine-consumable
* Question: do we curate?
    * Do we require "discoverable" things to be maintained/use a CLA/etc.?
    * Opinion: no, we can't possibly curate everything ourselves
* It's problematic if you can discover new things but they're not supported by your Kubernetes distro
    * Not as much a problem for apps, but harder for infrastructure
* Doesn't GitHub have labels, stars, etc.?
    * Yes!
    * We could just say "always label your GitHub repo with an XYZ label if you're a CRI plugin"
    * Comes back a bit to curation to discover the benefits of each
    * Enterprise GitHub and GitLab make this infeasible
* Core infrastructure is a bit of an edge case; perhaps focus on addons like logging, monitoring, etc.
    * Still comes back to: "if the distro doesn't support it well, will it still work?"
* Issue: there are things people don't know they can choose
    * E.g. external DNS providers
* Have a partially curated list of topics, but don't curate actual content
    * Maybe leave that up to distros (have different collections of options -- e.g. open-source only, etc.)
    * Have "awesome-kubernetes"
* Let SIGs curate their lists?
    * Having all the SIGs be different is difficult and confusing
    * SIG leads don't necessarily want to be the gatekeepers
    * We don't necessarily want to tempt SIG leads with being the gatekeepers
* If we have something "official", people will assume that stuff is tested, even if it's not
* Can we just have distributions that have way more options (a la Linux distros)?
    * There are currently 34 conformant distros
* If we have ecosystem.k8s.io it's really easy for people to find things; otherwise it could be hard
    * E.g. people don't necessarily know awesome lists are a thing to search for
* Someone should do a prototype, and then we can have the conversation
* Question: where should this conversation continue?
    * SIG Apps?
    * Breakout group from SIG Apps?
Contributor summit - Kubecon 2017

**@AUTHORS - CONNOR DOYLE**

**@SLIDE AUTHORS ARE THE PRESENTERS**

**@SKETCHNOTES AUTHOR DAN ROMLEIN**
# 2018 features and roadmap update

Presenters: Jaice, Aparna, Ihor, Craig, Caleb

Slide deck: https://docs.google.com/presentation/d/10AcxtnYFT9Btg_oTV4yWGNRZy41BKK9OjK3rjwtje0g/edit?usp=sharing

What is SIG PM: the "periscope" of Kubernetes

* They look out for what's next, translating what's going on in the community into what that means for Kubernetes
* Responsible for the Roadmap and Features Process

Understanding the release notes is sometimes difficult, since the docs aren't always done by the time the release team is compiling the notes.

## Retrospective on 2017

* What did we do since Seattle?
* Feedback on how we can enhance SIG-PM
* Moving from "product-driven" to "project-driven", where SIGs define their roadmaps based on market, end-user input, etc.

Major 2017 features:

* (Apps) Workloads API to Stable
* (Scalability) 5k nodes in a single cluster
* (Networking) NetworkPolicy
* (Node) CRI, GPU support
* (Auth) RBAC stable
* (Cloud providers) Out of core
* (Cluster Lifecycle) kubeadm enhancements - road to GA in 2018?
* (Autoscaling & instrumentation) Custom metrics for HPA
* (Storage) CSI (Container Storage Interface)

How did we do on the contributor roadmap?

* [See 2017 roadmap slides](https://docs.google.com/presentation/d/1GkDV6suQ3IhDUIMLtOown5DidD5uZuec4w6H8ve7XVo/edit)

Audience feedback:

* Liked that it felt like there was a common vision or theme last year.
* Liked that there was a PM rep saying "do docs", "your tests are failing", etc.
* Leadership summit 6 months ago: the rallying cry was "stability releases, stability releases". Not sure that 1.8 was really a stability release; not sure 1.9 is either. Will 2018 be the year of stability (on the desktop)?
    * (Brian Grant) Come to the feature branch session!
    * The notion of stability needs to be captured as a feature or roadmap item to talk and brag about. Quantify it, talk about it as a roadmap item.
    * Idea for 2018: how do you measure and gamify stability? If people see a number in the red, they will want to fix it. Testing, code quality, other metrics - might improve stability organically.
    * In the context of stability and testing: the big achievement was the conformance program. 30+ conformant distros!
    * Want to see the project continue to be useful: within your SIG, invest in conformance and extending the suite. Going back to what is and is not k8s - define core and extensions; don't compromise the stability of the core.
    * Please come see Eric Tune and define stability:
        * "cluster crashed"
        * "too many features"
        * "an API I was using stopped working"
        * "community is chaotic, how do I navigate that"
* There are many new alpha features in each new release. Please prioritize graduating and stabilizing the existing features. (More than 50 APIs already)
* Looking for volunteers on writing a stability proposal?
* Jaice has one already!
    * *May* have broken the comment limit on Google Docs
    * Need to define lenses:
        * architecture, community, etc.; look at a proposal under each.
        * Brian is working on architectural stability.
        * Contribex is looking at mentorship and the contributor ladder.
        * Myriad ways to approach the problem set. How do we mature the processes of the Kubernetes ecosystem?
* Looking for co-authors?

## Proposals and KEPs

(Caleb)

Please hang out in SIG-Arch, SIG-Contribex, and SIG-PM to drive this process forward.

Looking at "is this feature ready to move to the next stability milestone?" (Alpha to Beta, Beta to GA, etc.)

* Proposals are now **K**ubernetes **E**nhancement **P**roposals
    * Built piece-by-piece over multiple releases (living documents)
    * Looked at a lot of other open source projects, e.g. the Rust community (RFC process)
    * Designed in the era of GitHub; decided on a lightweight process that works well with the VCS.
* Talk about what we want to do without a long spec doc, about what we agreed to ahead of time, but *don't want to diverge 2 years later*.
* Helps with tracking individual features (easier-to-read release notes)
    * Release note writing takes tracking down a lot of docs, GitHub issues, design docs, Slack and Google Group comments; combining from a bunch of places
    * Hard to tell from the release notes what's important and what's a minor detail.
* Every SIG should set their own roadmap - the KEP proposal enables that.
* Template that asks you to think ahead to the lifecycle of the feature; let people know what you're planning to do.
    * It's a medium for comms; not saying "it has to be done this way" but saying why this is important.
    * Inspired by the "[Toward Go 2](https://blog.golang.org/toward-go2)" blog post by rsc
* Has been tested - [draft KEP for Cloud Providers](https://github.com/kubernetes/community/pull/1455); Tim St. Clair has tested it.
* Want to make it easier for new contributors to write KEPs.
* Starting with "what is a unit of work, how do people care"
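Since a KEP is a living Markdown doc, its tracking metadata travels with it as YAML front-matter at the top of the file. A sketch of what that looks like; the field names approximate the early draft template and all values are invented:

```yaml
# Illustrative KEP front-matter (field names per the draft template; values invented).
---
kep-number: 0
title: Example Enhancement
authors:
  - "@example-contributor"
owning-sig: sig-example
participating-sigs:
  - sig-architecture
reviewers:
  - "@reviewer-one"
approvers:
  - "@approver-one"
creation-date: 2017-12-06
last-updated: 2017-12-06
status: provisional   # advances as the KEP moves toward implemented
---
```

Because the metadata lives in the file, its history (status changes, new reviewers) is just git history, which is the "living document" property the process is after.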
| Questions:  | ||||
| 
 | ||||
| * Are KEPS google docs, or pull requests, etc?  How do you submit one? | ||||
| 
 | ||||
|     * Original intend: something that lives in source control.  Discoverable like any part of the code.  Attempt to combine design proposals and umbrella GitHub issues, link to 10s of other issues.  They will live as long as we're a project; doesn't depend on hosting providers. | ||||
| 
 | ||||
|     * Vision is that writing KEPs, know from them what the roadmap is; can write the blog post based on the value articulated in the KEPs. | ||||
| 
 | ||||
|     * Right now they are [buried in community repo](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/0000-kep-template.md) 3 levels deep: a lot of great feedback so far. Joe wouldn't object to making it more discoverable. | ||||
| 
 | ||||
|         * Kep.k8s.io for rendered versions? | ||||
| 
 | ||||
|         * Rust community has a sub-repo just for this (rust-lang/rfcs) | ||||
| 
 | ||||
|         * More than one person has said that KEPs weren't known about - move to somewhere discoverable sooner rather than later. Who can own that action? Matt Farina from Samsung is keen to help but doesn't have resources to lead. | ||||
| 
 | ||||
|         * *Can only have its own repo if we get rid of features repo!* | ||||
| 
 | ||||
|         * Don't want anyone to do anything not adding value to work; hope is that KEP is worthwhile and adds value. Caleb will help drive, and create repos as needed.  | ||||
| 
 | ||||
| ## Features | ||||
| 
 | ||||
| (Jaice) | ||||
| 
 | ||||
| Features repo and process: I will say what I'm not happy about and what I've heard, but I do want to say there has been so much work from Ihor and SIG-PM to get where we are. If you haven't seen the machinations of feature/release product you won't know! | ||||
| 
 | ||||
| 
 | ||||
| At velocity we have outgrown the notion of a central body leading.  Seeing increasing cross-SIG concern where some SIGs rely on others to get their work done. | ||||
| 
 | ||||
| We need to find ways to detangle those dependencies earlier. | ||||
| 
 | ||||
| KEPs are focusing on enhancements, changes in ecosystem.  A feature has a connotation of being user facing or interactive.  KEP can be for something contributor facing, doc facing transparent.  Get out of mindset we're delivering a "thing" - we're delivering value to user, contributor community, or both. | ||||
| 
 | ||||
| Currently the process is cumbersome.  We want to follow agile principles about local control, local understanding, sharing amongst teams. | ||||
| 
 | ||||
| ### The steel thread | ||||
| 
 | ||||
| KEP provides opportunity for "steel thread" of custody. A KEP, as a living Markdown doc with metadata, lets you see high level idea at any group of time.  A SIG can break this down into meltable ideas for each release.  A KEP can last multiple milestones.  In terms of features/issues, defined by SIGs, issue should be actionable.  If a SIG writes an issue for that release milestone it should be approachable by any contributor who is a SIG member. | ||||
| 
 | ||||
| PRs roll up to the issue: links to this issue, which links to this KEP. For the first time we would be able to see at the PR level, what the grand scheme behind everything we do, is. Not heavyweight - just linking of issues.  Limit administrivia, paperwork to delivery value. | ||||
| 
 | ||||
| Q: | ||||
| 
 | ||||
| * For CloudProvider - discovery, had an idea, people started digging in; how do you keep updating the KEP?  Do you update as you go, you odn' tknow what you need? | ||||
| 
 | ||||
|     * Yes, it's a living doc first and foremost.  Has metadata (alpha, beta, GA). The issues worked on per milestone - say you pick 3, you document those issues; if during the course of the PRs to complete those issues you realise it's a misstep, you close the issue, you update the KEP, say you're modifying this; look at prior versions to see what it looked like before/what it looks like now. As you move to next release milestone the issues reflect that change: in terms of KEP, issues, planning. | ||||
| 
 | ||||
| * Give an analysis on why issues in feature repo didn't solve this problem and why KEPs will? | ||||
| 
 | ||||
|     * From features standpoint: SIGs interacted with features only when they had to. By trying to keep all that info separate from PRs, issues in each milestone: no way to tie that work back to the feature issue.  No way to easily understand where in this features repo issue, long if multi-milestone: where was work, and the link to the issue?  Eliminate the work | ||||
| 
 | ||||
|     * KEP is not solving that problem.  KEP is saying "how do we best define and discuss the value we're adding to the ecosystem".  Learns from patterns and antipatterns of other communities.   | ||||
| 
 | ||||
|     * Features process is central body of people not doing the work. | ||||
| 
 | ||||
|     * Some friction with the GitHub mechanics of issues, relabeling quarterly, etc; would be nice to keep pace with the number of issues. Moving to files to provide value will help make that clearer. | ||||
| 
 | ||||
|     * Managing issues in the features repo: hundreds of issues from the last year, created spreadsheets etc.  Bringing value but consuming extra resources. Not synced with features repo. Good we have a spreadsheet, but difficult to manage. | ||||
| 
 | ||||
|     * Started going through this process in SIG-Cluster LIfecycle: replacing GitHub thread (no one updates top) with a living document (use git history for mutations; see concise summary) - think it will be a huge improvement. | ||||
| 
 | ||||
|         * KEPs are in Google Docs now, haven't converted to PRs | ||||
| 
 | ||||
| * Does this mean the features repo is going away? | ||||
| 
 | ||||
|     * "Yes, eventually".  | ||||
| 
 | ||||
| * We need clear communication about where we are in this process. Are KEPs required, encouraged, etc?  Trying with one project, be useful to know what the granularity is.	 | ||||
| 
 | ||||
|     * "It's in Alpha now" - trying to validate some assumptions. | ||||
| 
 | ||||
|         * Has momentum with SIG-Arch and SIG-PM | ||||
| 
 | ||||
|         * Assumption we will try and see if it works.  | ||||
| 
 | ||||
|         * Want to steward people into trying it to see if it's valuable.  If not, how do we make it so? | ||||
| 
 | ||||
|     * Community size and velocity now makes it difficult to radiate information to get what you need to know. | ||||
| 
 | ||||
|     * Kubernetes 1.10 would be a great place. | ||||
| 
 | ||||
| * Will it still be true in 1.10 that you need features? Can you have KEP instead of a Feature issue? Make it very obvious for 1.10 please. | ||||
| 
 | ||||
|     * One thing we need to sort out  | ||||
| 
 | ||||
|     * 1.10 will have existing features process | ||||
| 
 | ||||
|     * Cluster Lifecycle are experimenting with how to do differently | ||||
| 
 | ||||
|     * Would love to see SIGs try the KEP process but it's not a full surrogate for the features process yet. | ||||
| 
 | ||||
| * Existing features can be sorted, filtered etc - will there be tooling around KEPs? Until that's ready, I wouldn't want to switch | ||||
| 
 | ||||
|     * Proposals are different to features: issues will be associated with a KEP. In some ways it will be better as it's in the repo itself. | ||||
| 
 | ||||
| * How are the issues tied to the KEP? | ||||
| 
 | ||||
|     * Ties to this markdown file, in /kep/____ | ||||
| 
 | ||||
|     * We need to work out the details. | ||||
| 
 | ||||
|     * When I create issue template, it will have a link to that KEP. | ||||
| 
 | ||||
|     * *Make sure it's searchable!* | ||||
| 
 | ||||
| * (Jago Macleod) Be explicit of scope of a KEP.  A trend to more SIG ownership, code, roadmap etc; context is a SIG plans their own work.  Certain faster moving SIGs, more capacity, to solve problems in the wrong component.  Some KEPs will span SIG boundaries and should be explicit about this in comms.  | ||||
| 
 | ||||
| * (Tim Hockin) GitHub UX is not going to be great for this.  It's not built for what we're trying to do.  All of this predicates on having an actual site that manages, a web app? | ||||
| 
 | ||||
| If that's true, can we start to collect the expertise? | ||||
| 
 | ||||
| Is there a SaaS tool for this? | ||||
| 
 | ||||
|     * See other projects using tracking tooling, used on files in source control | ||||
| 
 | ||||
|     * Won't be pretty or searchable in first iteration. | ||||
| 
 | ||||
|     * Want to see built out as they're consumed. Don't build tooling before people are using the process | ||||
| 
 | ||||
|     * *GH search is less than usable* | ||||
| 
 | ||||
|     * Do you have a place to host this?  We will find a place to host if someone is willing to write.  Matt will help write. | ||||
| 
 | ||||
|     * As someone involved with Maven repositories (trello) - it's a total pain in the neck.  More in favour of PRs, discoverability, etc.  Keep PRs for process. Push a site for discoverability. | ||||
| 
 | ||||
| * (Joe Beda) Current processes are discussion, argument, write-only: comment threads no-one reads. Make it more discoverable, more readable: check stuff into GitHub, use a static site generator, publishes it somehow, crosslinks between documents. Just like when you go to an RFC, reference one there's a link; that's the ecosystem of discoverability we want. | ||||
| 
 | ||||
|     * If someone comes to project and we have no way of telling what's happening. | ||||
| 
 | ||||
| * (Eric Tune) I created features process so I'm a little attached.  This discussion is a refactor vs rewrite debate.  The feature repo template is there; in GitHub; change it. See how many incremental changes we can make to solve proposals.  Put fields there and yell at people who don't fill it in. | ||||
| 
 | ||||
| * (Angus, Bitnami) As someone involved in Rust community: | ||||
| 
 | ||||
|     *  fairly well-read Rust weekly news summary, including a section with a summary on outstanding proposals, broken down into early and late discussion phases.  Read late phase to see what's a done deal vs. what's up in the air. There's also a RFC podcast, where they get a couple of people, have a chat show about what's involved in that proposal.  Lots of ways as a community member to stay up to date. | ||||
| 
 | ||||
|     * Fixed timeline: trying to get approval. If nothing happens by the end it's approved by default.   | ||||
| 
 | ||||
|     * May not want to copy but good to know. | ||||
| 
 | ||||
| * (Daniel Smith, Google) summarising: problems are mostly discoverability: we'll write tooling. Why can't we write tooling against that existing process? Both current and hypothetical new process suffer  | ||||
| 
 | ||||
|     * Interacting with objects in GitHub is hard at our scale, we have a limited number of tokens. | ||||
| 
 | ||||
|     * Procedural aspects: if I look at a PR, there's a link to an issue: that will tie up to a KEP. | ||||
| 
 | ||||
|     * Discoverability of relationships: exposed in the API. Links on GitHub are implemented by full-text search, not hard to do this. | ||||
| 
 | ||||
| * (Chen Goldberg) Schedule follow-up sometime this week - communicating is hard, we're all here! | ||||
| 
 | ||||
| * (Henning, Zalando) Structured format with some metadata: all in one repo, or distributed amongst incubator projects etc?  Are KEPs with a unique identity going to live in other project repos? Was this an idea? | ||||
| 
 | ||||
|     * Idea is to have as consolidated as possible: given committee questions of "what is Kubernetes, what follows our PM norms".  Problem is visibility for people who trust workloads in the software we create.  Want to provide this so someone can determine why a thing was done a certain way.  Projects outside the core repo: would follow a similar process in a perfect world. | ||||
| 
 | ||||
| If you're not excited about features process, try a KEP! | ||||
| 
 | ||||
| Reach out to SIG-PM - calebamiles@  | ||||
| 
 | ||||
| ### The long view | ||||
| 
 | ||||
| (Jaice) Have planning sessions about what SIGs are hoping to deliver for any milestone. Want to facility planning meetings in a more structured way (Planning as a  Service).  I did this for a proposal SIG-Cluster Lifecycle are working on: to have the discussion early and untangle dependencies, see when things could go off the rails, etc. Will talk to SIG leaders. | ||||
| 
 | ||||
| This planning activity will be one of the key success factors for the project moving forward. | ||||
| 
 | ||||
| Roadmap for 2018 (30min summary) | ||||
| -------------------------- | ||||
| 
 | ||||
| Notes by @jberkus | ||||
| 
 | ||||
| Speakers (check spelling): Apprena Singha, Igor, Jaice DuMars, Caleb Miles, someone I didn't get.  SIG-PM | ||||
| 
 | ||||
| We have the roadmap, and we have this thing called the features process, which some of you may (not) love.  And then we write a blog post, because the release notes are long and most of the world doesn't understand them. | ||||
| 
 | ||||
| Went over SIG-PM mission.  We had several changes in how the community behave over 2017.  We are moving to a model where SIGs decide what they're going to do instead of overall product decisions. | ||||
| 
 | ||||
| 2017 Major Features listed (workloads API, scalability, networkpolicy, CRI, GPU support, etc.).  See slides.  The question is, how did we do following the 2017 roadmap? | ||||
| 
 | ||||
| Last year, we got together and each SIG put together a roadmap.  In your SIG, you can put together an evaluation of how close we came to what was planned. | ||||
| 
 | ||||
| Q: Last year we kept hearing about stability releases.  But I'm not sure that either 1.8 or 1.9 was a "stability release".  Will 2018 be the "year of the stability release?" | ||||
| 
 | ||||
| Q: Somehow the idea of stability needs to be captured as a feature or roadmap item. | ||||
| 
 | ||||
| Q: More clearly defining what is in/out of Kubernetes will help stability. | ||||
| 
 | ||||
| Q: What do we mean by stability?  Crashing, API churn, too many new features to track, community chaos? | ||||
| 
 | ||||
| Q: Maybe the idea for 2018 is to just measure stability.  Maybe we should gamify it a bit. | ||||
| 
 | ||||
| Q: The idea is to make existing interfaces and features easy to use for our users and stable.  In SIG-Apps we decided to limit new features to focus everything on the workloads API. | ||||
| 
 | ||||
Proposals are now KEPs (Kubernetes Enhancement Proposals), a way to catalog major initiatives. KEPs are big-picture items that get implemented in stages. This idea is based partly on how the Rust project organizes changes. Every SIG needs to set its own roadmap, so the KEP is just a template: SIGs can plan ahead to the completion of a feature, and SIG-PM can coordinate with other SIGs.
| 
 | ||||
| Q: How do you submit a KEP? | ||||
A: It should live in source control. Each KEP will relate to dozens or hundreds of issues; we need to preserve that as history.
| 
 | ||||
| If you look at the community repo, there's a draft KEP template in process.  We need to make it a discoverable doc. | ||||
|  | @ -0,0 +1,85 @@ | |||
| 
 | ||||
| Feature Workflow | ||||
| ---------------- | ||||
| 
 | ||||
| Notes by @jberkus | ||||
| 
 | ||||
| TSC: Getting something done in Kubernetes is byzantine.  You need to know someone, who to ask, where to go.  If you aren't already involved in the Kubernetes community, it's really hard to get involved.  Vendors don't know where to go. | ||||
| 
 | ||||
| Jeremy: we had to watch the bug tracker to figure out what sig owned the thing we wanted to change. | ||||
| 
 | ||||
| TSC: so you create a proposal.  But then what?  Who needs to buy-in for the feature to get approved? | ||||
| 
 | ||||
| Dhawal: maybe if it's in the right form, SIGs should be required to look at it. | ||||
| 
 | ||||
| Robert B: are we talking about users or developers? Are we talking about people who will build features or people who want to request features? | ||||
| 
 | ||||
| ???: Routing people to the correct SIG is the first hurdle.  You have to get the attention of a SIG to do anything.  Anybody can speak in a SIG meeting, but ideas do get shot down. | ||||
| 
 | ||||
| Caleb: we've had some success in the release process onboarding people to the right SIG. Maybe this is a model. The roles on the release team are documented. | ||||
| 
 | ||||
| Anthony: as a release team, we get scope from the SIGs.  The SIGs could come up with ideas for feature requests/improvement. | ||||
| 
 | ||||
| Tim: there's a priority sort, different projects have different priorities for developers.  You need a buddy in the sig. | ||||
| 
 | ||||
| Clayton: review bandwidth is a problem.  Review buddies hasn't really worked. If you have buy-in but no bandwidth, do you really have buy-in? | ||||
| 
 | ||||
| TSC: The KEP has owners, you could have a reviewer field and designate a reviewer.  But there's still a bandwidth problem. | ||||
| 
 | ||||
Dhawal: many SIG meetings aren't really traceable because they're video meetings. Stuff in Issues/PRs is much more referenceable for new contributors. If a feature is not searchable, then it's not available for anyone to check. If it is going to a SIG, then you need to update the issue and summarize the discussions in the SIG.
| 
 | ||||
TSC: Just because a feature is assigned to a SIG doesn't mean they'll actually look at it. SIGs have their own priorities. There are so many issues in the backlog, nobody can deal with it. My search for sig/scheduling is 10 different searches to find all of the sig/scheduling issues. SIG labels aren't always applied. And then you have to prioritize the list.
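(A sketch of what a single label-scoped query looks like, using standard GitHub issue-search syntax; it only works when the sig/* labels are actually applied, which is exactly the complaint above.)

```shell
# One label-based query in place of ten hand-crafted searches.
# The repo and label follow real Kubernetes conventions; the query
# string is illustrative, pasted into GitHub's issue search box.
query='repo:kubernetes/kubernetes is:issue is:open label:sig/scheduling'
echo "$query"
```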
| 
 | ||||
| ???: Test plans also seem to be late in the game.  This could be part of the KEP process.  And user-facing-documentation. | ||||
| 
 | ||||
| Robert B: but then there's a thousand comments.  the KEP proposal is better. | ||||
| 
 | ||||
???: The KEP process could be way too heavyweight for new contributors.
| 
 | ||||
| ???: new contributors should not be starting on major features. The mentoring process should take them through minor contributions.  We have approximately 200 full-time contributors.  We need to make those people more effective. | ||||
| 
 | ||||
TSC: even if you're a full-timer, it's hard to get things in and get a reviewer. Every release, just about everything that's p0 or p1 gets cut, because the person working on it can't get the reviewer and all of the other stuff lined up.
| 
 | ||||
| Caleb: you need to spend some time in the project before you can make things work. | ||||
| 
 | ||||
| Dhawal: is there a way to measure contributor hours?  Are people not getting to things because people are overcommitting? | ||||
| 
 | ||||
Jago: The problem is that the same people who are on the hook for the complicated features are the people who you need to review your complicated feature. Googlers who work on this are trying to spread out their own projects so that they have more time at the end of the review cycle.
| 
 | ||||
| Jaice: If you're talking about a feature, and you can't get anyone to talk about it, either the right people aren't in the room, or there just aren't enough people to make it happen.  If we do "just enough" planning to decide what we can do and not do, then we'll waste a lot less effort.  We need to know what a SIG's "velocity" is. | ||||
| 
 | ||||
Connor: the act of acquiring a shepherd is itself subject to nepotism. You have to know the right people. We need a "hopper" for shepherding.
| 
 | ||||
| Tim: not every contributor is equal, some contributors require a lot more effort than others. | ||||
| 
 | ||||
| Robert: A "hopper" would circumvent the priority process. | ||||
| 
 | ||||
| Josh: there will always be more submitters than reviewers.  We've had this issue in Postgres forever.  The important thing is to have a written, transparent process so that when things get rejected it's clear why.  Even if it's "sorry, the SIG is super-busy and we just can't pay attention right now." | ||||
| 
 | ||||
| Dhawal: there needs to be a ladder.  The contributor ladder. | ||||
| 
 | ||||
| TSC: a lot of folks who work on Kube are a "volunteer army."  A lot of folks aren't full-time. | ||||
| 
 | ||||
| Caleb: there is a ladder.  People need to work hard on replacing themselves, so that they're not stuck doing the same thing all the time.  How do you scale trust? | ||||
| 
 | ||||
???: Kubernetes is a complicated system, and not enough is written down, and a lot of what's there we'd like to change. It's a lot easier for a Googler to help another Googler, because they're in the same office and the priorities align. That's much harder to do across organizations, because maybe my company doesn't care about the VMware provider.
| 
 | ||||
Jaice: for the ladder, is there any notion that in order to ascend the ladder you have to have a certain number of people you shepherded in? There should be.
| 
 | ||||
| TSC: frankly, mentoring people is more important than writing code.  We need to bring more people into Kubernetes in order to scale the community. | ||||
| 
 | ||||
| Josh: we need the process to be documented, for minor features and major ones.  Maybe the minor feature process belongs to each SIG. | ||||
| 
 | ||||
| Jaice: the KEP is not feature documentation, it's process documentation for any major change.   It breaks down into multiple features and issues. | ||||
| 
 | ||||
| ???: The KEP needs to include who the shepherds should be. | ||||
| 
 | ||||
| Clayton: reviewer time is the critical resource.  The prioritization process needs to allocate that earlier to waste less. | ||||
| 
 | ||||
| Jeremy: the people we sell to are having problems we can't satisfy in Kubernetes.  We have a document for a new feature, but we need every SIG to look at it (multi-network).  This definitely needs a KEP, but is a KEP enough?  We've probably done too much talking. | ||||
| 
 | ||||
| Clayton: the conceptual load on this is so high that people are afraid of it.  This may be beyond what we can do in the feature horizon.  It's almost like breaking up the monolith. | ||||
| 
 | ||||
| Robert: even small changes you need buy-in across SIGs.  Big changes are worse. | ||||
| 
 | ||||
| Connor: working groups are one way to tackle some of these big features. | ||||
|  | @ -0,0 +1,232 @@ | |||
| Onboarding Developers through Better Documentation | ||||
| 
 | ||||
| @ K8s Contributor Summit, 12/5 | ||||
| 
 | ||||
| Note Taker: Jared Bhatti ([jaredbhatti](https://github.com/jaredbhatti)), Andrew Chen ([chenopis](https://github.com/chenopis)), Solly Ross ([directxman12](https://github.com/DirectXMan12)) | ||||
| 
 | ||||
| [[TOC]] | ||||
| 
 | ||||
| ### Goal | ||||
| 
 | ||||
* Understand how we’re onboarding developers now
| 
 | ||||
| * Understand the "rough edges" of the onboarding experience for new users | ||||
| 
 | ||||
| * Understand how we can better target the needs of our audience | ||||
| 
 | ||||
| * Understand if our contributor process will meet the documentation needs.  | ||||
| 
 | ||||
| Note: the focus is on documentation, so we won’t be able to act on suggestions outside of that (but we will consider them!).  | ||||
| 
 | ||||
| ### Initial thoughts | ||||
| 
 | ||||
* Where we were at the beginning of 2017: ([@1.4](https://v1-4.docs.kubernetes.io/docs/))
| 
 | ||||
| * Where we are now: ([@ 1.8](https://kubernetes.io/docs/home/)) | ||||
| 
 | ||||
|     * We are tied into the launch process | ||||
| 
 | ||||
|     * We have analytics on our docs | ||||
| 
 | ||||
|     * We have a full time writer and sig docs contrib group | ||||
| 
 | ||||
|     * Everything is in templates, we have a style guide | ||||
| 
 | ||||
|     * Better infrastructure (netlify) | ||||
| 
 | ||||
| ### Meeting Notes | ||||
| 
 | ||||
| * Introductions | ||||
| 
 | ||||
|     * Jared: SIG docs maintainer | ||||
| 
 | ||||
|     * Andrew: SIG docs maintainer, techwriter | ||||
| 
 | ||||
|     * Radika: 1.8 release team, SIG docs member | ||||
| 
 | ||||
|     * Phil Wittrock: interested b/c it determines how system is used | ||||
| 
 | ||||
|     * ?? | ||||
| 
 | ||||
|     * Paul Morie: interested b/c hit challenges that could be avoided with better dev docs (patterns used in Kube can’t be easily found on internet) | ||||
| 
 | ||||
|     * Nikita: work on CRDs | ||||
| 
 | ||||
|     * ??: docs are a good intro to core Kube codebase | ||||
| 
 | ||||
|     * Michael Rubin | ||||
| 
 | ||||
|     * ??: dashboard contributor, onboarding is a key part of the dashboard experience | ||||
| 
 | ||||
|     * Steve Wong: storage SIG, on-prem SIG, trying to get head around on-prem, docs geared towards cloud | ||||
| 
 | ||||
|     * Solly Ross (@directxman12): want to avoid changes to fiddly bits confusing people | ||||
| 
 | ||||
|     * Morgan Bauer: had issues with people changing fundamental bits out from under other contributors | ||||
| 
 | ||||
|     * Sahdev Zala: docs are important for new contributors, first place they look, contribex | ||||
| 
 | ||||
|     * Brad Topol: like explaining stuff, kube conformance WG | ||||
| 
 | ||||
|     * Tomas: working on side project, looking for inspiration on docs for that | ||||
| 
 | ||||
|     * Garrett: ContribEx, looking to reduce the question "how do I get started" | ||||
| 
 | ||||
|     * Stefan (@sttts): where do we push docs changes to mechanics | ||||
| 
 | ||||
|     * Josh Berkus: ContribEx | ||||
| 
 | ||||
| * contributor docs | ||||
| 
 | ||||
|     * Avoid the process for finding which of the 49 people have the right info 6 months after it merges | ||||
| 
 | ||||
|     * Have contributor docs which merge with "internal features" (e.g. API registration changes) | ||||
| 
 | ||||
|     * Have a flag for internal release notes, bot which checks | ||||
| 
 | ||||
| * Issue: opinion: we don’t want to write dev docs because they’ll change | ||||
| 
 | ||||
|     * Issue is it’s hard to capture every little thing | ||||
| 
 | ||||
    * Good start is documentation on generalities (80%/20%): e.g. building an API server
| 
 | ||||
|         * Started out with no docs on how to do that (has gotten better) | ||||
| 
 | ||||
| * Big concepts don’t change a lot, describe those | ||||
| 
 | ||||
|     * Ask questions: | ||||
| 
 | ||||
|         * What are the big nouns | ||||
| 
 | ||||
|         * What are the big verbs for those nouns | ||||
| 
 | ||||
        * How do the nouns relate
| 
 | ||||
|     * Don’t fall into trap of (for example) tracepoints (accidentally creating a lot of small little fiddly points) | ||||
| 
 | ||||
| * Need a way for "heads up this is changing". If you have questions, ask these specific people.  | ||||
| 
 | ||||
|     * Very SIG dependent (some do meetings, others mailing-list focused, some have a low bus number) | ||||
| 
 | ||||
| * How is your SIG interacting with docs | ||||
| 
 | ||||
|     * Service Catalog | ||||
| 
 | ||||
|         * In docs folder | ||||
| 
 | ||||
|         * How do we interact with docs | ||||
| 
 | ||||
|         * Depends on moving into kubernetes org | ||||
| 
 | ||||
| * Issue: lots of docs with bits and pieces | ||||
| 
 | ||||
|     * Not organized into "as a developer, you’re interested in these links" | ||||
| 
 | ||||
|     * Spend a week, come up with a ToC | ||||
| 
 | ||||
|     * Many docs in different repos | ||||
| 
 | ||||
|     * Should have guidelines on where to put things | ||||
| 
 | ||||
|     * Should have templates for how to write different types of docs | ||||
| 
 | ||||
| * Issue: wrong abstraction layers are exposed | ||||
| 
 | ||||
|     * Documenting a complex system has less payoff than simplifying | ||||
| 
 | ||||
| * Issue: extending Kube blurs the line between contributor and user | ||||
| 
 | ||||
|     * Put more docs into the same place | ||||
| 
 | ||||
|     * "Crafting User Journeys": user stories buckets: Customer User Journeys ([project](https://github.com/kubernetes/website/projects/5)) | ||||
| 
 | ||||
|     * Need champion for different personas | ||||
| 
 | ||||
| * Issue: people (e.g. Storage SIG) want to write docs, but they don’t know where to start | ||||
| 
 | ||||
|     * Both contributors and users, and both (plugin writing, for example, or how to provision storage securely) | ||||
| 
 | ||||
|     * Templates/guidelines mentioned above may be helpful | ||||
| 
 | ||||
    * Talk to SIG docs (SIG docs wants to try and send delegates to SIGs, but for now, come to meetings)
| 
 | ||||
| * Question: should there even *be* a split between user and contributor docs? | ||||
| 
 | ||||
|     * No, but we have historically (didn’t used to have person-power to implement it) | ||||
| 
 | ||||
| * Onboarding docs push for 2018 | ||||
| 
 | ||||
|     * Point people from moment one to start understanding how to contribute (design patterns, etc) | ||||
| 
 | ||||
|     * How do we organize, make things findable | ||||
| 
 | ||||
|     * How do we organize allowing people to contribute right now | ||||
| 
 | ||||
|         * Main site is all user docs right now | ||||
| 
 | ||||
|         * Starting to look at auto-import docs | ||||
| 
 | ||||
|         * User journeys project (above) | ||||
| 
 | ||||
|         * Testing right now | ||||
| 
 | ||||
| * Lots of people writing blog posts | ||||
| 
 | ||||
    * Any way to aggregate some of that?
| 
 | ||||
    * How to avoid "this doesn’t work anymore"?
| 
 | ||||
|     * There is a Kubernetes blog, being migrated under k8s.io | ||||
| 
 | ||||
|         * People can contribute to the blog for ephemeral stuff | ||||
| 
 | ||||
|     * We guarantee that k8s.io stuff is updated | ||||
| 
 | ||||
|     * For blogs, put a "sell-by date" | ||||
| 
 | ||||
| * Request: join dashboard conversation | ||||
| 
 | ||||
|     * How do we link from dashboard to docs and vice-versa | ||||
| 
 | ||||
| * Question: where do the docs of tools like kops fit in | ||||
| 
 | ||||
|     * More broadly hits on Kubernetes "kernel" vs wrapper projects | ||||
| 
 | ||||
|     * Right now, it’s checked into repo, so it should have docs on the main site | ||||
| 
 | ||||
        * Documentation is endorsing, a bit
| 
 | ||||
|     * (but why is kops checked into the repo as opposed to other installers) | ||||
| 
 | ||||
    * Have an easy way for projects (incubator and main org) to have their own docs sub-sites (e.g. gh-pages-style for the Kubernetes sites)
| 
 | ||||
| ### Follow-Ups | ||||
| 
 | ||||
| * Suggestion: Presentation on how to structure site, write docs, user studies done, etc (why are Kube docs the way they are) | ||||
| 
 | ||||
|     * "I want to get started fast" | ||||
| 
 | ||||
|     * "I want to build the next thing that gives me value" (task-driven) | ||||
| 
 | ||||
|     * "Something broke, where’s the reference docs" (troubleshooting) | ||||
| 
 | ||||
| ### Projects In Development | ||||
| 
 | ||||
| * Customer User Journeys ([project](https://github.com/kubernetes/website/projects/5)) | ||||
| 
 | ||||
| * Revamp of [docs landing page](https://kubernetes.io/docs/home/) | ||||
| 
 | ||||
| * Building out a glossary ([project](https://github.com/kubernetes/website/issues)) | ||||
| 
 | ||||
| * More doc sprints ([like this one](https://docs.google.com/document/d/1Ar4ploza6zA1JF3YO4e0lzq1RRBWWx8cAZtRQMMEdsw/edit)) | ||||
| 
 | ||||
| ### Continue participating! | ||||
| 
 | ||||
| * Content exists under [Kubernetes/website](https://github.com/kubernetes/website) (feel free to fix an issue by [creating a PR](https://kubernetes.io/docs/home/contribute/create-pull-request/)) | ||||
| 
 | ||||
| * Join the [kubernetes SIG Docs group](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)  | ||||
| 
 | ||||
| * Attend the next Kubernetes docs community meeting (Tuesdays @ 10:30am, PST) | ||||
| 
 | ||||
| * And join the kubernetes sig-docs slack channel ([kubernetes.slack.com](http://kubernetes.slack.com/) #sig-docs) | ||||
| 
 | ||||
|  | @ -0,0 +1,113 @@ | |||
| # Should Kubernetes use feature branches? | ||||
| 
 | ||||
| Lead: Brian Grant | ||||
| 
 | ||||
| Note Taker(s): Aaron Crickenberger | ||||
| 
 | ||||
| Focusing on our current release cadence: | ||||
| 
 | ||||
| - We spend several weeks putting features in | ||||
- We spend several weeks kinda/sort-of stabilizing the release
| 
 | ||||
The people who are managing the release process don’t have the ability to push back and say "this can’t go in"; we don’t want to slip the release, so they’re left powerless
| 
 | ||||
In an earlier session today about breaking up the monolith:
| 
 | ||||
| - We have over 90 repos | ||||
| - The releases are getting bigger and more complex | ||||
- We’re moving things out of the monorepo, but not everything makes sense to move
| 
 | ||||
What do we do about those things?
I think the only way we can get out of the release fire drills is to have an earlier gate
| 
 | ||||
| Q: Aaron: Why not use the tests we have today and push back saying the tests have to be green? | ||||
| 
 | ||||
| - A lot of the new features have inadequate testing or inadequate features | ||||
- We want to continue to release on a regular cadence
| - Use some model like gitflow that puts feature work into feature branches before they land into master | ||||
| - I think we need to allow features to be developed in feature branches instead of landing in kubernetes/kubernetes master | ||||
| - But we also can’t have large monolithic feature branches | ||||
| - Because one thing going in might cause rebase hell for other things going in | ||||
| - Test machinery is getting to a point where it can use branches other than master to spin up tests | ||||
| - You can spin up a feature branch and get tests running on it without too many issues | ||||
- I think we have to do this within either the 1.10 or 1.11 release
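(A minimal sketch of the gitflow-style feature-branch workflow discussed above, run in a throwaway repo; the branch and commit names are illustrative, not real Kubernetes branches or process.)

```shell
# Feature work accumulates on its own branch instead of landing in
# master piecemeal, then merges back as one reviewed unit once tests
# pass on the branch.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
# Start the long-running feature branch (name is made up for the demo):
git checkout -q -b feature/workloads-v1
echo "feature work" > feature.txt
git add feature.txt
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "WIP: workloads v1"
# Land it in mainline as a single, reviewable merge commit:
git checkout -q "$main"
git -c user.email=dev@example.com -c user.name=dev \
    merge -q --no-ff -m "Merge feature/workloads-v1" feature/workloads-v1
git log --oneline --merges
```

The `--no-ff` merge is what distinguishes this from a plain fast-forward: history keeps one commit marking where the whole feature landed, which is what makes "backing a feature out" even conceivable.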
| 
 | ||||
Q. I heard two concerns:
| 
 | ||||
| - How do you protect mainline development from these features going in? | ||||
| - How do you know when they’re ready to go in? | ||||
| - So nodejs claims to have something like 90-something% coverage, we don’t even have coverage measurement | ||||
| - With the interdependence of everything in the system it’s basically impossible to back changes out | ||||
| 
 | ||||
| Q: (thockin)  | ||||
| How do we think we can do this across repos? | ||||
| 
 | ||||
| - Can we hear from people who have actually tried this for what the pain points are? | ||||
| - There are lots of details to work out and people are just going to have to try it at small scale and then at large scale | ||||
| - A lot of the times people try to get their foot in the door with tiny feature PR’s, and then docs, and then feature completion | ||||
| - Jan commenting on experience with workload api’s | ||||
| - Tried bumping workload api’s from v1beta2 to v1, tried it on feature branch | ||||
| - Main blocker was tests, tests weren’t running against the feature branch, only the master branch automatically | ||||
| - We decided we were just going to do it directly against the master branch | ||||
| 
 | ||||
Q (spiffxp) How do we prevent merge chicken, aka racing to get in so you avoid rebase hell
| 
 | ||||
| - With api machinery, one thing we’ve done is wait until after the release cycle to minimize the amount of time that people have to do rebasing | ||||
| - We scheduled that massive change to land right after code freeze | ||||
| - Re: how do we prevent once you’re in you’re in… that’s basically the way it goes right now | ||||
| 
 | ||||
| Thockin:  | ||||
| 
 | ||||
- the pattern that I see frequently is somebody sends me a 10k-line PR and I say "please break this up", and they do, but then they forget the docs and somewhere along the way we miss it
| - Bgrant: it’d be great if we could ask people to write the docs first (maybe this is a criteria for feature branches) | ||||
| - Venezia: what about having a branch per KEP? Also, facebook does something like feature flags, is that something we could do or is that just impossible? | ||||
| 
 | ||||
| Bgrant:  | ||||
| 
 | ||||
| - That can be used, but it requires a huge amount of discipline to ensure those flags are used correctly and consistently throughout the code | ||||
| - Some features we can easily do that eg: if it’s an API, but it’s much harder to do with a feature that’s threaded through components | ||||
| - A branch per KEP is roughly the right level of granularity | ||||
| - What’s the difference between a feature branch and a massive PR? | ||||
| - Review flow isn’t well suited to massive PR’s | ||||
| (thockin) It’s easier on maintainers if we get things in and lock them in, but not on the main tree | ||||
| (thockin) one thing I like about the linux kernel model is the lieutenant model | ||||
| 
 | ||||
| Q: (maru) you’re still going to have to rebase feature branches the way that you rebase PR’s right? | ||||
| 
 | ||||
| - A (jago) the difference is that a whole team could work on a feature branch | ||||
| - I like the idea of trying a few projects and then reporting back | ||||
| - I think roughly the level of granularity of having a branch per KEP sounds about right | ||||
- Multiple feature branches affecting the same area of code would be a very useful place to get some information
| 
 | ||||
Q (???) if someone needs to rebase on a feature branch, could it be resolved by a merge commit instead?
| 
 | ||||
| - (thockin) I think we could solve that with technology and it could be better than building a better review tool | ||||
| - Rebasing sucks, part of the reason we have that is generated code, part of it is poor modularity | ||||
| - We could stand to expend some energy to improve modularity to reduce rebases | ||||
| (bgrant) either feature branches or moving code to other repos are forcing functions to help us clean some of that up | ||||
| 
 | ||||
| Q: how do we make sure that when issues are filed people know which commit/feature branch they should be filing against? | ||||
| - (bgrant) I suspect most issues on kubernetes/kubernetes are filed by contributors | ||||
| - (jdumars) imagine you have a KEP that represents three issues, three icecubes you’ve chipped off the iceberg, the feature branch would hold code relevant to all of those issues | ||||
| - (luxas) feature branches should be protected | ||||
| 
 | ||||
| Yes we’ll definitely want a bot that automatically | ||||
| Q: (alex) do we want to have feature branches | ||||
| Q: (thockin) why not just use a fork instead of branches? | ||||
| 
 | ||||
| - I think it’s harder to get tests spun up for other users | ||||
| - (jgrafton) worry about adding yet another dimension of feature branches to our already large matrix of kubernetes and release branches | ||||
| 
 | ||||
| - Not so much for PR’s, but for CI/periodic jobs | ||||
| - You need to run more than just the PR tests | ||||
- The experience with gated features is that tests run less often, so racy bugs that pop up less frequently get exposed less often; so there is a concern that we wouldn’t get enough testing coverage on the code
- We’ll figure out what the right granularity for feature branches is; probably shorter than a year
- One idea is, instead of supporting forks for users, what if we forked to orgs owned by SIGs, and just made sure the CI was well set up for those orgs?
| (ed. I’m nodding my head vigorously that this is possible) | ||||
| - Just another comment on branches on main repo vs. in other repos | ||||
- The branches in the main repo could be another potential source of mistakes and fat fingers, which we could better solve by making the tooling work against arbitrary repos
- (jgrafton) related to the ownership of e2e tests and the explosion of stuff... do we expect SIGs to own their tests across all feature branches, or should they only care about master?
| 
 | ||||
| Q: worry about tying feature ownership too closely to a single sig, because features could involve multiple sigs | ||||
| 
 | ||||
| If you want to get started with feature branches, reach out to kubernetes-dev first and we’ll get you in touch with the right people to get this process started | ||||
|  | @ -0,0 +1,47 @@ | |||
| Steering Committee Update | ||||
| -------------------------- | ||||
| 
 | ||||
| Notes by @jberkus | ||||
| 
 | ||||
| Showed list of Steering Committee backlog. | ||||
| 
 | ||||
| We had a meeting, and the two big items we'd been pushing on hard were: | ||||
| 
 | ||||
* voting in the proposals in the bootstrap committee
| * how we're going to handle incubator and contrib etc. | ||||
| 
 | ||||
Incubator/contrib: one of our big concerns is what the consequences are for projects and ecosystems.
| We're still discussing it, please be patient.  In the process of solving the incubator process, we have to answer | ||||
| what is kubernetes, which is probably SIGs, but what's a SIG, and who decides, and ...  we end up having to examine | ||||
| everything.  In terms of deciding what is and isn't kubernetes, we want to have that discussion in the open. | ||||
| We also recognize that the project has big technical debt and is scary for new contributors. | ||||
| 
 | ||||
| We're also trying to figure out how to bring in new people in order to have them add, instead of contributing | ||||
| to chaos.  So we need contributors to do mentorship.  We also need some tools for project management, if anyone | ||||
| wants to work on these. | ||||
| 
 | ||||
| We're also going to be working on having a code of conduct for the project, if you | ||||
| want to be part of that discussion you can join the meetings.  For private concerns: steering-private@kubernetes.io. | ||||
| One of the challenges is deciding how enforcement will work.  We've had a few incidents over the last 2 years, | ||||
| which were handled very quietly.  But we're having more people show up on Slack who don't know norms, | ||||
| so feel free to educate people on what the community standards are.  As our project expands across various | ||||
| media, we need to track the behavior of individuals. | ||||
| 
 | ||||
| Q: "Can SIGs propose a decision for some of the stuff on the backlog list and bring it to the SC?" | ||||
| A: "Yes, please!" | ||||
| 
 | ||||
| The other big issue is who owns a SIG and how SIG leaders are chosen.  The project needs to delegate more things to the | ||||
| SIGs but first we need to have transparency around the SIG governance process. | ||||
| 
 | ||||
| Q: "As for What Kubernetes Is: is that a living document we can reference?" | ||||
| A: "There's an old one.  I have a talk on Friday about updating it." | ||||
| 
 | ||||
| There's also the question of whether we're involved in "ecosystem projects" like those hosted by the CNCF.  Things will change | ||||
| but the important thing is to have a good governance structure so that we can make transparent decisions as things change. | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
|  | @ -0,0 +1,62 @@ | |||
| 
 | ||||
| What's Up with SIG-Cluster-Lifecycle | ||||
| ------------------------------------ | ||||
| 
 | ||||
| Notes by @jberkus | ||||
| 
 | ||||
| Luxas talking: I can go over what we did last year, but I'd like to see your ideas about what we should be doing for the future, especially around hosting etc. | ||||
| 
 | ||||
| How can we make Kubeadm beta for next year? | ||||
| Opinions: | ||||
| * HA | ||||
|   * etcd-multihost | ||||
|   * some solution for apiserver, controller | ||||
| * Self-hosted Kubeadm | ||||
| 
 | ||||
| Q: Can someone write a statement on purpose & scope of Kubeadm? | ||||
| 
 | ||||
To install a minimum viable, best-practice cluster for Kubernetes. You have to install your own CNI provider. Kubeadm isn’t meant to endorse any providers at any level of the stack.
| 
 | ||||
| Joe: sub-goal (not required for GA), would be to break out components so that you can just install specific things. | ||||
| Would also like documentation for what kubeadm does under the covers. | ||||
| 
 | ||||
| Josh: requested documentation on "how do I modify my kubeadm install".  Feels this is needed for GA.  Another attendee felt the same thing. | ||||
| 
 | ||||
| One of the other goals is to be a building block for higher-level installers.  Talking to Kubespray people, etc.  Enabling webhooks used as example. | ||||
| 
 | ||||
| There was some additional discussion of various things people might want.  One user wanted UI integration with Dashboard.  The team says they want to keep the scope really narrow in order to be successful.  UI would be a different project.  GKE team may be working on some combination of Kubeadm + Cluster API.  Amazon is not using Kubeadm, but Docker is.  Docker for Mac and Windows will ship with embedded kubernetes in beta later this week. | ||||
| 
 | ||||
Kubeadm dilemma: we want humans to be able to run kubeadm and have it be a good experience, and we want automation to be able to run it, and it's not clear that can be the same tool. Kubeadm has been targeted at people; we might want to make a slightly different UI for machines. Josh says that automating it works pretty well; it's just that the error output is annoying to capture.

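The "UI for machines" point is mostly about capturing exit codes and output in a form automation can act on. A minimal sketch of that pattern, assuming nothing about kubeadm itself (the `run_step` helper is a hypothetical name, and the demo uses `echo` as a stand-in; an external controller would substitute a real kubeadm invocation):

```python
import subprocess

def run_step(cmd: list[str]) -> tuple[int, str, str]:
    """Run one CLI step, returning (exit code, stdout, stderr)
    so a controller can log or react to errors instead of
    scraping human-oriented terminal output."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

# Demonstrated with a harmless command; automation would pass
# something like ["kubeadm", "init", ...] here instead.
code, out, err = run_step(["echo", "hello"])
print(code, out.strip())  # prints: 0 hello
```

Capturing stderr separately is what addresses the complaint above: the error output is collected for the controller rather than lost in a terminal.
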
HA definition:
* etcd should be in a standard HA quorum (3+ nodes)
* more than one master
* all core components (apiserver, scheduler, kcm) need to be on each master
* have to be able to add a master
* upgrades

Or: we want to be able to survive the loss of any one host/node, including a master node. This is different: if we only want to survive the loss of any one master, then two masters would suffice. Argument ensued. Also, what about the recovery or replacement case? A new master needs to be able to join (via a manual command).

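The two-vs-three argument comes down to etcd quorum math: an n-member etcd cluster needs a majority of floor(n/2) + 1 members to commit writes, so it tolerates floor((n - 1) / 2) member failures. A quick sketch:

```python
def quorum(n: int) -> int:
    """Members needed for an etcd write to commit (a majority)."""
    return n // 2 + 1

def failures_tolerated(n: int) -> int:
    """Member losses the cluster survives while keeping quorum."""
    return (n - 1) // 2

for n in (1, 2, 3, 5):
    print(f"members={n} quorum={quorum(n)} tolerates={failures_tolerated(n)}")
# members=1 quorum=1 tolerates=0
# members=2 quorum=2 tolerates=0
# members=3 quorum=2 tolerates=1
# members=5 quorum=3 tolerates=2
```

So with etcd stacked on the masters, two masters tolerate zero etcd failures; surviving the loss of any one master requires three members, which is where the 3+ figure comes from.
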
What about HA upgrades? Are we going to support going from one master to three? Yes, we have to support that.

Revised 4 requirements:
* 3+ etcd replicas
* all master components running on each master
* everything TLS-secured
* upgrades for HA clusters

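For reference, these requirements map roughly onto kubeadm's config file. A hedged sketch using the later v1beta3 config API (which postdates this discussion); all hostnames and paths below are placeholders, not recommendations:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# A stable endpoint (e.g. a load balancer) in front of every
# apiserver, so masters can be added or replaced behind it.
controlPlaneEndpoint: "lb.example.com:6443"
etcd:
  external:
    # 3+ members for quorum; client traffic secured with TLS.
    endpoints:
      - https://etcd-0.example.com:2379
      - https://etcd-1.example.com:2379
      - https://etcd-2.example.com:2379
    caFile: /etc/etcd/ca.crt
    certFile: /etc/etcd/client.crt
    keyFile: /etc/etcd/client.key
```
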
Everyone says that we want a production environment, but it's hard to define what "production grade" means, so we need to stop saying that. Over time, what matters is "is it maintained?" If it's still being worked on, it'll keep getting better.

CoreOS attendee: they are trying to do self-hosted etcd, and there are a lot of unexpectedly fragile moments. Plain HA etcd isn't well tested upstream; there aren't enough e2e tests, and self-hosting makes this worse. The etcd operator needs work, and there needs to be a lot of work by various teams. Self-hosted control planes work really well (they host all of their customers that way); it's etcd that's special.

There are some problems with how Kubernetes uses HA etcd in general. Even if the etcd operator were perfect and it worked, we couldn't necessarily convince people to use it.

Should Kubeadm focus on stable installs, or on the most cutting-edge features? To date it's been focused on the edge, but going to GA means slowing down. Does this mean that someone else will need to be forward-looking? Or do we use feature flags?

SIG Cluster Lifecycle should also document recommendations on things like "how much memory do I need?" But these requirements change all the time; we need more data, and testing by SIG Scalability.

For self-hosting, the single-master situation is different from multi-master, and we can't require HA. Do we need to support non-self-hosted? We can't test all the paths, and there's a cost to maintaining them. One case for non-self-hosted is security, in order to prevent subversion of nodes.

Also, we need to support CA-signed Kubelet certs, but that's mostly done.

So, is HA a necessity for GA? There are a bunch of automation things that already work really well; maybe we should leave HA to external controllers (like Kops, etc.) that use Kubeadm as a primitive. For now, we're providing documentation for how you can set up HA yourself. But how would kubeadm upgrade work then?