

Kubernetes Storage SIG Meeting Notes (2017)

The Kubernetes Storage Special Interest Group (SIG) is a working group within the Kubernetes contributor community interested in storage and volume plugins. This document contains historical meeting notes from past meetings.

December 28, 2017

Cancelled. Happy holidays!

December 14, 2017

Recording: https://youtu.be/_a_pwZKgtT8

Agenda/Notes (Please include your name and estimated time):

  • [Saad Ali, 20 minutes] 1.9 Q4 End-of-quarter Status Update
  • PRs to discuss or that need attention
    • [Please add any PRs that need attention here]
  • Design Review
    • [Please add any design reviews here]
  • [Saad Ali, 10 minutes] Set up a one-off “on-boarding to sig-storage” meeting
    • Proposal by Michael Rubin
    • Suggested topics (presentations/YouTube):
      • Local dev environment setup
      • How to contribute / what does the SIG need help with or appreciate?
      • How to propose and add a new feature
      • How to add a volume plugin? In-tree vs Flex vs CSI vs External Provisioner
        • How to write a volume plugin?
      • How to get a bug fixed?
      • SIG Planning process?
      • Where are the docs? The code?
      • High-level overview of the storage subsystem -- controller, API?
      • How to write tests (E2E, Unit, etc.)?
      • How to make an API change?
    • Volunteers for presentations:
      • Saad Ali
      • Steve Wong
      • Vladimir Vivien
      • Jan Safranek (after Christmas)
      • Erin Boyd
      • Scott C.
    • Proposals:
      • Storage 101 “class” and Slack?
      • “Buddy system” -- answer questions one-on-one
      • Paris had an idea for “group mentoring”; one-on-one can be intimidating
      • Check with the contributor SIG and see if they have ideas
      • Documentation of the volume plugin interface architecture -- should be merged
      • K8s.io docs site should be tailored for different audiences: storage vendor vs users, etc.
  • [Saad Ali, 5 minutes] Meeting December 28, 2017? Or skip that and meet Jan 4?
    • [Saad Ali, 20 minutes] 1.10 Q1 Planning Kickoff
      • Suggest delaying to first week of Jan
    • Conclusion: let's do it
  • [Yassine Tijani, 5 minutes] Discuss CSI timeline and coordination with wg-cloud-provider
  • [Erin Boyd, 5 minutes] Prioritization of Raw Block Plugin Update
    • Seems like there are a lot of community PRs around these; let's coordinate :)
    • Please capture any work related to the Storage SIG and add it to the planning spreadsheet
    • Flex will have block? If CSI takes too long?
  • [Saad Ali] CSI Status
  • [Harry Zhang, 6 min] CSI support for KataContainers (hypervisor-based container runtime)
  • [mayank] RBD Provisioner - in-tree vs external
    • No, not in cloud provider.
  • [lpabon] CSI testing

November 30, 2017

Recording: https://youtu.be/XK5KVj0SbOE

Agenda/Notes (Please include your name and estimated time):

November 23, 2017

Agenda/Notes (Please include your name and estimated time):

  • [Saad Ali] Cancelled due to U.S. Thanksgiving Holiday
  • [Saad Ali] Shifting meeting by 1 week (since 2 weeks from Nov 23 will overlap with Kubecon).

November 9, 2017

Recording: https://youtu.be/Xeup_ATnVEY

Agenda/Notes (Please include your name and estimated time):

  • [Saad Ali, 20 minutes] 1.9 Q4 Planning
  • [Saad Ali] Kubernetes CSI Implementation Meeting Notes
  • PRs to discuss or that need attention
  • Design Review
    • [Please add any design reviews here]
    • [Sandeep Pissay, 20 mins] Review proposal for “Backup of PVs” - Proposal doc
    • [Yunwen Bai, 5 minutes] Dynamic provisioning of local storage proposal. Design doc to be shared EOW
  • Kubecon Storage-SIG planning (AMA)
    • Reminder about SIG Storage AMA at KubeCon. Please come and share your expertise with each other and with the community. Great place to have high bandwidth discussions. Intended to just be an open hangout, but the room comes equipped with flipcharts for drawing as well as AV.

October 26, 2017

Agenda/Notes (Please include your name and estimated time):

  • [Eric Forgette] Intro to Dory, a Flex driver for Docker Volume Plugins
  • [Saad Ali] Feature freeze for 1.9 is tomorrow, Oct 27 (link) -- if you have a feature in the planning spreadsheet that should be tracked for k8s v1.9, please make sure there is an issue opened in the feature repo and marked with the 1.9 milestone.
  • [Brad Childs, 20 minutes] 1.9 Q4 Planning
  • PRs to discuss or that need attention
    • [Please add any PRs that need attention here]
  • Design Review
  • [Michael Rubin] Growing the sig-storage community
  • [Gerry Seidman] AuriStor (AFS) Flex Volume
    • I (and other engineers from AuriStor) will be at LISA next week in San Francisco.
    • If anyone is planning on going, it would be great if we could meet up.
      • I'd appreciate any feedback on our volume plugin implementation plans.
      • Please contact me at gerry@auristor.com / 917-501-8287
    • We are also holding an open BoF at LISA (Wednesday, November 1, 8-9PM)
      • Please come by if you are local.

September 28, 2017

Recording: https://youtu.be/f69IsTKtuvE

Agenda/Notes:

  • [Palak Dalal] 1.9 Q4 Planning
  • [Eric Forgette] Intro to Dory, a Flex driver for Docker Volume Plugins
  • PRs to discuss or that need attention
    • [Please add any PRs that need attention here]
  • Design Review
    • [Please add any design reviews here]
  • [Saad Ali] Testing & Documentation
  • [Saad Ali] Next F2F meeting update
    • Dates: Oct 10-11, 2017
    • Location confirmed: Google 345 Spear Street San Francisco, CA 94105
    • Details: link
    • In-person event attendance at capacity. To attend remotely, add your name to the doc linked above.
  • [Michael Rubin] Growing the sig-storage community
  • [Brad Childs, 10 minutes] Summary of the F2F meeting
    • Oct 2017 Storage SIG F2F Agenda/Notes/Logistics Doc: link

September 14, 2017

Recording: https://youtu.be/AEk6mOsJEyw

Agenda/Notes:

August 31, 2017

Recording: https://youtu.be/OWLEMeedi98

Agenda/Notes:

  • [Saad Ali] Status Update
  • PRs to discuss or that need attention
    • [Please add any PRs that need attention here]
  • Design Review
    • [Please add any design reviews here]
  • [Saad Ali] Next F2F meeting: location? Time?
    • Erin Boyd sent out a survey: link
    • Decided on Oct 10-11, 2017 at Google somewhere in the Bay Area, California (details to follow)

August 17, 2017

Recording: https://youtu.be/yLkolHV3uiw

Agenda/Notes:

  • [Saad Ali] Status Update
  • PRs to discuss or that need attention
  • Design Review
  • [Saad Ali] Next F2F meeting: location? Time? September?
    • [Michelle] Late August is right before code freeze…
    • [Steve Wong] Dell could host a meeting again if the location is Santa Clara or Austin
    • [Saad] How about September 13 and 14 in Los Angeles? To overlap with the Linux Foundation Open Source Summit North America which is Sept 11-14 in LA.
      • Might be too crowded and hard to find a place to host.
      • Could have a social gathering.
    • [Ardalan] If there is interest for meeting in late October (October 25) on the East Coast (Raleigh, NC), NetApp can host the event on the sidelines of All Things Open (https://allthingsopen.org/).
      • May conflict with European OSS
    • [Steve Watt] Austin + 1
    • Poll for location and times?
      • Erin will send out something.
  • CFP for Kubecon closes Monday. Submit something!

August 3, 2017

Recording: https://youtu.be/Eh7Qa7KOL8o

Agenda/Notes:

July 20, 2017

Recording: https://youtu.be/_oqj3JwkzVU

Agenda/Notes:

  • [Saad Ali] Status Update
  • PRs to discuss or that need attention
    • [Please add any PRs that need attention here]
  • Design Review
  • [Saad Ali] Future of In-tree Volume vs Flex vs CSI
  • [Jaice Singer DuMars] Release roles still need volunteers for 1.8

July 6, 2017

Recording: https://youtu.be/3A0IHmuqxBs

Agenda/Notes:

June 22, 2017

Recording: https://youtu.be/oYY-GqhrG2M

Agenda/Notes:

June 8, 2017

Recording: https://youtu.be/YEVy18n543k

Agenda/Notes:

May 25, 2017

Recording: https://youtu.be/NCPCqaXG2SQ

Agenda/Notes:

  • Status Update
  • Features tracking board should be filled in for 1.7
  • PRs to discuss or that need attention
  • Design Review
    • [Please add any design reviews here]
    • [Hemant] https://github.com/kubernetes/community/pull/657 - growing persistent volume
    • CSI Spec?
      • Move to next week
    • Rook Volume Plugin
      • Pretty much done. Wanted to make it for 1.7. Thin plugin
      • Would need to go through exception process.
  • On-boarding docs/process
    • Punted to next meeting
  • StorageOS Volume Plugin
    • [TODO:saadali] Someone from Google should take a look

May 11, 2017

Recording: https://youtu.be/RzUVcnue1j0

Agenda/Notes:

April 27, 2017

Recording: https://youtu.be/YANFlMtue4A

Agenda/Notes:

April 13, 2017

Recording: https://youtu.be/UFLl9H17RdQ

Agenda/Notes:

March 30, 2017

Recording: https://youtu.be/F8HDdJwX_OQ

Agenda/Notes:

March 16, 2017

Recording: https://youtu.be/_dqtg3fURmg

Agenda/Notes:

March 2, 2017

Recording (partial): https://youtu.be/2163WTvqCCI

Agenda/Notes:

February 16, 2017

Recording: https://youtu.be/33ILjxa7l-M

Agenda/Notes:

  • Status Update
    • Spreadsheet: link
    • Release 1.6 Code Freeze 1 week away
  • Design Review
    • [Please add any design reviews here]
    • [Steve Watt] Snapshot
      • Steve and Serge are going to take up Snapshots with Jing.
  • PRs to discuss or that need attention
  • [Erin Boyd] Deprecating Recycler Process
  • Local Storage Volume
    • https://github.com/kubernetes/community/pull/306
    • Timeline?
      • 1.6 Design
      • 1.7 Alpha
        • First use case will be stateful set
    • Design
      • [Steve Watt] Today the PV framework supports network disks. Data management platform folks can't use the PV framework, so they use host path. If we can take local filesystems and expose them as PVs, then we can hook them up to StatefulSets and update the scheduler to handle it, correct?
        • [msau] Correct
      • [Steve Watt] StatefulSet running a 6-node Cassandra cluster with 6 disks: if 1 dies, we don't have the capability to dynamically provision.
        • [msau] If a node dies and you want the pod to release the volume and use a different one, there needs to be an available volume. Pre-provision all local volumes that are available.
      • [Guo] How does this differ from node selectors and node labels?
        • [msau] You don't have to decide the node ahead of time when deploying the pod. It will find any available volume and then fix the node for the pod (see the PV sketch after this discussion). Second: the idea of forgiveness; if that node is having issues, fails, or is unavailable, the system can rebind to a new PV on a different node.
        • [Steve Watt] Want to avoid creating a replication storm. Replace it if unavailable for a certain period of time
          • [msau] Either completely off (no forgiveness) or some time boundary.
        • [Steve Watt] Allows Kubernetes to automatically handle failures, vs. node selectors/node labels. Huge step forward in self-healing capabilities.
        • [Ardalan] If a node goes down, is the data lost? Want to grab a replica, not an empty scratch disk.
          • [msau] Wouldn't the other pod already have a replica?
          • [Serge] Depends.
          • [Guo] Portworx keeps another replica on a different node, so it is OK to go to another node. If one node goes down, it is OK to schedule on one of the other nodes.
          • [Steve Watt] With data replication you generally want to maintain quorum (2 or 3); if one goes down, spin up a new pod on a new node and have the scratch disk replicate.
          • [Serge] We are using the term replication with 2 different meanings. MongoDB-style replication is like you said (1 shard fewer, create a new shard, repopulate), vs. replicated volumes (start with a replica as a seed for the new shard). Not sure what to name them.
          • [Steve Watt] Application vs block replication?
          • [Steve Watt] Similar to Hadoop: multiple disks (blocks) with identical copies; if the host for one dies, we just want the scheduler to schedule against one of the replicas (not a random PV).
          • [Yaron] Active-passive replica may not be logically two PDs, just one that changes location from node A (which crashes) to node B. That makes things simpler.
          • [Steve Watt] PVs are immutable.
          • [Serge] Sounds like that is a lower layer. SDS repair would happen under the covers.
          • [Saad] If the PV can move around, is it really local storage?
          • [Steve Wong] That's more of a hybrid local. The app can take advantage on rebuild.
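
For reference, a minimal sketch of the kind of local PersistentVolume object discussed above, pinned to one node via node affinity. It uses the k8s.io/api Go types as they later shipped (not the alpha annotation form under discussion in this meeting); names such as example-local-pv, node-1, and /mnt/disks/ssd1 are illustrative assumptions, not anything decided here.

```go
// Illustrative sketch only: a pre-provisioned local PersistentVolume whose
// node affinity tells the scheduler which node holds the disk. Built with
// k8s.io/api types that shipped after this meeting; names are made up.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "example-local-pv"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("100Gi"),
			},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// A directory or disk that already exists on the node.
				Local: &v1.LocalVolumeSource{Path: "/mnt/disks/ssd1"},
			},
			// The user does not pick the node up front; whichever available
			// PV gets bound constrains the pod to this node via its affinity.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"node-1"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", pv)
}
```

The NodeAffinity block is the mechanism msau describes: the claim binds to any available pre-provisioned local PV, and the scheduler then places the pod on the node named in that PV's affinity, rather than the user fixing the node with a node selector ahead of time.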

February 2, 2017

Recording: https://youtu.be/EgtyplK3Ilc

Agenda/Notes:

January 19, 2017

Recording: https://youtu.be/Yrcx2AgrJYU

Agenda/Notes:

  • [Matt De Lio] 2017 Planning
    • Spreadsheet: link
      • We're missing reviewers for many of the features/tasks we're working on. Please sign up as a reviewer if you are able; and for the feature owners, please start reaching out to reviewers if you're still missing one in the next few days.
      • The feature repo will be frozen on January 24th, so items that require an entry there need to be created ASAP. Feature owners that haven't already done so should do this soon and update the spreadsheet with the issue number.
  • [Saad Ali] All new 1.6 features should have:
  • [Ardalan Kangarlou] NetApp's external provisioner for Kubernetes
  • [Huamin Chen]: Out-of-tree provisioner repo hosting
    • Bchilds -- move the NFS provisioner repo to “storage”. It will be home to external provisioners and Flex plugins maintained by the community.
  • Design Review
    • [Please add any design reviews here]
  • PRs to discuss or that need attention
  • Next Storage F2F
    • Some time in April in Santa Clara

January 5, 2017

Recording: https://youtu.be/Nksb3w1a2x4

Agenda/Notes: