
Kubernetes "Github and Build-cop" Rotation

Prerequisites

Traffic sources and responsibilities

  • GitHub Kubernetes issues: Your job is to be the first responder to all new issues. If you are not equipped to do this (which is fine!), it is your job to seek guidance!

    • Support issues should be closed and redirected to Stack Overflow (see example response here).

    • All incoming issues should be tagged with a team label (team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none}); for issues that overlap teams, you can use multiple team labels

      • There is a related concept of "Github teams" which allow you to @ mention a set of people; feel free to @ mention a Github team if you wish, but this is not a substitute for adding a team/* label, which is required

      • If the issue is reporting broken builds, broken e2e tests, or other obvious P0 issues, label the issue with priority/P0 and assign it to someone. This is the only situation in which you should add a priority/* label

        • non-P0 issues do not need a reviewer assigned initially
      • Assign any issues related to Vagrant to @derekwaynecarr (and @mention him in the issue)

    • Keep in mind that you can @ mention people in an issue to bring it to their attention without assigning it to them. You can also @ mention github teams, such as @kubernetes/goog-ux or @kubernetes/kubectl

    • If you need help triaging an issue, consult with (or assign it to) @brendandburns, @thockin, @bgrant0607, @davidopp, @dchen1107, @lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time).

    • At the beginning of your shift, please add team/* labels to any issues that have fallen through the cracks and don't have one. Likewise, be fair to the next person in rotation: try to ensure that every issue that gets filed while you are on duty is handled. The Github query to find issues with no team/* label is: here.
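
      Since the query link above may not resolve outside the original page, here is a rough sketch of what such a GitHub search query can look like. The exact set of team/* labels shown is an assumption based on the list above and should be checked against the repository's current labels:

      ```
      is:issue is:open -label:team/api -label:team/ux -label:team/control-plane -label:team/node -label:team/cluster -label:team/test-infra -label:team/none
      ```

      Paste this into the repository's issue search box (or the site-wide search with a `repo:` qualifier added) to list open issues that still need a team label.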

Build-copping

  • The merge-bot submit queue (source) should auto-merge all eligible PRs for you once they've passed all the relevant checks mentioned below and all [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/) are passing. If the merge-bot has been disabled for some reason, or tests are failing, you might need to do some manual merging to get things back on track.

  • Once a day or so, look at the [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are timing out, clusters are failing to start, or tests are consistently failing (instead of just flaking), file an issue to get things back on track.

  • Jobs that are not in critical e2e tests or flaky test builds are not your responsibility to monitor. The Test owner: listed in the job description will be emailed automatically if the job is failing.

  • If you are oncall, ensure that PRs conforming to the following prerequisites are being merged at a reasonable rate:

    • Have been LGTMd
    • Pass Travis and Jenkins per-PR tests.
    • Author has signed CLA if applicable.
  • Although the shift schedule shows you as being scheduled Monday to Monday, working on the weekend is neither expected nor encouraged. Enjoy your time off.

  • When the build is broken, roll back the responsible PRs ASAP (a sketch of a typical revert is shown after this list).

  • If the build job itself fails, Jenkins will not retry automatically and everything will halt. You can trigger a new build at http://kubekins.mtv.corp.google.com/job/ci-kubernetes-build/#. Click log in, then click Build Now in the left margin.

  • When E2E tests are unstable, a "merge freeze" may be instituted. During a merge freeze:

    • Oncall should slowly merge LGTMd changes throughout the day while monitoring E2E to ensure stability.

    • Ideally the E2E run should be green, but some tests are flaky and can fail randomly (not as a result of a particular change).

      • If a large number of tests fail, or tests that normally pass fail, that is an indication that one or more of the PR(s) in that build might be problematic (and should be reverted).
      • Use the Test Results Analyzer to see individual test history over time.
  • Flake mitigation

    • Tests that flake (fail a small percentage of the time) need an issue filed against them. Please read this; the build cop is expected to file issues for any flaky tests they encounter.

    • It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it.
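
  When the build does break and a PR has to be rolled back (see the bullet above), reverting the offending merge commit is usually the fastest path. A minimal sketch, assuming you have already identified the merge commit SHA from the build logs; the branch name and SHA below are placeholders:

  ```sh
  # Start a revert branch from the current upstream master (names are placeholders).
  git checkout -b revert-broken-pr origin/master

  # Revert the merge commit; -m 1 keeps the first (mainline) parent.
  git revert -m 1 <merge-commit-sha>

  # Push the branch and open a PR for the revert.
  git push origin revert-broken-pr
  ```

  If the submit queue is blocked by the breakage itself, manually merging the revert PR (per the manual-merging note above) is acceptable.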

Contact information

@k8s-oncall will reach the current person on call.
