# Flake Finder Fridays #0

February 5th 2021 (Recording)

Hosts: Dan Mangum, Rob Kielty

## Introduction

This is the first episode of Flake Finder Fridays with Dan Mangum and Rob Kielty.

On the first Friday of every month we will go through an issue that was logged for a failing or flaky test on the Kubernetes project.

We will review the triage, root cause analysis, and problem resolution for a test related issue logged in the past four weeks.

We intend to demo how CI works on the Kubernetes project and also how we collaborate across teams to resolve test maintenance issues.

## Issue

This is the issue that we are going to look at today...

[Failing Test] ci-kubernetes-build-canary does not understand "--platform"

### Testgrid Dashboard

build-master-canary

### Breaking PRs

## Investigation

  1. Desire to move from Google-owned infrastructure to Kubernetes community infrastructure, hence the introduction of a canary build job to test building and pushing artifacts with the new infrastructure.

  2. Desire to move off of the `bootstrap.py` job (currently being used for the canary job) to `krel` tooling.

  3. A separate job (`ci-kubernetes-build-no-bootstrap`) existed that was doing the same thing as the canary job, but with `krel` tooling.

  4. The no-bootstrap job was running smoothly, so the canary job was updated to use it.

  5. Right before the update, we switched to using buildx for multi-arch images.

  6. Job started failing, which showed up in some interesting ways.

  7. Triage begins! Issue opened and release management team is pinged in Slack.

  8. The build-master job was still passing though... interesting.

  9. Both eventually call `make release`, so the environment must be different.

  10. Let's look inside!

      ```shell
      docker run -it --entrypoint /bin/bash gcr.io/k8s-testimages/bootstrap:v20210130-12516b2

      docker run -it gcr.io/k8s-staging-releng/k8s-ci-builder:v20201128-v0.6.0-6-g6313f696-default /bin/bash
      ```

  11. A few directions we could go here:

    1. Update the k8s-ci-builder image to use a newer version of Docker
    2. Update the k8s-ci-builder image to ensure that `DOCKER_CLI_EXPERIMENTAL=enabled` is set
    3. Update the `release.sh` script to set `DOCKER_CLI_EXPERIMENTAL=enabled`
  12. Making the `release.sh` script more flexible serves the community better because it allows building in more environments. It would also be good to update the k8s-ci-builder image for this specific case.

  13. And we get a new failure!

  14. Let's see what is going on in those images again...

  15. Why would this cause an error in one but not the other if we have `DOCKER_CLI_EXPERIMENTAL=enabled`? (this is why)

  16. In the meantime we went ahead and re-enabled the bootstrap job (consumers of those images need them!).

  17. Decided to increase logging verbosity on failures to see if that would give us a clue about what was going wrong (and to remove those annoying `quiet currently not implemented` warnings).

  18. Job turns green! But how?

  19. Buildx is versioned separately from Docker itself. It turns out that the `--quiet` flag warning was actually an error until buildx v0.5.1.

  20. The build-master job was running buildx v0.5.1 while the krel job was running v0.4.2. This meant the `--quiet` flag caused an error in the krel job, and removing it resolved the failure.

  21. Finished up by once again removing the bootstrap job.
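
The version boundary in steps 19 and 20 can be sketched as a quick shell check. This is a sketch only: `quiet_is_fatal` is a hypothetical helper for illustration, not part of krel or the real `release.sh`; it just encodes the observation that buildx releases before v0.5.1 treated `--quiet` as a hard error, while v0.5.1 and later only warn.

```shell
# Hypothetical helper: succeeds (exit 0) when the given buildx version
# would fail a build that passes --quiet, i.e. when it sorts strictly
# below 0.5.1.
quiet_is_fatal() {
  # sort -V orders version strings numerically; if $1 sorts below 0.5.1,
  # head -n1 prints $1 rather than 0.5.1.
  [ "$(printf '%s\n' "$1" 0.5.1 | sort -V | head -n1)" != "0.5.1" ]
}

quiet_is_fatal 0.4.2 && echo "buildx 0.4.2 (krel job): --quiet fails the build"
quiet_is_fatal 0.5.1 || echo "buildx 0.5.1 (build-master job): --quiet only warns"
```

This is why dropping `--quiet` (and raising verbosity) turned the krel job green without having to pin both jobs to the same buildx release.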

## Fixes

### Test Infra

### Slack Threads

## Kubernetes Project Resources

Brand new to the project?

Already set up and interested in maintaining tests?

Here's how the CI Signal Team actively monitors CI during a release cycle: