Add a new code path to the ctpolicy package which enforces Chrome's new CT Policy, which requires that SCTs come from logs run by two different operators, rather than one Google and one non-Google log. To achieve this, invert the "race" logic: rather than assuming we always have two groups, and racing the logs within each group against each other, we now race the various groups against each other, and pick just one arbitrary log from each group to attempt submission to.

Ensure that the new code path does the right thing by adding a new zlint which checks that the two SCTs embedded in a certificate come from logs run by different operators. To support this lint, which needs to have a canonical mapping from logs to their operators, import the Chrome CT Log List JSON Schema and autogenerate Go structs from it so that we can parse a real CT Log List. Also add flags to all services which run these lints (the CA and cert-checker) to let them load a CT Log List from disk and provide it to the lint.

Finally, since we now have the ability to load a CT Log List file anyway, use this capability to simplify configuration of the RA. Rather than listing all of the details for each log we're willing to submit to, simply list the names (technically, Descriptions) of each log, and look up the rest of the details from the log list file.

To support this change, SRE will need to deploy log list files (the real Chrome log list for prod, and a custom log list for staging) and then update the configuration of the RA, CA, and cert-checker. Once that transition is complete, the deletion TODOs left behind by this change will be able to be completed, removing the old RA configuration and old ctpolicy race logic.

Part of #5938
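The canonical log-to-operator mapping that the new lint needs is exactly how the Chrome CT log list JSON is organized: each operator entry contains the logs it runs. As a rough, illustrative sketch (not part of this change; the download URL and field names are assumptions based on the published v3 log list schema), that mapping can be inspected like this:

```sh
# Fetch the Chrome CT log list. The URL is assumed; check the currently
# published log list location before relying on it.
curl -sSL -o log_list.json https://www.gstatic.com/ct/log_list/v3/log_list.json

# Print "<log description>  <operator name>" pairs. In the v3 schema, logs are
# nested under the operator that runs them, which is the mapping the new
# zlint checks embedded SCTs against.
jq -r '.operators[] | .name as $op | .logs[] | "\(.description)\t\($op)"' log_list.json
```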
Boulder-Tools Docker Image Utilities
In CI and our development environment we do not rely on the Go environment of
the host machine, and instead use Go installed in a container. To simplify
things, we separate all of Boulder's build dependencies into their own
boulder-tools Docker image.
Setup
To build boulder-tools images, you'll need Docker set up to do cross-platform builds (we build for both amd64 and arm64 so developers with Apple silicon can use boulder-tools in their dev environment). On Ubuntu the setup steps are:
docker buildx create --use --name=cross
sudo apt-get install qemu binfmt-support qemu-user-static
After setup, the output of docker buildx ls should contain an entry like:
cross0 unix:///var/run/docker.sock running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
If you see an entry like:
cross0 unix:///var/run/docker.sock stopped
That's probably fine; the instance will be started when you run
tag_and_upload.sh (which runs docker buildx build).
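For context, the multi-platform build that tag_and_upload.sh performs boils down to a docker buildx build invocation roughly like the sketch below; the repository name, tag format, and build argument are illustrative guesses rather than values copied from the script:

```sh
# Illustrative sketch only: the tag, repository, and build-arg names here are
# hypothetical. Builds the image for both architectures and pushes it in one step.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg GO_VERSION=1.18.1 \
  --tag letsencrypt/boulder-tools:go1.18.1_2022-04-01 \
  --push \
  .
```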
Go Versions
Rather than install multiple versions of Go within the same boulder-tools
container, we maintain separate images for each Go version we support.
When a new Go version is available we perform several steps to integrate it into our workflow (a sketch of the first step follows the list below):
- We add it to the `GO_VERSIONS` array in `tag_and_upload.sh`.
- We run the `tag_and_upload.sh` script to build, tag, and upload a `boulder-tools` image for each of the `GO_VERSIONS`.
- We update `.github/workflows/boulder-ci.yml`, adding the new docker image tag(s) to the `BOULDER_TOOLS_TAG` section.
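As a hypothetical illustration of the first two steps, the `GO_VERSIONS` array in `tag_and_upload.sh` might look like this while both the currently deployed release and the new one are being built (version numbers invented for the example); running the script then builds, tags, and uploads one image per entry:

```sh
# Hypothetical contents of the GO_VERSIONS array in tag_and_upload.sh.
# The old release stays listed until the new one has been vetted and rolled
# out; its entry (and the matching CI matrix item) is removed later.
GO_VERSIONS=(
  "1.17.9"
  "1.18.1"
)
```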
After some time, when we have spot-checked the new Go release and coordinated
a staging/prod environment upgrade with the operations team, we can remove the
old `GO_VERSIONS` entries, delete their respective build matrix items, and update
`docker-compose.yml`.