Compare commits


118 Commits

Author SHA1 Message Date
Paul Holzinger a0b436c123
Merge pull request #411 from mtrmac/podman-sequoia
WIP: Install podman-sequoia in rawhide images
2025-08-19 20:31:41 +02:00
Miloslav Trmač d8d2fc4c90 Install podman-sequoia in rawhide images
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-08-12 19:33:06 +02:00
Miloslav Trmač 2c9f480248 Update the IMG_SFX rules to work on macOS
- (date --utc) is not supported
- The $(file ) make function is not supported
- macOS sed has no \+ in basic regular expressions, use
  the extended format
- (quote arguments to [ ] to avoid confusing error messages if an earlier sed fails)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-07-30 20:55:44 +02:00
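The portability points above can be sketched in shell (the IMG_SFX value is illustrative; the real rules live in this repo's Makefile):

```shell
# Sketch of the macOS portability fixes described above.

# GNU 'date --utc' is unsupported on macOS; the short '-u' flag is portable:
img_sfx=$(date -u +%Y%m%dt%H%M%Sz)

# BRE '\+' is a GNU sed extension; use extended regular expressions instead:
echo "img-20250730t205544z" | sed -E 's/[0-9]+/N/g'    # prints: img-NtNz

# Quote the arguments to [ ] so a failed earlier command gives a clean error:
if [ -n "$img_sfx" ]; then
    echo "IMG_SFX=$img_sfx"
fi
```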
Miloslav Trmač 34add92ba5
Merge pull request #410 from lsm5/skopeo-registry
skopeo_cidev: Depend on docker-distribution
2025-07-23 19:08:48 +02:00
Lokesh Mandvekar 3c73fc4fa8
skopeo / fedora cache_image: Install docker-distribution
Having the registry binary named `registry-v2` causes trouble for
`make test-integration-local`. The registry binary provided by the
docker-distribution package is just `/usr/bin/registry`.

Depending on docker-distribution should make things simpler, more
consistent, and usable regardless of the CI / testing environment.

In skopeo cirrus jobs, the integration tests are run on the host itself
but a lot of the binaries are copied from the skopeo_cidev container.
So, in this case docker-distribution is directly installed on the host
environment and the registry-v2 build is removed from the skopeo_cidev
image.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2025-07-21 14:11:23 -04:00
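A minimal sketch of the naming difference described above (binary names are from the commit message; the fallback logic is illustrative):

```shell
# docker-distribution ships /usr/bin/registry; the old build was 'registry-v2'.
# Prefer the packaged name, falling back so either environment works.
registry_bin=$(command -v registry || command -v registry-v2 || true)
echo "registry binary: ${registry_bin:-not installed}"
```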
Paul Holzinger 0e1497cd77
Merge pull request #408 from Luap99/podman-py-rm
remove podman-py
2025-07-01 10:14:23 +02:00
Paul Holzinger 08a78fef72
new image build 2025-06-27
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:52:11 +02:00
Paul Holzinger 6489ad88d4
remove podman-py
podman-py only uses tmt now, not Cirrus anymore, so delete all of its
image-build infra.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:51:05 +02:00
Paul Holzinger 6b776d0590
Merge pull request #407 from timcoding1988/feat/add-gh-to-fedora
Feat/add gh to fedora
2025-06-24 11:57:40 +02:00
timcoding1988 5f27145d64 1. adding gh 2. remove 4.0 timebomb check
Signed-off-by: Tim Zhou <tzhou@redhat.com>
2025-06-18 10:39:18 -04:00
Paul Holzinger 699dbfbcc1
Merge pull request #404 from Luap99/packages
update to Fedora 42 and add some packages
2025-04-23 11:21:52 +02:00
Paul Holzinger 56b6c5c1f8
update IMG_SFX 2025-04-22
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:26 +02:00
Paul Holzinger 1a7005b4ea
ci: work around build issue
All the base image jobs are failing with:

ssh-keygen -f /tmp/cirrus-ci-build_tmp/cidata.ssh -P "" -q -t ed25519
Saving key "/tmp/cirrus-ci-build_tmp/cidata.ssh" failed: Permission denied
make: *** [Makefile:216: /tmp/cirrus-ci-build_tmp/cidata.ssh] Error 1

I have no idea what happened, but let's try without SELinux in case
SELinux is blocking file access.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:20 +02:00
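For reference, the failing command succeeds when the target directory is writable and unconfined; this sketch reproduces it in a temporary directory:

```shell
# Reproduce the Makefile's key generation in a scratch directory.
tmp=$(mktemp -d)
ssh-keygen -f "$tmp/cidata.ssh" -P "" -q -t ed25519
ls "$tmp"    # cidata.ssh and cidata.ssh.pub
rm -rf "$tmp"
```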
Paul Holzinger e960222013
f42: force newer criu
To fix broken checkpoint tests.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 11:58:46 +02:00
Paul Holzinger 087a6c4b24
AWS fedora: work around selinux bug
On f42 restorecon no longer applies the new label:
https://bugzilla.redhat.com/show_bug.cgi?id=2360183

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-16 16:35:42 +02:00
Paul Holzinger 12c503fb07
fedora: remove python3.8
The package has been removed in f42.

https://fedoraproject.org/wiki/Changes/RetirePython3.8

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 20:11:14 +02:00
Paul Holzinger 96f688b0e3
update to Fedora 42
It has been released.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:53 +02:00
Paul Holzinger 632e4b16f8
.github: check_cirrus_cron work around github bug
So I wondered why our email workflow only reported things for podman...

It seems `secrets: inherit` is broken and no longer working; I see all
jobs on all repos failing with:

Error when evaluating 'secrets'. .github/workflows/check_cirrus_cron.yml (Line: 19, Col: 11): Secret SECRET_CIRRUS_API_KEY is required, but not provided while calling.

This makes no sense to me: I double-checked the names, nothing changed
on our side, and the failure is consistent across all projects.
Interestingly, the same workflow passed on March 10 and 11 (on all
repos) but failed both before and after those dates.

Per [1] we are not alone; anyway, let's try to get this working again
even if it means more duplication.

[1] https://github.com/actions/runner/issues/2709

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
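The extra duplication amounts to passing each secret explicitly instead of relying on `secrets: inherit`; a hypothetical caller-side sketch (the job name is illustrative, the secret name is taken from the error above):

```yaml
jobs:
  cron_failures:
    uses: ./.github/workflows/check_cirrus_cron.yml
    # instead of: secrets: inherit
    secrets:
      SECRET_CIRRUS_API_KEY: ${{ secrets.SECRET_CIRRUS_API_KEY }}
```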
Paul Holzinger ea0295744e
github: use thollander/actions-comment-pull-request
jungwinter/comment no longer seems actively maintained and uses the
deprecated set-output command[1].

[1] https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
Paul Holzinger e073d1b16d
debian: disable dnsmasq service
The dnsmasq service conflicts with aardvark-dns, which also needs to
bind the DNS port (53).

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-01 11:20:18 +02:00
Paul Holzinger af87d70dce
add sqlite3 lib/dev packages
I'd like to dynamically link sqlite3 in podman builds to make the
binaries smaller.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-31 14:31:52 +02:00
Lokesh Mandvekar 879a69260c
Fedora cache image: install koji and fedora-distro-aliases
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2025-03-31 14:23:09 +02:00
Paul Holzinger 564840b6bc
Merge pull request #402 from Luap99/new-images
new images 2025-03-24
2025-03-24 14:59:33 +01:00
Paul Holzinger 6c11ff7257
new images 2025-03-24
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-24 12:19:25 +01:00
Daniel J Walsh fe4e4f3cd7
Merge pull request #401 from Luap99/new-images
new images 20250312
2025-03-12 16:58:26 -04:00
Paul Holzinger 617fe85f37
new images 20250312
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-12 17:54:25 +01:00
Paul Holzinger 3319c260ad
Merge pull request #400 from Luap99/artifacts
add new testartifacts in the cache registry
2025-02-11 20:33:21 +01:00
Paul Holzinger 1a185cfb81
new images
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:08:49 +01:00
Paul Holzinger 3f7b07de69
debian: remove tar work around
Thanks to Reinhard for patching the debian package to no longer trigger
the bug.

https://salsa.debian.org/debian/tar/-/merge_requests/6

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:06:24 +01:00
Paul Holzinger d2652b1135
add new testartifact to image cache
This is needed by https://github.com/containers/podman/pull/25238

To avoid flakes we need to have the test artifacts in the cache
registry.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:02:56 +01:00
Paul Holzinger 4b32b8267d
Merge pull request #399 from Luap99/new-images
new images 2025-01-31
2025-02-03 16:04:16 +01:00
Paul Holzinger 4756da479a
new images 2025-01-31
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-31 13:19:19 +01:00
Paul Holzinger ed0f37f1bd
Merge pull request #398 from Luap99/new-images
new images
2025-01-07 18:46:23 +01:00
Paul Holzinger e5a1016f08
new images
Removed two timebombs that no longer apply: composefs is installed in
the main package list, and the pasta version is in stable now.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:24:36 +01:00
Paul Holzinger 8c6d4bb0bf
debian: remove git-daemon-run
The package no longer exists[1] in sid. Per a quick search it just
contained a simple script, not something we actually use. We need the
`git daemon` command, and that is already part of the main git package AFAICS.

[1] 2de766588e

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:04:39 +01:00
Paul Holzinger 21cebe3fec
Merge pull request #397 from baude/add7z
Add 7zip Windows compression utility
2025-01-06 15:31:32 +01:00
Brent Baude 856110c78d Add 7zip Windows compression utility
The Fedora images used to test libhvee are now being shipped with xz
compression.  Because the golang xz decompression is extremely slow, I'm
proposing to use this command line utility.

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-12-18 09:52:12 -06:00
Paul Holzinger 46c3bf5c93
Merge pull request #396 from Luap99/podman-machine-os
add packages needed by podman-machine-os
2024-12-13 15:23:22 +01:00
Paul Holzinger d317246fd6
build new images
- remove old pasta bump and add new bump for rawhide issue
  https://github.com/containers/podman/issues/24804
- bump debian tar timebomb, it still has the same broken version

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-12 13:25:24 +01:00
Paul Holzinger 006e5b1db8
add packages needed by podman-machine-os
So that we do not have to deal with dnf install issues over there at
runtime.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-05 13:45:56 +01:00
Ed Santiago 99e20928ad
Merge pull request #394 from edsantiago/bump-systemd
Bump. Let's see if we pick up a new systemd.
2024-11-20 08:03:34 -07:00
Ed Santiago 7c285acaaa Bump. Let's see if we pick up a new systemd.
Desperate attempt to look into podman issue 24220, the
missing-logs-and-events flake. I noticed on 1mt that
rawhide is on systemd-257~rc1, which is what's on
debian, and we haven't seen 24220 on debian. F41
is still on 256.7.

Let's see what this PR brings in. If we get systemd-257
on rawhide, let's hammer at it on podman and see what
happens with 24220.

Also, fix a big duh on my part. My new README-simplified
had a line beginning with the word "timebomb", which
'make timebomb-check' interpreted as an actual timebomb
directive, causing the check to fail. The workaround
is to shuffle the words; a more proper solution might be
to exclude READMEs, look only in *.sh files, or apply
some other smart filter.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-18 06:06:17 -07:00
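The "look only in *.sh files" idea can be sketched like this (file names are made up for illustration):

```shell
# Only *.sh files count as timebomb directives; prose in a README is ignored.
tmp=$(mktemp -d)
printf 'timebomb wording in documentation prose\n' > "$tmp/README-simplified.md"
printf '# timebomb 20250901 real directive\n'      > "$tmp/setup.sh"
grep -rl 'timebomb' --include='*.sh' "$tmp" | xargs -n1 basename    # setup.sh
rm -rf "$tmp"
```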
Paul Holzinger 454288919f
Merge pull request #393 from edsantiago/lets-see
Another bump, to pick up 6.11.6 kernel
2024-11-11 14:20:37 +01:00
Ed Santiago 2b3a418d3e Another bump, to get 6.11.6 kernel
Also, bump pasta on f40 just to eliminate all chances
of podman flake 24219.

Also, add a simplified README explaining the usual-case
actions in this repo.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-07 13:58:15 -07:00
Paul Holzinger f4bbaabf94
Merge pull request #392 from edsantiago/f41-clean
VMs: bump to f41
2024-11-07 19:23:52 +01:00
Ed Santiago 4b297585c3 bump IMG_SFX
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:35:17 -07:00
Ed Santiago 4839366e72 Installed packages: make them work again
Changes necessary to get working VM images. I can't remember
why all of these are necessary. I think the docker-compose
change is because that package started bringing in too many
unwanted dependencies that conflict with podman. Anyhow,
this works.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:32:10 -07:00
Ed Santiago aef024bab7 Changes needed for new dnf
Lots of things seem to have changed in dnf-land. These are the
changes that get us working again.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:30:06 -07:00
Ed Santiago 4a12d4e3bd Fedora AWS query: strip the us-east-1
Something has changed in Fedora images on AWS. The us-east-1 suffix
no longer exists. Remove it.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:26:07 -07:00
Ed Santiago 4392650a1c Fedora 41 is stable. Bump.
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:24:29 -07:00
Paul Holzinger 7ef71ffbbd
Merge pull request #389 from edsantiago/testimage-20241011
cache registry: add testimage:20241011
2024-10-17 13:47:08 +02:00
Ed Santiago 57ebb34516 cache registry: add testimage:20241011
Needed by podman for debugging a pasta flake and, more
importantly, supporting infrastructure changes (buildah 5595)
that break APIv2 test assumptions. Fixing these failures
will silence red-herring test failures in our ongoing
testing of zstd:chunked.

The 20240123 image is not used anywhere other than podman,
so it is safe to remove.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-16 08:46:44 -06:00
Ed Santiago a478e68664
Merge pull request #376 from inknos/update-python-versions-and-packages
Remove unused packages and update python versions
2024-10-15 08:36:03 -06:00
Nicola Sella 9301643309 Remove unused packages and update python versions
python-xdg was removed as a dependency
8d1020ecc8

tests are currently done for py12
330cf0b501

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-10-15 10:55:18 +02:00
Ed Santiago d8ee5ceae2
Merge pull request #387 from Luap99/win-zstd
Add zstd on windows
2024-10-10 11:54:35 -06:00
Paul Holzinger ef2c8f2e71
Build new images
Bump the debian tar timebomb, remove the manual crun install as the
package is stable now, and, most importantly, remove the IMA workaround
as the issue[1] is reportedly fixed; we will see if that is true.

[1] https://github.com/containers/podman/issues/18543

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:55:59 +02:00
Paul Holzinger aa36f713ee
windows: add zstandard package
Windows does not have zstd by default, so we need to install it. In
particular I am looking at switching the repo archive to zstd, as this
makes things much faster (over 1 min in podman)[1], but the Windows
testing is unable to extract it. While archiver added zstd support a
while back, that support is not in the version on Chocolatey, which
seems a bit out of date.

[1] https://github.com/containers/podman/pull/24120

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:42:38 +02:00
Ed Santiago 456905c2ed
Merge pull request #386 from edsantiago/test-crun-17
Build images with crun 1.17
2024-09-17 18:08:11 -06:00
Ed Santiago b5c7d46947 Build images with crun 1.17
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-09-11 09:09:35 -06:00
Lokesh Mandvekar 90ac9fc314
Merge pull request #385 from Luap99/ShellCheck
Add ShellCheck to fedora images
2024-09-11 19:12:00 +05:30
Paul Holzinger 2c858e70b9
Add ShellCheck to fedora images
ShellCheck is installed at runtime in podman CI, which is not good[1].
Install it here so we can drop the dnf install there.

Also update some timebombs: pasta is in stable now, tar is still broken
in Debian, and the IMA bug is also still not fixed in podman.

[1] f22f4cfe50/contrib/cirrus/prebuild.sh (L54)

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-06 17:34:23 +02:00
Ed Santiago 454f7be018
Merge pull request #383 from edsantiago/main
Build new VMs
2024-08-26 13:01:35 -06:00
Ed Santiago 3bc493fe31 Build new VMs
Timebomb pasta 08-14 on f39. See how/if this works in podman.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-08-21 11:14:47 -06:00
Chris Evich 9f437cb621
Merge pull request #382 from cevich/fix_debug_test_flake
[CI:TOOLING] Fix test_debug_task passing/failing by chance
2024-08-20 19:06:07 -04:00
Chris Evich 5edc6ba963
Fix test_debug_task passing/failing by chance
There's no guarantee of nested-virt support with the standard
"pick first available" VM type done by the `&ibi_vm` alias.
However, nested-virt is required for `image_builder_debug`
matrix element of `test_debug_task`.  Switch to the alias
purpose-built to supply a nested-virt capable VM.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-20 16:24:49 -04:00
Chris Evich fc75a1a84a
Merge pull request #380 from cevich/faster_simpler_tooling_builds
[CI:TOOLING] Track image IDs instead of tar exports
2024-08-19 15:01:45 -04:00
Chris Evich 8b60787478
Update debugging docs
Clarify the difference between `ci_debug` and `image_builder_debug`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 9400efd805
Add tests for debug targets
Previously, if either debugging target broke in some way, nobody would
know.  Fix this by adding simple CI tests that confirm they build and
run a basic command.

Also, quiet down the unzipping of AWS cli tools.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 4958aa2422
Track image IDs instead of tar exports
Previously all container builds run by the Makefile managed them based
on presence/absence of a docker-archive tar file.  Producing these
exports is time-consuming and ultimately unnecessary extra work.  The
tar files are never actually consumed in a meaningful way by any other
targets.  Further, most of the container builds in CI run in parallel,
simply throwing away the tar when finished.

Fix this by switching to management based on image-ID files instead.
The only exception is the `imgts` image and images which are based on
it.  For those, some special handling is required (already done by the
CI build script), so some comments were added to assist.

Also, remove the `bench_stuff` target entirely as this has long since
been retired.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
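The stamp-file approach the commit describes can be sketched as a hypothetical Makefile rule (target and image names are made up; `podman build --iidfile` writes the built image's ID to a file, giving make a cheap, tar-free prerequisite):

```make
# Rebuild only when the Containerfile changes; the image-ID file is the stamp.
imgts.iid: imgts/Containerfile
	podman build --iidfile $@ -t imgts imgts/
```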
Chris Evich 217ff7ed3e
Merge pull request #379 from cevich/gcp_update
[CI:DOCS] Retire oversight of dnsname project
2024-08-19 10:12:22 -04:00
Chris Evich 4cd328ddfa
Minor: Update/clarify comment
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:19:16 -04:00
Chris Evich 1e2bebe9b0
Retire oversight of dnsname project
This github repo has been archived, CI disabled, and the GCE project
deleted.  Stop tracking it in automation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:15:42 -04:00
Chris Evich 3db41a4702
Merge pull request #375 from cevich/bigger_fedora_vms
Catch Fedora-base image update problems early
2024-08-12 16:36:07 -04:00
Chris Evich 46c104b403
Catch Fedora-base image update problems early
Previously updates were disabled due to the cloud VM only having 2-gig
and the nested-VM only having 1-gig of memory.  Allow Fedora base-image
package updates by increasing the available resources.  Enabling
base-level (esp. kernel) package updates early supports spotting
fundamental image problems early.  Otherwise they may not be found until
a set of images is deployed downstream.

Also, update a few comments relating to followup package update.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:43:29 -04:00
Chris Evich b162196e68
Merge pull request #374 from cevich/rm_network_flakes
Reduce impact of networking slowdowns
2024-08-12 13:39:40 -04:00
Chris Evich 0a1e3dbfff
Reduce impact of networking slowdowns
Previously if a repository server, the internet, or the execution
environment experienced some kind of networking slowdown, it could lead
to a package install or update timeout failure.  Increase resiliency in
these situations with additional retries, timeouts, and lowered minimum
rates.  Also increase the timeout on the related Cirrus-CI tasks.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 10:59:43 -04:00
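On the dnf side, such hardening might look like the following dnf.conf fragment (values are illustrative, not necessarily those used in these images):

```ini
# /etc/dnf/dnf.conf
[main]
retries=10      # retry failed package downloads more times
timeout=60      # seconds to wait on a slow mirror before giving up
minrate=500     # bytes/sec below which a download is considered stalled
```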
Ed Santiago 83c9b1661c
Merge pull request #371 from Luap99/ebpf
add bpftrace for CI debugging
2024-08-06 10:16:57 -06:00
Paul Holzinger 13b68fe5aa
new image IDs
Bump the timebomb to Sep 1st; the podman issue is still not fixed, and
I haven't looked at the debian bug, but I assume it is also still not
fixed.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:32:30 +02:00
Paul Holzinger 5d99e6aed4
add bpftrace for CI debugging
I'd like to run a bpftrace-based program in CI to collect better logs
for specific processes not observed in normal testing, such as the
podman container cleanup command.

Given that you need full privileges to run eBPF, and the package pulls
in an entire toolchain almost 500MB in install size, we do not add it to
the container images, to avoid bloating them without reason.

https://github.com/containers/podman/pull/23487

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:05:24 +02:00
Ed Santiago 798e83dba9
Merge pull request #357 from edsantiago/local-cache-registry
Create a local registry
2024-07-22 05:42:13 -06:00
Ed Santiago 7e977eee41 Create a local registry
...to minimize hiccups. RUN-2091 in Jira. Network registries
are too unreliable; they cause too many flakes in CI. Here
we set up a registry running on each VM, prepopulated with
all container images used in podman and buildah tests.

Related PRs:
   https://github.com/containers/podman/pull/22726
   https://github.com/containers/buildah/pull/5584

Once those merge, podman and buildah CI tests will fetch
images from this local registry.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-08 09:26:55 -06:00
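The prepopulation step can be sketched with skopeo (the port and helper name are illustrative, and the command is echoed rather than run so the sketch is side-effect free):

```shell
# Copy each test image into a registry listening on the VM's loopback.
populate_cache() {
    local img
    for img in "$@"; do
        echo skopeo copy --dest-tls-verify=false \
            "docker://$img" "docker://127.0.0.1:5000/${img#*/}"
    done
}
populate_cache quay.io/libpod/testimage:20241011
```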
Chris Evich e1662886ab
Merge pull request #370 from cevich/increase_image_rm_rate
[CI:TOOLING] Increase obsolete image flagging and pruning
2024-07-08 11:23:37 -04:00
Chris Evich f67769a6ff
Increase obsolete image flagging and pruning
It was observed in the Cirrus-CI cron logs that only the total
number of images scanned is reported.  Fix this by giving more
useful info, like the number of candidates for obsoleting/pruning.

Relatedly, the original restriction of `10` obsolete/prune images
was put in place when only a few repos utilized Cirrus-CI
VMs and image building was substantially less frequent.  The
restriction exists to prevent a potential catastrophe should the `meta`
time-stamp-updating tasks have a bug or some other related failure occur.
Increase the limit to `50` so deletions may proceed much more rapidly.

*Note:* "Obsolete" images still live w/in a 30-day window where they can
be recovered if need be.  It's simply that any attempted use by CI will
fail, putting someone on notice that image recovery may be necessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-08 09:56:36 -04:00
Chris Evich a86360dc58
Remove ref to missing tool
The `uuidgen` tool has long-since been removed from the tooling images.
For whatever reason one call to it still existed.  Remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:48:44 -04:00
Chris Evich dd546e9037
Merge pull request #369 from cevich/aws_creds_docs
[CI:DOCS] Add link to AWS credentials file format
2024-07-05 11:23:35 -04:00
Chris Evich b0f018152e
[CI:DOCS] Add link to AWS credentials file format
Previously this was available in `import_images/README.md` which was
recently removed.  Since this page is difficult to find in the AWS docs,
link it directly into the main README.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:21:10 -04:00
Lokesh Mandvekar faf62c81b7
Merge pull request #354 from lsm5/dotnet
Windows: install dotnet and latest wix
2024-07-02 15:52:13 -04:00
Chris Evich b1864a66e9
Merge pull request #368 from cevich/fix_renovate_lib
[CI:DOCS] Fix renovate updating lib.sh
2024-07-02 14:38:03 -04:00
Chris Evich 07a870aa8e
Fix renovate updating lib.sh
Previously Renovate was failing in a multi-line search for an anchored
pattern in `lib.sh`.  This resulted in it completely ignoring the custom
regex manager for that file, as observed in the debug logs.  Fix this by
removing the regex anchors.

Also remove the filename anchors referenced in the `lib.sh` package rule
as they're unnecessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 14:28:53 -04:00
Chris Evich 419d61271c
Merge pull request #367 from cevich/fix_update_renovate_config
[CI:DOCS] Reformat renovate config + other minor updates
2024-07-02 14:18:23 -04:00
Lokesh Mandvekar 84304ec159
Windows: install dotnet and latest wix
wix3 is EOL and choco doesn't support installing wix > 3.14.

So, this commit installs the `dotnet` runtime and uses dotnet to install
the latest wix in the windows image.

Also remove pasta package timebomb from debian packaging.

Resolves: RUN-2055

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-07-02 14:07:13 -04:00
Chris Evich 8319550d63
Reformat renovate config + other minor updates
Previously the Renovate configuration was using an older format no longer
supported by the bot.  Apply automatic fixes proposed by the bot,
re-adding/adjusting the old comments as needed.

Also:

* Drop automatic assignment of Renovate PRs to `cevich`
* Reference the GHCR registry container image
* Simplify CI VM update warning message conditions & text.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 13:59:09 -04:00
Ed Santiago 6b9b9f9f08
Merge pull request #366 from cevich/do_not_use_cirrus_base_sha
[CI:DOCS] Remove broken CIRRUS_BASE_SHA usage
2024-07-02 09:10:57 -06:00
Ed Santiago 38e7c58ee6
Merge pull request #363 from cevich/rm_import_images
Use fedoraproject published EC2 images
2024-07-02 09:10:13 -06:00
Chris Evich 03802c1e7a
Remove broken CIRRUS_BASE_SHA usage
Unfortunately this value doesn't properly reflect the current branch
point of a PR.  Replace it with a call to `git merge-base` instead.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 10:01:42 -04:00
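The replacement works because `git merge-base` computes the branch point locally from history; a self-contained sketch in a throwaway repo (branch names are illustrative):

```shell
# Build a throwaway repo with a 'pr' branch, then find its branch point.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m base
git checkout -q -b pr
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m change
base_sha=$(git merge-base main pr)   # the true branch point of 'pr'
git log --oneline -1 "$base_sha"
```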
Chris Evich 6ec9ceecf3
Merge pull request #365 from cevich/example_pre-commit-config
[CI:DOCS] Add example pre-commit config
2024-07-01 15:49:57 -04:00
Chris Evich fcf08a3e5a
Add example pre-commit config
Add suggested/example `pre-commit` configuration for this repo. To use
as-is, simply symlink to `.pre-commit-config.yaml`.  Otherwise it can
be a basis for a custom configuration.

Fix all findings from the example pre-commit hooks.

Also include codespell config w/ repo-specific dictionary extension.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 15:48:59 -04:00
Chris Evich 29014788ac
Use fedoraproject published EC2 images
Previously a very complex, manual, and failure-prone `import_images`
stage was required to bring raw images into EC2.  Primarily this was
necessary because beta images aren't published on EC2 by the
fedoraproject.  However, since the original implementation, CI
operations against rawhide have largely supplanted the need to support
testing against the beta images.  This means the `import_images` stage
can be completely dropped, and the `base_images` stage can simply source
images (including `rawhide` if necessary) published by the Fedora
project.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Chris Evich 108ec30605
Remove Debian pasta apparmor workaround
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Ed Santiago cfc18f05da
Merge pull request #364 from cevich/imgsfx_history
[CI:DOCS] Add pre-commit (app) hook to check IMGSFX
2024-07-01 09:35:03 -06:00
Chris Evich 2e5a2acfe2
Add pre-commit (app) hook to check IMGSFX
Intended for use by [the pre-commit
app](https://pre-commit.com/#intro), this hook keeps track of all IMG_SFX
values pushed, failing when any duplicate is found.  In the case of
pushing to PRs that don't build CI VM images, the hook failure must be
manually bypassed.  Example `.pre-commit-config.yaml`:

```yaml
---
repos:
  - repo: https://github.com/containers/automation_images.git
    rev: <tag or commit sha>
    hooks:
      - id: check-imgsfx
```

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:32:03 -04:00
Chris Evich 014b518abf
Merge pull request #362 from cevich/get_ci_vm_docs
[CI:DOCS] Improve get_ci_vm container docs
2024-06-24 15:55:23 -04:00
Chris Evich 03d55b684b
Improve get_ci_vm container docs
The readme contained a lot of technical/implementation details, but
lacked an overview of the architecture/operations.  Fix this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-24 11:16:43 -04:00
Paul Holzinger 8a55408a27
Merge pull request #361 from edsantiago/bump
Semiregular VM catchup
2024-06-21 14:32:56 +02:00
Ed Santiago 79bf8749af Semiregular VM catchup
- rawhide now includes rpm-plugin-ima, which breaks rootless
  podman pods. Add a timebomb'ed workaround until there's a
  more definitive solution in podman or its containers-* libraries

- bug fix for Makefile, handle indented timebombs

- install composefs in rawhide

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-06-20 09:31:27 -06:00
Ed Santiago 91846357a1
Merge pull request #360 from cevich/only_after_merge
[CI:DOCS] Stop tagging during cron runs
2024-05-29 14:51:03 -06:00
Chris Evich f7bdd130a7
Merge pull request #338 from edsantiago/debian_cgroups_v2
Debian: remove force-cgroups-v1 code
2024-05-29 14:38:18 -04:00
Chris Evich 7c1ecb657b
Stop tagging during cron runs
Previously the `tag_latest_images` task was executing during the daily
'lifecycle' Cirrus-cron job.  This was unintentional; the task should
only run after a merge onto the default branch.  Fix the condition.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 10:55:22 -04:00
Miloslav Trmač 1e2559b4af Backport a patch to avoid a panic when compiled with Go >= 1.22
> panic: encoding alphabet includes duplicate symbols

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:17:28 -06:00
Miloslav Trmač 564b76cfe1 Also stop plocate-updatedb
plocate is the default locate implementation in Fedora.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Miloslav Trmač 6cbfbbac05 Stop installing mlocate
It has been retired in Rawhide, and it's unclear whether
we need it at all.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Ed Santiago e50990987f Debian: remove force-cgroups-v1 code
Per discussion in 2024-03-20 Planning meeting, we will no
longer be testing runc in CI. And cgroups V1 is dead too.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-05-29 08:10:58 -06:00
Ed Santiago e48dc5d37e
Merge pull request #359 from cevich/fix_uuidgen
[CI:TOOLING] Fix missing uuidgen tool
2024-05-29 08:09:59 -06:00
Chris Evich aae598a48a
Fix missing uuidgen tool
Previously this tool was used by a few container images as a
half-hearted attempt at thwarting guesses of the credentials
filename.  For whatever reason the `uuidgen` command is no
longer present in the latest base images, but this measure was
unnecessary and not very effective anyway, so remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 08:58:12 -04:00
Chris Evich 9acf75b6f5
Merge pull request #358 from cevich/fix_tag_latest
Fix tagging latest after [CI:TOOLING] PR merge
2024-05-29 08:55:57 -04:00
Chris Evich c63d02bec2
Fix tagging latest after [CI:TOOLING] PR merge
After a PR merges, a branch-level job runs to tag the new container
images.  However, there is a special case when a magic string is present
in the PR title: no Fedora/Skopeo images were built, so they should not
be tagged.

Prior to this commit, this special case wasn't handled correctly, because
`CIRRUS_CHANGE_TITLE` only contains the first line of the HEAD commit.
When executing on a branch, after a PR merge, this would be something
like:

`Merge pull request #FOO from some/thing`

Therefore not matching the intended magic string.  Fix this by switching
to a check against `CIRRUS_CHANGE_MESSAGE` which includes the entire
message.  Importantly, when merged using the github UI, the second line
of the commit message should contain the PR description and thus the
magic string.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-28 16:30:28 -04:00
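The difference is easy to demonstrate with made-up values mirroring a GitHub UI merge (the PR title here is borrowed from an earlier entry in this log):

```shell
# After a merge, only the full message carries the PR title with the magic string.
CIRRUS_CHANGE_TITLE='Merge pull request #380 from cevich/faster_simpler_tooling_builds'
CIRRUS_CHANGE_MESSAGE='Merge pull request #380 from cevich/faster_simpler_tooling_builds

[CI:TOOLING] Track image IDs instead of tar exports'
if [[ "$CIRRUS_CHANGE_TITLE" == *"[CI:TOOLING]"* ]]; then
    echo "title matches"
else
    echo "title does not match"
fi
if [[ "$CIRRUS_CHANGE_MESSAGE" == *"[CI:TOOLING]"* ]]; then
    echo "message matches"
fi
```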
Chris Evich afe1ced362
Merge pull request #356 from cevich/fix_get_ci_vm_test
[CI:TOOLING] Fix get_ci_vm test and new git safety checks
2024-05-23 15:02:29 -04:00
Chris Evich 499c24d856
Fix get_ci_vm test and new git safety checks
Previously, likely due to some git update, the following error was
produced:

```
Testing: Verify mock 'gcevm' flavor main() workflow produces expected
output
fail - Expected exit-code 0 but received 128 while executing
mock_gcevm_workflow (output follows)
Winning lottery-number checksum: 0
gcloud --configuration=automation_images --project=automation_images
compute instances create --zone=us-central1-a
--image-project=automation_images --image=test-image-name --custom-cpu=0
--custom-memory=0Gb --boot-disk-size=0 --labels=in-use-by=foobar
foobar-test-image-name
gcloud --configuration=automation_images --project=automation_images
compute ssh --ssh-flag=-o=AddKeysToAgent=yes --force-key-file-overwrite
--strict-host-key-checking=no --zone=us-central1-a
root@foobar-test-image-name -- true
Cloning into '/tmp/get_ci_vm_hRxAoX.tmp/var/tmp/automation_images'...
fatal: detected dubious ownership in repository at
'/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git'
To add an exception for this directory, call:

  git config --global --add safe.directory
/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```

Fix this.
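A minimal sketch of the kind of fix involved, assuming the test repository path is known up front (the path here is illustrative):

```shell
# Newer git refuses to operate on repositories owned by a different
# user unless they are explicitly marked safe ("dubious ownership").
repo="$(mktemp -d)/good_repo_test"
git init -q "$repo"

# Mark the repository safe so a later clone from it (possibly run as
# another user, e.g. root inside the test environment) does not fail.
git config --global --add safe.directory "$repo/.git"

marked=$(git config --global --get-all safe.directory | grep -cF "$repo/.git")
echo "safe entries: $marked"
```

The `/.git` suffix mirrors the exception git itself suggests in the error output above.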

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-23 14:32:42 -04:00
50 changed files with 940 additions and 805 deletions

View File

@@ -6,7 +6,6 @@ load("cirrus", "fs")
def main():
return {
"env": {
"IMG_SFX": fs.read("IMG_SFX").strip(),
"IMPORT_IMG_SFX": fs.read("IMPORT_IMG_SFX").strip()
"IMG_SFX": fs.read("IMG_SFX").strip()
},
}

View File

@@ -164,20 +164,21 @@ base_images_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 50m
# Cannot use a container for this task, virt required for fedora image conversion
gce_instance:
<<: *ibi_vm
# Nested-virt is required, need Intel Haswell or better CPU
enable_nested_virtualization: true
type: "n2-standard-2"
scopes: ["cloud-platform"]
timeout_in: 70m
gce_instance: *ibi_vm
matrix:
- &base_image
name: "${PACKER_BUILDS} Base Image"
gce_instance: &nested_virt_vm
<<: *ibi_vm
# Nested-virt is required, need Intel Haswell or better CPU
enable_nested_virtualization: true
type: "n2-standard-16"
scopes: ["cloud-platform"]
env:
PACKER_BUILDS: "fedora"
- <<: *base_image
gce_instance: *nested_virt_vm
env:
PACKER_BUILDS: "prior-fedora"
- <<: *base_image
@@ -211,6 +212,7 @@ cache_images_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 90m
container:
dockerfile: "image_builder/Containerfile"
cpu: 2
@@ -231,9 +233,6 @@ cache_images_task:
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-netavark"
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-podman-py"
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-aws"
@@ -284,6 +283,26 @@ win_images_task:
path: win_images/manifest.json
type: application/json
# These targets are intended for humans, make sure they build and function on a basic level
test_debug_task:
name: "Test ${TARGET} make target"
alias: test_debug
only_if: *is_pr
skip: *ci_docs
depends_on:
- validate
gce_instance: *nested_virt_vm
matrix:
- env:
TARGET: ci_debug
- env:
TARGET: image_builder_debug
env:
HOME: "/root"
GAC_FILEPATH: "/dev/null"
AWS_SHARED_CREDENTIALS_FILE: "/dev/null"
DBG_TEST_CMD: "true"
script: make ${TARGET}
# Test metadata addition to images (built or not) to ensure container functions
test_imgts_task: &imgts
@@ -318,7 +337,6 @@ test_imgts_task: &imgts
fedora-c${IMG_SFX}
prior-fedora-c${IMG_SFX}
fedora-netavark-c${IMG_SFX}
fedora-podman-py-c${IMG_SFX}
rawhide-c${IMG_SFX}
debian-c${IMG_SFX}
build-push-c${IMG_SFX}
@@ -485,7 +503,9 @@ test_build-push_task:
tag_latest_images_task:
alias: tag_latest_images
name: "Tag latest built container images."
only_if: $CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH
only_if: |
$CIRRUS_CRON == '' &&
$CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH
skip: *ci_docs
gce_instance: *ibi_vm
env: *image_env
@@ -528,6 +548,7 @@ success_task:
- base_images
- cache_images
- win_images
- test_debug
- test_imgts
- imgts
- test_imgobsolete

2
.codespelldict Normal file
View File

@@ -0,0 +1,2 @@
IMGSFX,IMG-SFX->IMG_SFX
Dockerfile->Containerfile

0
.codespellignore Normal file
View File

4
.codespellrc Normal file
View File

@@ -0,0 +1,4 @@
[codespell]
ignore-words = .codespellignore
dictionary = .codespelldict
quiet-level = 3

View File

@@ -1,20 +1,12 @@
/*
Renovate is a service similar to GitHub Dependabot, but with
(fantastically) more configuration options. So many options
in fact, if you're new I recommend glossing over this cheat-sheet
prior to the official documentation:
Renovate is a service similar to GitHub Dependabot.
https://www.augmentedmind.de/2021/07/25/renovate-bot-cheat-sheet
Configuration Update/Change Procedure:
1. Make changes
2. Manually validate changes (from repo-root):
Please Manually validate any changes to this file with:
podman run -it \
-v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \
docker.io/renovate/renovate:latest \
ghcr.io/renovatebot/renovate:latest \
renovate-config-validator
3. Commit.
Configuration Reference:
https://docs.renovatebot.com/configuration-options/
@@ -22,11 +14,9 @@
Monitoring Dashboard:
https://app.renovatebot.com/dashboard#github/containers
Note: The Renovate bot will create/manage it's business on
branches named 'renovate/*'. Otherwise, and by
default, the only the copy of this file that matters
is the one on the `main` branch. No other branches
will be monitored or touched in any way.
Note: The Renovate bot will create/manage its business on
branches named 'renovate/*'. The only copy of this
file that matters is the one on the `main` branch.
*/
{
@@ -44,55 +34,45 @@
// This repo builds images, don't try to manage them.
"docker:disable"
],
/*************************************************
*** Repository-specific configuration options ***
*************************************************/
// Don't leave dep. update. PRs "hanging", assign them to people.
"assignees": ["cevich"],
// Don't build CI VM images for dep. update PRs (by default)
commitMessagePrefix: "[CI:DOCS]",
"commitMessagePrefix": "[CI:DOCS]",
"regexManagers": [
"customManagers": [
// Manage updates to the common automation library version
{
"customType": "regex",
"fileMatch": "^lib.sh$",
"matchStrings": ["^INSTALL_AUTOMATION_VERSION=\"(?<currentValue>.+)\""],
"matchStrings": ["INSTALL_AUTOMATION_VERSION=\"(?<currentValue>.+)\""],
"depNameTemplate": "containers/automation",
"datasourceTemplate": "github-tags",
"versioningTemplate": "semver-coerced",
// "v" included in tag, but should not be used in lib.sh
"extractVersionTemplate": "v(?<version>.+)",
},
"extractVersionTemplate": "^v(?<version>.+)$"
}
],
// N/B: LAST MATCHING RULE WINS, match statems are ANDed together.
// https://docs.renovatebot.com/configuration-options/#packagerules
"packageRules": [
// When automation library version updated, full CI VM image build
// is needed, along with some other overrides not required in
// (for example) github-action updates.
{
"matchManagers": ["regex"],
"matchFiles": ["lib.sh"], // full-path exact-match
// Don't wait, roll out CI VM Updates immediately
"matchManagers": ["custom.regex"],
"matchFileNames": ["lib.sh"],
"schedule": ["at any time"],
// Override default `[CI:DOCS]`, DO build new CI VM images.
commitMessagePrefix: null,
// Frequently, library updates require adjustments to build-scripts
"commitMessagePrefix": null,
"draftPR": true,
"reviewers": ["cevich"],
"prBodyNotes": [
// handlebar conditionals don't have logical operators, and renovate
// does not provide an 'isMinor' template field
"\
"\
{{#if isMajor}}\
:warning: Changes are **likely** required for build-scripts \
and/or downstream CI VM image users. Please check very carefully. :warning:\
{{/if}}\
{{#if isPatch}}\
:warning: Changes are **likely** required for build-scripts and/or downstream CI VM \
image users. Please check very carefully. :warning:\
{{else}}\
:warning: Changes *might be* required for build-scripts \
and/or downstream CI VM image users. Please double-check. :warning:\
{{/if}}\
"
],
:warning: Changes may be required for build-scripts and/or downstream CI VM \
image users. Please double-check. :warning:\
{{/if}}"
]
}
]
}

View File

@@ -14,4 +14,9 @@ jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
call_cron_failures:
uses: containers/podman/.github/workflows/check_cirrus_cron.yml@main
secrets: inherit
secrets:
SECRET_CIRRUS_API_KEY: ${{secrets.SECRET_CIRRUS_API_KEY}}
ACTION_MAIL_SERVER: ${{secrets.ACTION_MAIL_SERVER}}
ACTION_MAIL_USERNAME: ${{secrets.ACTION_MAIL_USERNAME}}
ACTION_MAIL_PASSWORD: ${{secrets.ACTION_MAIL_PASSWORD}}
ACTION_MAIL_SENDER: ${{secrets.ACTION_MAIL_SENDER}}

View File

@@ -132,12 +132,10 @@ jobs:
- if: steps.manifests.outputs.count > 0
name: Post PR comment with image name/id table
uses: jungwinter/comment@v1.1.0
uses: thollander/actions-comment-pull-request@v3
with:
issue_number: '${{ steps.retro.outputs.prn }}'
type: 'create'
token: '${{ secrets.GITHUB_TOKEN }}'
body: |
pr-number: '${{ steps.retro.outputs.prn }}'
message: |
${{ env.IMAGE_TABLE }}
# Ref: https://github.com/marketplace/actions/deploy-to-gist

1
.gitignore vendored
View File

@@ -1,2 +1,3 @@
*/*.json
/.cache
.pre-commit-config.yaml

20
.pre-commit-hooks.yaml Normal file
View File

@@ -0,0 +1,20 @@
---
# Ref: https://pre-commit.com/#creating-new-hooks
- id: check-imgsfx
name: Check IMG_SFX for accidental reuse.
description: |
Every PR intended to produce CI VM or container images must update
the `IMG_SFX` file via `make IMG_SFX`. The exact value will be
validated against global suffix usage (encoded as tags on the
`imgts` container image). This pre-commit hook verifies on every
push, the IMG_SFX file's value has not been pushed previously.
It's intended as a simple/imperfect way to save developers time
by avoiding force-pushes that will most certainly fail validation.
entry: ./check-imgsfx.sh
language: system
exclude: '.*' # Not examining any specific file/dir/link
always_run: true # ignore no matching files
fail_fast: true
pass_filenames: false
stages: ["pre-push"]

View File

@@ -1 +1 @@
20240513t140131z-f40f39d13
20250812t173301z-f42f41d13

View File

@@ -1 +0,0 @@
20240423t151529z-f40f39d13

235
Makefile
View File

@@ -20,14 +20,13 @@ if_ci_else = $(if $(findstring true,$(CI)),$(1),$(2))
export CENTOS_STREAM_RELEASE = 9
export FEDORA_RELEASE = 40
export PRIOR_FEDORA_RELEASE = 39
# Warning: Beta Fedora releases are not supported. Verify EC2 AMI availability
# here: https://fedoraproject.org/cloud/download
export FEDORA_RELEASE = 42
export PRIOR_FEDORA_RELEASE = 41
# This should always be one-greater than $FEDORA_RELEASE (assuming it's actually the latest)
export RAWHIDE_RELEASE = 41
# See import_images/README.md
export FEDORA_IMPORT_IMG_SFX = $(_IMPORT_IMG_SFX)
export RAWHIDE_RELEASE = 43
# Automation assumes the actual release number (after SID upgrade)
# is always one-greater than the latest DEBIAN_BASE_FAMILY (GCE image).
@@ -104,7 +103,6 @@ override _HLPFMT = "%-20s %s\n"
# Suffix value for any images built from this make execution
_IMG_SFX ?= $(file <IMG_SFX)
_IMPORT_IMG_SFX ?= $(file <IMPORT_IMG_SFX)
# Env. vars needed by packer
export CHECKPOINT_DISABLE = 1 # Disable hashicorp phone-home
@@ -116,6 +114,9 @@ export AWS := aws --output json --region us-east-1
# Needed for container-image builds
GIT_HEAD = $(shell git rev-parse HEAD)
# Save some typing
_IMGTS_FQIN := quay.io/libpod/imgts:c$(_IMG_SFX)
##### Targets #####
# N/B: The double-# after targets is gawk'd out as the target description
@@ -131,17 +132,17 @@ help: ## Default target, parses special in-line comments as documentation.
# names and a max-length of 63.
.PHONY: IMG_SFX
IMG_SFX: timebomb-check ## Generate a new date-based image suffix, store in the file IMG_SFX
$(file >$@,$(shell date --utc +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE)))
@echo "$(file <IMG_SFX)"
@echo "$$(date -u +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE))" > "$@"
@cat IMG_SFX
# Prevent us from wasting CI time when we have expired timebombs
.PHONY: timebomb-check
timebomb-check:
@now=$$(date --utc +%Y%m%d); \
@now=$$(date -u +%Y%m%d); \
found=; \
while read -r bomb; do \
when=$$(echo "$$bomb" | awk '{print $$2}'); \
if [ $$when -le $$now ]; then \
when=$$(echo "$$bomb" | sed -E -e 's/^.*timebomb ([0-9]+).*/\1/'); \
if [ "$$when" -le "$$now" ]; then \
echo "$$bomb"; \
found=found; \
fi; \
@@ -152,13 +153,17 @@ timebomb-check:
false; \
fi
.PHONY: IMPORT_IMG_SFX
IMPORT_IMG_SFX: ## Generate a new date-based import-image suffix, store in the file IMPORT_IMG_SFX
$(file >$@,$(shell date --utc +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE)))
@echo "$(file <IMPORT_IMG_SFX)"
# Given the path to a file containing 'sha256:<image id>' return <image id>
# or throw error if empty.
define imageid
$(if $(file < $(1)),$(subst sha256:,,$(file < $(1))),$(error Container IID file $(1) doesn't exist or is empty))
endef
# This is intended for use by humans, to debug the image_builder_task in .cirrus.yml
# as well as the scripts under the ci subdirectory. See the 'image_builder_debug`
# target if debugging of the packer builds is necessary.
.PHONY: ci_debug
ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks
ci_debug: $(_TEMPDIR)/ci_debug.iid ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks
/usr/bin/podman run -it --rm \
--security-opt label=disable \
-v $(_MKFILE_DIR):$(_MKFILE_DIR) -w $(_MKFILE_DIR) \
@@ -170,19 +175,18 @@ ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local develo
-e GAC_FILEPATH=$(GAC_FILEPATH) \
-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
-e TEMPDIR=$(_TEMPDIR) \
docker-archive:$<
$(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD),)
# Takes 3 arguments: export filepath, FQIN, context dir
# Takes 3 arguments: IID filepath, FQIN, context dir
define podman_build
podman build -t $(2) \
--iidfile=$(1) \
--build-arg CENTOS_STREAM_RELEASE=$(CENTOS_STREAM_RELEASE) \
--build-arg PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \
-f $(3)/Containerfile .
rm -f $(1)
podman save --quiet -o $(1) $(2)
endef
$(_TEMPDIR)/ci_debug.tar: $(_TEMPDIR) $(wildcard ci/*)
$(_TEMPDIR)/ci_debug.iid: $(_TEMPDIR) $(wildcard ci/*)
$(call podman_build,$@,ci_debug,ci)
$(_TEMPDIR):
@@ -225,7 +229,7 @@ $(_TEMPDIR)/user-data: $(_TEMPDIR) $(_TEMPDIR)/cidata.ssh.pub $(_TEMPDIR)/cidata
cidata: $(_TEMPDIR)/user-data $(_TEMPDIR)/meta-data
define build_podman_container
$(MAKE) $(_TEMPDIR)/$(1).tar BASE_TAG=$(2)
$(MAKE) $(_TEMPDIR)/$(1).iid BASE_TAG=$(2)
endef
# First argument is the path to the template JSON
@@ -253,14 +257,17 @@ image_builder: image_builder/manifest.json ## Create image-building image and im
image_builder/manifest.json: image_builder/gce.json image_builder/setup.sh lib.sh systemd_banish.sh $(PACKER_INSTALL_DIR)/packer
$(call packer_build,image_builder/gce.json)
# Note: We assume this repo is checked out somewhere under the caller's
# home-dir for bind-mounting purposes. Otherwise possibly necessary
# files/directories like $HOME/.gitconfig or $HOME/.ssh/ won't be available
# from inside the debugging container.
# Note: It's assumed there are important files in the callers $HOME
# needed for debugging (.gitconfig, .ssh keys, etc.). It's unsafe
# to assume $(_MKFILE_DIR) is also under $HOME. Both are mounted
# for good measure.
.PHONY: image_builder_debug
image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter container for local development/debugging of targets requiring packer + virtualization
image_builder_debug: $(_TEMPDIR)/image_builder_debug.iid ## Build and enter container for local development/debugging of targets requiring packer + virtualization
/usr/bin/podman run -it --rm \
--security-opt label=disable -v $$HOME:$$HOME -w $(_MKFILE_DIR) \
--security-opt label=disable \
-v $$HOME:$$HOME \
-v $(_MKFILE_DIR):$(_MKFILE_DIR) \
-w $(_MKFILE_DIR) \
-v $(_TEMPDIR):$(_TEMPDIR) \
-v $(call err_if_empty,GAC_FILEPATH):$(GAC_FILEPATH) \
-v $(call err_if_empty,AWS_SHARED_CREDENTIALS_FILE):$(AWS_SHARED_CREDENTIALS_FILE) \
@@ -268,113 +275,13 @@ image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter cont
-e PACKER_INSTALL_DIR=/usr/local/bin \
-e PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \
-e IMG_SFX=$(call err_if_empty,_IMG_SFX) \
-e IMPORT_IMG_SFX=$(call err_if_empty,_IMPORT_IMG_SFX) \
-e GAC_FILEPATH=$(GAC_FILEPATH) \
-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
docker-archive:$<
$(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD))
$(_TEMPDIR)/image_builder_debug.tar: $(_TEMPDIR) $(wildcard image_builder/*)
$(_TEMPDIR)/image_builder_debug.iid: $(_TEMPDIR) $(wildcard image_builder/*)
$(call podman_build,$@,image_builder_debug,image_builder)
# Avoid re-downloading unnecessarily
# Ref: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#Special-Targets
.PRECIOUS: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT)
$(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
bash import_images/handle_image.sh \
$@ \
$(call err_if_empty,FEDORA_IMAGE_URL) \
$(call err_if_empty,FEDORA_CSUM_URL)
$(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
bash import_images/handle_image.sh \
$@ \
$(call err_if_empty,FEDORA_ARM64_IMAGE_URL) \
$(call err_if_empty,FEDORA_ARM64_CSUM_URL)
$(_TEMPDIR)/%.md5: $(_TEMPDIR)/%.$(IMPORT_FORMAT)
openssl md5 -binary $< | base64 > $@.tmp
mv $@.tmp $@
# MD5 metadata value checked by AWS after upload + 5 retries.
# Cache disabled to avoid sync. issues w/ vmimport service if
# image re-uploaded.
# TODO: Use sha256 from ..._CSUM_URL file instead of recalculating
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
# Avoid re-uploading unnecessarily
.SECONDARY: $(_TEMPDIR)/%.uploaded
$(_TEMPDIR)/%.uploaded: $(_TEMPDIR)/%.$(IMPORT_FORMAT) $(_TEMPDIR)/%.md5
-$(AWS) s3 rm --quiet s3://packer-image-import/%.$(IMPORT_FORMAT)
$(AWS) s3api put-object \
--content-md5 "$(file < $(_TEMPDIR)/$*.md5)" \
--content-encoding binary/octet-stream \
--cache-control no-cache \
--bucket packer-image-import \
--key $*.$(IMPORT_FORMAT) \
--body $(_TEMPDIR)/$*.$(IMPORT_FORMAT) > $@.tmp
mv $@.tmp $@
# For whatever reason, the 'Format' value must be all upper-case.
# Avoid creating unnecessary/duplicate import tasks
.SECONDARY: $(_TEMPDIR)/%.import_task_id
$(_TEMPDIR)/%.import_task_id: $(_TEMPDIR)/%.uploaded
$(AWS) ec2 import-snapshot \
--disk-container Format=$(shell tr '[:lower:]' '[:upper:]'<<<"$(IMPORT_FORMAT)"),UserBucket="{S3Bucket=packer-image-import,S3Key=$*.$(IMPORT_FORMAT)}" > $@.tmp.json
@cat $@.tmp.json
jq -r -e .ImportTaskId $@.tmp.json > $@.tmp
mv $@.tmp $@
# Avoid importing multiple snapshots for the same image
.PRECIOUS: $(_TEMPDIR)/%.snapshot_id
$(_TEMPDIR)/%.snapshot_id: $(_TEMPDIR)/%.import_task_id
bash import_images/wait_import_task.sh "$<" > $@.tmp
mv $@.tmp $@
define _register_sed
sed -r \
-e 's/@@@NAME@@@/$(1)/' \
-e 's/@@@IMPORT_IMG_SFX@@@/$(_IMPORT_IMG_SFX)/' \
-e 's/@@@ARCH@@@/$(2)/' \
-e 's/@@@SNAPSHOT_ID@@@/$(3)/' \
import_images/register.json.in \
> $(4)
endef
$(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).snapshot_id import_images/register.json.in
$(call _register_sed,fedora-aws,x86_64,$(file <$<),$@)
$(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).snapshot_id import_images/register.json.in
$(call _register_sed,fedora-aws-arm64,arm64,$(file <$<),$@)
# Avoid multiple registrations for the same image
.PRECIOUS: $(_TEMPDIR)/%.ami.id
$(_TEMPDIR)/%.ami.id: $(_TEMPDIR)/%.register.json
$(AWS) ec2 register-image --cli-input-json "$$(<$<)" > $@.tmp.json
cat $@.tmp.json
jq -r -e .ImageId $@.tmp.json > $@.tmp
mv $@.tmp $@
$(_TEMPDIR)/%.ami.name: $(_TEMPDIR)/%.register.json
jq -r -e .Name $< > $@.tmp
mv $@.tmp $@
$(_TEMPDIR)/%.ami.json: $(_TEMPDIR)/%.ami.id $(_TEMPDIR)/%.ami.name
$(AWS) ec2 create-tags \
--resources "$$(<$(_TEMPDIR)/$*.ami.id)" \
--tags \
Key=Name,Value=$$(<$(_TEMPDIR)/$*.ami.name) \
Key=automation,Value=false
$(AWS) --output table ec2 describe-images --image-ids "$$(<$(_TEMPDIR)/$*.ami.id)" \
| tee $@
.PHONY: import_images
import_images: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).ami.json $(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).ami.json import_images/manifest.json.in ## Import generic Fedora cloud images into AWS EC2.
sed -r \
-e 's/@@@IMG_SFX@@@/$(_IMPORT_IMG_SFX)/' \
-e 's/@@@CIRRUS_TASK_ID@@@/$(CIRRUS_TASK_ID)/' \
import_images/manifest.json.in \
> import_images/manifest.json
@echo "Image import(s) successful!"
.PHONY: base_images
# This needs to run in a virt/nested-virt capable environment
base_images: base_images/manifest.json ## Create, prepare, and import base-level images into GCE.
@@ -401,9 +308,10 @@ fedora_podman: ## Build Fedora podman development container
prior-fedora_podman: ## Build Prior-Fedora podman development container
$(call build_podman_container,$@,$(PRIOR_FEDORA_RELEASE))
$(_TEMPDIR)/%_podman.tar: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(_TEMPDIR) $(wildcard cache_images/*.sh)
$(_TEMPDIR)/%_podman.iid: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(_TEMPDIR) $(wildcard cache_images/*.sh)
podman build -t $*_podman:$(call err_if_empty,_IMG_SFX) \
--security-opt seccomp=unconfined \
--iidfile=$@ \
--build-arg=BASE_NAME=$(subst prior-,,$*) \
--build-arg=BASE_TAG=$(call err_if_empty,BASE_TAG) \
--build-arg=PACKER_BUILD_NAME=$(subst _podman,,$*) \
@@ -411,70 +319,69 @@ $(_TEMPDIR)/%_podman.tar: podman/Containerfile podman/setup.sh $(wildcard base_i
--build-arg=CIRRUS_TASK_ID=$(CIRRUS_TASK_ID) \
--build-arg=GIT_HEAD=$(call err_if_empty,GIT_HEAD) \
-f podman/Containerfile .
rm -f $@
podman save --quiet -o $@ $*_podman:$(_IMG_SFX)
.PHONY: skopeo_cidev
skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.tar ## Build Skopeo development and CI container
$(_TEMPDIR)/skopeo_cidev.tar: $(_TEMPDIR) $(wildcard skopeo_base/*)
skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.iid ## Build Skopeo development and CI container
$(_TEMPDIR)/skopeo_cidev.iid: $(_TEMPDIR) $(wildcard skopeo_base/*)
podman build -t skopeo_cidev:$(call err_if_empty,_IMG_SFX) \
--iidfile=$@ \
--security-opt seccomp=unconfined \
--build-arg=BASE_TAG=$(FEDORA_RELEASE) \
skopeo_cidev
rm -f $@
podman save --quiet -o $@ skopeo_cidev:$(_IMG_SFX)
.PHONY: ccia
ccia: $(_TEMPDIR)/ccia.tar ## Build the Cirrus-CI Artifacts container image
$(_TEMPDIR)/ccia.tar: ccia/Containerfile $(_TEMPDIR)
ccia: $(_TEMPDIR)/ccia.iid ## Build the Cirrus-CI Artifacts container image
$(_TEMPDIR)/ccia.iid: ccia/Containerfile $(_TEMPDIR)
$(call podman_build,$@,ccia:$(call err_if_empty,_IMG_SFX),ccia)
.PHONY: bench_stuff
bench_stuff: $(_TEMPDIR)/bench_stuff.tar ## Build the Cirrus-CI Artifacts container image
$(_TEMPDIR)/bench_stuff.tar: bench_stuff/Containerfile $(_TEMPDIR)
$(call podman_build,$@,bench_stuff:$(call err_if_empty,_IMG_SFX),bench_stuff)
# Note: This target only builds imgts:c$(_IMG_SFX) it does not push it to
# any container registry which may be required for targets which
# depend on it as a base-image. In CI, pushing is handled automatically
# by the 'ci/make_container_images.sh' script.
.PHONY: imgts
imgts: $(_TEMPDIR)/imgts.tar ## Build the VM image time-stamping container image
$(_TEMPDIR)/imgts.tar: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR)
$(call podman_build,$@,imgts:$(call err_if_empty,_IMG_SFX),imgts)
imgts: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR) ## Build the VM image time-stamping container image
$(call podman_build,/dev/null,imgts:$(call err_if_empty,_IMG_SFX),imgts)
-rm $(_TEMPDIR)/$@.iid
# Helper function to build images which depend on imgts:latest base image
# N/B: There is no make dependency resolution on imgts.iid on purpose,
# imgts:c$(_IMG_SFX) is assumed to have already been pushed to quay.
# See imgts target above.
define imgts_base_podman_build
podman load -i $(_TEMPDIR)/imgts.tar
podman tag imgts:$(call err_if_empty,_IMG_SFX) imgts:latest
podman image exists $(_IMGTS_FQIN) || podman pull $(_IMGTS_FQIN)
podman image exists imgts:latest || podman tag $(_IMGTS_FQIN) imgts:latest
$(call podman_build,$@,$(1):$(call err_if_empty,_IMG_SFX),$(1))
endef
.PHONY: imgobsolete
imgobsolete: $(_TEMPDIR)/imgobsolete.tar ## Build the VM Image obsoleting container image
$(_TEMPDIR)/imgobsolete.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)
imgobsolete: $(_TEMPDIR)/imgobsolete.iid ## Build the VM Image obsoleting container image
$(_TEMPDIR)/imgobsolete.iid: imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,imgobsolete)
.PHONY: imgprune
imgprune: $(_TEMPDIR)/imgprune.tar ## Build the VM Image pruning container image
$(_TEMPDIR)/imgprune.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)
imgprune: $(_TEMPDIR)/imgprune.iid ## Build the VM Image pruning container image
$(_TEMPDIR)/imgprune.iid: imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,imgprune)
.PHONY: gcsupld
gcsupld: $(_TEMPDIR)/gcsupld.tar ## Build the GCS Upload container image
$(_TEMPDIR)/gcsupld.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)
gcsupld: $(_TEMPDIR)/gcsupld.iid ## Build the GCS Upload container image
$(_TEMPDIR)/gcsupld.iid: imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,gcsupld)
.PHONY: orphanvms
orphanvms: $(_TEMPDIR)/orphanvms.tar ## Build the Orphaned VM container image
$(_TEMPDIR)/orphanvms.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)
orphanvms: $(_TEMPDIR)/orphanvms.iid ## Build the Orphaned VM container image
$(_TEMPDIR)/orphanvms.iid: imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)
$(call imgts_base_podman_build,orphanvms)
.PHONY: .get_ci_vm
get_ci_vm: $(_TEMPDIR)/get_ci_vm.tar ## Build the get_ci_vm container image
$(_TEMPDIR)/get_ci_vm.tar: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
podman build -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile .
rm -f $@
podman save --quiet -o $@ get_ci_vm:$(_IMG_SFX)
get_ci_vm: $(_TEMPDIR)/get_ci_vm.iid ## Build the get_ci_vm container image
$(_TEMPDIR)/get_ci_vm.iid: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
podman build --iidfile=$@ -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile ./
.PHONY: clean
clean: ## Remove all generated files referenced in this Makefile
-rm -rf $(_TEMPDIR)
-rm -f image_builder/*.json
-rm -f *_images/{*.json,cidata*,*-data}
-rm -f ci_debug.tar
-podman rmi imgts:latest
-podman rmi $(_IMGTS_FQIN)

108
README-simplified.md Normal file
View File

@@ -0,0 +1,108 @@
The README here is waaaaaay too complicated for Ed. So here is a
simplified version of the typical things you need to do.
Super Duper Simplest Case
=========================
This is by far the most common case, and the simplest to understand.
You do this when you want to build VMs with newer package versions than
whatever VMs are currently set up in CI. You really need to
understand this before you get into anything more complicated.
```
$ git checkout -b lets-see-what-happens
$ make IMG_SFX
$ git commit -asm"Let's just see what happens"
```
...and push that as a PR.
If you're lucky, in about an hour you will get an email from `github-actions[bot]`
with a nice table of base and cache images, with links. I strongly encourage you
to try to get Ed's
[cirrus-vm-get-versions](https://github.com/edsantiago/containertools/tree/main/cirrus-vm-get-versions)
script working, because this will give you a very quick easy reliable
list of what packages have changed. You don't need this, but life will be painful
for you without it.
(If you're not lucky, the build will break. There are infinite ways for
this to happen, so you're on your own here. Ask for help! This is a great
team, and one or more people may quickly realize the problem.)
Once you have new VMs built, **test in an actual project**! Usually podman
and buildah, but you may want the varks too:
```
$ cd ~/src/github/containers/podman ! or wherever
$ git checkout -b test-new-vms
$ vim .cirrus.yml
[ search for "c202", and replace with your new IMG_SFX.]
[ Don't forget the leading "c"! ]
$ git commit -as
[ Please include a link to the automation_images PR! ]
```
Push this PR and see what happens. If you're very lucky, it will
pass on this and other repos. Get your podman/buildah/vark PRs
reviewed and merged, and then review-merge the automation_images one.
Pushing (har har) Your Luck
---------------------------
Feel lucky? Tag this VM build, so `dependabot` will create PRs
on all the myriad container repos:
```
$ git tag $(<IMG_SFX)
$ git push --no-verify upstream $(<IMG_SFX)
```
Within a few hours you'll see a ton of PRs. It is very likely that
something will go wrong in one or two, and if so, it's impossible to
cover all possibilities. As above, ask for help.
More Complicated Cases
======================
These are the next two most common.
Bumping One Package
-------------------
Quite often we need an emergency bump of only one package that
is not yet stable. Here are examples of the two most typical
cases,
[crun](https://github.com/containers/automation_images/pull/386/files) and
[pasta](https://github.com/containers/automation_images/pull/383/files).
Note the `timebomb` directives. Please use these: the time you save
may be your own, one future day. And please use 2-6 week times.
A timebomb that expires in a year is going to be hard to understand
when it goes off.
Bumping Distros
---------------
Like Fedora 40 to 41. Edit `Makefile`. Change `FEDORA`, `PRIOR_FEDORA`,
and `RAWHIDE`, then proceed with Simple Case.
There is almost zero chance that this will work on the first try.
Sorry, that's just the way it is. See the
[F40 to F41 PR](https://github.com/containers/automation_images/pull/392/files)
for a not-atypical example.
STRONG RECOMMENDATION
=====================
Read [check-imgsfx.sh](check-imgsfx.sh) and follow its instructions. Ed
likes to copy that to `.git/hooks/pre-push`, Chris likes using some
external tool that Ed doesn't trust. Use your judgment.
The reason for this is that you are going to forget to `make IMG_SFX`
one day, and then you're going to `git push --force` an update and walk
away, and come back to a failed run because `IMG_SFX` must always
always always be brand new.
Weak Recommendation
-------------------
Ed likes to fiddle with `IMG_SFX`, zeroing out to the nearest
quarter hour. Absolutely unnecessary, but easier on the eyes
when trying to see which VMs are in use or when comparing
diffs.

View File

@@ -52,7 +52,7 @@ However, all steps are listed below for completeness.
For more information on the overall process of importing custom GCE VM
Images, please [refer to the documentation](https://cloud.google.com/compute/docs/import/import-existing-image). For references to the latest pre-build AWS
EC2 Fedora AMI's see [the
upstream cloud page](https://alt.fedoraproject.org/cloud/).
upstream cloud page](https://fedoraproject.org/cloud/download).
For more information on the primary tool (*packer*) used for this process,
please [see it's documentation page](https://www.packer.io/docs).
@@ -374,10 +374,11 @@ infinite-growth of the VM image count.
# Debugging / Locally driving VM Image production
Because the entire automated build process is containerized, it may easily be
performed locally on your laptop/workstation. However, this process will
Much of the CI and image-build process is containerized, so it may be debugged
locally on your laptop/workstation. However, this process will
still involve interfacing with GCE and AWS. Therefore, you must be in possession
of a *Google Application Credentials* (GAC) JSON and AWS credentials INI file.
of a *Google Application Credentials* (GAC) JSON and
[AWS credentials INI file](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds).
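For orientation, the credentials file follows the standard AWS shared-credentials INI layout; the values below are placeholders, not real keys:

```ini
; ~/.aws/credentials -- placeholder values for illustration only
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```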
The GAC JSON file should represent a service account (contrasted to a user account,
which always uses OAuth2). The name of the service account doesn't matter,
@@ -398,44 +399,52 @@ one of the following (custom) IAM policies enabled:
Somebody familiar with Google and AWS IAM will need to provide you with the
credential files and ensure correct account configuration. Having these files
stored *in your home directory* on your laptop/workstation, the process of
producing images proceeds as follows:
building and entering the debug containers is as follows:
1. Ensure you have podman installed, and lots of available network and CPU
resources (e.g. turn off YouTube, shut down background VMs and other hungry
tasks). Build the image-builder container image, by executing
tasks).
2. Build and enter either the `ci_debug` or the `image_builder_debug` container
image, by executing:
```
make image_builder_debug GAC_FILEPATH=</home/path/to/gac.json> \
AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
make <ci_debug|image_builder_debug> \
GAC_FILEPATH=</home/path/to/gac.json> \
AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
```
2. You will be dropped into a debugging container, inside a volume-mount of
the repository root. This container is practically identical to the VM
produced and used in *overview step 1*. If changes are made, the container
image should be re-built to reflect them.
* The `ci_debug` image is significantly smaller, and only intended for rudimentary
cases, for example running the scripts under the `ci` subdirectory.
* The `image_builder_debug` image is larger, and has KVM virtualization enabled.
It's needed for more extensive debugging of the packer-based image builds.
3. If you wish to build only a subset of available images, list the names
you want as comma-separated values of the `PACKER_BUILDS` variable. Be
sure you *export* this variable so that `make` has access to it. For
example, `export PACKER_BUILDS=debian,prior-fedora`.
3. Both containers will place you in the default shell, inside a volume-mount of
the repository root. This environment is practically identical to what is
used in Cirrus-CI.
4. Still within the container, again ensure you have plenty of network and CPU
4. For the `image_builder_debug` container, if you wish to build only a subset
of available images, list the names you want as comma-separated values of the
`PACKER_BUILDS` variable. Be sure you *export* this variable so that `make`
has access to it. For example, `export PACKER_BUILDS=debian,prior-fedora`.
5. Still within the container, again ensure you have plenty of network and CPU
resources available. Build the VM Base images by executing the command
``make base_images``. This is equivalent to the operation documented in
*overview step 2*. ***N/B*** The GCS -> GCE image conversion can take
some time, be patient. Packer may not produce any output for several minutes
while the conversion is happening.
5. When successful, the names of the produced images will all be referenced
6. When successful, the names of the produced images will all be referenced
in the `base_images/manifest.json` file. If there are problems, fix them
and remove the `manifest.json` file. Then re-run the same *make* command
as before, packer will force-overwrite any broken/partially created
images automatically.
6. Produce the VM Cache Images, equivalent to the operations outlined
7. Produce the VM Cache Images, equivalent to the operations outlined
in *overview step 3*. Execute the following command (still within the
debug image-builder container): ``make cache_images``.
7. Again when successful, you will find the image names are written into
8. Again when successful, you will find the image names are written into
the `cache_images/manifest.json` file. If there is a problem, remove
this file, fix the problem, and re-run the `make` command. No cleanup
is necessary; leftover/disused images will be automatically cleaned up
@@ -26,8 +26,6 @@ variables: # Empty value means it must be passed in on command-line
PRIOR_FEDORA_IMAGE_URL: "{{env `PRIOR_FEDORA_IMAGE_URL`}}"
PRIOR_FEDORA_CSUM_URL: "{{env `PRIOR_FEDORA_CSUM_URL`}}"
FEDORA_IMPORT_IMG_SFX: "{{env `FEDORA_IMPORT_IMG_SFX`}}"
DEBIAN_RELEASE: "{{env `DEBIAN_RELEASE`}}"
DEBIAN_BASE_FAMILY: "{{env `DEBIAN_BASE_FAMILY`}}"
@@ -63,7 +61,7 @@ builders:
type: 'qemu'
accelerator: "kvm"
qemu_binary: '/usr/libexec/qemu-kvm' # Unique to CentOS, not fedora :(
memory: 1024
memory: 12288
iso_url: '{{user `FEDORA_IMAGE_URL`}}'
disk_image: true
format: "raw"
@@ -109,20 +107,18 @@ builders:
- &fedora-aws
name: 'fedora-aws'
type: 'amazon-ebs'
source_ami_filter: # Will fail if >1 or no AMI found
source_ami_filter:
# Many of these search filter values (like account ID and name) aren't publicized
# anywhere. They were found by examining AWS EC2 AMIs published/referenced from
# the AWS sections on https://fedoraproject.org/cloud/download
owners:
# Docs are wrong, specifying the Account ID required to make AMIs private.
# The Account ID is hard-coded here out of expediency, since passing in
# more packer args from the command-line (in Makefile) is non-trivial.
- &accountid '449134212816'
# It's necessary to 'search' for the base-image by these criteria. If
# more than one image is found, Packer will fail the build (and display
# the conflicting AMI IDs).
- &fedora_accountid 125523088429
most_recent: true # Required b/c >1 search result likely to be returned
filters: &ami_filters
architecture: 'x86_64'
image-type: 'machine'
is-public: 'false'
name: '{{build_name}}-i{{user `FEDORA_IMPORT_IMG_SFX`}}'
is-public: 'true'
name: 'Fedora-Cloud-Base*-{{user `FEDORA_RELEASE`}}-*'
root-device-type: 'ebs'
state: 'available'
virtualization-type: 'hvm'
@@ -146,7 +142,6 @@ builders:
volume_type: 'gp2'
delete_on_termination: true
# These are critical and used by security-polciy to enforce instance launch limits.
tags: &awstags
<<: *imgcpylabels
# EC2 expects "Name" to be capitalized
@@ -160,7 +155,7 @@ builders:
# This is necessary for security - The CI service accounts are not permitted
# to use AMI's from any other account, including public ones.
ami_users:
- *accountid
- &accountid '449134212816'
ssh_username: 'fedora'
ssh_clear_authorized_keys: true
# N/B: Required Packer >= 1.8.0
@@ -171,7 +166,8 @@ builders:
name: 'fedora-aws-arm64'
source_ami_filter:
owners:
- *accountid
- *fedora_accountid
most_recent: true # Required b/c >1 search result likely to be returned
filters:
<<: *ami_filters
architecture: 'arm64'


@@ -16,6 +16,15 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# Cloud-networking in general can sometimes be flaky.
# Increase Apt's tolerance levels.
cat << EOF | $SUDO tee -a /etc/apt/apt.conf.d/99timeouts
// Added during CI VM image build
Acquire::Retries "3";
Acquire::http::timeout "300";
Acquire::https::timeout "300";
EOF
echo "Switch sources to Debian Unstable (SID)"
cat << EOF | $SUDO tee /etc/apt/sources.list
deb http://deb.debian.org/debian/ unstable main
@@ -43,19 +52,6 @@ install_automation_tooling
# Ensure automation library is loaded
source "$REPO_DIRPATH/lib.sh"
# 2024-01-02 found debian 13 tar 1.35+dfsg-2
# which has the horrible duplicate-path bug:
# https://github.com/containers/podman/issues/19407
# https://bugzilla.redhat.com/show_bug.cgi?id=2230127
# 2024-01-25 dfsg-3 also has the bug
# 2024-05-01 trixy still has 1.35+dfsg-3
timebomb 20240801 "prevent us from getting broken tar-1.35+dfsg-3"
$SUDO tee /etc/apt/preferences.d/$(date +%Y%m%d)-tar <<EOF
Package: tar
Pin: version 1.35+dfsg-[23]
Pin-Priority: -1
EOF
# Workaround 12->13 forward-incompatible change in grub scripts.
# Without this, updating to the SID kernel may fail.
echo "Upgrading grub-common"


@@ -18,7 +18,6 @@ source "$REPO_DIRPATH/lib.sh"
declare -a PKGS
PKGS=(rng-tools git coreutils cloud-init)
XARGS=--disablerepo=updates
if ! ((CONTAINER)); then
# Packer defines this automatically for us
# shellcheck disable=SC2154
@@ -35,15 +34,23 @@ fi
fi
fi
# Due to https://bugzilla.redhat.com/show_bug.cgi?id=1907030
# updates cannot be installed or even looked at during this stage.
# Pawn the problem off to the cache-image stage where more memory
# is available and debugging is also easier. Try to save some more
# memory by pre-populating repo metadata prior to any transactions.
$SUDO dnf makecache $XARGS
# Updates disable, see comment above
# $SUDO dnf -y update $XARGS
$SUDO dnf -y install $XARGS "${PKGS[@]}"
# The Fedora CI VM base images are built using nested-virt with
# limited resources available. Further, cloud-networking in
# general can sometimes be flaky. Increase DNF's tolerance
# levels.
cat << EOF | $SUDO tee -a /etc/dnf/dnf.conf
# Added during CI VM image build
minrate=100
timeout=60
EOF
$SUDO dnf makecache
$SUDO dnf -y update
$SUDO dnf -y install "${PKGS[@]}"
# Occasionally following an install, there are more updates available.
# This may be due to activation of suggested/recommended dependency resolution.
$SUDO dnf -y update
if ! ((CONTAINER)); then
$SUDO systemctl enable rngd
@@ -83,7 +90,9 @@ if ! ((CONTAINER)); then
# This is necessary to prevent permission-denied errors on service-start
# and also on the off-chance the package gets updated and context reset.
$SUDO semanage fcontext --add --type bin_t /usr/bin/cloud-init
$SUDO restorecon -v /usr/bin/cloud-init
# This used restorecon before so we don't have to specify the file_contexts.local
# manually, however with f42 that stopped working: https://bugzilla.redhat.com/show_bug.cgi?id=2360183
$SUDO setfiles -v /etc/selinux/targeted/contexts/files/file_contexts.local /usr/bin/cloud-init
else # GCP Image
echo "Setting GCP startup service (for Cirrus-CI agent) SELinux unconfined"
# ref: https://cloud.google.com/compute/docs/startupscript


@@ -75,9 +75,6 @@ builders:
source_image_family: 'fedora-base'
labels: *fedora_gce_labels
- <<: *aux_fed_img
name: 'fedora-podman-py'
- <<: *aux_fed_img
name: 'fedora-netavark'


@@ -44,7 +44,7 @@ INSTALL_PACKAGES=(\
fuse-overlayfs
gcc
gettext
git-daemon-run
git
gnupg2
go-md2man
golang
@@ -103,6 +103,8 @@ INSTALL_PACKAGES=(\
skopeo
slirp4netns
socat
libsqlite3-0
libsqlite3-dev
systemd-container
sudo
time
@@ -116,42 +118,18 @@ INSTALL_PACKAGES=(\
zstd
)
# bpftrace is only needed on the host as containers cannot run ebpf
# programs anyway and it is very big so we should not bloat the container
# images unnecessarily.
if ! ((CONTAINER)); then
INSTALL_PACKAGES+=( \
bpftrace
)
fi
msg "Installing general build/testing dependencies"
bigto $SUDO apt-get -q -y install "${INSTALL_PACKAGES[@]}"
# 2024-05-01 Debian pasta package has a broken apparmor profile for our test
# ref: https://github.com/containers/podman/issues/22625
timebomb 20240630 "Workaround for pasta apparmor blocking use of /tmp"
$SUDO tee /etc/apparmor.d/usr.bin.pasta <<EOF
# SPDX-License-Identifier: GPL-2.0-or-later
#
# PASST - Plug A Simple Socket Transport
# for qemu/UNIX domain socket mode
#
# PASTA - Pack A Subtle Tap Abstraction
# for network namespace/tap device mode
#
# contrib/apparmor/usr.bin.pasta - AppArmor profile for pasta(1)
#
# Copyright (c) 2022 Red Hat GmbH
# Author: Stefano Brivio <sbrivio@redhat.com>
abi <abi/3.0>,
include <tunables/global>
profile pasta /usr/bin/pasta{,.avx2} flags=(attach_disconnected) {
include <abstractions/pasta>
# Alternatively: include <abstractions/user-tmp>
/tmp/** rw, # tap_sock_unix_init(), pcap(),
# write_pidfile(),
# logfile_init()
owner @{HOME}/** w, # pcap(), write_pidfile()
}
EOF
# The nc installed by default is missing many required options
$SUDO update-alternatives --set nc /usr/bin/ncat


@@ -47,10 +47,14 @@ req_env_vars PACKER_BUILD_NAME
bash $SCRIPT_DIRPATH/debian_packaging.sh
# dnsmasq is set to bind 0.0.0.0:53, that will conflict with our dns tests.
# We don't need a local resolver.
$SUDO systemctl disable dnsmasq.service
$SUDO systemctl mask dnsmasq.service
if ! ((CONTAINER)); then
warn "Making Debian kernel enable cgroup swap accounting"
warn "Forcing CgroupsV1"
SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=0"/'
SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/'
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub.d/*
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub
ooe.sh $SUDO update-grub
@@ -58,6 +62,10 @@ fi
nm_ignore_cni
if ! ((CONTAINER)); then
initialize_local_cache_registry
fi
finalize
echo "SUCCESS!"


@@ -1,98 +0,0 @@
#!/bin/bash
# This script is called from fedora_setup.sh and various Dockerfiles.
# It's not intended to be used outside of those contexts. It assumes the lib.sh
# library has already been sourced, and that all "ground-up" package-related activity
# needs to be done, including repository setup and initial update.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# shellcheck disable=SC2154
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
msg "Updating/Installing repos and packages for $OS_REL_VER"
bigto ooe.sh $SUDO dnf update -y
INSTALL_PACKAGES=(\
bash-completion
bridge-utils
buildah
bzip2
curl
findutils
fuse3
gcc
git
git-daemon
glib2-devel
glibc-devel
hostname
httpd-tools
iproute
iptables
jq
libtool
lsof
make
nmap-ncat
openssl
openssl-devel
pkgconfig
podman
policycoreutils
protobuf
protobuf-devel
python-pip-wheel
python-setuptools-wheel
python-toml
python-wheel-wheel
python3-PyYAML
python3-coverage
python3-dateutil
python3-docker
python3-fixtures
python3-libselinux
python3-libsemanage
python3-libvirt
python3-pip
python3-psutil
python3-pylint
python3-pytest
python3-pyxdg
python3-requests
python3-requests-mock
python3-virtualenv
python3.6
python3.8
python3.9
redhat-rpm-config
rsync
sed
skopeo
socat
tar
time
tox
unzip
vim
wget
xz
zip
zstd
)
echo "Installing general build/test dependencies"
bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
# It was observed in F33, dnf install doesn't always get you the latest/greatest
lilto $SUDO dnf update -y


@@ -28,7 +28,7 @@ req_env_vars PACKER_BUILD_NAME
if [[ "$PACKER_BUILD_NAME" == "fedora" ]] && [[ ! "$PACKER_BUILD_NAME" =~ "prior" ]]; then
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
lilto ooe.sh $SUDO dnf config-manager setopt updates-testing.enabled=1
else
warn "NOT enabling updates-testing repository for $PACKER_BUILD_NAME"
fi
@@ -56,6 +56,7 @@ INSTALL_PACKAGES=(\
curl
device-mapper-devel
dnsmasq
docker-distribution
e2fsprogs-devel
emacs-nox
fakeroot
@@ -64,6 +65,7 @@ INSTALL_PACKAGES=(\
fuse3
fuse3-devel
gcc
gh
git
git-daemon
glib2-devel
@@ -81,6 +83,7 @@ INSTALL_PACKAGES=(\
iproute
iptables
jq
koji
krb5-workstation
libassuan
libassuan-devel
@@ -101,7 +104,6 @@ INSTALL_PACKAGES=(\
lsof
make
man-db
mlocate
msitools
nfs-utils
nmap-ncat
@@ -113,22 +115,29 @@ INSTALL_PACKAGES=(\
passt
perl-Clone
perl-FindBin
pigz
pkgconfig
podman
podman-remote
pre-commit
procps-ng
protobuf
protobuf-c
protobuf-c-devel
protobuf-devel
python3-fedora-distro-aliases
python3-koji-cli-plugins
redhat-rpm-config
rpcbind
rsync
runc
sed
ShellCheck
skopeo
slirp4netns
socat
sqlite-libs
sqlite-devel
squashfs-tools
tar
time
@@ -145,12 +154,10 @@ INSTALL_PACKAGES=(\
# Rawhide images don't need these packages
if [[ "$PACKER_BUILD_NAME" =~ fedora ]]; then
INSTALL_PACKAGES+=( \
docker-compose
python-pip-wheel
python-setuptools-wheel
python-toml
python-wheel-wheel
python2
python3-PyYAML
python3-coverage
python3-dateutil
@@ -167,17 +174,38 @@ if [[ "$PACKER_BUILD_NAME" =~ fedora ]]; then
python3-requests
python3-requests-mock
)
else # podman-sequoia is only available in Rawhide
timebomb 20251101 "Also install the package in future Fedora releases, and enable Sequoia support in users of the images."
INSTALL_PACKAGES+=( \
podman-sequoia
)
fi
# When installing during a container-build, having this present
# will seriously screw up future dnf operations in very non-obvious ways.
# bpftrace is only needed on the host as containers cannot run ebpf
# programs anyway and it is very big so we should not bloat the container
# images unnecessarily.
if ! ((CONTAINER)); then
INSTALL_PACKAGES+=( \
bpftrace
composefs
container-selinux
fuse-overlayfs
libguestfs-tools
selinux-policy-devel
policycoreutils
)
# Extra packages needed by podman-machine-os
INSTALL_PACKAGES+=( \
podman-machine
osbuild
osbuild-tools
osbuild-ostree
xfsprogs
e2fsprogs
)
fi
@@ -207,5 +235,6 @@ $SUDO curl --fail --silent --location -O \
https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
cd -
# It was observed in F33, dnf install doesn't always get you the latest/greatest
# Occasionally following an install, there are more updates available.
# This may be due to activation of suggested/recommended dependency resolution.
lilto $SUDO dnf update -y


@@ -30,8 +30,6 @@ req_env_vars PACKER_BUILD_NAME
# shellcheck disable=SC2154
if [[ "$PACKER_BUILD_NAME" =~ "netavark" ]]; then
bash $SCRIPT_DIRPATH/fedora-netavark_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "podman-py" ]]; then
bash $SCRIPT_DIRPATH/fedora-podman-py_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "build-push" ]]; then
bash $SCRIPT_DIRPATH/build-push_packaging.sh
# Registers qemu emulation for non-native execution
@@ -49,6 +47,8 @@ if ! ((CONTAINER)); then
else
msg "Enabling cgroup management from containers"
ooe.sh $SUDO setsebool -P container_manage_cgroup true
initialize_local_cache_registry
fi
fi

cache_images/local-cache-registry Executable file

@@ -0,0 +1,345 @@
#! /bin/bash
#
# local-cache-registry - set up and manage a local registry with cached images
#
# Used in containers CI, to reduce exposure to registry flakes.
#
# We start with the docker registry image. Pull it, extract the registry
# binary and config, tweak the config, and create a systemd unit file that
# will start the registry at boot.
#
# We also populate that registry with a (hardcoded) list of container
# images used in CI tests. That way a CI VM comes up already ready,
# and CI tests do not need to do remote pulls. The image list is
# hardcoded right here in this script file, in the automation_images
# repo. See below for reasons.
#
ME=$(basename $0)
###############################################################################
# BEGIN defaults
# FQIN of registry image. From this image, we extract the registry to run.
PODMAN_REGISTRY_IMAGE=quay.io/libpod/registry:2.8.2
# Fixed path to registry setup. This is the directory used by the registry.
PODMAN_REGISTRY_WORKDIR=/var/cache/local-registry
# Fixed port on which registry listens. This is hardcoded and must be
# shared knowledge among all CI repos that use this registry.
REGISTRY_PORT=60333
# Podman binary to run
PODMAN=${PODMAN:-/usr/bin/podman}
# Temporary directories for podman, so we don't clobber any system files.
# Wipe them upon script exit.
PODMAN_TMPROOT=$(mktemp -d --tmpdir $ME.XXXXXXX)
trap 'status=$?; rm -rf $PODMAN_TMPROOT && exit $status' 0
# Images to cache. Default prefix is "quay.io/libpod/"
#
# It seems evil to hardcode this list as part of the script itself
# instead of a separate file or resource but there's a good reason:
# keeping code and data together in one place makes it possible for
# a podman (and some day other repo?) developer to run a single
# command, contrib/cirrus/get-local-registry-script, which will
# fetch this script and allow the dev to run it to start a local
# registry on their system.
#
# As of 2024-07-02 this list includes podman and buildah images
#
# FIXME: periodically run this to look for no-longer-needed images:
#
# for i in $(sed -ne '/IMAGELIST=/,/^[^ ]/p' <cache_images/local-cache-registry | sed -ne 's/^ *//p');do grep -q -R $i ../podman/test ../buildah/tests || echo "unused $i";done
#
declare -a IMAGELIST=(
alpine:3.10.2
alpine:latest
alpine_healthcheck:latest
alpine_nginx:latest
alpine@sha256:634a8f35b5f16dcf4aaa0822adc0b1964bb786fca12f6831de8ddc45e5986a00
alpine@sha256:f270dcd11e64b85919c3bab66886e59d677cf657528ac0e4805d3c71e458e525
alpine@sha256:fa93b01658e3a5a1686dc3ae55f170d8de487006fb53a28efcd12ab0710a2e5f
autoupdatebroken:latest
badhealthcheck:latest
busybox:1.30.1
busybox:glibc
busybox:latest
busybox:musl
cirros:latest
fedora/python-311:latest
healthcheck:config-only
k8s-pause:3.5
podman_python:latest
redis:alpine
registry:2.8.2
registry:volume_omitted
systemd-image:20240124
testartifact:20250206-single
testartifact:20250206-multi
testartifact:20250206-multi-no-title
testartifact:20250206-evil
testdigest_v2s2
testdigest_v2s2:20200210
testimage:00000000
testimage:00000004
testimage:20221018
testimage:20241011
testimage:multiimage
testimage@sha256:1385ce282f3a959d0d6baf45636efe686c1e14c3e7240eb31907436f7bc531fa
testdigest_v2s2:20200210
testdigest_v2s2@sha256:755f4d90b3716e2bf57060d249e2cd61c9ac089b1233465c5c2cb2d7ee550fdb
volume-plugin-test-img:20220623
podman/stable:v4.3.1
podman/stable:v4.8.0
skopeo/stable:latest
ubuntu:latest
)
# END defaults
###############################################################################
# BEGIN help messages
missing=" argument is missing; see $ME -h for details"
usage="Usage: $ME [options] [initialize | cache IMAGE...]
$ME manages a local instance of a container registry.
When called to initialize a registry, $ME will pull
this image into a local temporary directory:
$PODMAN_REGISTRY_IMAGE
...then extract the registry binary and config, tweak the config,
start the registry, and populate it with a list of images needed by tests:
\$ $ME initialize
To fetch individual images into the cache:
\$ $ME cache libpod/testimage:21120101
Override the default image and/or port with:
-i IMAGE registry image to pull (default: $PODMAN_REGISTRY_IMAGE)
-P PORT port to bind to (on 127.0.0.1) (default: $REGISTRY_PORT)
Other options:
-h display usage message
"
die () {
echo "$ME: $*" >&2
exit 1
}
# END help messages
###############################################################################
# BEGIN option processing
while getopts "i:P:hv" opt; do
case "$opt" in
i) PODMAN_REGISTRY_IMAGE=$OPTARG ;;
P) REGISTRY_PORT=$OPTARG ;;
h) echo "$usage"; exit 0;;
v) verbose=1 ;;
\?) echo "Run '$ME -h' for help" >&2; exit 1;;
esac
done
shift $((OPTIND-1))
# END option processing
###############################################################################
# BEGIN helper functions
function podman() {
${PODMAN} --root ${PODMAN_TMPROOT}/root \
--runroot ${PODMAN_TMPROOT}/runroot \
--tmpdir ${PODMAN_TMPROOT}/tmp \
"$@"
}
###############
# must_pass # Run a command quietly; abort with error on failure
###############
function must_pass() {
local log=${PODMAN_TMPROOT}/log
"$@" &> $log
if [ $? -ne 0 ]; then
echo "$ME: Command failed: $*" >&2
cat $log >&2
# If we ever get here, it's a given that the registry is not running.
exit 1
fi
}
###################
# wait_for_port # Returns once port is available on localhost
###################
function wait_for_port() {
local port=$1 # Numeric port
local host=127.0.0.1
local _timeout=5
# Wait
while [ $_timeout -gt 0 ]; do
{ exec {unused_fd}<> /dev/tcp/$host/$port; } &>/dev/null && return
sleep 1
_timeout=$(( $_timeout - 1 ))
done
die "Timed out waiting for port $port"
}
#################
# cache_image # (singular) fetch one remote image
#################
function cache_image() {
local img=$1
# Almost all our images are under libpod; no need to repeat that part
if ! expr "$img" : "^\(.*\)/" >/dev/null; then
img="libpod/$img"
fi
# Almost all our images are from quay.io, but "domain.tld" prefix overrides
registry=$(expr "$img" : "^\([^/.]\+\.[^/]\+\)/" || true)
if [[ -n "$registry" ]]; then
img=$(expr "$img" : "[^/]\+/\(.*\)")
else
registry=quay.io
fi
echo
echo "...caching: $registry / $img"
# FIXME: inspect, and only pull if missing?
for retry in 1 2 3 0;do
skopeo --registries-conf /dev/null \
copy --all --dest-tls-verify=false \
docker://$registry/$img \
docker://127.0.0.1:${REGISTRY_PORT}/$img \
&& return
sleep $((retry * 30))
done
die "Too many retries; unable to cache $registry/$img"
}
##################
# cache_images # (plural) fetch all remote images
##################
function cache_images() {
for img in "${IMAGELIST[@]}"; do
cache_image "$img"
done
}
# END helper functions
###############################################################################
# BEGIN action processing
###################
# do_initialize # Start, then cache images
###################
#
# Intended to be run only from automation_images repo, or by developer
# on local workstation. This should never be run from podman/buildah/etc
# because it defeats the entire purpose of the cache -- a dead registry
# will cause this to fail.
#
function do_initialize() {
# This action can only be run as root
if [[ "$(id -u)" != "0" ]]; then
die "this script must be run as root"
fi
# For the next few commands, die on any error
set -e
mkdir -p ${PODMAN_REGISTRY_WORKDIR}
# Copy of this script
if ! [[ $0 =~ ${PODMAN_REGISTRY_WORKDIR} ]]; then
rm -f ${PODMAN_REGISTRY_WORKDIR}/$ME
cp $0 ${PODMAN_REGISTRY_WORKDIR}/$ME
fi
# Give it three tries, to compensate for flakes
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
must_pass podman pull ${PODMAN_REGISTRY_IMAGE}
# Mount the registry image...
registry_root=$(podman image mount ${PODMAN_REGISTRY_IMAGE})
# ...copy the registry binary into our own bin...
cp ${registry_root}/bin/registry /usr/bin/docker-registry
# ...and copy the config, making a few adjustments to it.
sed -e "s;/var/lib/registry;${PODMAN_REGISTRY_WORKDIR};" \
-e "s;:5000;127.0.0.1:${REGISTRY_PORT};" \
< ${registry_root}/etc/docker/registry/config.yml \
> /etc/local-registry.yml
podman image umount -a
# Create a systemd unit file. Enable it (so it starts at boot)
# and also start it --now.
cat > /etc/systemd/system/$ME.service <<EOF
[Unit]
Description=Local Cache Registry for CI tests
[Service]
ExecStart=/usr/bin/docker-registry serve /etc/local-registry.yml
Type=exec
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now $ME.service
wait_for_port ${REGISTRY_PORT}
cache_images
}
##############
# do_cache # Cache one or more images
##############
function do_cache() {
if [[ -z "$*" ]]; then
die "missing args to 'cache'"
fi
for img in "$@"; do
cache_image "$img"
done
}
# END action processing
###############################################################################
# BEGIN command-line processing
# First command-line arg must be an action
action=${1?ACTION$missing}
shift
case "$action" in
init|initialize) do_initialize ;;
cache) do_cache "$@" ;;
*) die "Unknown action '$action'; must be init | cache IMAGE" ;;
esac
# END command-line processing
###############################################################################
exit 0


@@ -16,18 +16,9 @@ source "$REPO_DIRPATH/lib.sh"
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
# Going from F38 -> rawhide requires some special handling WRT DNF upgrade to DNF5
if [[ "$OS_RELEASE_VER" -ge 38 ]]; then
warn "Upgrading dnf -> dnf5"
showrun $SUDO dnf update -y dnf
showrun $SUDO dnf install -y dnf5
# Even dnf5 refuses to remove the 'dnf' package.
showrun $SUDO rpm -e yum dnf
else
warn "Upgrading Fedora '$OS_RELEASE_VER' to rawhide, this might break."
# shellcheck disable=SC2154
warn "If so, this script may be found in the repo. as '$SCRIPT_DIRPATH/$SCRIPT_FILENAME'."
fi
warn "Upgrading Fedora '$OS_RELEASE_VER' to rawhide, this might break."
# shellcheck disable=SC2154
warn "If so, this script may be found in the repo. as '$SCRIPT_DIRPATH/$SCRIPT_FILENAME'."
# Show what's happening
set -x

View File

@@ -1,17 +0,0 @@
{
"builds": [
{
"name": "fedora-podman-py",
"builder_type": "googlecompute",
"build_time": 1658176090,
"files": null,
"artifact_id": "fedora-podman-py-c5419329914142720",
"packer_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272",
"custom_data": {
"IMG_SFX": "5419329914142720",
"STAGE": "cache"
}
}
],
"last_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272"
}

check-imgsfx.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash
#
# 2024-01-25 esm
# 2024-06-28 cevich
#
# This script is intended to be used by the `pre-commit` utility, or it may
# be manually copied (or symlinked) as local `.git/hooks/pre-push` file.
# It's purpose is to keep track of image-suffix values which have already
# been pushed, to avoid them being immediately rejected by CI validation.
# To use it with the `pre-commit` utility, simply add something like this
# to your `.pre-commit-config.yaml`:
#
# ---
# repos:
# - repo: https://github.com/containers/automation_images.git
# rev: <tag or commit sha>
# hooks:
# - id: check-imgsfx
set -eo pipefail
# Ensure CWD is the repo root
cd $(dirname "${BASH_SOURCE[0]}")
imgsfx=$(<IMG_SFX)
imgsfx_history=".git/hooks/imgsfx.history"
if [[ -e $imgsfx_history ]]; then
if grep -q "$imgsfx" $imgsfx_history; then
echo "FATAL: $imgsfx has already been used" >&2
echo "Please rerun 'make IMG_SFX'" >&2
exit 1
fi
fi
echo $imgsfx >>$imgsfx_history


@@ -1,4 +1,4 @@
# This dockerfile defines the environment for Cirrus-CI when
# This Containerfile defines the environment for Cirrus-CI when
# running automated checks and tests. It may also be used
# for development/debugging or manually building most
# Makefile targets.
@@ -13,11 +13,11 @@ ENV CIRRUS_WORKING_DIR=/var/tmp/automation_images \
PACKER_VERSION=$PACKER_VERSION \
CONTAINER=1
# When using the dockerfile-as-ci feature of Cirrus-CI, it's unsafe
# When using the containerfile-as-ci feature of Cirrus-CI, it's unsafe
# to rely on COPY or ADD instructions. See documentation for warning.
RUN test -n "$PACKER_VERSION"
RUN dnf update -y && \
dnf mark remove $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
dnf -y mark dependency $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
dnf install -y \
ShellCheck \
bash-completion \
@@ -38,7 +38,7 @@ RUN dnf update -y && \
util-linux \
unzip \
&& \
dnf mark install dnf sudo $_ && \
dnf -y mark user dnf sudo $_ && \
dnf autoremove -y && \
dnf clean all


@@ -35,6 +35,14 @@ if [[ -n "$AWS_INI" ]]; then
set_aws_filepath
fi
id
# FIXME: ssh-keygen seems to fail to create keys with Permission denied
# in the base_images make target, I have no idea why but all CI jobs are
# broken because of this. Let's try without selinux.
if [[ "$(getenforce)" == "Enforcing" ]]; then
setenforce 0
fi
set -x
cd "$REPO_DIRPATH"
export IMG_SFX=$IMG_SFX


@@ -9,8 +9,9 @@ fi
# This envar is set by the CI system
# shellcheck disable=SC2154
if [[ "$CIRRUS_CHANGE_TITLE" =~ .*CI:DOCS.* ]]; then
echo "This script must never run after a [CI:DOCS] PR merge"
if [[ "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:DOCS.* ]]; then
echo "This script must never tag anything after a [CI:DOCS] PR merge"
exit 0
fi
# Ensure no secrets leak via debugging var expansion
@@ -23,7 +24,7 @@ echo "$REG_PASSWORD" | \
declare -a imgnames
imgnames=( imgts imgobsolete imgprune gcsupld get_ci_vm orphanvms ccia )
# A [CI:TOOLING] build doesn't produce CI VM images
if [[ ! "$CIRRUS_CHANGE_TITLE" =~ .*CI:TOOLING.* ]]; then
if [[ ! "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:TOOLING.* ]]; then
imgnames+=( skopeo_cidev fedora_podman prior-fedora_podman )
fi


@@ -13,7 +13,7 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
req_env_vars CIRRUS_PR CIRRUS_BASE_SHA CIRRUS_PR_TITLE CIRRUS_USER_PERMISSION
req_env_vars CIRRUS_PR CIRRUS_PR_TITLE CIRRUS_USER_PERMISSION CIRRUS_BASE_BRANCH
show_env_vars
@ -52,17 +52,20 @@ if [[ "$CIRRUS_PR_TITLE" =~ CI:DOCS ]]; then
exit 0
fi
# Variable is defined by Cirrus-CI at runtime
# Fix "Not a valid object name main" error from Cirrus's
# incomplete checkout.
git remote update origin
# Determine where PR branched off of $CIRRUS_BASE_BRANCH
# shellcheck disable=SC2154
if ! git diff --name-only ${CIRRUS_BASE_SHA}..HEAD | grep -q IMG_SFX; then
base_sha=$(git merge-base origin/${CIRRUS_BASE_BRANCH:-main} HEAD)
if ! git diff --name-only ${base_sha}..HEAD | grep -q IMG_SFX; then
die "Every PR that builds images must include an updated IMG_SFX file.
Simply run 'make IMG_SFX', commit the result, and re-push."
else
IMG_SFX="$(<./IMG_SFX)"
# IMG_SFX was modified vs PR's base-branch, confirm version moved forward
# shellcheck disable=SC2154
v_prev=$(git show ${CIRRUS_BASE_SHA}:IMG_SFX 2>&1 || true)
v_prev=$(git show ${base_sha}:IMG_SFX 2>&1 || true)
# Verify new IMG_SFX value always version-sorts later than previous value.
# This prevents screwups due to local timezone, bad, or unset clocks, etc.
new_img_ver=$(awk -F 't' '{print $1"."$2}'<<<"$IMG_SFX" | cut -dz -f1)
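The version-sort guard in the hunk above can be sketched in isolation. The IMG_SFX values below are made-up examples of the assumed `YYYYMMDDtHHMMSSz-...` format:

```bash
# Hypothetical previous and new IMG_SFX values (assumed format).
v_prev="20250601t101010z-f41f40d13"
IMG_SFX="20250819t123456z-f41f40d13"

# Same transform as the check above: split on 't', join with '.',
# drop everything from the 'z' suffix onward.
prev_ver=$(awk -F 't' '{print $1"."$2}' <<<"$v_prev" | cut -dz -f1)
new_img_ver=$(awk -F 't' '{print $1"."$2}' <<<"$IMG_SFX" | cut -dz -f1)

# The new value must version-sort after the previous one; this catches
# screwups from local timezones and bad or unset clocks.
latest=$(printf '%s\n%s\n' "$prev_ver" "$new_img_ver" | sort -V | tail -n 1)
if [ "$latest" = "$new_img_ver" ] && [ "$prev_ver" != "$new_img_ver" ]; then
    echo "IMG_SFX moved forward"
else
    echo "IMG_SFX did not move forward"
fi
```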


@ -0,0 +1,43 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- id: check-symlinks
- id: mixed-line-ending
- id: no-commit-to-branch
args: [--branch, main]
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
- id: codespell
args: [--config, .codespellrc]
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 3.0.0
hooks:
- id: forbid-binary
exclude: >
(?x)^(
get_ci_vm/good_repo_test/dot_git.tar.gz
)$
- id: script-must-have-extension
- id: shellcheck
# These come from ci/shellcheck.sh
args:
- --color=always
- --format=tty
- --shell=bash
- --external-sources
- --enable=add-default-case,avoid-nullary-conditions,check-unassigned-uppercase
- --exclude=SC2046,SC2034,SC2090,SC2064
- --wiki-link-count=0
- --severity=warning
- repo: https://github.com/containers/automation_images.git
rev: 2e5a2acfe21cc4b13511b453733b8875e592ad9c
hooks:
- id: check-imgsfx


@ -1,14 +1,13 @@
# This is a listing of GCP Project IDs which use images produced by
# this repo. It's used by the "Orphan VMs" github action to monitor
# for any leftover/lost VMs.
# This is a listing of Google Cloud Platform Project IDs for
# orphan VM monitoring and possibly other automation tasks.
# Note: CI VM images produced by this repo are all stored within
# the libpod-218412 project (in addition to some AWS EC2)
buildah
conmon-222014
containers-build-source-image
dnsname-8675309
libpod-218412
netavark-2021
oci-seccomp-bpf-hook
podman-py
skopeo
storage-240716
udica-247612


@ -5,6 +5,36 @@ This directory contains the source for building [the
This image is used by the `hack/get_ci_vm.sh` script in many containers-org repos.
It is not intended to be called via any other mechanism.
In general/high-level terms, the architecture and operation is:
1. [containers/automation hosts cirrus-ci_env](https://github.com/containers/automation/tree/main/cirrus-ci_env),
a python mini-implementation of a `.cirrus.yml` parser. Its only job is to extract all required envars,
given a task name (including from a matrix element). It's highly dependent on
[certain YAML formatting requirements](README.md#downstream-repository-cirrusyml-requirements). If the target
repo doesn't follow those standards, nasty/ugly python errors will vomit forth. Mainly this has to do with
Cirrus-CI's use of a non-standard YAML parser, allowing things like certain duplicate dictionary keys.
1. [containers/automation_images hosts get_ci_vm](https://github.com/containers/automation_images/tree/main/get_ci_vm),
a bundling of the `cirrus-ci_env` python script with an `entrypoint.sh` script inside a container image.
1. When a user runs `hack/get_ci_vm.sh` inside a target repo, the container image is entered, and `.cirrus.yml`
is parsed based on the CLI task-name. A VM is then provisioned based on specific envars (see the "Env. Vars."
entries in the [APIv1](README.md#env-vars) and [APIv2](README.md#env-vars-1) sections below).
This is the most complex part of the process.
1. The remote system will not have **any** of the otherwise automatic Cirrus-CI operations performed (like "clone")
nor any magic CI variables defined. Having a VM ready, the container entrypoint script transfers a copy of
the local repo (including any uncommitted changes).
1. The container entrypoint script then performs **_remote_** execution of the `hack/get_ci_vm.sh` script
including the magic `--setup` parameter. Though it varies by repo, typically this will establish everything
necessary to simulate a CI environment, via a call to the repo's own `setup.sh` or equivalent. Typically
the repo's setup scripts will persist any required envars into `/etc/ci_environment` or similar, though
this isn't universal.
1. Lastly, the user is dropped into a shell on the VM, inside the repo copy, with all envars defined and
ready to start running tests.
_Note_: If there are any envars found to be missing, they must be defined by updating either the repo's normal CI
setup scripts (preferred), or the `hack/get_ci_vm.sh` `--setup` section.
# Building
Example build (from repository root):
```bash


@ -413,6 +413,9 @@ make_setup_tarball() {
status "Preparing setup tarball for instance."
req_env_vars DESTDIR _TMPDIR SRCDIR UPSTREAM_REPO
mkdir -p "${_TMPDIR}$DESTDIR"
# Mark the volume-mounted source repo as safe system-wide (w/in the container)
git config --global --add safe.directory "$SRCDIR"
git config --global --add safe.directory "$SRCDIR/.git"
# We have no way of knowing what state or configuration the user's
# local repository is in. Work from a local clone, so we can
# specify our own setup and prevent unexpected script breakage.
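The effect of those two `safe.directory` entries can be sketched against a throwaway `HOME` (the repo path here is hypothetical, standing in for the volume-mounted `$SRCDIR`):

```bash
# Scratch HOME so the global git config written here is throwaway.
scratch=$(mktemp -d)
export HOME="$scratch"

# Hypothetical volume-mounted source repo path.
SRCDIR="$scratch/src"
mkdir -p "$SRCDIR"

# Same calls as the hunk above: mark the repo (and its .git) safe so git
# won't refuse it with a "dubious ownership" error inside the container.
git config --global --add safe.directory "$SRCDIR"
git config --global --add safe.directory "$SRCDIR/.git"

entries=$(git config --global --get-all safe.directory)
echo "$entries"
rm -rf "$scratch"
```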


@ -33,7 +33,8 @@ dnf install -y --allowerasing $(<"$INST_PKGS_FP")
AWSURL="https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
cd /tmp
curl --fail --location -O "${AWSURL}"
unzip awscli*.zip
# There's little reason to see every single file extracted
unzip -q awscli*.zip
./aws/install -i /usr/local/share/aws-cli -b /usr/local/bin
rm -rf awscli*.zip ./aws


@ -11,13 +11,13 @@ set -eo pipefail
# shellcheck source=imgts/lib_entrypoint.sh
source /usr/local/bin/lib_entrypoint.sh
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX IMPORT_IMG_SFX
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
gcloud_init
# Set this to 1 for testing
DRY_RUN="${DRY_RUN:-0}"
OBSOLETE_LIMIT=10
OBSOLETE_LIMIT=50
THEFUTURE=$(date --date='+1 hour' +%s)
TOO_OLD_DAYS='30'
TOO_OLD_DESC="$TOO_OLD_DAYS days ago"
@ -159,10 +159,10 @@ for (( i=nr_amis ; i ; i-- )); do
continue
fi
# Any image matching the currently in-use IMG_SFX or IMPORT_IMG_SFX
# Any image matching the currently in-use IMG_SFX
# must always be preserved. Values are defined in cirrus.yml
# shellcheck disable=SC2154
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
msg "Retaining current (latest) image $name | $tags"
continue
fi
@ -201,14 +201,15 @@ for (( i=nr_amis ; i ; i-- )); do
done
COUNT=$(<"$IMGCOUNT")
CANDIDATES=$(wc -l <$TOOBSOLETE)
msg "########################################################################"
msg "Obsoleting $OBSOLETE_LIMIT random images of $COUNT examined:"
msg "Obsoleting $OBSOLETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"
# Require a minimum number of images to exist. Also if there is some
# horrible scripting accident, this limits the blast-radius.
if [[ "$COUNT" -lt $OBSOLETE_LIMIT ]]
if [[ "$CANDIDATES" -lt $OBSOLETE_LIMIT ]]
then
die 0 "Safety-net Insufficient images ($COUNT) to process ($OBSOLETE_LIMIT required)"
die 0 "Safety-net Insufficient images ($CANDIDATES) to process ($OBSOLETE_LIMIT required)"
fi
# Don't let one bad apple ruin the whole bunch
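The candidate-count safety net in this hunk can be exercised standalone (the candidate list below is fabricated, standing in for `$TOOBSOLETE`):

```bash
# Fabricated candidate list standing in for $TOOBSOLETE.
TOOBSOLETE=$(mktemp)
printf 'img-a\nimg-b\nimg-c\nimg-d\nimg-e\n' > "$TOOBSOLETE"
OBSOLETE_LIMIT=3

CANDIDATES=$(wc -l < "$TOOBSOLETE")
if [ "$CANDIDATES" -lt "$OBSOLETE_LIMIT" ]; then
    # Too few candidates likely means a scripting accident: bail out.
    echo "Safety-net: insufficient images ($CANDIDATES) to process ($OBSOLETE_LIMIT required)"
else
    # Random selection limits the blast-radius of any one bad run.
    picked=$(sort --random-sort "$TOOBSOLETE" | tail -n "$OBSOLETE_LIMIT")
    echo "$picked"
fi
rm -f "$TOOBSOLETE"
```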


@ -11,14 +11,14 @@ set -e
# shellcheck source=imgts/lib_entrypoint.sh
source /usr/local/bin/lib_entrypoint.sh
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX IMPORT_IMG_SFX
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
gcloud_init
# Set this to 1 for testing
DRY_RUN="${DRY_RUN:-0}"
# For safety's sake limit nr deletions
DELETE_LIMIT=10
DELETE_LIMIT=50
ABOUTNOW=$(date --iso-8601=date) # precision is not needed for this use
# Format Ref: https://cloud.google.com/sdk/gcloud/reference/topic/formats
# Field list from `gcloud compute images list --limit=1 --format=text`
@ -48,7 +48,7 @@ $GCLOUD compute images list --show-deprecated \
# Any image matching the currently in-use IMG_SFX must always be preserved.
# Values are defined in cirrus.yml
# shellcheck disable=SC2154
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
msg " Skipping current (latest) image $name"
continue
fi
@ -91,9 +91,9 @@ for (( i=nr_amis ; i ; i-- )); do
warn 0 " EC2 AMI ID '$ami_id' is missing a 'Name' tag"
fi
# Any image matching the currently in-use IMG_SFX or IMPORT_IMG_SFX
# Any image matching the currently in-use IMG_SFX
# must always be preserved.
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
warn 0 " Retaining current (latest) image $name id $ami_id"
$AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
continue
@ -106,13 +106,14 @@ for (( i=nr_amis ; i ; i-- )); do
done
COUNT=$(<"$IMGCOUNT")
CANDIDATES=$(wc -l <$TODELETE)
msg "########################################################################"
msg "Deleting up to $DELETE_LIMIT random images of $COUNT examined:"
msg "Deleting up to $DELETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"
# Require a minimum number of images to exist
if [[ "$COUNT" -lt $DELETE_LIMIT ]]
if [[ "$CANDIDATES" -lt $DELETE_LIMIT ]]
then
die 0 "Safety-net Insufficient images ($COUNT) to process deletions ($DELETE_LIMIT required)"
die 0 "Safety-net Insufficient images ($CANDIDATES) to process deletions ($DELETE_LIMIT required)"
fi
sort --random-sort $TODELETE | tail -$DELETE_LIMIT | \


@ -5,7 +5,7 @@ set -e
RED="\e[1;31m"
YEL="\e[1;33m"
NOR="\e[0m"
SENTINEL="__unknown__" # default set in dockerfile
SENTINEL="__unknown__" # default set in Containerfile
# Disable all input prompts
# https://cloud.google.com/sdk/docs/scripting-gcloud
GCLOUD="gcloud --quiet"
@ -55,7 +55,7 @@ gcloud_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .$(uuidgen)_XXXX.json)
TMPF=$(mktemp -p '' .XXXXXXXX)
trap "rm -f $TMPF &> /dev/null" EXIT
# Required variable must be set by caller
# shellcheck disable=SC2154
@ -77,7 +77,7 @@ aws_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .$(uuidgen)_XXXX.ini)
TMPF=$(mktemp -p '' .XXXXXXXX)
fi
# shellcheck disable=SC2154
echo "$AWSINI" > $TMPF
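The simplification above leans on `mktemp` alone for uniqueness; a quick sketch of why the old `$(uuidgen)` component was redundant:

```bash
# mktemp's X-template already yields unique names per call, so prefixing
# a $(uuidgen) value (as the old code did) adds nothing.
TMPF1=$(mktemp -p '' .XXXXXXXX)
TMPF2=$(mktemp -p '' .XXXXXXXX)
[ "$TMPF1" != "$TMPF2" ] && echo "unique names"
rm -f "$TMPF1" "$TMPF2"
```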


@ -1,108 +0,0 @@
# Semi-manual image imports
## Overview
[Due to a bug in
packer](https://github.com/hashicorp/packer-plugin-amazon/issues/264) and
the sheer complexity of EC2 image imports, this process is impractical for
full automation. It tends toward nearly always requiring supervision of a
human:
* There are multiple failure-points, some are not well reported to
the user by tools here or by AWS itself.
* The upload of the image to s3 can be unreliable, silently corrupting image
data.
* The import-process is managed by a hosted AWS service which can be slow
and is occasionally unreliable.
* Failure often results in one or more leftover/incomplete resources
(s3 objects, EC2 snapshots, and AMIs)
## Requirements
* You're generally familiar with the (manual)
[EC2 snapshot import process](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html).
* You are in possession of an AWS EC2 account, with the [IAM policy
`vmimport`](https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html#vmimport-role) attached.
* You have "Access Key" and "Secret Access Key" values set in [a credentials
file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
These are only shown once; if lost, a new "Access Key" must be created.
The format for `~/.aws/credentials` is very simple:
```
[default]
aws_access_key_id = <Unquoted value>
aws_secret_access_key = <Unquoted value>
```
The format for `~/.aws/config` is similarly simple:
```
[default]
output = json
region = us-east-1
```
* Podman is installed and functional
* At least 10GB of free space under `/tmp`, more if there are failures / multiple runs.
* *Network bandwidth sufficient for downloading and uploading many GBs of
data, potentially multiple times.*
## Process
Unless there is a problem with the current contents or age of the
imported images, this process does not need to be followed. The
normal PR-based build workflow can simply be followed as usual.
This process is only needed to bring newly updated Fedora images into
AWS to build CI images from. For example, due to a new Beta or GA release.
***Note:*** Most of the steps below will happen within a container environment.
Any exceptions are noted in the individual steps below with *[HOST]*
1. *[HOST]* Edit the `Makefile`, update the Fedora release numbers
under the section
`##### Important image release and source details #####`
1. *[HOST]* Run `make IMPORT_IMG_SFX`
1. *[HOST]* Run
```bash
$ make image_builder_debug \
GAC_FILEPATH=/dev/null \
AWS_SHARED_CREDENTIALS_FILE=/path/to/.aws/credentials
```
1. Run `make import_images` (or `make --jobs=4 import_images` if you're brave).
1. The following steps should all occur successfully for each imported image.
1. Image is downloaded.
1. Image checksum is downloaded.
1. Image is verified against the checksum.
1. Image is converted to `VHDX` format.
1. The `VHDX` image is uploaded to the `packer-image-import` S3 bucket.
1. AWS `import-snapshot` process is started (uses AWS vmimport service)
1. Progress of snapshot import is monitored until completion or failure.
1. The imported snapshot is converted into an AMI
1. Essential tags are added to the AMI
1. An ASCII table of details about the new AMI is printed on success.
1. Assuming all image imports were successful, a final success message will be
printed by `make`.
## Failure responses
This list is not exhaustive, and only represents common/likely failures.
Normally there is no need to exit the build container.
* If image download fails, double-check any error output, run `make clean`
and retry.
* If checksum validation fails, run `make clean`, then retry `make import_images`.
* If s3 upload fails, confirm service availability, then retry `make import_images`.
* If snapshot import fails with a `Disk validation failed` error, retry `make import_images`.
* If snapshot import fails with a non-validation error, find the snapshot in EC2 and
delete it manually, then retry `make import_images`.
* If AMI registration fails, remove any conflicting AMIs *and* snapshots, then retry
`make import_images`.
* If import was successful but AMI tagging failed, manually add
the required tags to AMI: `automation=false` and `Name=<name>-i${IMG_SFX}`.
Where `<name>` is `fedora-aws` or `fedora-aws-arm64`.


@ -1,45 +0,0 @@
#!/bin/bash
# This script is intended to be run by packer, usage under any other
# environment may behave badly. Its purpose is to download a VM
# image and a checksum file. Verify the image's checksum matches.
# If it does, convert the downloaded image into the format indicated
# by the first argument's `.extension`.
#
# The first argument is the file path and name for the output image,
# the second argument is the image download URL (ending in a filename).
# The third argument is the download URL for a checksum file containing
# the details necessary to verify the file named in the image download URL.
set -eo pipefail
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
[[ "$#" -eq 3 ]] || \
die "Expected to be called with three arguments, not: $#"
# Packer needs to provide the desired filename as it's unable to parse
# a filename out of the URL or interpret output from this script.
dest_dirpath=$(dirname "$1")
dest_filename=$(basename "$1")
dest_format=$(cut -d. -f2<<<"$dest_filename")
src_url="$2"
src_filename=$(basename "$src_url")
cs_url="$3"
req_env_vars dest_dirpath dest_filename dest_format src_url src_filename cs_url
mkdir -p "$dest_dirpath"
cd "$dest_dirpath"
[[ -r "$src_filename" ]] || \
curl --fail --location -O "$src_url"
echo "Downloading & verifying checksums in $cs_url"
curl --fail --location "$cs_url" -o - | \
sha256sum --ignore-missing --check -
echo "Converting '$src_filename' to ($dest_format format) '$dest_filename'"
qemu-img convert "$src_filename" -O "$dest_format" "${dest_filename}"
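The verification step above can be sketched with fabricated data: `--ignore-missing` lets a multi-file checksum list be piped straight in even though only one of the listed files is present locally.

```bash
# Fabricated stand-ins for a downloaded image and its checksum file.
workdir=$(mktemp -d)
cd "$workdir"
echo "pretend-image-data" > disk.qcow2
sha256sum disk.qcow2 > CHECKSUMS
# Entry for a file we did not download (dummy 64-hex digest); it is
# skipped rather than failing the check, thanks to --ignore-missing.
printf '%064d  other-disk.iso\n' 0 >> CHECKSUMS

result=$(sha256sum --ignore-missing --check CHECKSUMS)
echo "$result"
cd / && rm -rf "$workdir"
```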


@ -1,31 +0,0 @@
{
"builds": [
{
"name": "fedora-aws",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-i@@@IMPORT_IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
},
{
"name": "fedora-aws-arm64",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-arm64-i@@@IMPORT_IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
}
],
"last_run_uuid": "00000000-0000-0000-0000-000000000000"
}


@ -1,18 +0,0 @@
{
"Name": "@@@NAME@@@-i@@@IMPORT_IMG_SFX@@@",
"VirtualizationType": "hvm",
"Architecture": "@@@ARCH@@@",
"EnaSupport": true,
"RootDeviceName": "/dev/sda1",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "@@@SNAPSHOT_ID@@@",
"VolumeSize": 10,
"VolumeType": "gp2"
}
}
]
}


@ -1,84 +0,0 @@
#!/bin/bash
# This script is intended to be called by the main Makefile
# to wait for and confirm successful import and conversion
# of an uploaded image object from S3 into EC2. It expects
# the path to a file containing the import task ID as the
# first argument.
#
# If the import is successful, the snapshot ID is written
# to stdout. Otherwise, all output goes to stderr, and
# the script exits non-zero on failure or timeout. On
# failure, the file containing the import task ID will
# be removed.
set -eo pipefail
AWS="${AWS:-aws --output json --region us-east-1}"
# The import/conversion process can take a LONG time, have observed
# > 10 minutes on occasion. Normally, takes 2-5 minutes.
SLEEP_SECONDS=10
TIMEOUT_SECONDS=720
TASK_ID_FILE="$1"
tmpfile=$(mktemp -p '' tmp.$(basename ${BASH_SOURCE[0]}).XXXX)
die() { echo "ERROR: ${1:-No error message provided}" > /dev/stderr; exit 1; }
msg() { echo "${1:-No error message provided}" > /dev/stderr; }
unset snapshot_id
handle_exit() {
set +e
rm -f "$tmpfile" &> /dev/null
if [[ -n "$snapshot_id" ]]; then
msg "Success ($task_id): $snapshot_id"
echo -n "$snapshot_id" > /dev/stdout
return 0
fi
rm -f "$TASK_ID_FILE"
die "Timeout or other error reported while waiting for snapshot import"
}
trap handle_exit EXIT
[[ -n "$AWS_SHARED_CREDENTIALS_FILE" ]] || \
die "\$AWS_SHARED_CREDENTIALS_FILE must not be unset/empty."
[[ -r "$1" ]] || \
die "Can't read task id from file '$TASK_ID_FILE'"
task_id=$(<$TASK_ID_FILE)
msg "Waiting up to $TIMEOUT_SECONDS seconds for '$task_id' import. Checking progress every $SLEEP_SECONDS seconds."
for (( i=$TIMEOUT_SECONDS ; i ; i=i-$SLEEP_SECONDS )); do \
# Sleep first, to give AWS time to start meaningful work.
sleep ${SLEEP_SECONDS}s
$AWS ec2 describe-import-snapshot-tasks \
--import-task-ids $task_id > $tmpfile
if ! st_msg=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.StatusMessage?' $tmpfile) && \
[[ -n $st_msg ]] && \
[[ ! "$st_msg" =~ null ]]
then
die "Unexpected result: $st_msg"
elif grep -Eiq '(error)|(fail)' <<<"$st_msg"; then
die "$task_id: $st_msg"
fi
msg "$task_id: $st_msg (${i}s remaining)"
# Why AWS you use StatusMessage && Status? Bad names! WHY!?!?!?!
if status=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.Status?' $tmpfile) && \
[[ "$status" == "completed" ]] && \
snapshot_id=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId?' $tmpfile)
then
msg "Import complete to: $snapshot_id"
break
else
unset snapshot_id
fi
done
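The two jq extractions in the loop above can be exercised against a canned response; the JSON shape here is inferred from the script itself, not from AWS documentation:

```bash
# Canned stand-in for `aws ec2 describe-import-snapshot-tasks` output.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
{"ImportSnapshotTasks": [{"SnapshotTaskDetail":
  {"Status": "completed", "StatusMessage": "converting", "SnapshotId": "snap-0123abcd"}}]}
EOF

# Same jq paths as the loop above; -e makes jq exit non-zero on null/false.
st_msg=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.StatusMessage?' "$tmpfile")
status=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.Status?' "$tmpfile")
snapshot_id=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId?' "$tmpfile")

# Mirrors the success branch of the polling loop.
if [ "$status" = "completed" ]; then
    echo "Import complete to: $snapshot_id"
fi
rm -f "$tmpfile"
```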

lib.sh

@ -286,6 +286,16 @@ unmanaged-devices=interface-name:*podman*;interface-name:veth*
EOF
}
# Create a local registry, seed it with remote images
initialize_local_cache_registry() {
msg "Initializing local cache registry"
#shellcheck disable=SC2154
$SUDO ${SCRIPT_DIRPATH}/local-cache-registry initialize
msg "du -sh /var/cache/local-registry"
du -sh /var/cache/local-registry
}
common_finalize() {
set -x # extra detail is no-longer necessary
cd /


@ -12,7 +12,6 @@ RUN dnf -y update && \
dnf clean all
ENV REG_REPO="https://github.com/docker/distribution.git" \
REG_COMMIT="b5ca020cfbe998e5af3457fda087444cf5116496" \
REG_COMMIT_SCHEMA1="ec87e9b6971d831f0eff752ddb54fb64693e51cd" \
OSO_REPO="https://github.com/openshift/origin.git" \
OSO_TAG="v1.5.0-alpha.3"


@ -9,7 +9,6 @@ set -e
declare -a req_vars
req_vars=(\
REG_REPO
REG_COMMIT
REG_COMMIT_SCHEMA1
OSO_REPO
OSO_TAG
@ -43,12 +42,6 @@ cd "$REG_GOSRC"
(
# This is required to be set like this by the build system
export GOPATH="$PWD/Godeps/_workspace:$GOPATH"
# This comes in from the Containerfile
# shellcheck disable=SC2154
git checkout -q "$REG_COMMIT"
go build -o /usr/local/bin/registry-v2 \
github.com/docker/distribution/cmd/registry
# This comes in from the Containerfile
# shellcheck disable=SC2154
git checkout -q "$REG_COMMIT_SCHEMA1"
@ -68,6 +61,10 @@ sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' ./hack/common.sh
# 8 characters long. This can happen if/when systemd-resolved adds 'trust-ad'.
sed -i '/== "attempts:"/s/ 8 / 9 /' vendor/github.com/miekg/dns/clientconfig.go
# Backport https://github.com/ugorji/go/commit/8286c2dc986535d23e3fad8d3e816b9dd1e5aea6
# Go ≥ 1.22 panics with a base64 encoding using duplicated characters.
sed -i -e 's,"encoding/base64","encoding/base32", ; s,base64.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789__"),base32.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef"),' vendor/github.com/ugorji/go/codec/gen.go
make build
make all WHAT=cmd/dockerregistry
cp -a ./_output/local/bin/linux/*/* /usr/local/bin/


@ -12,7 +12,7 @@ if [[ "$UID" -ne 0 ]]; then
export SUDO="sudo env DEBIAN_FRONTEND=noninteractive"
fi
EVIL_UNITS="cron crond atd apt-daily-upgrade apt-daily fstrim motd-news systemd-tmpfiles-clean update-notifier-download mlocate-updatedb"
EVIL_UNITS="cron crond atd apt-daily-upgrade apt-daily fstrim motd-news systemd-tmpfiles-clean update-notifier-download mlocate-updatedb plocate-updatedb"
if [[ "$1" == "--list" ]]
then


@ -9,19 +9,21 @@ iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocola
# Install basic required tooling.
# psexec needed to workaround session 0 WSL bug
retryInstall git archiver psexec golang mingw StrawberryPerl; Check-Exit
retryInstall 7zip git archiver psexec golang mingw StrawberryPerl zstandard; Check-Exit
# Update service is required for dotnet
Set-Service -Name wuauserv -StartupType "Manual"; Check-Exit
# dotnet is required for wixtoolset
# Allowing chocolaty to install dotnet breaks in an entirely
# non-debuggable way. Workaround this by installing it as
# a server-feature first.
Install-WindowsFeature -Name Net-Framework-Core; Check-Exit
# Install dotnet as that's the best way to install WiX 4+
# Choco does not support installing anything over WiX 3.14
Invoke-WebRequest -Uri https://dotnet.microsoft.com/download/dotnet/scripts/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1
.\dotnet-install.ps1 -InstallDir 'C:\Program Files\dotnet'
# Install wixtoolset for installer build & test.
retryInstall wixtoolset; Check-Exit
# Configure NuGet sources for dotnet to fetch wix (and other packages) from
& 'C:\Program Files\dotnet\dotnet.exe' nuget add source https://api.nuget.org/v3/index.json -n nuget.org
# Install wix
& 'C:\Program Files\dotnet\dotnet.exe' tool install --global wix
# Install Hyper-V
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart