Compare commits


331 Commits

Author SHA1 Message Date
Paul Holzinger a0b436c123
Merge pull request #411 from mtrmac/podman-sequoia
WIP: Install podman-sequoia in rawhide images
2025-08-19 20:31:41 +02:00
Miloslav Trmač d8d2fc4c90 Install podman-sequoia in rawhide images
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-08-12 19:33:06 +02:00
Miloslav Trmač 2c9f480248 Update the IMG_SFX rules to work on macOS
- (date --utc) is not supported
- The $(file ) make function is not supported
- macOS sed has no \+ in basic regular expressions, use
  the extended format
- (quote arguments to [ ] to avoid confusing error messages if an earlier sed fails)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-07-30 20:55:44 +02:00
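The portability points above can be sketched concretely; the suffix string and the sed pattern below are illustrative, not the repo's actual IMG_SFX rules:

```shell
# `date --utc` is GNU-only; the short -u flag is portable to BSD/macOS date.
utc_date=$(date -u +%Y%m%d)

# GNU sed accepts \+ in basic regular expressions; BSD (macOS) sed does not.
# Switching to -E (extended regular expressions) makes + work on both.
old_sfx="20250730t123456z-f42f41d13"
new_sfx=$(echo "$old_sfx" | sed -E 's/^[0-9]+t[0-9]+z/NEWSTAMP/')

# Quote arguments to [ ] so that, if an earlier command produced nothing,
# the test fails cleanly instead of emitting a confusing syntax error.
[ -n "$new_sfx" ] && echo "$new_sfx"
```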
Miloslav Trmač 34add92ba5
Merge pull request #410 from lsm5/skopeo-registry
skopeo_cidev: Depend on docker-distribution
2025-07-23 19:08:48 +02:00
Lokesh Mandvekar 3c73fc4fa8
skopeo / fedora cache_image: Install docker-distribution
Having the registry binary named `registry-v2` causes trouble for
`make test-integration-local`. The registry binary provided by the
docker-distribution package is just `/usr/bin/registry`.

Depending on docker-distribution should make things simpler, more
consistent, and usable regardless of the CI / testing environment.

In skopeo cirrus jobs, the integration tests are run on the host itself
but a lot of the binaries are copied from the skopeo_cidev container.
So, in this case docker-distribution is directly installed on the host
environment and the registry-v2 build is removed from the skopeo_cidev
image.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2025-07-21 14:11:23 -04:00
Paul Holzinger 0e1497cd77
Merge pull request #408 from Luap99/podman-py-rm
remove podman-py
2025-07-01 10:14:23 +02:00
Paul Holzinger 08a78fef72
new image build 2025-06-27
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:52:11 +02:00
Paul Holzinger 6489ad88d4
remove podman-py
It only uses tmt now and not cirrus anymore, so delete all the image
build infra for it.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:51:05 +02:00
Paul Holzinger 6b776d0590
Merge pull request #407 from timcoding1988/feat/add-gh-to-fedora
Feat/add gh to fedora
2025-06-24 11:57:40 +02:00
timcoding1988 5f27145d64 1. adding gh 2. remove 4.0 timebomb check
Signed-off-by: Tim Zhou <tzhou@redhat.com>
2025-06-18 10:39:18 -04:00
Paul Holzinger 699dbfbcc1
Merge pull request #404 from Luap99/packages
update to Fedora 42 and add some packages
2025-04-23 11:21:52 +02:00
Paul Holzinger 56b6c5c1f8
update IMG_SFX 2025-04-22
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:26 +02:00
Paul Holzinger 1a7005b4ea
ci: work around build issue
All the base image jobs are failing with:

ssh-keygen -f /tmp/cirrus-ci-build_tmp/cidata.ssh -P "" -q -t ed25519
Saving key "/tmp/cirrus-ci-build_tmp/cidata.ssh" failed: Permission denied
make: *** [Makefile:216: /tmp/cirrus-ci-build_tmp/cidata.ssh] Error 1

I have no idea what happened, but let's try without selinux in case
selinux is blocking file access.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:20 +02:00
Paul Holzinger e960222013
f42: force newer criu
To fix broken checkpoint tests.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 11:58:46 +02:00
Paul Holzinger 087a6c4b24
AWS fedora: work around selinux bug
On f42 restorecon no longer applies the new label:
https://bugzilla.redhat.com/show_bug.cgi?id=2360183

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-16 16:35:42 +02:00
Paul Holzinger 12c503fb07
fedora: remove python3.8
The package has been removed in f42.

https://fedoraproject.org/wiki/Changes/RetirePython3.8

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 20:11:14 +02:00
Paul Holzinger 96f688b0e3
update to Fedora 42
It has been released.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:53 +02:00
Paul Holzinger 632e4b16f8
.github: check_cirrus_cron work around github bug
So I wondered why our email workflow only reported things for podman...

It seems `secrets: inherit` is broken and no longer working; I see all
jobs on all repos failing with:

Error when evaluating 'secrets'. .github/workflows/check_cirrus_cron.yml (Line: 19, Col: 11): Secret SECRET_CIRRUS_API_KEY is required, but not provided while calling.

This makes no sense to me; I double-checked the names, nothing changed
on our side, and it is consistent across all projects. Interestingly,
this same thing passed on March 10 and 11 (on all repos) but failed
both before and after.

Per [1] we are not alone; anyway, let's try to get this working again
even if it means more duplication.

[1] https://github.com/actions/runner/issues/2709

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
Paul Holzinger ea0295744e
github: use thollander/actions-comment-pull-request
jungwinter/comment doesn't seem to be actively maintained and makes use
of the deprecated set-output command[1].

[1] https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
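For background, `set-output` was deprecated in favor of appending to the `$GITHUB_OUTPUT` file; a minimal sketch of the replacement style (the output name is made up, and a temp file stands in for the real GitHub-provided path):

```shell
# Old, deprecated style an action might have used:
#   echo "::set-output name=comment_id::12345"
# New style: append name=value pairs to the file GitHub exposes
# via $GITHUB_OUTPUT.
GITHUB_OUTPUT=$(mktemp)
echo "comment_id=12345" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```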
Paul Holzinger e073d1b16d
debian: disable dnsmasq service
This conflicts with aardvark-dns, which binds the same port.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-01 11:20:18 +02:00
Paul Holzinger af87d70dce
add sqlite3 lib/dev packages
I like to dynamically link sqlite3 in podman builds to make the binaries
smaller.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-31 14:31:52 +02:00
Lokesh Mandvekar 879a69260c
Fedora cache image: install koji and fedora-distro-aliases
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2025-03-31 14:23:09 +02:00
Paul Holzinger 564840b6bc
Merge pull request #402 from Luap99/new-images
new images 2025-03-24
2025-03-24 14:59:33 +01:00
Paul Holzinger 6c11ff7257
new images 2025-03-24
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-24 12:19:25 +01:00
Daniel J Walsh fe4e4f3cd7
Merge pull request #401 from Luap99/new-images
new images 20250312
2025-03-12 16:58:26 -04:00
Paul Holzinger 617fe85f37
new images 20250312
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-12 17:54:25 +01:00
Paul Holzinger 3319c260ad
Merge pull request #400 from Luap99/artifacts
add new testartifacts in the cache registry
2025-02-11 20:33:21 +01:00
Paul Holzinger 1a185cfb81
new images
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:08:49 +01:00
Paul Holzinger 3f7b07de69
debian: remove tar work around
Thanks to Reinhard for patching the debian package to no longer trigger
the bug.

https://salsa.debian.org/debian/tar/-/merge_requests/6

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:06:24 +01:00
Paul Holzinger d2652b1135
add new testartifact to image cache
This is needed by https://github.com/containers/podman/pull/25238

To avoid flakes we need to have the test artifacts in the cache
registry.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:02:56 +01:00
Paul Holzinger 4b32b8267d
Merge pull request #399 from Luap99/new-images
new images 2025-01-31
2025-02-03 16:04:16 +01:00
Paul Holzinger 4756da479a
new images 2025-01-31
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-31 13:19:19 +01:00
Paul Holzinger ed0f37f1bd
Merge pull request #398 from Luap99/new-images
new images
2025-01-07 18:46:23 +01:00
Paul Holzinger e5a1016f08
new images
Removed two timebombs that no longer apply, composefs is installed in
the main package list and the pasta version is in stable now.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:24:36 +01:00
Paul Holzinger 8c6d4bb0bf
debian: remove git-daemon-run
The package no longer exists[1] in sid. Per a quick search, it just
contained a simple script, not something we actually use. We need the
git daemon command, and that is already part of the main git package AFAICS.

[1] 2de766588e

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:04:39 +01:00
Paul Holzinger 21cebe3fec
Merge pull request #397 from baude/add7z
Add 7zip Windows compression utility
2025-01-06 15:31:32 +01:00
Brent Baude 856110c78d Add 7zip Windows compression utility
The Fedora images used to test libhvee are now being shipped with xz
compression.  Because the golang xz decompression is extremely slow, I'm
proposing to use this command line utility.

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-12-18 09:52:12 -06:00
Paul Holzinger 46c3bf5c93
Merge pull request #396 from Luap99/podman-machine-os
add packages needed by podman-machine-os
2024-12-13 15:23:22 +01:00
Paul Holzinger d317246fd6
build new images
- remove old pasta bump and add new bump for rawhide issue
  https://github.com/containers/podman/issues/24804
- bump debian tar timebomb, it still has the same broken version

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-12 13:25:24 +01:00
Paul Holzinger 006e5b1db8
add packages needed by podman-machine-os
So that we do not have to deal with dnf install issues over there at
runtime.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-05 13:45:56 +01:00
Ed Santiago 99e20928ad
Merge pull request #394 from edsantiago/bump-systemd
Bump. Let's see if we pick up a new systemd.
2024-11-20 08:03:34 -07:00
Ed Santiago 7c285acaaa Bump. Let's see if we pick up a new systemd.
Desperate attempt to look into podman issue 24220, the
missing-logs-and-events flake. I noticed on 1mt that
rawhide is on systemd-257~rc1, which is what's on
debian, and we haven't seen 24220 on debian. F41
is still on 256.7.

Let's see what this PR brings in. If we get systemd-257
on rawhide, let's hammer at it on podman and see what
happens with 24220.

Also, fix a big duh on my part. My new README-simplified
had a line beginning with the word "timebomb", which
'make timebomb-check' interpreted as an actual timebomb
directive, which caused the check to fail. Workaround
is to shuffle words; a more proper solution might be
to exclude READMEs, or look only in *.sh files, or
some other smart filter.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-18 06:06:17 -07:00
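The "smart filter" idea above can be sketched with a plain grep; this is a hypothetical reimplementation, not the actual `make timebomb-check` recipe, and the directive syntax shown is made up:

```shell
# Scratch tree: one real timebomb directive in a script, plus an
# innocent mention in a README that must not trip the check.
tree=$(mktemp -d)
printf 'timebomb 20241201 "waiting on systemd fix"\n' > "$tree/setup.sh"
printf 'timebomb directives are explained here\n' > "$tree/README.md"

# Hypothetical filter: only *.sh files count as timebomb sources.
hits=$(grep -rl '^timebomb ' --include='*.sh' "$tree")
echo "$hits"
```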
Paul Holzinger 454288919f
Merge pull request #393 from edsantiago/lets-see
Another bump, to pick up 6.11.6 kernel
2024-11-11 14:20:37 +01:00
Ed Santiago 2b3a418d3e Another bump, to get 6.11.6 kernel
Also, bump pasta on f40 just to eliminate all chances
of podman flake 24219.

Also, add a simplified README explaining the usual-case
actions in this repo.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-07 13:58:15 -07:00
Paul Holzinger f4bbaabf94
Merge pull request #392 from edsantiago/f41-clean
VMs: bump to f41
2024-11-07 19:23:52 +01:00
Ed Santiago 4b297585c3 bump IMG_SFX
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:35:17 -07:00
Ed Santiago 4839366e72 Installed packages: make them work again
Changes necessary to get working VM images. I can't remember
why all of these are necessary. I think the docker-compose
change is because that package started bringing in too many
unwanted dependencies that conflict with podman. Anyhow,
this works.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:32:10 -07:00
Ed Santiago aef024bab7 Changes needed for new dnf
Lots of things seem to have changed in dnf-land. These are the
changes that get us working again.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:30:06 -07:00
Ed Santiago 4a12d4e3bd Fedora AWS query: strip the us-east-1
Something has changed in Fedora images on AWS. The us-east-1 suffix
no longer exists. Remove it.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:26:07 -07:00
Ed Santiago 4392650a1c Fedora 41 is stable. Bump.
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:24:29 -07:00
Paul Holzinger 7ef71ffbbd
Merge pull request #389 from edsantiago/testimage-20241011
cache registry: add testimage:20241011
2024-10-17 13:47:08 +02:00
Ed Santiago 57ebb34516 cache registry: add testimage:20241011
Needed by podman for debugging a pasta flake and, more
importantly, supporting infrastructure changes (buildah 5595)
that break APIv2 test assumptions. Fixing these failures
will silence red-herring test failures in our ongoing
testing of zstd:chunked.

The 20240123 image is not used anywhere other than podman,
so it is safe to remove.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-16 08:46:44 -06:00
Ed Santiago a478e68664
Merge pull request #376 from inknos/update-python-versions-and-packages
Remove unused packages and update python versions
2024-10-15 08:36:03 -06:00
Nicola Sella 9301643309 Remove unused packages and update python versions
python-xdg was removed as a dependency
8d1020ecc8

tests are currently done for py12
330cf0b501

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-10-15 10:55:18 +02:00
Ed Santiago d8ee5ceae2
Merge pull request #387 from Luap99/win-zstd
Add zstd on windows
2024-10-10 11:54:35 -06:00
Paul Holzinger ef2c8f2e71
Build new images
Bump debian tar timebomb, remove manual crun install as the package is
stable now and most importantly remove IMA workaround as the issue[1],
we will see if that is true.

[1] https://github.com/containers/podman/issues/18543

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:55:59 +02:00
Paul Holzinger aa36f713ee
windows: add zstandard package
Windows does not have zstd by default, so we need to install it. In
particular, I am looking at switching the repo archive to zstd, as this
makes things much faster (over 1min in podman)[1], but the windows
testing is unable to extract that format. While archiver added zstd
support a while back, it is not in the version on chocolatey, which
seems a bit out of date.

[1] https://github.com/containers/podman/pull/24120

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:42:38 +02:00
Ed Santiago 456905c2ed
Merge pull request #386 from edsantiago/test-crun-17
Build images with crun 1.17
2024-09-17 18:08:11 -06:00
Ed Santiago b5c7d46947 Build images with crun 1.17
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-09-11 09:09:35 -06:00
Lokesh Mandvekar 90ac9fc314
Merge pull request #385 from Luap99/ShellCheck
Add ShellCheck to fedora images
2024-09-11 19:12:00 +05:30
Paul Holzinger 2c858e70b9
Add ShellCheck to fedora images
It is installed at runtime in podman which is not good[1]. Install it
here so we can drop the dnf install there.

Also update some timebombs: pasta is in stable now, tar is still broken
in debian, and the IMA bug is also still not fixed in podman.

[1] f22f4cfe50/contrib/cirrus/prebuild.sh (L54)

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-06 17:34:23 +02:00
Ed Santiago 454f7be018
Merge pull request #383 from edsantiago/main
Build new VMs
2024-08-26 13:01:35 -06:00
Ed Santiago 3bc493fe31 Build new VMs
Timebomb pasta 08-14 on f39. See how/if this works in podman.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-08-21 11:14:47 -06:00
Chris Evich 9f437cb621
Merge pull request #382 from cevich/fix_debug_test_flake
[CI:TOOLING] Fix test_debug_task passing/failing by chance
2024-08-20 19:06:07 -04:00
Chris Evich 5edc6ba963
Fix test_debug_task passing/failing by chance
There's no guarantee of nested-virt support with the standard
"pick first available" VM type done by the `&ibi_vm` alias.
However, nested-virt is required for `image_builder_debug`
matrix element of `test_debug_task`.  Switch to the alias
purpose-built to supply a nested-virt capable VM.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-20 16:24:49 -04:00
Chris Evich fc75a1a84a
Merge pull request #380 from cevich/faster_simpler_tooling_builds
[CI:TOOLING] Track image IDs instead of tar exports
2024-08-19 15:01:45 -04:00
Chris Evich 8b60787478
Update debugging docs
Clarify the difference between `ci_debug` and `image_builder_debug`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 9400efd805
Add tests for debug targets
Previously, if either debugging target broke in some way, nobody would
know.  Fix this by adding simple CI tests that confirm they build and
run a basic command.

Also, quiet down the unzipping of AWS cli tools.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 4958aa2422
Track image IDs instead of tar exports
Previously, all container builds run by the Makefile were managed based
on the presence/absence of a docker-archive tar file.  Producing these
exports is time-consuming and ultimately unnecessary extra work.  The
tar files are never actually consumed in a meaningful way by any other
targets.  Further, most of the container builds in CI run in parallel,
simply throwing away the tar when finished.

Fix this by switching to management based on image-ID files instead.
The only exception is the `imgts` image and images which are based on
it.  For those, some special handling is required (already done by the
CI build script), so some comments were added to assist.

Also, remove the `bench_stuff` target entirely as this has long since
been retired.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 217ff7ed3e
Merge pull request #379 from cevich/gcp_update
[CI:DOCS] Retire oversight of dnsname project
2024-08-19 10:12:22 -04:00
Chris Evich 4cd328ddfa
Minor: Update/clarify comment
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:19:16 -04:00
Chris Evich 1e2bebe9b0
Retire oversight of dnsname project
This github repo has been archived, CI disabled, and the GCE project
deleted.  Stop tracking it in automation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:15:42 -04:00
Chris Evich 3db41a4702
Merge pull request #375 from cevich/bigger_fedora_vms
Catch Fedora-base image update problems early
2024-08-12 16:36:07 -04:00
Chris Evich 46c104b403
Catch Fedora-base image update problems early
Previously updates were disabled due to the cloud VM only having 2-gig
and the nested-VM only having 1-gig of memory.  Allow Fedora base-image
package updates by increasing the available resources.  Enabling
base-level (esp. kernel) package updates early supports spotting
fundamental image problems early.  Otherwise they may not be found until
a set of images is deployed downstream.

Also, update a few comments relating to followup package update.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:43:29 -04:00
Chris Evich b162196e68
Merge pull request #374 from cevich/rm_network_flakes
Reduce impact of networking slowdowns
2024-08-12 13:39:40 -04:00
Chris Evich 0a1e3dbfff
Reduce impact of networking slowdowns
Previously if a repository server, the internet, or the execution
environment experienced some kind of networking slowdown, it could lead
to a package install or update timeout failure.  Increase resiliency in
these situations with additional retries, timeouts, and lowered minimum
rates.  Also increase the timeout on the related Cirrus-CI tasks.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 10:59:43 -04:00
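The knobs mentioned (retries, timeouts, minimum rates) map onto dnf's main configuration; a sketch with illustrative values, written to a scratch file rather than the real /etc/dnf/dnf.conf:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
[main]
# retry each failed download a few more times before giving up
retries=5
# per-connection timeout, in seconds
timeout=60
# only abort if throughput stays below 100 bytes/second
minrate=100
EOF
cat "$conf"
```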
Ed Santiago 83c9b1661c
Merge pull request #371 from Luap99/ebpf
add bpftrace for CI debugging
2024-08-06 10:16:57 -06:00
Paul Holzinger 13b68fe5aa
new image IDs
Bump the timebomb to Sep 1st; the podman issue is still not fixed, and
I haven't looked at the debian bug, but I assume it is also still not
fixed.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:32:30 +02:00
Paul Holzinger 5d99e6aed4
add bpftrace for CI debugging
I like to run a bpftrace-based program in CI to collect better logs for
specific processes not observed in the normal testing, such as the
podman container cleanup command.

Given that you need full privileges to run eBPF, and the package pulls
in an entire toolchain that is almost 500MB in install size, we do not
add it to the container images, to not bloat them without reason.

https://github.com/containers/podman/pull/23487

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:05:24 +02:00
Ed Santiago 798e83dba9
Merge pull request #357 from edsantiago/local-cache-registry
Create a local registry
2024-07-22 05:42:13 -06:00
Ed Santiago 7e977eee41 Create a local registry
...to minimize hiccups. RUN-2091 in Jira. Network registries
are too unreliable; they cause too many flakes in CI. Here
we set up a registry running on each VM, prepopulated with
all container images used in podman and buildah tests.

Related PRs:
   https://github.com/containers/podman/pull/22726
   https://github.com/containers/buildah/pull/5584

Once those merge, podman and buildah CI tests will fetch
images from this local registry.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-08 09:26:55 -06:00
Chris Evich e1662886ab
Merge pull request #370 from cevich/increase_image_rm_rate
[CI:TOOLING] Increase obsolete image flagging and pruning
2024-07-08 11:23:37 -04:00
Chris Evich f67769a6ff
Increase obsolete image flagging and pruning
It was observed in the Cirrus-CI cron logs that only the total
number of images scanned is reported.  Fix this by giving more
useful info, like the number of candidates for obsoletion/pruning.

Relatedly, the restriction of `10` obsolete/prune images was
originally put in place when only a few repos utilized Cirrus-CI
VMs and image building was substantially less frequent.  The limit
exists to prevent potential catastrophe should the `meta`
timestamp-updating tasks have a bug or some other related failure occur.
Increase the limit to `50` so deletions may proceed much more rapidly.

*Note:* "Obsolete" images still live w/in a 30-day window where they can
be recovered if need be.  It's simply that any attempted use by CI will
fail, putting someone on notice that image recovery may be necessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-08 09:56:36 -04:00
Chris Evich a86360dc58
Remove ref to missing tool
The `uuidgen` tool has long since been removed from the tooling images.
For whatever reason one call to it still existed.  Remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:48:44 -04:00
Chris Evich dd546e9037
Merge pull request #369 from cevich/aws_creds_docs
[CI:DOCS] Add link to AWS credentials file format
2024-07-05 11:23:35 -04:00
Chris Evich b0f018152e
[CI:DOCS] Add link to AWS credentials file format
Previously this was available in `import_images/README.md` which was
recently removed.  Since this page is difficult to find in the AWS docs,
link it directly into the main README.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:21:10 -04:00
Lokesh Mandvekar faf62c81b7
Merge pull request #354 from lsm5/dotnet
Windows: install dotnet and latest wix
2024-07-02 15:52:13 -04:00
Chris Evich b1864a66e9
Merge pull request #368 from cevich/fix_renovate_lib
[CI:DOCS] Fix renovate updating lib.sh
2024-07-02 14:38:03 -04:00
Chris Evich 07a870aa8e
Fix renovate updating lib.sh
Previously Renovate was failing in a multi-line search for an anchored
pattern in `lib.sh`.  This resulted in it completely ignoring the custom
regex manager for that file, as observed in the debug logs.  Fix this by
removing the regex anchors.

Also remove the filename anchors referenced in the `lib.sh` package rule
as they're unnecessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 14:28:53 -04:00
Chris Evich 419d61271c
Merge pull request #367 from cevich/fix_update_renovate_config
[CI:DOCS] Reformat renovate config + other minor updates
2024-07-02 14:18:23 -04:00
Lokesh Mandvekar 84304ec159
Windows: install dotnet and latest wix
wix3 is EOL and choco doesn't support installing wix > 3.14.

So, this commit installs the `dotnet` runtime and uses dotnet to install
the latest wix in the windows image.

Also remove pasta package timebomb from debian packaging.

Resolves: RUN-2055

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-07-02 14:07:13 -04:00
Chris Evich 8319550d63
Reformat renovate config + other minor updates
Previously the Renovate configuration was using an older format no longer
supported by the bot.  Apply automatic fixes proposed by the bot,
re-adding/adjusting the old comments as needed.

Also:

* Drop automatic assignment of Renovate PRs to `cevich`
* Reference the GHCR registry container image
* Simplify CI VM update warning message conditions & text.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 13:59:09 -04:00
Ed Santiago 6b9b9f9f08
Merge pull request #366 from cevich/do_not_use_cirrus_base_sha
[CI:DOCS] Remove broken CIRRUS_BASE_SHA usage
2024-07-02 09:10:57 -06:00
Ed Santiago 38e7c58ee6
Merge pull request #363 from cevich/rm_import_images
Use fedoraproject published EC2 images
2024-07-02 09:10:13 -06:00
Chris Evich 03802c1e7a
Remove broken CIRRUS_BASE_SHA usage
Unfortunately this value doesn't properly reflect the current branch
point of a PR.  Replace it with a call to `git merge-base` instead.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 10:01:42 -04:00
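The replacement can be demonstrated in a throwaway repo: `git merge-base` recovers the true branch point regardless of what Cirrus reports (branch names and commit messages here are fabricated):

```shell
# Build a scratch repo: main gets a base commit, a PR branch forks off,
# then main moves on. The PR's true branch point is the base commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
base_sha=$(git rev-parse HEAD)
git checkout -q -b pr
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m pr-work
git checkout -q main
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m main-moved
git checkout -q pr

# Unlike a possibly-stale CIRRUS_BASE_SHA, this always finds the fork point.
found=$(git merge-base main HEAD)
echo "$found"
```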
Chris Evich 6ec9ceecf3
Merge pull request #365 from cevich/example_pre-commit-config
[CI:DOCS] Add example pre-commit config
2024-07-01 15:49:57 -04:00
Chris Evich fcf08a3e5a
Add example pre-commit config
Add suggested/example `pre-commit` configuration for this repo. To use
as-is, simply symlink to `.pre-commit-config.yaml`.  Otherwise it can
be a basis for a custom configuration.

Fix all findings from the example pre-commit hooks.

Also include codespell config w/ repo-specific dictionary extension.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 15:48:59 -04:00
Chris Evich 29014788ac
Use fedoraproject published EC2 images
Previously a very complex, manual, and failure-prone `import_images`
stage was required to bring raw images into EC2.  Primarily this was
necessary because beta images aren't published on EC2 by the
fedoraproject.  However, since the original implementation, CI
operations against rawhide have largely supplanted the need to support
testing against the beta images.  This means the 'import_images' stage
can be completely dropped, and the 'base_images' stage can simply source
images (including `rawhide` if necessary) published by the Fedora
project.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Chris Evich 108ec30605
Remove Debian pasta apparmor workaround
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Ed Santiago cfc18f05da
Merge pull request #364 from cevich/imgsfx_history
[CI:DOCS] Add pre-commit (app) hook to check IMGSFX
2024-07-01 09:35:03 -06:00
Chris Evich 2e5a2acfe2
Add pre-commit (app) hook to check IMGSFX
Intended for use by [the pre-commit
app](https://pre-commit.com/#intro), this hook keeps track of all IMG_SFX
values pushed, failing when any duplicate is found.  In the case of
pushing to PRs that don't build CI VM images, the hook failure must be
manually bypassed.  Example `.pre-commit-config.yaml`:

```yaml
---
repos:
  - repo: https://github.com/containers/automation_images.git
    rev: <tag or commit sha>
    hooks:
      - id: check-imgsfx
```

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:32:03 -04:00
Chris Evich 014b518abf
Merge pull request #362 from cevich/get_ci_vm_docs
[CI:DOCS] Improve get_ci_vm container docs
2024-06-24 15:55:23 -04:00
Chris Evich 03d55b684b
Improve get_ci_vm container docs
The readme contained a lot of technical/implementation details, but
lacked an overview of the architecture/operations.  Fix this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-24 11:16:43 -04:00
Paul Holzinger 8a55408a27
Merge pull request #361 from edsantiago/bump
Semiregular VM catchup
2024-06-21 14:32:56 +02:00
Ed Santiago 79bf8749af Semiregular VM catchup
- rawhide now includes rpm-plugin-ima, which breaks rootless
  podman pods. Add a timebomb'ed workaround until there's a
  more definitive solution in podman or its containers-* libraries

- bug fix for Makefile, handle indented timebombs

- install composefs in rawhide

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-06-20 09:31:27 -06:00
Ed Santiago 91846357a1
Merge pull request #360 from cevich/only_after_merge
[CI:DOCS] Stop tagging during cron runs
2024-05-29 14:51:03 -06:00
Chris Evich f7bdd130a7
Merge pull request #338 from edsantiago/debian_cgroups_v2
Debian: remove force-cgroups-v1 code
2024-05-29 14:38:18 -04:00
Chris Evich 7c1ecb657b
Stop tagging during cron runs
Previously the `tag_latest_images` was executing during the daily
'lifecycle' Cirrus-cron job.  This was unintentional, this task should
only run after a merge onto the default branch.  Fix the condition.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 10:55:22 -04:00
Miloslav Trmač 1e2559b4af Backport a patch to avoid a panic when compiled with Go >= 1.22
> panic: encoding alphabet includes duplicate symbols

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:17:28 -06:00
Miloslav Trmač 564b76cfe1 Also stop plocate-updatedb
plocate is the default locate implementation in Fedora.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Miloslav Trmač 6cbfbbac05 Stop installing mlocate
It has been retired in Rawhide, and it's unclear whether
we need it at all.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Ed Santiago e50990987f Debian: remove force-cgroups-v1 code
Per discussion in 2024-03-20 Planning meeting, we will no
longer be testing runc in CI. And cgroups V1 is dead too.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-05-29 08:10:58 -06:00
Ed Santiago e48dc5d37e
Merge pull request #359 from cevich/fix_uuidgen
[CI:TOOLING] Fix missing uuidgen tool
2024-05-29 08:09:59 -06:00
Chris Evich aae598a48a
Fix missing uuidgen tool
Previously this tool was used by a few container images as a
half-hearted attempt at thwarting guesses of the credentials
filename.  For whatever reason the `uuidgen` command is no
longer present in the latest base images, but this measure was
also unnecessary and not very effective, so remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 08:58:12 -04:00
Chris Evich 9acf75b6f5
Merge pull request #358 from cevich/fix_tag_latest
Fix tagging latest after [CI:TOOLING] PR merge
2024-05-29 08:55:57 -04:00
Chris Evich c63d02bec2
Fix tagging latest after [CI:TOOLING] PR merge
After a PR merges a branch-level job runs to tag the new container
images.  However, there is a special case when a magic string is present
in the PR title: no Fedora/Skopeo images were built, so they should
not be tagged.

Prior to this commit, this special case isn't handled correctly, because
`CIRRUS_CHANGE_TITLE` only contains the first-line of the HEAD commit.
When executing on a branch, after a PR merge, this would be something
like:

`Merge pull request #FOO from some/thing`

Therefore not matching the intended magic string.  Fix this by switching
to a check against `CIRRUS_CHANGE_MESSAGE` which includes the entire
message.  Importantly, when merged using the github UI, the second line
of the commit message should contain the PR description and thus the
magic string.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-28 16:30:28 -04:00
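The difference the commit describes can be sketched with fabricated values; the variable names mirror Cirrus-CI's, but the messages and magic string handling are illustrative:

```shell
# On a branch run after a merge, the title is only the merge subject...
CIRRUS_CHANGE_TITLE='Merge pull request #999 from some/thing'
# ...but the full message still carries the PR description on line 2.
CIRRUS_CHANGE_MESSAGE='Merge pull request #999 from some/thing
[CI:TOOLING] Track image IDs instead of tar exports'

MAGIC='[CI:TOOLING]'
title_match=no; msg_match=no
case "$CIRRUS_CHANGE_TITLE"   in *"$MAGIC"*) title_match=yes ;; esac
case "$CIRRUS_CHANGE_MESSAGE" in *"$MAGIC"*) msg_match=yes ;; esac
echo "title=$title_match message=$msg_match"
```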
Chris Evich afe1ced362
Merge pull request #356 from cevich/fix_get_ci_vm_test
[CI:TOOLING] Fix get_ci_vm test and new git safety checks
2024-05-23 15:02:29 -04:00
Chris Evich 499c24d856
Fix get_ci_vm test and new git safety checks
Previously, likely due to some git update, the following error was
produced:

```
Testing: Verify mock 'gcevm' flavor main() workflow produces expected
output
fail - Expected exit-code 0 but received 128 while executing
mock_gcevm_workflow (output follows)
Winning lottery-number checksum: 0
gcloud --configuration=automation_images --project=automation_images
compute instances create --zone=us-central1-a
--image-project=automation_images --image=test-image-name --custom-cpu=0
--custom-memory=0Gb --boot-disk-size=0 --labels=in-use-by=foobar
foobar-test-image-name
gcloud --configuration=automation_images --project=automation_images
compute ssh --ssh-flag=-o=AddKeysToAgent=yes --force-key-file-overwrite
--strict-host-key-checking=no --zone=us-central1-a
root@foobar-test-image-name -- true
Cloning into '/tmp/get_ci_vm_hRxAoX.tmp/var/tmp/automation_images'...
fatal: detected dubious ownership in repository at
'/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git'
To add an exception for this directory, call:

  git config --global --add safe.directory
/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```

Fix this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-23 14:32:42 -04:00
Ed Santiago b7395d11fe
Merge pull request #351 from Luap99/debian-tmpfs
debian: use tmpfs on /tmp + bump /tmp size on fedora
2024-05-13 13:15:32 -06:00
Paul Holzinger 09161bf540
bump image IMG_SFX
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-13 16:01:35 +02:00
Paul Holzinger aa79d45352
Update pasta apparmor profile
Now that we use /tmp we do not have to include the changes for /var/tmp.
However we need r (read) access to /tmp as pasta opens the path with
read access.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-13 15:37:19 +02:00
Paul Holzinger 663384815d
fedora: increase /tmp tmpfs size
By default we only get 50% of all memory; given our programs don't take
this much, we should instead use more /tmp space in case we have to store
more images.
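
For illustration, a tmpfs /tmp with an explicit size can be expressed as a mount entry like the following; the 75% figure is an assumption, not necessarily the value used in these images:

```
# /etc/fstab sketch -- size value is illustrative only
tmpfs  /tmp  tmpfs  size=75%,mode=1777  0  0
```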

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-08 11:41:24 +02:00
Paul Holzinger a2d4af6eff
debian: use tmpfs on /tmp
To make tests faster, set up a tmpfs on /tmp like fedora does, so that
tests do not have to write everything onto persistent disk.

Fixes #350

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-08 11:41:24 +02:00
Ed Santiago 560a8f5db7
Merge pull request #349 from cevich/fedora40
Fedora40
2024-05-07 19:12:53 -06:00
Chris Evich ed4f43488b
Add debian pasta apparmor workaround
Ref:
https://github.com/containers/automation_images/pull/349#issuecomment-2090494124

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 26f0a720ed
Simplify setting Debian release version
Previously a convoluted system was used to add a "fake" release number
into `/etc/os-release` for CI/automation purposes.  It forced a
two-component version to satisfy some legacy automation-library needs.

Since the release number is also specified in the Makefile, and passed
into the packer call, it's trivial to simply provide this value to the
`debian_base-setup.sh` script.

This reduces complexity and avoids duplication.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 8fe782be13
Remove D12->13 grub timebomb
Previously this was necessary because simply updating the D12
grub-common package was no longer sufficient.  Importantly, make sure
the workaround/restriction on an update to tar is in place prior to
upgrading grub-common (which has a dependency on it).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich da749c4c9a
Bump debian base-image tar workaround timebomb
The version hasn't changed, continue using the "old" version of tar.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich cd783f07c3
Remove Fedora passt timebomb
There was a lot of churn in this area causing many problems in CI.
Remove the workaround to see if problems have settled out with the most
recent packages.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 4958b8a6b7
Bump up to CentOS Stream 9 tooling images
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich f9e42ece82
Bump CI VMs to F40 & F39
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 078da3cb58
Merge pull request #353 from cevich/no_build_push
[CI:DOCS] Fix test_build-push failing w/ no_build-push label
2024-05-03 14:10:02 -04:00
Chris Evich a6ab11b389
Fix test_build-push failing w/ no_build-push label
Previously the build-push task was much more sophisticated and able to
run even if a new CI VM image was not produced.  This situation has now
changed, and the testing task requires some additional "smarts" to not
run when its image wasn't built.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-03 14:03:13 -04:00
Chris Evich 64e25fa32b
Merge pull request #348 from cevich/bump_automation_lib
Bump automation library version
2024-04-24 12:54:37 -04:00
Chris Evich c3a0ca1aba
Bump container build timeout
Many/most of the container image builds rely on pulling packages from
repos that are sometimes slow/busy.  Give the tasks a bit of extra time
in case it's needed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:42:16 -04:00
Chris Evich 82ac450b89
Simplify build-push test
Previously this task depended on executing a downstream test script
intended for exercising an orthogonal orchestration script (which
happens to call `build-push.sh`).  Having upstream CI VM image builds
depend on a downstream script is very much not ideal.  Replace this with
a very quick/dirty test that simply confirms a multi-arch build
can function.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:39:53 -04:00
Chris Evich d9d87f33d6
Bump automation library version
Importantly, this contains a necessary fix for `build_push.sh` needed to
stop immutable-image existence-check failing on build (c/image_build
cron job).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:39:50 -04:00
Ed Santiago cf72ba2655
Merge pull request #340 from edsantiago/tmp-should-be-tmpfs
Revert /tmp to tmpfs
2024-04-24 07:02:41 -06:00
Ed Santiago b2adc260a8 Revert /tmp to tmpfs
Podman *really* needs /tmp to be tmpfs, to detect and
handle reboots. Although there are (at this time) no
reboots involved in CI testing, it's still important
for CI hosts to reflect something close to a real-world
environment. And, there is work underway to check /tmp:

  https://github.com/containers/podman/pull/22141

This PR removes special-case Fedora code that was
disabling a tmpfs /tmp mount. History dates back to
PR #30 back in 2020.

Some of the image-build code in this repo performs
reboots and relies on persistent tmp files, so you'll
note a flurry of /tmp -> /var/tmp changes.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-04-11 06:49:18 -06:00
Ed Santiago fe0936e168
Merge pull request #346 from baude/validatepr
Add pre-commit to podman image
2024-04-11 06:46:33 -06:00
Ed Santiago 11f3c2a954
Merge branch 'main' into validatepr 2024-04-11 06:45:53 -06:00
Ed Santiago fc4b863bab
Merge pull request #347 from cevich/podman_oci_labels
Add OCI standard labels to podman images
2024-04-10 14:27:22 -06:00
Brent Baude 42fe503a39 Add PR validation packages to fedora image
In support of containers/podman/#22260, we need additional packages in
the podman fedora container:

* pre-commit
* man-db

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-04-10 15:06:40 -05:00
Chris Evich 5c66e14eca
Add OCI standard labels to podman images
Given a local container image, the OCI labels are very useful in tracking
down the source and revision from whence it came.  Tooling like Renovate
is also able to make use of these labels to suggest when newer versions
are available.

Note: The current OCI spec. references defining these as annotations,
however in practice, virtually nobody uses them.  Simple labels are
much more accessible to both humans and tooling (like Renovate).

Update the podman container images README section to reflect the
present-day reality.
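
In Containerfile terms, labels like these (key names from the OCI image spec; the values shown are placeholders) are what tooling such as Renovate consumes:

```
# Sketch only -- values are placeholders, not this repo's actual settings
LABEL org.opencontainers.image.source="https://github.com/containers/automation_images"
LABEL org.opencontainers.image.revision="<commit sha>"
```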

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-10 14:41:52 -04:00
Ed Santiago b38b5cf397
Merge pull request #345 from edsantiago/ya-new-pasta
Yet another new Pasta (04-05)
2024-04-10 06:39:08 -06:00
Ed Santiago 0ac3346842 Yet another new Pasta (04-05)
This one fixes a user-reported bug that we don't see in CI.
It's in bodhi for rawhide but no others. We want to test anyway.

Also, small changes to Windows Chocolatey install command
to conform to (some) best practices document. Link to such,
and explain why I disregard some of what they call "best".

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-04-09 13:25:47 -06:00
Ed Santiago 45282613ab
Merge pull request #344 from cevich/tag_latest_fedora_container
Tag latest fedora container
2024-04-09 10:23:23 -06:00
Chris Evich 8e0c9f3a52
Simplify latest image tagging
Previously when a PR was merged, another build ran for all the critical
container images, along with tagging them 'latest'.  This is not ideal,
because the content can change from the time the PR build and tested the
images until when it was merged.  There is also an anticipated future
need to access the `fedora_podman` and `prior-fedora_podman` images via a
"latest" tag.

* Update image-build tasks to only run in PRs
* Simplify `ci/make_container_images.sh` to no-longer require/use a
  magic `$PUSH_LATEST` value.
* Deduplicate all FQIN references to reuse a common prefix in `$REGPFX`
* Add a new `tag_latest_images_task` that only runs on branches, and
  simply adds a `latest` tag to all container images based on the
  (as-merged) value of `$IMG_SFX` (from IMG_SFX file)

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-09 10:53:11 -04:00
Chris Evich 619c79f716
Merge pull request #343 from cevich/build_push_updates
Build-push CI VM: Stop caching fedora
2024-04-05 10:12:32 -04:00
Chris Evich eb80bb9c30
Build-push CI VM: Stop caching fedora
Pulling the latest Fedora images needed to build P|B|S images was
previously done at CI VM build time.  However this causes some problems
in containers/image_build automation relating to the last pulled
architecture not matching the local system.  Since CI VM images can
stick around for a number of months sometimes, caching the "latest"
Fedora image becomes less and less impactful.  Simply stop the practice.

Also add the `unzip` package to support future image_build automation
and bump several timebomb statements.  Remove the debian grub timebomb
as that issue has been fixed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-04 16:53:18 -04:00
Chris Evich 138d12e6e6
Merge pull request #342 from edsantiago/pasta-0326
Bump to pasta 03-26
2024-03-29 13:46:45 -04:00
Ed Santiago 0e56ce4e24 Bump to pasta 03-26
...and deal with broken grub on Debian. Switch to new better
debian blocking way, where we explicitly block broken versions
but allow future upgrades
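
That blocking approach can be sketched as an apt pin; the package and version here are illustrative assumptions, not the exact broken one:

```
# /etc/apt/preferences.d/block-broken-version (illustrative sketch)
Package: grub-common
Pin: version 2.12-1*
Pin-Priority: -1
```

A negative priority prevents that one version from ever being installed, while any newer version remains eligible for upgrade.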

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-03-28 08:09:26 -06:00
Chris Evich 791fd657c6
Merge pull request #339 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.12.0
2024-03-25 14:24:43 -04:00
renovate[bot] 6b9521f3d4
[skip-ci] Update dawidd6/action-send-mail action to v3.12.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-03-23 21:07:00 +00:00
Chris Evich 657b6acc75
Merge pull request #337 from edsantiago/howsitgoing
New VM build, just to see how things are
2024-03-20 15:29:11 -04:00
Ed Santiago c41f36a60f New VM build, just to see how things are
New pasta (03-20). And whatever else comes in.

Also: install StrawberryPerl on Windows, see:

  https://github.com/containers/podman/pull/21991

First CI-detected problem:

    debian: The following packages have unmet dependencies:
    debian:  libfuse2t64 : Breaks: libfuse2 (< 2.9.9-8.1)

Solution attempted: remove libfuse2 from INSTALL_PACKAGES

And, bump expired Debian timebombs

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-03-20 09:39:54 -06:00
Chris Evich a3e4099c72
Merge pull request #335 from cevich/migrate_build-push
Migrate build script from c/automation_images
2024-03-08 16:24:54 -05:00
Chris Evich ce9fbf2d1a
Migrate build script from c/automation_images
Ref: https://github.com/containers/image_build/pull/12

Previously the build-push scripts were run by automation in this repo.
That has since changed, with a migration over to the
containers/image_build repo.  However, while automation there uses the
most recent build-push VM image, that image is produced in this repo.
Arrange to test the latest script against just-produced VM images to
ensure the environment is always supportive for the script.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-08 14:18:58 -05:00
Chris Evich 53eea3160a
Push back rc6 kernel timebomb
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-07 14:30:21 -05:00
Chris Evich 7111e7a5e8
Merge pull request #334 from cevich/move_quayimages
[CI:DOCS] Migrate quay.io container image build
2024-03-05 15:43:46 -05:00
Chris Evich e256fc30e4
[CI:DOCS] Migrate quay.io container image build
Moved to: https://github.com/containers/image_build

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-05 15:04:19 -05:00
Chris Evich 930bf6b852
Merge pull request #333 from cevich/build_push_bug_fix
Build-push bug fix
2024-02-29 14:52:13 -05:00
Chris Evich b006128ff9
Minor: Additional build-push debugging statements
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-29 12:21:19 -05:00
Chris Evich 0c597a7ef3
Fix bug introduced by #332
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-29 12:21:19 -05:00
Chris Evich 059c4c608c
Merge pull request #332 from cevich/build_push_dot
Support no-clone build-push mode
2024-02-28 14:38:14 -05:00
Chris Evich 565d822329
Support no-clone build-push mode
As originally conceived, the build context for each image lives in the
respective podman, buildah, and skopeo repositories.  A future set of
PRs will move both the source and build automation into the
new containers/image_build repository.  This is needed to support
images that are point-in-time rebuildable and run test-builds on
image context changes.

Add a magic 2nd argument prefix ('.'), and conditionals to prevent
cloning the build context repo. This will allow for an interim period
where build automation can run from both the current and new repository
until the context repos can be moved.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-28 14:02:27 -05:00
Chris Evich 447529fcae
Merge pull request #331 from edsantiago/rc6
Bump. Hoping to get rc6 kernel in rawhide
2024-02-28 13:55:06 -05:00
Ed Santiago d1c008a1d1 Bump. Hoping to get rc6 kernel in rawhide
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-27 05:58:15 -07:00
Chris Evich 4f989daed5
Merge pull request #329 from edsantiago/ditch_f38
New VMs again, keeping f38
2024-02-22 16:01:08 -05:00
Ed Santiago c625377c36 New VMs yet again
Need new pasta 2024-02-20 to fix hanging-tests problem.

Pasta 2024-02-20 is not yet stable on all fedorae, so add
a timebombed force-install.

Also: podman-plugins is obsolete and does not exist in rawhide.
Ditch it.

Also: jobs are occasionally timing out. Bump up timeouts.

Also: fix broken timebomb check in Makefile

Also: bump up expired Debian timebombs

Also: sideload pasta 02-20 for Debian

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-22 07:30:08 -07:00
Chris Evich 7547b67e33
Merge pull request #323 from cevich/use_library_timebomb
Utilize the new library timebomb() function.
2024-02-16 10:50:51 -05:00
Chris Evich 7d010362a1
Utilize the new library timebomb() function
N/B: This new automation library version includes a significant update
to stdio redirection for all functions.  Careful testing of these images
is highly recommended.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-15 12:00:32 -05:00
Chris Evich 5af77ad53a
Merge pull request #327 from edsantiago/new_netavark
New VMs: we need netavark 1.10.3
2024-02-15 11:57:01 -05:00
Ed Santiago 15fe9709bb New VMs: we need netavark 1.10.2-1.fc40
Also, add "rpm -qa" (fedora) and "dpkg -l" (debian) so Ed's
package-version script can get better data. It would be nice
if we could save those to an artifact file, but we can't.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-12 05:21:20 -07:00
Chris Evich 5dfa6aebfa
Merge pull request #325 from edsantiago/no_more_cni
New VMs: include netavark in prior-fedora
2024-02-01 17:08:05 -05:00
Ed Santiago 8c0332d2a8 New VMs: include netavark in prior-fedora
CNI is deprecated, and will no longer be tested in CI (Podman
PR 21410).

We've been force-removing netavark from prior-fedora. Remove
this special case so now all fedorae have netavark.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-01 07:31:05 -07:00
Chris Evich d1ce228ced
Merge pull request #326 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.11.0
2024-01-31 14:02:24 -05:00
renovate[bot] 7f8ae66fb5
[skip-ci] Update dawidd6/action-send-mail action to v3.11.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-30 21:26:27 +00:00
Ed Santiago c6ce03e4a1
Merge pull request #324 from edsantiago/new-vms
Let's see what we pick up this time
2024-01-29 07:37:27 -07:00
Ed Santiago 71dcd869a5 Let's see what we pick up this time
Results: debian tar is still broken, and I didn't check grub
but it's safe to assume that's still broken too, so, bump
up both timebombs.

...and:

  - add new timebomb-check target to prevent me from
    submitting a guaranteed-to-fail-CI job

  - get_ci_vm: use apk, not pip, to install aws-cli
    because our base image now whines about pip:

       This environment is externally managed

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-01-25 11:42:11 -07:00
Chris Evich 55f939df9f
Merge pull request #320 from edsantiago/new-vms
new vms
2024-01-16 11:06:17 -05:00
Chris Evich dc21540194
Merge pull request #321 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.10.0
2024-01-08 16:16:31 -05:00
Chris Evich f768dd484d
Merge pull request #322 from cevich/email_subj_fix
[CI:DOCS] Minor fix to fix orphan-vm e-mail subject
2024-01-08 11:07:45 -05:00
Chris Evich 9b3f9aa275
Minor fix to fix orphan-vm e-mail subject
It's been checking GCP and AWS clouds for a long time now.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-08 11:06:21 -05:00
renovate[bot] 1a940444ad
[skip-ci] Update dawidd6/action-send-mail action to v3.10.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-05 19:30:49 +00:00
Ed Santiago fed97ac56a new vms
Try to pick up new pasta.

Also, add perl-Clone, needed by the manpage/helpmsg xref script

Also, remove one timebomb (crun) and extend another (grub on debian):
crun is now 1.12-1 on all VMs.

And, finally, a seemingly innocuous change: google-cloud-sdk -> -cli
I have no idea what's going on here, but making this change gets
builds to pass. Without this change, one of the early image-build
CI steps fails because of a dnf conflict. What seems to be happening
is that in old builds (Dec 2023), 'dnf upgrade' upgraded only -sdk.
In new builds (Jan 2024) it wants to bring in both -sdk and -cli,
and the two can't coexist.

Oh, one more: block debian upgrade of tar. The version in debian
right now is broken. Add a timebomb.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-01-02 14:27:28 -07:00
Chris Evich 61ad7cf83a
Merge pull request #319 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4
2023-12-14 15:57:32 -05:00
renovate[bot] da81c99493
[skip-ci] Update actions/upload-artifact action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-14 20:31:54 +00:00
Chris Evich 3aba7d7eaf
Merge pull request #318 from n1hility/update-win-storage
Move win instance to faster storage and 6k iops
2023-12-12 09:44:51 -05:00
Jason T. Greene 1155207686 Move win instance to faster storage and 6k iops
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-08 13:39:12 -06:00
Chris Evich 4ffbee0218
Merge pull request #316 from n1hility/add-mgmt
Add hyperv management tools to Windows image
2023-12-08 11:15:09 -05:00
Chris Evich 2d41ea4849
Merge pull request #317 from cevich/docs_update
[CI:DOCS] Minor readme update
2023-12-07 12:10:36 -05:00
Chris Evich 8765d190c4
Minor readme update
Modern versions of the AWS cli allow all these options to exist in the
`credentials` file.  But for completeness, and to add in the region
default, best mention them.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-07 11:31:37 -05:00
Jason T. Greene 6d57972c89 Increase volume size to 200gb
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-06 16:59:23 -06:00
Jason T. Greene ae25083be1 Add hyperv management tools to Windows image
Extend timebomb on cache_images

Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-05 21:18:30 -06:00
Chris Evich dfe3b9d73c
Attempt to fix URL in notification mail
The docs are not specific enough to know for sure `run_id` is the
correct value to use.  When browsing to a job, there are two numbers
present in the URL; I cannot find a ref for one of them :S
Hopefully `run_id` is correct and the second number isn't needed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 10:41:21 -05:00
Chris Evich 416e87b605
Merge pull request #312 from edsantiago/f39_released
New f39 (official, not beta) image
2023-11-16 15:57:23 -05:00
Ed Santiago d16ced38be New f39 (official, not beta) image
First step: create new base images:

  1minutetip$ make IMPORT_IMG_SFX
  1minutetip$ make image_builder_debug ....

Second step:

  home$ make IMG_SFX

Commit and push. Subsequent emergency management steps:

  1) Change "-qq" to "-q" in debian apt-get, so we have some
     hope of figuring out what is failing.

  2) debian update of grub no longer works. Try a new way.
     (We can no longer update grub-common, due to dependency
     error. Old grub fails with a "version_find_latest" error.
     So, new solution is to provide version_find_latest).

     2a) New timebomb() function will ensure that temporary
         workarounds like this one do not accumulate.

  3) force-update crun on f38 so we get 1.11.2.
     3a) use new timebomb(), see 2a above.

  4) ccia is failing due to cython issue in newer Fedora.
     Force using f38, which works. Cannot timebomb().

  5) fedora-aws build kept timing out. Discover and add
     AWS_SOMETHING envariables to .cirrus.yml

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-11-16 10:44:21 -07:00
Chris Evich b1b966eb7c
Merge pull request #308 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.9.0
2023-10-17 12:26:41 -04:00
Chris Evich 03994f80e4
Merge pull request #309 from cevich/update_rawhide_crun
Update windows CI VMs for hyper-v machine testing
2023-10-05 12:09:58 -04:00
Chris Evich 2ee0d88384
Update windows CI VMs for hyper-v machine testing
In addition to updating mingw and golang, this moves the
installation of .Net and wixtoolset here instead of at CI runtime.
The windows packer-configuration was updated to operate more
consistently with how things are done in Linux WRT calling scripts.
Along with some file renames and other cosmetic changes, the windows
build timeout was increased since the extra packages seem to
place it right on the edge of the former value.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-10-04 15:53:22 -04:00
Chris Evich 1cfc6d352f
Remove temp. workarounds
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-10-03 16:56:03 -04:00
Chris Evich f68fc63aa8
Merge pull request #305 from cevich/docs_update
[CI:DOCS] Improve import-image docs
2023-09-29 14:23:48 -04:00
Chris Evich 6f157ff28e
Improve import-image docs
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-29 14:22:00 -04:00
Chris Evich c22ef2b398
Merge pull request #302 from edsantiago/f39
Bump to Fedora 39
2023-09-28 14:47:20 -04:00
Ed Santiago 80f5d3fd60 Bump to Fedora 39
Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-09-27 18:45:59 -06:00
Ed Santiago ea2dc8bd8b Housekeeping: egrep is deprecated
Replace with grep -E

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-09-27 12:20:45 -06:00
Chris Evich e5de95d40e
Merge pull request #307 from n1hility/add-hyperv
Add hyperv to windows image
2023-09-27 11:04:19 -04:00
renovate[bot] 60f03d91f3
[skip-ci] Update dawidd6/action-send-mail action to v3.9.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-27 11:42:56 +00:00
Jason T. Greene f5884c1b03 Add Hyper-V
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-09-26 11:54:44 -05:00
Jason T. Greene 2028aa50d0 Add example userdata for reenabling RDP
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-09-26 11:50:14 -05:00
Chris Evich 5b9f617e7d
Merge pull request #304 from cevich/fix_jq_null_iteration
Latest common automation library on build-push VM
2023-09-26 11:18:57 -04:00
Chris Evich 99a28fad77
Use latest common library + show version
The automation common library is version-pinned (in `lib.sh`) and
updates are carefully managed by renovate.  This is by design, so
breaking changes don't impact important CI environments.

However, on more than one occasion, there's been a need to update the
podman/buildah/skopeo image building scripts rapidly.  Since the
latest build-push VM image is always used, its production doesn't need
to be tied down in the same way.  Mainly because there's extensive
testing of it from CI in this repo.

Make the necessary changes to allow installing the latest version of the
common automation library, along with the `build_push.sh` script,
specifically in the build-push VM image.

Also, add a debug message for the library version installed (will include
commit sha) to assist any future debugging.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-21 10:57:58 -04:00
Chris Evich 0582a0cc22
Minor: Fix documentation URL
Previous value was missing `$head_sha` and for some containers-org repos
would point at the wrong path.  Fix this by confirming the existence of
the README file, then using the location in the docs URL.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 17:16:17 -04:00
Chris Evich b86aea0acd
Merge pull request #298 from cevich/update_automation_lib
Update automation-library
2023-09-20 17:12:41 -04:00
Chris Evich 13f4ad1ca3
Workaround failure to update SID kernel
Without this, during package setup this error is emitted:

```
Setting up linux-image-6.5.0-1-cloud-amd64 (6.5.3-1) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-6.5.0-1-cloud-amd64
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
/etc/grub.d/10_linux: 1: version_find_latest: not found
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
dpkg: error processing package linux-image-6.5.0-1-cloud-amd64 (--configure):
```

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 15:58:51 -04:00
Chris Evich a2f2f472a4
Drop ZFS CI Support in Debian SID
Maintaining this is a PITA and it seems to break very frequently with
errors similar to:

```
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.5.0-1-cloud-amd64.postinst line 11.
dpkg: error processing package linux-headers-6.5.0-1-cloud-amd64 (--configure):
 installed linux-headers-6.5.0-1-cloud-amd64 package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of linux-headers-cloud-amd64:
 linux-headers-cloud-amd64 depends on linux-headers-6.5.0-1-cloud-amd64 (= 6.5.3-1); however:
  Package linux-headers-6.5.0-1-cloud-amd64 is not configured yet.

dpkg: error processing package linux-headers-cloud-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-headers-amd64:
 linux-headers-amd64 depends on linux-headers-6.5.0-1-amd64 (= 6.5.3-1); however:
  Package linux-headers-6.5.0-1-amd64 is not configured yet.

dpkg: error processing package linux-headers-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of zfs-zed:
 zfs-zed depends on zfs-modules | zfs-dkms; however:
  Package zfs-modules is not installed.
  Package zfs-dkms which provides zfs-modules is not configured yet.
  Package zfs-dkms is not configured yet.

dpkg: error processing package zfs-zed (--configure):
 dependency problems - leaving unconfigured
```

The fact is ZFS is completely unsupported by those who pay our bills,
a best-effort package in Debian, and an almost constant headache.  It's
only needed by the containers/storage CI, and nowhere else.  It's not
fair for CI in all the other repos to wait due to Debian+ZFS build
problems.  This commit removes ZFS support on all Debian images.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 15:58:27 -04:00
Chris Evich dd2f6bda56
Increase cache-image build timeout
On several occasions this job has hit the 45m wall due (probably) to
networking slowness (somewhere) downloading packages.  Bump it up to use
the default 60m timeout.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich 0de4a8bf4a
Update automation-library
Significantly, this version defines a `passthrough_envars()` function to
replace the two duplicate definitions in podman and buildah CI.  When
incorporating the new images into those environments, the duplicates
should be removed.

Also included is an important update to the build-push script that
improves debugging in cases where the `--nopush` argument is used.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich 26a61a6523
Remove emacs from debian SID
This was added as a developer-friendly package, but as of this commit
there are dependency problems in SID.  Remove it, if it's really still
needed somebody can add it back.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich d83bbbe01e
Fix image build/push repo. arg. check
Likely a typo of variable name, was always intended to check vs the full
URL, not just the name.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 11:16:07 -04:00
Chris Evich cdab3b2497
Merge pull request #301 from cevich/multiarch_builds
Implement quay.io container image build and push
2023-09-20 09:50:27 -04:00
Chris Evich fd0eaecf09
Minor tweaks to multi-arch images
* After confirming the image source repository comes from github,
point the source label/annotation directly at the exact commit.

* Add quay-specific expiration labels for 'testing' and 'upstream'
  images.  This way if builds stop or fail for some reason, any use
of rapidly irrelevant images is blocked.

* Update tests

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 16:25:31 -04:00
Chris Evich e53f780ac4
Validate Cirrus-CI Repository settings in PRs
There's a critical little "slider" on the webpage that's somewhat
difficult to tell if it's enabled or not.  Make a somewhat weak attempt
to catch if it's state ever changes.  This is better than not checking
at all.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:26:17 -04:00
Chris Evich 96f616e440
Always show the repo. clone details
Otherwise, outside of a debugging environment, it's hard to tell in the
log what was cloned.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:25:05 -04:00
Chris Evich abcfe96b58
Implement quay.io container image build and push
This job used to be performed from the individual repositories' CI,
however there was a major problem:

https://github.com/containers/podman/discussions/19796

Reinstate the build jobs in this repo. since its secrets are secure and
builds are safe from general-public meddling.

Also, slightly alter the existing cirrus-cron triggered tasks such that
they only respond to a specific job name.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:24:41 -04:00
Chris Evich 09ae91d04c
Merge pull request #300 from cevich/multiarch_mulligan
Improve & rename main build-push script
2023-09-18 15:49:06 -04:00
Chris Evich 0226d63d3f
Improve & rename main build-push script.
This script orchestrates running of the actual `build_push.sh` script,
on behalf of various github containers-org repos.  Rename it to better
reflect that purpose.

Change behavior WRT first argument (git repo. URL) to shallow-clone the
repo into a temporary directory.

Remove the auto-update library in anticipation of executing builds from
Cirrus-cron in this (automation_images) repo.  Given encrypted secrets
are protected by execution context and actor.

Update labeling to also annotate the images, since newer tooling prefers
annotations but older tools only support labels.

Remove wait-for-copr from build-push VM image since it's not needed.  An
alternate build system was put in place.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-18 14:35:28 -04:00
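The shallow-clone step described above can be sketched as follows; the function name and error handling are illustrative assumptions, not the script's actual code:

```shell
#!/bin/sh
# Shallow-clone a repo URL (first argument) into a fresh temporary directory
# and print that directory's path; fail cleanly if the clone fails.
clone_shallow() {
    url=$1
    dest=$(mktemp -d) || return 1
    if git clone --quiet --depth 1 "$url" "$dest" 2>/dev/null; then
        echo "$dest"
    else
        rm -rf "$dest"
        return 1
    fi
}

# Usage sketch (hypothetical URL):
#   workdir=$(clone_shallow https://github.com/containers/automation_images.git)
```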
Chris Evich 71cc3691c4
Revert "[CI:TOOLING] Fix wrong SHA in revision label"
This reverts commit 874da1b703.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-18 11:41:17 -04:00
Chris Evich d2a1ea8cc4
Replace quay.io robot credentials.
Removed out of an abundance of caution, ref:
https://github.com/containers/podman/discussions/19796

Double-checked Cirrus-CI 'Decrypt Credentials' setting for this repo.
is: Collaborators, Bots, and Users with Write permission.

Double-checked Github collaboration settings.  It's limited to specific
github users only.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-15 10:15:11 -04:00
Chris Evich 7482f50592
Merge pull request #299 from containers/renovate/actions-checkout-4.x
[skip-ci] Update actions/checkout action to v4
2023-09-05 09:57:20 -04:00
renovate[bot] c893f90c7e
[skip-ci] Update actions/checkout action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-04 15:11:25 +00:00
Chris Evich ac93ef9bef
Merge pull request #297 from cevich/test_new_build_table
Test PR #296
2023-08-23 11:39:03 -04:00
Chris Evich f3dace1baa
Test PR #296
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 14:59:01 -04:00
Chris Evich 4288dfa701
Merge pull request #296 from cevich/only_the_bs
Obscure non-cache image IDs in pr-comment table
2023-08-22 14:57:20 -04:00
Chris Evich f248d99329
Update image suffix value
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:24:15 -04:00
Chris Evich 12065df676
Update code style using the `black` tool
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:20:26 -04:00
Chris Evich 6714c86834
Obscure non-cache image IDs in pr-comment table
All built images are included in the build-table added as a PR comment
to be helpful for reference and possible debugging.  However, it's
unhelpful if a human accidentally tries to deploy a non-cache image ID
into CI somewhere.  Those images are never to be used outside of very
special-case situations.  Obscure non-cache image IDs in the table to
prevent accidents.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:20:08 -04:00
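The obscuring itself can be as simple as keeping a recognizable prefix and masking the rest; a hypothetical shell version of the idea (the real tooling is Python, and the ID format shown is only an example):

```shell
#!/bin/sh
# Mask an image ID, keeping only the first four characters so the table row
# stays recognizable without being copy-paste deployable.
obscure_id() {
    printf '%s****\n' "$(printf '%s' "$1" | cut -c1-4)"
}

obscure_id "b20230822t154826z"
```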
Chris Evich 979faa40bc
Merge pull request #295 from Luap99/debian-locale
provide en_US.UTF-8 locale
2023-08-21 13:27:55 -04:00
Paul Holzinger ce66b7ec98
fedora: add glibc-langpack-en
Make sure the en_US.UTF-8 LANG is installed and can be used by podman
tests, see https://github.com/containers/podman/pull/19635.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-08-21 17:37:42 +02:00
Paul Holzinger b913d24a76
debian: generate en_US.UTF-8 locale
A podman test depends on that locale, so we need to make sure it is
installed in the image, see https://github.com/containers/podman/pull/19635.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-08-17 13:20:00 +02:00
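A sketch of the usual Debian mechanism for this, assuming the standard `/etc/locale.gen` + `locale-gen` workflow; the helper below performs only the uncommenting step, against a scratch file:

```shell
#!/bin/sh
# Uncomment a locale line in a locale.gen-style file and print the result.
enable_locale() {
    locale=$1 file=$2
    sed "s/^# *\($locale\)/\1/" "$file"
}

# Scratch stand-in for /etc/locale.gen; the real flow would be roughly:
#   sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && locale-gen
f=$(mktemp)
echo "# en_US.UTF-8 UTF-8" > "$f"
enable_locale "en_US.UTF-8 UTF-8" "$f"
rm -f "$f"
```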
Chris Evich 0ac55b66e3
Merge pull request #294 from cevich/replace_clobbered_ec2_import_image
Replace clobbered ec2 import image
2023-08-16 16:52:17 -04:00
Chris Evich ec0c7b62f8
Update image IDs
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-16 15:17:58 -04:00
Lokesh Mandvekar 89a72808e7
Build-Push: Install pip and wait-for-copr
Podman-Desktop team would like fcos images built with packages from
rhcontainerbot/podman-next for MacOS testing with the latest unreleased
bits.
Ref: https://github.com/containers/podman/issues/19448

This commit installs pip and wait-for-copr in the build-push images.
wait-for-copr, as the name suggests, waits for a build with a specified
string to become available on a copr repo.
Ref: https://github.com/packit/wait-for-copr

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2023-08-16 15:17:58 -04:00
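The wait-for-copr behavior amounts to a bounded retry loop; a minimal local sketch of that pattern (the real tool polls the copr API, so the condition command and retry count here are illustrative):

```shell
#!/bin/sh
# Retry a condition command up to N times -- the shape wait-for-copr uses
# while polling for a build to appear in a copr repo.
wait_until() {
    cond=$1 max=$2 tries=0
    until $cond; do
        tries=$((tries + 1))
        [ "$tries" -ge "$max" ] && return 1
        # the real tool sleeps between copr API polls here
    done
}

# Usage sketch: wait_until "check_copr_has_build podman" 30
wait_until true 3 && echo "condition met"
```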
Chris Evich 9a2de2b244
Add image obsolete/prune safety net
If for whatever reason, a currently in-use import/base/cache image comes
up for obsolete or prune handling, issue a warning and skip it.  This
should never happen, but "should" is the purpose of a safety net.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-16 15:10:52 -04:00
Chris Evich 90376689e3
Fix typo in image-metadata update task
Unfortunately this mistake resulted in the loss of an in-use image.  A
future commit will add a safety-net to the obsolete/prune tooling.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-16 15:01:20 -04:00
Chris Evich dcc416cc0c
Merge pull request #286 from cevich/murder_ec2_orphans
[CI:TOOLING] Automatic termination of EC2 VMs
2023-08-16 11:09:18 -04:00
Chris Evich 4e37a05331
Automatic termination of EC2 VMs
Around the time of this commit, an annoyingly steady stream of EC2
orphans was being reported to Cirrus support.  They've taken actions to
resolve it, but the failure modes are many and complex.  Since most of the
EC2 instances are rather expensive to keep needlessly running, and manual
cleanup is annoying, enhance the monitoring script to attempt
termination automatically.

This isn't perfect; it's possible for the script to break in strange ways,
and it's not practical to check for all of them.  Instead, include
some helpful indications in the monitoring e-mail regarding what was
attempted.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-15 12:08:52 -04:00
Chris Evich b3a106cf13
Minor: Fix duplicate YAML anchor
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-14 17:18:53 -04:00
Chris Evich ca033fcf2d
Merge pull request #293 from cevich/show_autoupdate_commit
[CI:TOOLING] Show build-push commit when auto-updating
2023-08-14 14:24:49 -04:00
Chris Evich 8d7f73ae37
Show build-push commit when auto-updating
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-14 10:46:39 -04:00
Chris Evich e792393866
Merge pull request #291 from cevich/fix_build_push
Improve build-push w/ more debugging and a few fixes.
2023-08-10 16:03:57 -04:00
Chris Evich a0b1a5ce7b
Improve build-push w/ more debugging and a few fixes
Ref: https://github.com/containers/automation/pull/150

This app always downloads/runs from the latest version committed in this
repo.  No need to build all images, just a few needed for testing.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-10 14:11:55 -04:00
Chris Evich cf5f143cbc
Minor: Fix deprecated use of egrep
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-10 14:11:35 -04:00
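For context, the deprecation means `egrep pattern` becomes `grep -E pattern`; for example:

```shell
#!/bin/sh
# `egrep` is a deprecated alias; `grep -E` enables the same
# extended regular expressions.
printf 'fedora-38\ndebian-12\nother\n' | grep -E '^(fedora|debian)-[0-9]+$'
```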
Chris Evich 755589f1da
Merge pull request #290 from cevich/mandown_to_go-md2man
Install go-md2man in place of mandown
2023-08-09 15:47:58 -04:00
Chris Evich d607d2a984
Allow rawhide to use shared packaging script
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-09 10:37:18 -04:00
Chris Evich d5acf18e70
Install go-md2man in place of mandown
Ref: https://github.com/containers/netavark/pull/771

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-08 15:58:51 -04:00
Chris Evich 42b240929e
Merge pull request #289 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.8.0
2023-08-08 11:14:12 -04:00
renovate[bot] 2e5b1b72a0
[skip-ci] Update dawidd6/action-send-mail action to v3.8.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-08 14:43:36 +00:00
Chris Evich 60cb657655
Merge pull request #288 from cevich/fix_podman_python_validation
Fix podman python validation
2023-08-07 13:11:34 -04:00
Chris Evich 5580ea129f
Update passt to 0.0~git20230625.32660ce-1 or later
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-07 10:49:27 -04:00
Chris Evich 1b8a8e4bc4
fix python validation for podman on rawhide
Cache a few packages that should allow podman CI to run build validation
on rawhide.  This will result in some runtime installs of python stuff,
but that's not a change from the non-rawhide validation operations.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-07 10:48:26 -04:00
Chris Evich 755c1f1b96
Merge pull request #285 from cevich/update_sid
Update CI VM images
2023-08-07 10:46:16 -04:00
Chris Evich fe46dd6384
Update CI VM images
Needed for newer `passt` package in Debian.  Previous
`20230706t200047z-f38f37d13` images have an older
`0.0~git20230309.7c7625d-1` version that doesn't pass podman CI.
I'm told that we need `0.0~git20230625.32660ce-1` or later.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-07-26 15:10:51 -04:00
Chris Evich d72aaa3503
Merge pull request #284 from cevich/update_images
Update CI VM images for newer packages
2023-07-26 15:10:06 -04:00
Chris Evich da513e4ff7
Update CI VM images for newer packages
Ref:
https://github.com/containers/podman/pull/18612#issuecomment-1608089245

Also:

* Switch to the distro. version of `passt` since development has
  cooled down.  It's also now available in both F37 and 38.  Note, for
  the F38 images, it will still grab it from updates-testing.

* Implement a few fixes to cope with the `dnf` to `dnf5` update
  when switching from F38 to rawhide.

* De-duplicate & force use of `DEBIAN_FRONTEND=noninteractive` by
  including it into the `$SUDO` variable.

* Add debugging to `base_images/debian_base-setup.sh` to help verify
  `DEBIAN_FRONTEND=noninteractive` is set.

* Move python packages out of Fedora rawhide due to a broken
  dependency:
  `nothing provides (python3.12dist(astroid) <= 2.17~~dev0 with
  python3.12dist(astroid) >= 2.15.2) needed by
  python3-pylint-2.17.2-2.fc39.noarch`

* Fix the name of the debian kernel headers package to not mention the
  currently booted kernel version.  This fixes an issue where an older
  kernel is in use and doesn't match the currently available headers package.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-07-06 16:00:57 -04:00
Chris Evich 34662df855
Minor: Fix empty CIRRUS_PR condition
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-23 15:04:59 -04:00
Chris Evich e91123949b
Minor: Fix including 'v' in common lib update
Also, remove the grouping as it's not necessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-23 12:26:58 -04:00
Chris Evich 2fc5b66495
Merge pull request #283 from cevich/renovate_manage_common_lib
[CI:DOCS] Renovate management of common-lib updates
2023-06-23 12:20:04 -04:00
Chris Evich 66bc43a4e3
Minor: Fix validation failure on docs PR.
Conditional was checking the wrong CI env. var.  Also, print env. vars.
during validation to help with any future bugs.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-23 12:18:11 -04:00
Chris Evich 114aef1915
[CI:DOCS] Renovate management of common-lib updates
Often these kinds of updates require some hand-holding of the build
process and/or manual build-script updates.  However, it's desirable to
not allow CI VM images to get "too far" behind w/ their common-lib
version.

Teach renovate how to manage it, try to build CI VM images,
but mark the PRs as draft since they likely need more scrutiny. Also,
for "major" or "minor" (not "patch") version updates, include a
highly-visible warning message in the PR description.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-23 12:12:11 -04:00
Chris Evich c2834937f6
Merge pull request #282 from cevich/newer_packages
Update CI VM Images to pickup updated packages
2023-06-14 11:02:29 -04:00
Chris Evich ecd3a9cf1f
Update CI VM Images to pickup updated packages
Also update to latest Debian SID (Trixie) release.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-14 09:46:19 -04:00
Chris Evich f8396a5852
Parallelize tooling-images build
Prior to this commit, this task frequently completed near or past the
40m timeout.  This is too long.  Arrange for the majority of the images
to build in parallel. Fix `test_imgts` and `imgts` task dependencies
on the `imgts_build` task.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-14 09:46:19 -04:00
Chris Evich d238930755
Update debian images to new SID v13
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-13 16:34:29 -04:00
Chris Evich ff1a95b822
Fix not updating timestamp on arm64 import image
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-13 12:40:59 -04:00
Chris Evich 14c5cf8a3e
Merge pull request #281 from cevich/update_images
Update CI VM Images
2023-06-01 13:53:09 -04:00
Chris Evich 003070b52f
Update CI VM Images
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-01 10:54:46 -04:00
Chris Evich 117dcb5891
Merge pull request #279 from nalind/use-agent
Use an ssh-agent in get-ci-vm.sh
2023-06-01 10:52:20 -04:00
Nalin Dahyabhai 5913338a44 Use an ssh-agent in get-ci-vm.sh
Use an ssh-agent to cache keys in the container, so that if the user's
gcloud key is encrypted, we don't have to prompt them for the passphrase
several times.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2023-05-17 13:50:17 -04:00
Chris Evich 3c12979c42
Merge pull request #280 from cevich/freshen_images
Update CI VM Images
2023-05-17 12:13:39 -04:00
Chris Evich 010405a5ae
Update CI VM Images
Specifically this is needed to bring in a new
container-selinux-2.213.0-1.fc38.noarch for testing in podman CI.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-17 10:46:57 -04:00
Chris Evich 3683033594
Fix cron-check calling wrong reusable workflow
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-11 10:02:44 -04:00
Chris Evich 6898802ba3
Merge pull request #277 from cevich/rm_bench_stuff
[CI:DOCS] Remove disused bench_stuff container context dir
2023-05-05 10:30:56 -04:00
Chris Evich d6a1424e30
Remove disused bench_stuff container context dir
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-05 10:27:56 -04:00
Chris Evich c410bbd63a
Merge pull request #275 from cevich/fedora_38_images
Update to Fedora 38
2023-04-27 10:50:35 -04:00
Chris Evich e18319267d
Update to Fedora 38
Also twiddle a few other bits and bobs needing minor touches here and
there.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-26 10:05:00 -04:00
Chris Evich 040c15be63
Remove oci-umount package on Fedora CI VMs
This package was retired and is no longer available in F38 and beyond.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-26 10:05:00 -04:00
Chris Evich dd27bdb39f
Merge pull request #276 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.7.2
2023-04-25 14:00:11 -04:00
renovate[bot] 7909b38d19
[skip-ci] Update dawidd6/action-send-mail action to v3.7.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-04-25 12:25:47 +00:00
Chris Evich 1f5a119ec7
Merge pull request #274 from cevich/fix_rawhide_meta
[CI:DOCS] Minor: Fix imgts test task missing rawhide
2023-04-20 16:15:02 -04:00
Chris Evich 98198d1adf
[CI:DOCS] Minor: Fix imgts test task missing rawhide
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-20 16:12:52 -04:00
Chris Evich 0226f99e0a
Merge pull request #273 from cevich/add_rawhide
Add rawhide CI VM image build
2023-04-20 16:11:46 -04:00
Chris Evich 2df52680a1
Add rawhide CI VM image build
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-20 14:40:42 -04:00
Chris Evich ba7fde8949
Fix build_push test of removed label
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-20 14:40:28 -04:00
Chris Evich ae87f3ab2b
Fix CCIA container build
The `update` subcommand was dropped from microdnf.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-19 12:04:35 -04:00
Chris Evich a6066dda69
Stop building bench_stuff container
Podman CI has stopped generating benchmarks, so there's nothing to
collect.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-19 12:04:35 -04:00
Chris Evich e0aeae3262
Merge pull request #270 from cevich/fix_bad_deprecation
[CI:TOOLING] Fix removal of active AWS images
2023-04-13 12:22:03 -04:00
Chris Evich 2e32597bf1
Fix removal of active AWS images
Periodically image `LastLaunched` timestamps are examined.  When found
to be more than 30-days in the past, images have a deprecation date set
on them 30-days in the future.  However during the periodic scans, if
the `LastLaunched` time is updated, the deprecation date isn't cleared.

Worse, a previous bug had been incorrectly updating deprecation dates on
active images.  That bug was fixed, but the deprecation status was never
cleaned up.  This has been corrected manually by removing the
deprecation statuses of all EC2 images.

Fix both the periodic (daily) obsolete scans, and CI-job level (imgts)
activity scans such that the deprecation status of any active image
is cleared.  Also force deprecation status removal if encountered on an
image marked `permanent=true`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-13 10:49:37 -04:00
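The 30-day rule described above, sketched with local date arithmetic; the actual tooling drives the AWS EC2 image-deprecation APIs, so the function name and surrounding flow here are illustrative:

```shell
#!/bin/sh
# Decide whether an image is stale: true when LastLaunched is more than
# 30 days before "now" (both given as epoch seconds).
THIRTY_DAYS=$((30 * 24 * 3600))
should_deprecate() {
    last_launched=$1 now=$2
    [ $((now - last_launched)) -gt "$THIRTY_DAYS" ]
}

now=$(date +%s)
if should_deprecate $((now - 31 * 86400)) "$now"; then
    echo "stale: set deprecation date 30 days out"
fi
# The fix: an active (recently launched) image must have any lingering
# deprecation status cleared, not just left in place.
should_deprecate "$now" "$now" || echo "active: clear any deprecation status"
```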
Chris Evich 5b4b628d31
Merge pull request #272 from cevich/bad_version
[CI:TOOLING] Fix wrong SHA in revision label
2023-04-12 11:34:11 -04:00
Chris Evich 874da1b703
[CI:TOOLING] Fix wrong SHA in revision label
Fixes: #271

All images built by this script install the subject binary via RPM.
However the script sets the revision label based on a distant past
implementation that always compiled the subject.  Unfortunately, there's
no common/simple way to extract the SCM commit ID from the RPM.
Fix inclusion of the bad SHA by removing the (likely always)
incorrect label.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-11 16:33:26 -04:00
Chris Evich 3c32b95f69
Merge pull request #269 from edsantiago/banish_systemd_resolver
Disable systemd-resolved
2023-04-05 13:59:50 -04:00
Ed Santiago fe14a06903 Disable systemd-resolved
We're pretty certain that it's the cause of the cdn03.quay.io flakes

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-04-05 09:23:07 -06:00
Chris Evich 340f469e28
Merge pull request #268 from cevich/debian_zfs_support
Add zfsutils package to debian CI VMs
2023-03-30 14:59:39 -04:00
Chris Evich e82314452f
Add zfsutils package to debian CI VMs
Ref: https://github.com/containers/storage/pull/1535

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-30 11:31:50 -04:00
Chris Evich 41d186ee2a
Fix debian interactive prompting
For unknown reasons, `env DEBIAN_FRONTEND=noninteractive` was not present
in the `$SUDO` value for debian VMs.  Since it's only really needed in
one place, simply hard-code it there directly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-29 14:52:36 -04:00
Chris Evich a9644b1756
Merge pull request #267 from cevich/c_storage_packages
Add packages needed for c/storage CI
2023-03-21 10:30:55 -04:00
Chris Evich 88db010688
Add packages needed for c/storage CI
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-20 11:41:18 -04:00
Chris Evich da9f7b7e6a
Bump automation library version
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-20 11:10:26 -04:00
Chris Evich 8bdea25772
Merge pull request #266 from cevich/add_debian_package
Add libostree-dev package for c/image CI
2023-03-15 10:54:39 -04:00
Chris Evich 1e8cd256cf
Replace pytoml with toml & tomli
The former has been deprecated and removed from SID

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-14 16:45:32 -04:00
Chris Evich 97c8f569ac
Add libostree-dev package for c/image CI
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-14 14:55:16 -04:00
Chris Evich 5aa15c19ef
Merge pull request #265 from cevich/fix_ccia
[CI:TOOLING] Fix CCIA container entrypoint
2023-03-13 12:00:12 -04:00
Chris Evich 8d62c93de1
[CI:TOOLING] Fix CCIA container entrypoint
Apparently, it's not possible to reference an env. var. in the
entrypoint.  Oops.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-13 11:09:23 -04:00
Chris Evich 27912931c0
Merge pull request #264 from cevich/fix_pr_image_id
[CI:DOCS] Fix attempted use of old tag format
2023-03-10 15:32:03 -05:00
Chris Evich f257892262
Merge pull request #263 from cevich/remove_cache
Remove non-functional container build cache
2023-03-10 15:30:24 -05:00
Chris Evich 5db1d862ee
Fix attempted use of old tag format
The c<$CIRRUS_BUILD_ID> format tags have been replaced by the contents
of the IMG_SFX file.  However, this workflow does not clone the
repository and so cannot access the file.  Simplify it to just use the
latest tag.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-10 15:28:21 -05:00
Chris Evich 10256e8339
Remove non-functional container build cache
This was only ever half-implemented, and attempts to complete it were all
abandoned.  Fully remove all traces of the package cache.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-10 11:57:02 -05:00
Chris Evich ce7a6ed472
Merge pull request #262 from cevich/bench_stuff
[CI:TOOLING] Add bench_stuff container image & system test
2023-03-10 11:28:54 -05:00
Chris Evich be8a1ffaa2
Add bench_stuff container image & system test
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-09 16:29:37 -05:00
Chris Evich ad78e8e522
Resolve ccia container build TODO
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-09 15:21:15 -05:00
Chris Evich 3e1fa9e870
Minor: Fix CCIA container image
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-09 12:20:35 -05:00
Chris Evich 0e3e3bfff9
Merge pull request #261 from cevich/update_packages
Update Images for passt-0^20230227.gc538ee8-1.fc37
2023-03-07 16:18:40 -05:00
Chris Evich 51155cbc3a
Update Images for passt-0^20230227.gc538ee8-1.fc37
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-07 14:25:36 -05:00
Chris Evich a6c6e3c5f5
Minor: Comment update
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-07 14:25:23 -05:00
Chris Evich 9d6edeb8ba
Merge pull request #260 from cevich/download_latest_docker
Automatically select docker download Debian repo.
2023-03-02 15:15:17 -05:00
Chris Evich 19cb21bf81
Automatically select docker download Debian repo.
When Debian CI VM Images were initially implemented, there was no Docker
repository for the Debian SID release, so the latest release code-name
was hard-coded.  The situation has changed, and there is now a Docker
repository for bookworm.  Remove the hard-coded release code-name and
replace it with a name-lookup from `/etc/os-release` (currently
`bookworm`).

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-02 13:37:04 -05:00
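The lookup reduces to reading `VERSION_CODENAME` from `/etc/os-release`; a sketch against a scratch file (the apt-source line shown is the standard Docker repo format, not quoted from this repo's scripts):

```shell
#!/bin/sh
# Extract VERSION_CODENAME from an os-release style file.
codename_of() {
    sed -n 's/^VERSION_CODENAME=//p' "$1"
}

# Scratch stand-in for /etc/os-release:
f=$(mktemp)
printf 'ID=debian\nVERSION_CODENAME=bookworm\n' > "$f"
echo "deb https://download.docker.com/linux/debian $(codename_of "$f") stable"
rm -f "$f"
```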
Chris Evich 9b03e0960b
Fix missing AWS EC2 import images
Previously, import-image AMI's were not being properly time-stamped on
use.  This resulted in their being pruned, leading to an inability to
build new images.  Fix this by employing a similar `.cirrus.star` and
`Makefile` mechanism as `IMG_SFX`.  Update image-import documentation
accordingly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-02 13:37:04 -05:00
Chris Evich 777d1a6921
Merge pull request #259 from cevich/fix_not_pruning_amis
[CI:TOOLING] Fix not pruning amis
2023-03-01 11:34:30 -05:00
Chris Evich 18ad66a3d0
Enable pruning of EC2 AMIs and Snapshots
Due to either a misunderstanding or a recent change, AMIs and snapshots
are not automatically removed after their deprecation date.  Update the
image-prune container to search for and handle deprecated EC2 images.

Fixes: #257

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-01 10:42:27 -05:00
Chris Evich 618aae5edd
Fix perpetually re-deprecating AMIs
Previously, while searching through AMIs to label and/or mark for
deprecation, the imgobsolete container never checked whether the AMI was
already deprecated.  Since the deprecation date is based on the current
date/time, this resulted in deprecated AMIs never actually being
deprecated.  Fix this by skipping AMIs that already have a deprecation
date set.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-01 10:42:26 -05:00
79 changed files with 1803 additions and 1623 deletions


@@ -10,7 +10,10 @@ env:
     CIRRUS_CLONE_DEPTH: 50
     # Version of packer to use when building images
     PACKER_VERSION: &PACKER_VERSION "1.8.3"
+    # Registry/namespace prefix where container images live
+    REGPFX: "quay.io/libpod"
     #IMG_SFX = <See IMG_SFX file and .cirrus.star script>
+    #IMPORT_IMG_SFX = <See IMPORT_IMG_SFX file and .cirrus.star script>

 gcp_credentials: ENCRYPTED[823fdbc2fee3c27fa054ba1e9cfca084829b5e71572f1703a28e0746b1a924ee5860193f931adce197d40bf89e7027fe]
@@ -44,7 +47,7 @@ image_builder_task:
     # Packer needs time to clean up partially created VM images
     auto_cancellation: $CI != "true"
     stateful: true
-    timeout_in: 40m
+    timeout_in: 50m
     container:
         dockerfile: "image_builder/Containerfile"
         cpu: 2
@@ -68,7 +71,7 @@ container_images_task: &container_images
     skip: *ci_docs_tooling
     depends_on:
         - image_builder
-    timeout_in: 30m
+    timeout_in: &cntr_timeout 40m
     gce_instance: &ibi_vm
         image_project: "libpod-218412"
         # Trust whatever was built most recently is functional
@@ -80,7 +83,7 @@ container_images_task: &container_images
           env:
               TARGET_NAME: 'fedora_podman'
               # Add a 'c' to the tag for consistency with VM Image names
-              DEST_FQIN: &fqin 'quay.io/libpod/${TARGET_NAME}:c$IMG_SFX'
+              DEST_FQIN: &fqin '${REGPFX}/${TARGET_NAME}:c$IMG_SFX'
         - name: *name
           env:
               TARGET_NAME: 'prior-fedora_podman'
@@ -96,39 +99,59 @@ container_images_task: &container_images
         #     TARGET_NAME: 'debian'
         #     DEST_FQIN: *fqin
     env: &image_env
-        # For quay.io/libpod namespace
-        REG_USERNAME: ENCRYPTED[de755aef351c501ee480231c24eae25b15e2b2a2b7c629f477c1d427fc5269e360bb358a53bd8914605bae588e99b52a]
-        REG_PASSWORD: ENCRYPTED[52268944bb0d6642c33efb1c5d7fb82d0c40f9e6988448de35827f9be2cc547c1383db13e8b21516dbd7a0a69a7ae536]
+        # For $REGPFX namespace, select FQINs only.
+        REG_USERNAME: ENCRYPTED[df4efe530b9a6a731cfea19233e395a5206d24dfac25e84329de035393d191e94ead8c39b373a0391fa025cab15470f8]
+        REG_PASSWORD: ENCRYPTED[255ec05057707c20237a6c7d15b213422779c534f74fe019b8ca565f635dba0e11035a034e533a6f39e146e7435d87b5]
     script: ci/make_container_images.sh;
     package_cache: &package_cache
-        folder: "/tmp/automation_images_tmp/.cache/**"
+        folder: "/var/tmp/automation_images_tmp/.cache/**"
         fingerprint_key: "${TARGET_NAME}-cache-version-1"

+# Most other tooling images depend on this one, build it first so the others
+# may build in parallel.
+imgts_build_task:
+    alias: imgts_build
+    name: 'Build IMGTS image'
+    only_if: *is_pr
+    skip: &ci_docs $CIRRUS_CHANGE_TITLE =~ '.*CI:DOCS.*'
+    depends_on:
+        - image_builder
+    timeout_in: *cntr_timeout
+    gce_instance: *ibi_vm
+    env: *image_env
+    script: |
+        export TARGET_NAME=imgts
+        export DEST_FQIN="${REGPFX}/${TARGET_NAME}:c${IMG_SFX}";
+        ci/make_container_images.sh;

 tooling_images_task:
     alias: tooling_images
-    name: 'Build Tooling images'
-    only_if: $CIRRUS_CRON == ''
-    skip: &ci_docs $CIRRUS_CHANGE_TITLE =~ '.*CI:DOCS.*'
+    name: 'Build Tooling image ${TARGET_NAME}'
+    only_if: *is_pr
+    skip: *ci_docs
     depends_on:
-        - validate
-    # TODO: This should not take this long, but it can :(
-    timeout_in: 40m
+        - imgts_build
+    timeout_in: *cntr_timeout
     gce_instance: *ibi_vm
-    env:
-        <<: *image_env
-        TARGET_NAMES: imgts imgobsolete imgprune gcsupld get_ci_vm orphanvms ccia
-        PUSH_LATEST: 1  # scripts force to 0 if $CIRRUS_PR
+    env: *image_env
+    matrix:
+        - env:
+              TARGET_NAME: imgobsolete
+        - env:
+              TARGET_NAME: imgprune
+        - env:
+              TARGET_NAME: gcsupld
+        - env:
+              TARGET_NAME: get_ci_vm
+        - env:
+              TARGET_NAME: orphanvms
+        - env:
+              TARGET_NAME: ccia
     script: |
-        for TARGET_NAME in $TARGET_NAMES; do
-            export TARGET_NAME
-            export DEST_FQIN="quay.io/libpod/${TARGET_NAME}:c${IMG_SFX}";
-            ci/make_container_images.sh;
-        done
-    package_cache:
-        folder: "/tmp/automation_images_tmp/.cache/**"
-        fingerprint_key: "tooling-cache-version-1"
+        export DEST_FQIN="${REGPFX}/${TARGET_NAME}:c${IMG_SFX}";
+        ci/make_container_images.sh;
base_images_task:
    name: "Build VM Base-images"

@@ -141,20 +164,21 @@ base_images_task:
     # Packer needs time to clean up partially created VM images
     auto_cancellation: $CI != "true"
     stateful: true
-    timeout_in: 45m
-    # Cannot use a container for this task, virt required for fedora image conversion
-    gce_instance:
-        <<: *ibi_vm
-        # Nested-virt is required, need Intel Haswell or better CPU
-        enable_nested_virtualization: true
-        type: "n2-standard-2"
-        scopes: ["cloud-platform"]
+    timeout_in: 70m
+    gce_instance: *ibi_vm
     matrix:
         - &base_image
           name: "${PACKER_BUILDS} Base Image"
+          gce_instance: &nested_virt_vm
+              <<: *ibi_vm
+              # Nested-virt is required, need Intel Haswell or better CPU
+              enable_nested_virtualization: true
+              type: "n2-standard-16"
+              scopes: ["cloud-platform"]
           env:
               PACKER_BUILDS: "fedora"
         - <<: *base_image
+          gce_instance: *nested_virt_vm
           env:
               PACKER_BUILDS: "prior-fedora"
         - <<: *base_image
@@ -169,6 +193,8 @@ base_images_task:
     env:
         GAC_JSON: &gac_json ENCRYPTED[7fba7fb26ab568ae39f799ab58a476123206576b0135b3d1019117c6d682391370c801e149f29324ff4b50133012aed9]
         AWS_INI: &aws_ini ENCRYPTED[4cd69097cd29a9899e51acf3bbacceeb83cb5c907d272ca1e2a8ccd515b03f2368a0680870c0d120fc32bc578bb0a930]
+        AWS_MAX_ATTEMPTS: 300
+        AWS_TIMEOUT_SECONDS: 3000
     script: "ci/make.sh base_images"
     manifest_artifacts:
         path: base_images/manifest.json
@@ -186,7 +212,7 @@ cache_images_task:
     # Packer needs time to clean up partially created VM images
     auto_cancellation: $CI != "true"
     stateful: true
-    timeout_in: 45m
+    timeout_in: 90m
     container:
         dockerfile: "image_builder/Containerfile"
         cpu: 2
@@ -203,10 +229,10 @@ cache_images_task:
               PACKER_BUILDS: "prior-fedora"
         - <<: *cache_image
           env:
-              PACKER_BUILDS: "fedora-netavark"
+              PACKER_BUILDS: "rawhide"
         - <<: *cache_image
           env:
-              PACKER_BUILDS: "fedora-podman-py"
+              PACKER_BUILDS: "fedora-netavark"
         - <<: *cache_image
           env:
               PACKER_BUILDS: "fedora-aws"
@@ -225,6 +251,8 @@ cache_images_task:
     env:
         GAC_JSON: *gac_json
         AWS_INI: *aws_ini
+        AWS_MAX_ATTEMPTS: 300
+        AWS_TIMEOUT_SECONDS: 3000
     script: "ci/make.sh cache_images"
     manifest_artifacts:
         path: cache_images/manifest.json
@@ -243,7 +271,6 @@ win_images_task:
     # Packer needs time to clean up partially created VM images
     auto_cancellation: $CI != "true"
     stateful: true
-    timeout_in: 45m
     # Packer WinRM communicator is not reliable on container tasks
     gce_instance:
         <<: *ibi_vm
@ -256,18 +283,39 @@ win_images_task:
path: win_images/manifest.json path: win_images/manifest.json
type: application/json type: application/json
# These targets are intended for humans, make sure they build and function on a basic level
test_debug_task:
name: "Test ${TARGET} make target"
alias: test_debug
only_if: *is_pr
skip: *ci_docs
depends_on:
- validate
gce_instance: *nested_virt_vm
matrix:
- env:
TARGET: ci_debug
- env:
TARGET: image_builder_debug
env:
HOME: "/root"
GAC_FILEPATH: "/dev/null"
AWS_SHARED_CREDENTIALS_FILE: "/dev/null"
DBG_TEST_CMD: "true"
script: make ${TARGET}
# Test metadata addition to images (built or not) to ensure container functions # Test metadata addition to images (built or not) to ensure container functions
# TODO: Requires manually examining the output log to confirm operation.
test_imgts_task: &imgts test_imgts_task: &imgts
name: "Test image timestamp/metadata updates" name: "Test image timestamp/metadata updates"
alias: test_imgts alias: test_imgts
only_if: $CIRRUS_CRON == '' only_if: *is_pr
skip: *ci_docs skip: *ci_docs
depends_on: depends_on: &imgts_deps
- tooling_images - base_images
- cache_images
- imgts_build
container: container:
image: 'quay.io/libpod/imgts:c$IMG_SFX' image: '${REGPFX}/imgts:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: &imgts_env env: &imgts_env
@ -289,12 +337,14 @@ test_imgts_task: &imgts
fedora-c${IMG_SFX} fedora-c${IMG_SFX}
prior-fedora-c${IMG_SFX} prior-fedora-c${IMG_SFX}
fedora-netavark-c${IMG_SFX} fedora-netavark-c${IMG_SFX}
fedora-podman-py-c${IMG_SFX} rawhide-c${IMG_SFX}
debian-c${IMG_SFX} debian-c${IMG_SFX}
build-push-c${IMG_SFX} build-push-c${IMG_SFX}
EC2IMGNAMES: | EC2IMGNAMES: |
fedora-aws-i${IMPORT_IMG_SFX}
fedora-aws-b${IMG_SFX} fedora-aws-b${IMG_SFX}
fedora-aws-c${IMG_SFX} fedora-aws-c${IMG_SFX}
fedora-aws-arm64-i${IMPORT_IMG_SFX}
fedora-aws-arm64-b${IMG_SFX} fedora-aws-arm64-b${IMG_SFX}
fedora-podman-aws-arm64-c${IMG_SFX} fedora-podman-aws-arm64-c${IMG_SFX}
fedora-netavark-aws-arm64-c${IMG_SFX} fedora-netavark-aws-arm64-c${IMG_SFX}
@ -309,9 +359,7 @@ imgts_task:
alias: imgts alias: imgts
only_if: *is_pr only_if: *is_pr
skip: *ci_docs_tooling skip: *ci_docs_tooling
depends_on: depends_on: *imgts_deps
- base_images
- cache_images
env: env:
<<: *imgts_env <<: *imgts_env
DRY_RUN: 0 DRY_RUN: 0
@ -328,13 +376,13 @@ imgts_task:
test_imgobsolete_task: &lifecycle_test test_imgobsolete_task: &lifecycle_test
name: "Test obsolete image detection" name: "Test obsolete image detection"
alias: test_imgobsolete alias: test_imgobsolete
only_if: &only_prs $CIRRUS_PR != '' only_if: *is_pr
skip: *ci_docs skip: *ci_docs
depends_on: depends_on:
- tooling_images - tooling_images
- imgts - imgts
container: container:
image: 'quay.io/libpod/imgobsolete:c$IMG_SFX' image: '${REGPFX}/imgobsolete:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: &lifecycle_env env: &lifecycle_env
@ -353,9 +401,8 @@ test_orphanvms_task:
<<: *lifecycle_test <<: *lifecycle_test
name: "Test orphan VMs detection" name: "Test orphan VMs detection"
alias: test_orphanvms alias: test_orphanvms
skip: *ci_docs
container: container:
image: 'quay.io/libpod/orphanvms:c$IMG_SFX' image: '$REGPFX/orphanvms:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: env:
@ -364,6 +411,7 @@ test_orphanvms_task:
GCPPROJECT: 'libpod-218412' GCPPROJECT: 'libpod-218412'
GCPPROJECTS: 'libpod-218412' # value for testing, otherwise see gcpprojects.txt GCPPROJECTS: 'libpod-218412' # value for testing, otherwise see gcpprojects.txt
AWSINI: ENCRYPTED[1ab89ff7bc1515dc964efe7ef6e094e01164ba8dd2e11c9a01259c6af3b3968ab841dbe473fe4ab5b573f2f5fa3653e8] AWSINI: ENCRYPTED[1ab89ff7bc1515dc964efe7ef6e094e01164ba8dd2e11c9a01259c6af3b3968ab841dbe473fe4ab5b573f2f5fa3653e8]
DRY_RUN: 1
EVERYTHING: 1 # Alter age-limit from 3-days -> 3 seconds for a test-run. EVERYTHING: 1 # Alter age-limit from 3-days -> 3 seconds for a test-run.
script: /usr/local/bin/entrypoint.sh script: /usr/local/bin/entrypoint.sh
@ -372,24 +420,23 @@ test_imgprune_task:
<<: *lifecycle_test <<: *lifecycle_test
name: "Test obsolete image removal" name: "Test obsolete image removal"
alias: test_imgprune alias: test_imgprune
skip: *ci_docs
depends_on: depends_on:
- tooling_images - tooling_images
- imgts - imgts
container: container:
image: 'quay.io/libpod/imgprune:c$IMG_SFX' image: '$REGPFX/imgprune:c$IMG_SFX'
test_gcsupld_task: test_gcsupld_task:
name: "Test uploading to GCS" name: "Test uploading to GCS"
alias: test_gcsupld alias: test_gcsupld
only_if: *only_prs only_if: *is_pr
skip: *ci_docs skip: *ci_docs
depends_on: depends_on:
- tooling_images - tooling_images
- imgts - imgts
container: container:
image: 'quay.io/libpod/gcsupld:c$IMG_SFX' image: '$REGPFX/gcsupld:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: env:
@ -402,13 +449,13 @@ test_gcsupld_task:
test_get_ci_vm_task: test_get_ci_vm_task:
name: "Test get_ci_vm entrypoint" name: "Test get_ci_vm entrypoint"
alias: test_get_ci_vm alias: test_get_ci_vm
only_if: *only_prs only_if: *is_pr
skip: *ci_docs skip: *ci_docs
depends_on: depends_on:
- tooling_images - tooling_images
- imgts - imgts
container: container:
image: 'quay.io/libpod/get_ci_vm:c$IMG_SFX' image: '$REGPFX/get_ci_vm:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: env:
@ -419,44 +466,59 @@ test_get_ci_vm_task:
test_ccia_task: test_ccia_task:
name: "Test ccia entrypoint" name: "Test ccia entrypoint"
alias: test_ccia alias: test_ccia
only_if: *only_prs only_if: *is_pr
skip: *ci_docs skip: *ci_docs
depends_on: depends_on:
- tooling_images - tooling_images
container: container:
image: 'quay.io/libpod/ccia:c$IMG_SFX' image: '$REGPFX/ccia:c$IMG_SFX'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env:
TESTING_ENTRYPOINT: true
CCIABIN: /usr/share/automation/bin/cirrus-ci_artifacts
test_script: ./ccia/test.sh test_script: ./ccia/test.sh
test_build-push_task: test_build-push_task:
name: "Test build-push VM functions" name: "Test build-push VM functions"
alias: test_build-push alias: test_build-push
only_if: *only_prs only_if: |
$CIRRUS_PR != '' &&
$CIRRUS_PR_LABELS !=~ ".*no_build-push.*"
skip: *ci_docs_tooling skip: *ci_docs_tooling
depends_on: depends_on:
- cache_images - cache_images
gce_instance: gce_instance:
image_project: "libpod-218412" image_project: "libpod-218412"
image_family: 'build-push-cache' image_name: build-push-c${IMG_SFX}
zone: "us-central1-a" zone: "us-central1-a"
disk: 200 disk: 200
# More muscle to emulate multi-arch # More muscle to emulate multi-arch
type: "n2-standard-4" type: "n2-standard-4"
script: bash ./build-push/test.sh script: |
mkdir /tmp/context
echo -e "FROM scratch\nENV foo=bar\n" > /tmp/context/Containerfile
source /etc/automation_environment
A_DEBUG=1 build-push.sh --nopush --arches=amd64,arm64,s390x,ppc64le example.com/foo/bar /tmp/context
tag_latest_images_task:
alias: tag_latest_images
name: "Tag latest built container images."
only_if: |
$CIRRUS_CRON == '' &&
$CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH
skip: *ci_docs
gce_instance: *ibi_vm
env: *image_env
script: ci/tag_latest.sh
# N/B: "latest" image produced after PR-merge (branch-push) # N/B: "latest" image produced after PR-merge (branch-push)
cron_imgobsolete_task: &lifecycle_cron cron_imgobsolete_task: &lifecycle_cron
name: "Periodically mark old images obsolete" name: "Periodically mark old images obsolete"
alias: cron_imgobsolete alias: cron_imgobsolete
only_if: $CIRRUS_PR == '' && $CIRRUS_CRON != '' only_if: $CIRRUS_CRON == 'lifecycle'
container: container:
image: 'quay.io/libpod/imgobsolete:latest' image: '$REGPFX/imgobsolete:latest'
cpu: 2 cpu: 2
memory: '2G' memory: '2G'
env: env:
@ -472,7 +534,7 @@ cron_imgprune_task:
depends_on: depends_on:
- cron_imgobsolete - cron_imgobsolete
container: container:
image: 'quay.io/libpod/imgprune:latest' image: '$REGPFX/imgprune:latest'
success_task: success_task:
@ -486,6 +548,7 @@ success_task:
- base_images - base_images
- cache_images - cache_images
- win_images - win_images
- test_debug
- test_imgts - test_imgts
- imgts - imgts
- test_imgobsolete - test_imgobsolete
@ -495,6 +558,7 @@ success_task:
- cron_imgprune - cron_imgprune
- test_gcsupld - test_gcsupld
- test_get_ci_vm - test_get_ci_vm
- test_ccia
- test_build-push - test_build-push
container: container:
<<: *ci_container <<: *ci_container

.codespelldict Normal file

@ -0,0 +1,2 @@
IMGSFX,IMG-SFX->IMG_SFX
Dockerfile->Containerfile

.codespellignore Normal file

.codespellrc Normal file

@ -0,0 +1,4 @@
[codespell]
ignore-words = .codespellignore
dictionary = .codespelldict
quiet-level = 3

View File

@ -13,9 +13,9 @@ import sys
def msg(msg, newline=True): def msg(msg, newline=True):
"""Print msg to stderr with optional newline.""" """Print msg to stderr with optional newline."""
nl = '' nl = ""
if newline: if newline:
nl = '\n' nl = "\n"
sys.stderr.write(f"{msg}{nl}") sys.stderr.write(f"{msg}{nl}")
sys.stderr.flush() sys.stderr.flush()
@ -23,13 +23,13 @@ def msg(msg, newline=True):
def stage_sort(item): def stage_sort(item):
"""Return sorting-key for build-image-json item.""" """Return sorting-key for build-image-json item."""
if item["stage"] == "import": if item["stage"] == "import":
return str("0010"+item["name"]) return str("0010" + item["name"])
elif item["stage"] == "base": elif item["stage"] == "base":
return str("0020"+item["name"]) return str("0020" + item["name"])
elif item["stage"] == "cache": elif item["stage"] == "cache":
return str("0030"+item["name"]) return str("0030" + item["name"])
else: else:
return str("0100"+item["name"]) return str("0100" + item["name"])
if "GITHUB_ENV" not in os.environ: if "GITHUB_ENV" not in os.environ:
@ -40,46 +40,58 @@ github_workspace = os.environ.get("GITHUB_WORKSPACE", ".")
# File written by a previous workflow step # File written by a previous workflow step
with open(f"{github_workspace}/built_images.json") as bij: with open(f"{github_workspace}/built_images.json") as bij:
msg(f"Reading image build data from {bij.name}:") msg(f"Reading image build data from {bij.name}:")
data = [] data = []
for build in json.load(bij): # list of build data maps for build in json.load(bij): # list of build data maps
stage = build.get("stage", False) stage = build.get("stage", False)
name = build.get("name", False) name = build.get("name", False)
sfx = build.get("sfx", False) sfx = build.get("sfx", False)
task = build.get("task", False) task = build.get("task", False)
if bool(stage) and bool(name) and bool(sfx) and bool(task): if bool(stage) and bool(name) and bool(sfx) and bool(task):
image_suffix = f'{stage[0]}{sfx}' image_suffix = f"{stage[0]}{sfx}"
data.append(dict(stage=stage, name=name, data.append(
image_suffix=image_suffix, task=task)) dict(stage=stage, name=name, image_suffix=image_suffix, task=task)
if cirrus_ci_build_id is None: )
cirrus_ci_build_id = sfx if cirrus_ci_build_id is None:
msg(f"Including '{stage}' stage build '{name}' for task '{task}'.") cirrus_ci_build_id = sfx
else: msg(f"Including '{stage}' stage build '{name}' for task '{task}'.")
msg(f"Skipping '{stage}' stage build '{name}' for task '{task}'.") else:
msg(f"Skipping '{stage}' stage build '{name}' for task '{task}'.")
url = 'https://cirrus-ci.com/task' url = "https://cirrus-ci.com/task"
lines = [] lines = []
data.sort(key=stage_sort) data.sort(key=stage_sort)
for item in data: for item in data:
lines.append('|*{0}*|[{1}]({2})|`{3}`|\n'.format(item['stage'], image_suffix = item["image_suffix"]
item['name'], '{0}/{1}'.format(url, item['task']), # Base-images should never actually be used, but it may be helpful
item['image_suffix'])) # to have them in the list in case some debugging is needed.
if item["stage"] != "cache":
image_suffix = "do-not-use"
lines.append(
"|*{0}*|[{1}]({2})|`{3}`|\n".format(
item["stage"],
item["name"],
"{0}/{1}".format(url, item["task"]),
image_suffix,
)
)
# This is the mechanism required to set a multi-line env. var. # This is the mechanism required to set a multi-line env. var.
# value to be consumed by future workflow steps. # value to be consumed by future workflow steps.
with open(os.environ["GITHUB_ENV"], "a") as ghenv, \ with open(os.environ["GITHUB_ENV"], "a") as ghenv, open(
open(f'{github_workspace}/images.md', "w") as mdfile, \ f"{github_workspace}/images.md", "w"
open(f'{github_workspace}/images.json', "w") as images_json: ) as mdfile, open(f"{github_workspace}/images.json", "w") as images_json:
env_header = ("IMAGE_TABLE<<EOF\n") env_header = "IMAGE_TABLE<<EOF\n"
header = (f"[Cirrus CI build](https://cirrus-ci.com/build/{cirrus_ci_build_id})" header = (
" successful. [Found built image names and" f"[Cirrus CI build](https://cirrus-ci.com/build/{cirrus_ci_build_id})"
f' IDs](https://github.com/{os.environ["GITHUB_REPOSITORY"]}' " successful. [Found built image names and"
f'/actions/runs/{os.environ["GITHUB_RUN_ID"]}):\n' f' IDs](https://github.com/{os.environ["GITHUB_REPOSITORY"]}'
"\n") f'/actions/runs/{os.environ["GITHUB_RUN_ID"]}):\n'
c_head = ("|*Stage*|**Image Name**|`IMAGE_SUFFIX`|\n" "\n"
"|---|---|---|\n") )
c_head = "|*Stage*|**Image Name**|`IMAGE_SUFFIX`|\n" "|---|---|---|\n"
# Different output destinations get slightly different content # Different output destinations get slightly different content
for dst in [ghenv, mdfile, sys.stderr]: for dst in [ghenv, mdfile, sys.stderr]:
if dst == ghenv: if dst == ghenv:
@ -92,5 +104,7 @@ with open(os.environ["GITHUB_ENV"], "a") as ghenv, \
dst.write("EOF\n\n") dst.write("EOF\n\n")
json.dump(data, images_json, indent=4, sort_keys=True) json.dump(data, images_json, indent=4, sort_keys=True)
msg(f"Wrote github env file '{ghenv.name}', md-file '{mdfile.name}'," msg(
f" and json-file '{images_json.name}'") f"Wrote github env file '{ghenv.name}', md-file '{mdfile.name}',"
f" and json-file '{images_json.name}'"
)

View File

@ -1,20 +1,12 @@
/* /*
Renovate is a service similar to GitHub Dependabot, but with Renovate is a service similar to GitHub Dependabot.
(fantastically) more configuration options. So many options
in fact, if you're new I recommend glossing over this cheat-sheet
prior to the official documentation:
https://www.augmentedmind.de/2021/07/25/renovate-bot-cheat-sheet Please Manually validate any changes to this file with:
Configuration Update/Change Procedure:
1. Make changes
2. Manually validate changes (from repo-root):
podman run -it \ podman run -it \
-v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \ -v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \
docker.io/renovate/renovate:latest \ ghcr.io/renovatebot/renovate:latest \
renovate-config-validator renovate-config-validator
3. Commit.
Configuration Reference: Configuration Reference:
https://docs.renovatebot.com/configuration-options/ https://docs.renovatebot.com/configuration-options/
@ -22,11 +14,9 @@
Monitoring Dashboard: Monitoring Dashboard:
https://app.renovatebot.com/dashboard#github/containers https://app.renovatebot.com/dashboard#github/containers
Note: The Renovate bot will create/manage it's business on Note: The Renovate bot will create/manage its business on
branches named 'renovate/*'. Otherwise, and by branches named 'renovate/*'. The only copy of this
default, the only the copy of this file that matters file that matters is the one on the `main` branch.
is the one on the `main` branch. No other branches
will be monitored or touched in any way.
*/ */
{ {
@ -44,12 +34,45 @@
// This repo builds images, don't try to manage them. // This repo builds images, don't try to manage them.
"docker:disable" "docker:disable"
], ],
/*************************************************
*** Repository-specific configuration options ***
*************************************************/
// Don't leave dep. update. PRs "hanging", assign them to people.
"assignees": ["cevich"],
// Don't build CI VM images for dep. update PRs // Don't build CI VM images for dep. update PRs (by default)
commitMessagePrefix: "[CI:DOCS]", "commitMessagePrefix": "[CI:DOCS]",
"customManagers": [
// Manage updates to the common automation library version
{
"customType": "regex",
"fileMatch": "^lib.sh$",
"matchStrings": ["INSTALL_AUTOMATION_VERSION=\"(?<currentValue>.+)\""],
"depNameTemplate": "containers/automation",
"datasourceTemplate": "github-tags",
"versioningTemplate": "semver-coerced",
// "v" included in tag, but should not be used in lib.sh
"extractVersionTemplate": "^v(?<version>.+)$"
}
],
// N/B: LAST MATCHING RULE WINS, match statements are ANDed together.
"packageRules": [
// When automation library version updated, full CI VM image build
// is needed, along with some other overrides not required in
// (for example) github-action updates.
{
"matchManagers": ["custom.regex"],
"matchFileNames": ["lib.sh"],
"schedule": ["at any time"],
"commitMessagePrefix": null,
"draftPR": true,
"prBodyNotes": [
"\
{{#if isMajor}}\
:warning: Changes are **likely** required for build-scripts and/or downstream CI VM \
image users. Please check very carefully. :warning:\
{{else}}\
:warning: Changes may be required for build-scripts and/or downstream CI VM \
image users. Please double-check. :warning:\
{{/if}}"
]
}
]
} }
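The custom regex manager above watches a version assignment in `lib.sh`. A hypothetical sketch of the line it matches (the version value is illustrative): Renovate captures the quoted string as `currentValue`, compares it against `containers/automation` git tags, and `extractVersionTemplate` strips the leading `v` from tags such as `v5.0.0` so the unprefixed form lands in `lib.sh`.

```shell
# Hypothetical lib.sh fragment; the matchStrings regex above captures "5.0.0"
# as currentValue for the containers/automation github-tags datasource.
INSTALL_AUTOMATION_VERSION="5.0.0"
echo "$INSTALL_AUTOMATION_VERSION"
```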

View File

@ -13,5 +13,10 @@ on:
jobs: jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows # Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
call_cron_failures: call_cron_failures:
uses: containers/buildah/.github/workflows/check_cirrus_cron.yml@main uses: containers/podman/.github/workflows/check_cirrus_cron.yml@main
secrets: inherit secrets:
SECRET_CIRRUS_API_KEY: ${{secrets.SECRET_CIRRUS_API_KEY}}
ACTION_MAIL_SERVER: ${{secrets.ACTION_MAIL_SERVER}}
ACTION_MAIL_USERNAME: ${{secrets.ACTION_MAIL_USERNAME}}
ACTION_MAIL_PASSWORD: ${{secrets.ACTION_MAIL_PASSWORD}}
ACTION_MAIL_SENDER: ${{secrets.ACTION_MAIL_SENDER}}

View File

@ -25,12 +25,12 @@ jobs:
orphan_vms: orphan_vms:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v4
with: with:
persist-credentials: false persist-credentials: false
# Avoid duplicating cron-fail_addrs.csv # Avoid duplicating cron-fail_addrs.csv
- uses: actions/checkout@v3 - uses: actions/checkout@v4
with: with:
repository: containers/podman repository: containers/podman
path: '_podman' path: '_podman'
@ -44,14 +44,14 @@ jobs:
GCPPROJECT: 'libpod-218412' GCPPROJECT: 'libpod-218412'
run: | run: |
export GCPNAME GCPJSON AWSINI GCPPROJECT export GCPNAME GCPJSON AWSINI GCPPROJECT
export GCPPROJECTS=$(egrep -vx '^#+.*$' $GITHUB_WORKSPACE/gcpprojects.txt | tr -s '[:space:]' ' ') export GCPPROJECTS=$(grep -E -vx '^#+.*$' $GITHUB_WORKSPACE/gcpprojects.txt | tr -s '[:space:]' ' ')
podman run --rm \ podman run --rm \
-e GCPNAME -e GCPJSON -e AWSINI -e GCPPROJECT -e GCPPROJECTS \ -e GCPNAME -e GCPJSON -e AWSINI -e GCPPROJECT -e GCPPROJECTS \
quay.io/libpod/orphanvms:latest \ quay.io/libpod/orphanvms:latest \
> /tmp/orphanvms_output.txt > /tmp/orphanvms_output.txt
- if: always() - if: always()
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: orphanvms_output name: orphanvms_output
path: /tmp/orphanvms_output.txt path: /tmp/orphanvms_output.txt
@ -59,7 +59,7 @@ jobs:
- name: Count number of orphaned VMs - name: Count number of orphaned VMs
id: orphans id: orphans
run: | run: |
count=$(egrep -x '\* VM .+' /tmp/orphanvms_output.txt | wc -l) count=$(grep -E -x '\* VM .+' /tmp/orphanvms_output.txt | wc -l)
# Assist with debugging job (step-outputs are otherwise hidden) # Assist with debugging job (step-outputs are otherwise hidden)
printf "Orphan VMs count:%d\n" $count printf "Orphan VMs count:%d\n" $count
if [[ "$count" =~ ^[0-9]+$ ]]; then if [[ "$count" =~ ^[0-9]+$ ]]; then
@ -86,20 +86,20 @@ jobs:
- if: steps.orphans.outputs.count > 0 - if: steps.orphans.outputs.count > 0
name: Send orphan notification e-mail name: Send orphan notification e-mail
# Ref: https://github.com/dawidd6/action-send-mail # Ref: https://github.com/dawidd6/action-send-mail
uses: dawidd6/action-send-mail@v3.7.1 uses: dawidd6/action-send-mail@v3.12.0
with: with:
server_address: ${{ secrets.ACTION_MAIL_SERVER }} server_address: ${{ secrets.ACTION_MAIL_SERVER }}
server_port: 465 server_port: 465
username: ${{ secrets.ACTION_MAIL_USERNAME }} username: ${{ secrets.ACTION_MAIL_USERNAME }}
password: ${{ secrets.ACTION_MAIL_PASSWORD }} password: ${{ secrets.ACTION_MAIL_PASSWORD }}
subject: Orphaned GCP VMs subject: Orphaned CI VMs detected
to: ${{env.RCPTCSV}} to: ${{env.RCPTCSV}}
from: ${{ secrets.ACTION_MAIL_SENDER }} from: ${{ secrets.ACTION_MAIL_SENDER }}
body: file:///tmp/email_body.txt body: file:///tmp/email_body.txt
- if: failure() - if: failure()
name: Send error notification e-mail name: Send error notification e-mail
uses: dawidd6/action-send-mail@v3.7.1 uses: dawidd6/action-send-mail@v3.12.0
with: with:
server_address: ${{secrets.ACTION_MAIL_SERVER}} server_address: ${{secrets.ACTION_MAIL_SERVER}}
server_port: 465 server_port: 465
@ -108,4 +108,4 @@ jobs:
subject: Github workflow error on ${{github.repository}} subject: Github workflow error on ${{github.repository}}
to: ${{env.RCPTCSV}} to: ${{env.RCPTCSV}}
from: ${{secrets.ACTION_MAIL_SENDER}} from: ${{secrets.ACTION_MAIL_SENDER}}
body: "Job failed: https://github.com/${{github.repository}}/runs/${{github.job}}?check_suite_focus=true" body: "Job failed: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}"

View File

@ -58,7 +58,7 @@ jobs:
fi fi
- if: steps.retro.outputs.is_pr == 'true' - if: steps.retro.outputs.is_pr == 'true'
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
persist-credentials: false persist-credentials: false
@ -71,11 +71,7 @@ jobs:
# fall back to using the latest built CCIA image. # fall back to using the latest built CCIA image.
run: | run: |
PODMAN="podman run --rm -v $GITHUB_WORKSPACE:/data -w /data" PODMAN="podman run --rm -v $GITHUB_WORKSPACE:/data -w /data"
PR_CCIA="quay.io/libpod/ccia:c${{ steps.retro.outputs.bid }}" $PODMAN quay.io/libpod/ccia:latest --verbose "${{ steps.retro.outputs.bid }}" ".*/manifest.json"
UP_CCIA="quay.io/libpod/ccia:latest"
declare -a ARGS
ARGS=("--verbose" "${{ steps.retro.outputs.bid }}" ".*/manifest.json")
$PODMAN $PR_CCIA "${ARGS[@]}" || $PODMAN $UP_CCIA "${ARGS[@]}"
- if: steps.retro.outputs.is_pr == 'true' - if: steps.retro.outputs.is_pr == 'true'
name: Count the number of manifest.json files downloaded name: Count the number of manifest.json files downloaded
@ -136,12 +132,10 @@ jobs:
- if: steps.manifests.outputs.count > 0 - if: steps.manifests.outputs.count > 0
name: Post PR comment with image name/id table name: Post PR comment with image name/id table
uses: jungwinter/comment@v1.1.0 uses: thollander/actions-comment-pull-request@v3
with: with:
issue_number: '${{ steps.retro.outputs.prn }}' pr-number: '${{ steps.retro.outputs.prn }}'
type: 'create' message: |
token: '${{ secrets.GITHUB_TOKEN }}'
body: |
${{ env.IMAGE_TABLE }} ${{ env.IMAGE_TABLE }}
# Ref: https://github.com/marketplace/actions/deploy-to-gist # Ref: https://github.com/marketplace/actions/deploy-to-gist

.gitignore vendored

@ -1,2 +1,3 @@
*/*.json */*.json
/.cache /.cache
.pre-commit-config.yaml

.pre-commit-hooks.yaml Normal file

@ -0,0 +1,20 @@
---
# Ref: https://pre-commit.com/#creating-new-hooks
- id: check-imgsfx
name: Check IMG_SFX for accidental reuse.
description: |
Every PR intended to produce CI VM or container images must update
the `IMG_SFX` file via `make IMG_SFX`. The exact value will be
validated against global suffix usage (encoded as tags on the
`imgts` container image). This pre-commit hook verifies on every
push, the IMG_SFX file's value has not been pushed previously.
It's intended as a simple/imperfect way to save developers time
by avoiding force-pushes that will most certainly fail validation.
entry: ./check-imgsfx.sh
language: system
exclude: '.*' # Not examining any specific file/dir/link
always_run: true # ignore no matching files
fail_fast: true
pass_filenames: false
stages: ["pre-push"]
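A downstream repository would consume this hook through its own `.pre-commit-config.yaml`; a hypothetical minimal example (the repo URL and `rev` are illustrative, not confirmed by this diff):

```yaml
# Hypothetical consumer config referencing the check-imgsfx hook defined above.
repos:
  - repo: https://github.com/containers/automation_images
    rev: main
    hooks:
      - id: check-imgsfx
```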

View File

@ -1 +1 @@
20230223t153813z-f37f36d12 20250812t173301z-f42f41d13

Makefile

@ -1,4 +1,7 @@
# Default is sh, which has scripting limitations
SHELL := $(shell command -v bash;)
##### Functions ##### ##### Functions #####
# Evaluates to $(1) if $(1) non-empty, otherwise evaluates to $(2) # Evaluates to $(1) if $(1) non-empty, otherwise evaluates to $(2)
@ -15,18 +18,20 @@ if_ci_else = $(if $(findstring true,$(CI)),$(1),$(2))
##### Important image release and source details ##### ##### Important image release and source details #####
export CENTOS_STREAM_RELEASE = 8 export CENTOS_STREAM_RELEASE = 9
export FEDORA_RELEASE = 37 # Warning: Beta Fedora releases are not supported. Verify EC2 AMI availability
export PRIOR_FEDORA_RELEASE = 36 # here: https://fedoraproject.org/cloud/download
export FEDORA_RELEASE = 42
export PRIOR_FEDORA_RELEASE = 41
# See import_images/README.md # This should always be one-greater than $FEDORA_RELEASE (assuming it's actually the latest)
export FEDORA_IMPORT_IMG_SFX = 1669819494 export RAWHIDE_RELEASE = 43
# Automation assumes the actual release number (after SID upgrade) # Automation assumes the actual release number (after SID upgrade)
# is always one-greater than the latest DEBIAN_BASE_FAMILY (GCE image). # is always one-greater than the latest DEBIAN_BASE_FAMILY (GCE image).
export DEBIAN_RELEASE = 12 export DEBIAN_RELEASE = 13
export DEBIAN_BASE_FAMILY = debian-11 export DEBIAN_BASE_FAMILY = debian-12
IMPORT_FORMAT = vhdx IMPORT_FORMAT = vhdx
@ -106,6 +111,12 @@ export PACKER_CACHE_DIR = $(call err_if_empty,_TEMPDIR)
# AWS CLI default, in case caller needs to override # AWS CLI default, in case caller needs to override
export AWS := aws --output json --region us-east-1 export AWS := aws --output json --region us-east-1
# Needed for container-image builds
GIT_HEAD = $(shell git rev-parse HEAD)
# Save some typing
_IMGTS_FQIN := quay.io/libpod/imgts:c$(_IMG_SFX)
##### Targets ##### ##### Targets #####
# N/B: The double-# after targets is gawk'd out as the target description # N/B: The double-# after targets is gawk'd out as the target description
@ -120,12 +131,39 @@ help: ## Default target, parses special in-line comments as documentation.
# There are length/character limitations (a-z, 0-9, -) in GCE for image # There are length/character limitations (a-z, 0-9, -) in GCE for image
# names and a max-length of 63. # names and a max-length of 63.
.PHONY: IMG_SFX .PHONY: IMG_SFX
IMG_SFX: ## Generate a new date-based image suffix, store in the file IMG_SFX IMG_SFX: timebomb-check ## Generate a new date-based image suffix, store in the file IMG_SFX
$(file >$@,$(shell date --utc +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE))) @echo "$$(date -u +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE))" > "$@"
@echo "$(file <IMG_SFX)" @cat IMG_SFX
# Prevent us from wasting CI time when we have expired timebombs
.PHONY: timebomb-check
timebomb-check:
@now=$$(date -u +%Y%m%d); \
found=; \
while read -r bomb; do \
when=$$(echo "$$bomb" | sed -E -e 's/^.*timebomb ([0-9]+).*/\1/'); \
if [ "$$when" -le "$$now" ]; then \
echo "$$bomb"; \
found=found; \
fi; \
done < <(git grep --line-number '^[ ]*timebomb '); \
if [[ -n "$$found" ]]; then \
echo ""; \
echo "****** FATAL: Please check/fix expired timebomb(s) ^^^^^^"; \
false; \
fi
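The recipe above greps tracked files for lines beginning with `timebomb` and fails once a timebomb's date has passed. A minimal sketch of the comparison it performs (the timebomb line and date below are illustrative):

```shell
# A timebomb line in a build script might look like (illustrative):
#   timebomb 20991231 "drop workaround once the upstream fix ships"
# The check extracts the date field and compares it to today's UTC date;
# a date on or before today would fail the build.
now=$(date -u +%Y%m%d)
bomb='timebomb 20991231 "drop workaround once the upstream fix ships"'
when=$(echo "$bomb" | sed -E -e 's/^.*timebomb ([0-9]+).*/\1/')
if [ "$when" -le "$now" ]; then
  echo "expired"
else
  echo "ok"
fi
```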
# Given the path to a file containing 'sha256:<image id>' return <image id>
# or throw error if empty.
define imageid
$(if $(file < $(1)),$(subst sha256:,,$(file < $(1))),$(error Container IID file $(1) doesn't exist or is empty))
endef
# This is intended for use by humans, to debug the image_builder_task in .cirrus.yml
# as well as the scripts under the ci subdirectory. See the `image_builder_debug`
# target if debugging of the packer builds is necessary.
.PHONY: ci_debug .PHONY: ci_debug
ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks ci_debug: $(_TEMPDIR)/ci_debug.iid ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks
/usr/bin/podman run -it --rm \ /usr/bin/podman run -it --rm \
--security-opt label=disable \ --security-opt label=disable \
-v $(_MKFILE_DIR):$(_MKFILE_DIR) -w $(_MKFILE_DIR) \ -v $(_MKFILE_DIR):$(_MKFILE_DIR) -w $(_MKFILE_DIR) \
@ -137,23 +175,19 @@ ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local develo
-e GAC_FILEPATH=$(GAC_FILEPATH) \ -e GAC_FILEPATH=$(GAC_FILEPATH) \
-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \ -e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
-e TEMPDIR=$(_TEMPDIR) \ -e TEMPDIR=$(_TEMPDIR) \
docker-archive:$< $(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD),)
# Takes 4 arguments: export filepath, FQIN, context dir, package cache key # Takes 3 arguments: IID filepath, FQIN, context dir
define podman_build define podman_build
podman build -t $(2) \ podman build -t $(2) \
--security-opt seccomp=unconfined \ --iidfile=$(1) \
-v $(_TEMPDIR)/.cache/$(4):/var/cache/dnf:Z \
-v $(_TEMPDIR)/.cache/$(4):/var/cache/apt:Z \
--build-arg CENTOS_STREAM_RELEASE=$(CENTOS_STREAM_RELEASE) \ --build-arg CENTOS_STREAM_RELEASE=$(CENTOS_STREAM_RELEASE) \
--build-arg PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \ --build-arg PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \
-f $(3)/Containerfile . -f $(3)/Containerfile .
rm -f $(1)
podman save --quiet -o $(1) $(2)
endef endef
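The `--iidfile` switch in the rewritten `podman_build` macro makes podman write `sha256:<image id>` to the named file, which the `imageid` helper then strips back down to a bare ID usable by `podman run`. A minimal sketch of that round-trip (the ID value is illustrative; no real build is performed):

```shell
# Simulate what `podman build --iidfile "$iidfile"` would write, then strip
# the "sha256:" prefix the way the Makefile's imageid function does.
iidfile=$(mktemp)
echo "sha256:0123abcd" > "$iidfile"
content=$(cat "$iidfile")
echo "${content#sha256:}"
rm -f "$iidfile"
```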
-$(_TEMPDIR)/ci_debug.tar: $(_TEMPDIR)/.cache/fedora $(wildcard ci/*)
-	$(call podman_build,$@,ci_debug,ci,fedora)
+$(_TEMPDIR)/ci_debug.iid: $(_TEMPDIR) $(wildcard ci/*)
+	$(call podman_build,$@,ci_debug,ci)
 $(_TEMPDIR):
 	mkdir -p $@
@@ -161,12 +195,6 @@ $(_TEMPDIR):
 $(_TEMPDIR)/bin: $(_TEMPDIR)
 	mkdir -p $@
-$(_TEMPDIR)/.cache: $(_TEMPDIR)
-	mkdir -p $@
-$(_TEMPDIR)/.cache/%: $(_TEMPDIR)/.cache
-	mkdir -p $@
 $(_TEMPDIR)/packer.zip: $(_TEMPDIR)
 	curl -L --silent --show-error "$(_PACKER_URL)" -o "$@"
@@ -201,7 +229,7 @@ $(_TEMPDIR)/user-data: $(_TEMPDIR) $(_TEMPDIR)/cidata.ssh.pub $(_TEMPDIR)/cidata
 cidata: $(_TEMPDIR)/user-data $(_TEMPDIR)/meta-data
 define build_podman_container
-	$(MAKE) $(_TEMPDIR)/$(1).tar BASE_TAG=$(2)
+	$(MAKE) $(_TEMPDIR)/$(1).iid BASE_TAG=$(2)
 endef
 # First argument is the path to the template JSON
@@ -229,14 +257,17 @@ image_builder: image_builder/manifest.json ## Create image-building image and im
 image_builder/manifest.json: image_builder/gce.json image_builder/setup.sh lib.sh systemd_banish.sh $(PACKER_INSTALL_DIR)/packer
 	$(call packer_build,image_builder/gce.json)
-# Note: We assume this repo is checked out somewhere under the caller's
-# home-dir for bind-mounting purposes. Otherwise possibly necessary
-# files/directories like $HOME/.gitconfig or $HOME/.ssh/ won't be available
-# from inside the debugging container.
+# Note: It's assumed there are important files in the callers $HOME
+# needed for debugging (.gitconfig, .ssh keys, etc.). It's unsafe
+# to assume $(_MKFILE_DIR) is also under $HOME. Both are mounted
+# for good measure.
 .PHONY: image_builder_debug
-image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter container for local development/debugging of targets requiring packer + virtualization
+image_builder_debug: $(_TEMPDIR)/image_builder_debug.iid ## Build and enter container for local development/debugging of targets requiring packer + virtualization
 	/usr/bin/podman run -it --rm \
-		--security-opt label=disable -v $$HOME:$$HOME -w $(_MKFILE_DIR) \
+		--security-opt label=disable \
+		-v $$HOME:$$HOME \
+		-v $(_MKFILE_DIR):$(_MKFILE_DIR) \
+		-w $(_MKFILE_DIR) \
 		-v $(_TEMPDIR):$(_TEMPDIR) \
 		-v $(call err_if_empty,GAC_FILEPATH):$(GAC_FILEPATH) \
 		-v $(call err_if_empty,AWS_SHARED_CREDENTIALS_FILE):$(AWS_SHARED_CREDENTIALS_FILE) \
@@ -246,114 +277,10 @@ image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter cont
 		-e IMG_SFX=$(call err_if_empty,_IMG_SFX) \
 		-e GAC_FILEPATH=$(GAC_FILEPATH) \
 		-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
-		docker-archive:$<
+		$(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD))
-$(_TEMPDIR)/image_builder_debug.tar: $(_TEMPDIR)/.cache/centos $(wildcard image_builder/*)
-	$(call podman_build,$@,image_builder_debug,image_builder,centos)
+$(_TEMPDIR)/image_builder_debug.iid: $(_TEMPDIR) $(wildcard image_builder/*)
+	$(call podman_build,$@,image_builder_debug,image_builder)
-# Avoid re-downloading unnecessarily
-# Ref: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#Special-Targets
-.PRECIOUS: $(_TEMPDIR)/fedora-aws-$(_IMG_SFX).$(IMPORT_FORMAT)
-$(_TEMPDIR)/fedora-aws-$(_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
-	bash import_images/handle_image.sh \
-		$@ \
-		$(call err_if_empty,FEDORA_IMAGE_URL) \
-		$(call err_if_empty,FEDORA_CSUM_URL)
-$(_TEMPDIR)/fedora-aws-arm64-$(_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
-	bash import_images/handle_image.sh \
-		$@ \
-		$(call err_if_empty,FEDORA_ARM64_IMAGE_URL) \
-		$(call err_if_empty,FEDORA_ARM64_CSUM_URL)
-$(_TEMPDIR)/%.md5: $(_TEMPDIR)/%.$(IMPORT_FORMAT)
-	openssl md5 -binary $< | base64 > $@.tmp
-	mv $@.tmp $@
-# MD5 metadata value checked by AWS after upload + 5 retries.
-# Cache disabled to avoid sync. issues w/ vmimport service if
-# image re-uploaded.
-# TODO: Use sha256 from ..._CSUM_URL file instead of recalculating
-# https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
-# Avoid re-uploading unnecessarily
-.SECONDARY: $(_TEMPDIR)/%.uploaded
-$(_TEMPDIR)/%.uploaded: $(_TEMPDIR)/%.$(IMPORT_FORMAT) $(_TEMPDIR)/%.md5
-	-$(AWS) s3 rm --quiet s3://packer-image-import/%.$(IMPORT_FORMAT)
-	$(AWS) s3api put-object \
-		--content-md5 "$(file < $(_TEMPDIR)/$*.md5)" \
-		--content-encoding binary/octet-stream \
-		--cache-control no-cache \
-		--bucket packer-image-import \
-		--key $*.$(IMPORT_FORMAT) \
-		--body $(_TEMPDIR)/$*.$(IMPORT_FORMAT) > $@.tmp
-	mv $@.tmp $@
-# For whatever reason, the 'Format' value must be all upper-case.
-# Avoid creating unnecessary/duplicate import tasks
-.SECONDARY: $(_TEMPDIR)/%.import_task_id
-$(_TEMPDIR)/%.import_task_id: $(_TEMPDIR)/%.uploaded
-	$(AWS) ec2 import-snapshot \
-		--disk-container Format=$(shell tr '[:lower:]' '[:upper:]'<<<"$(IMPORT_FORMAT)"),UserBucket="{S3Bucket=packer-image-import,S3Key=$*.$(IMPORT_FORMAT)}" > $@.tmp.json
-	@cat $@.tmp.json
-	jq -r -e .ImportTaskId $@.tmp.json > $@.tmp
-	mv $@.tmp $@
-# Avoid importing multiple snapshots for the same image
-.PRECIOUS: $(_TEMPDIR)/%.snapshot_id
-$(_TEMPDIR)/%.snapshot_id: $(_TEMPDIR)/%.import_task_id
-	bash import_images/wait_import_task.sh "$<" > $@.tmp
-	mv $@.tmp $@
-define _register_sed
-	sed -r \
-		-e 's/@@@NAME@@@/$(1)/' \
-		-e 's/@@@IMG_SFX@@@/$(_IMG_SFX)/' \
-		-e 's/@@@ARCH@@@/$(2)/' \
-		-e 's/@@@SNAPSHOT_ID@@@/$(3)/' \
-		import_images/register.json.in \
-		> $(4)
-endef
-$(_TEMPDIR)/fedora-aws-$(_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-$(_IMG_SFX).snapshot_id import_images/register.json.in
-	$(call _register_sed,fedora-aws,x86_64,$(file <$<),$@)
-$(_TEMPDIR)/fedora-aws-arm64-$(_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-arm64-$(_IMG_SFX).snapshot_id import_images/register.json.in
-	$(call _register_sed,fedora-aws-arm64,arm64,$(file <$<),$@)
-# Avoid multiple registrations for the same image
-.PRECIOUS: $(_TEMPDIR)/%.ami.id
-$(_TEMPDIR)/%.ami.id: $(_TEMPDIR)/%.register.json
-	$(AWS) ec2 register-image --cli-input-json "$$(<$<)" > $@.tmp.json
-	cat $@.tmp.json
-	jq -r -e .ImageId $@.tmp.json > $@.tmp
-	mv $@.tmp $@
-$(_TEMPDIR)/%.ami.name: $(_TEMPDIR)/%.register.json
-	jq -r -e .Name $< > $@.tmp
-	mv $@.tmp $@
-$(_TEMPDIR)/%.ami.json: $(_TEMPDIR)/%.ami.id $(_TEMPDIR)/%.ami.name
-	$(AWS) ec2 create-tags \
-		--resources "$$(<$(_TEMPDIR)/$*.ami.id)" \
-		--tags \
-		Key=Name,Value=$$(<$(_TEMPDIR)/$*.ami.name) \
-		Key=automation,Value=false
-	$(AWS) --output table ec2 describe-images --image-ids "$$(<$(_TEMPDIR)/$*.ami.id)" \
-		| tee $@
-.PHONY: import_images
-import_images: $(_TEMPDIR)/fedora-aws-$(_IMG_SFX).ami.json $(_TEMPDIR)/fedora-aws-arm64-$(_IMG_SFX).ami.json import_images/manifest.json.in ## Import generic Fedora cloud images into AWS EC2.
-	sed -r \
-		-e 's/@@@IMG_SFX@@@/$(_IMG_SFX)/' \
-		-e 's/@@@CIRRUS_TASK_ID@@@/$(CIRRUS_TASK_ID)/' \
-		import_images/manifest.json.in \
-		> import_images/manifest.json
-	@echo "Image import(s) successful."
-	@echo "############################################################"
-	@echo "Please update Makefile value:"
-	@echo ""
-	@echo "    FEDORA_IMPORT_IMG_SFX = $(_IMG_SFX)"
-	@echo "############################################################"
 .PHONY: base_images
 # This needs to run in a virt/nested-virt capable environment
@@ -381,82 +308,80 @@ fedora_podman: ## Build Fedora podman development container
 prior-fedora_podman: ## Build Prior-Fedora podman development container
 	$(call build_podman_container,$@,$(PRIOR_FEDORA_RELEASE))
-$(_TEMPDIR)/%_podman.tar: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(wildcard cache_images/*.sh) $(_TEMPDIR)/.cache/%
+$(_TEMPDIR)/%_podman.iid: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(_TEMPDIR) $(wildcard cache_images/*.sh)
 	podman build -t $*_podman:$(call err_if_empty,_IMG_SFX) \
 		--security-opt seccomp=unconfined \
+		--iidfile=$@ \
 		--build-arg=BASE_NAME=$(subst prior-,,$*) \
 		--build-arg=BASE_TAG=$(call err_if_empty,BASE_TAG) \
 		--build-arg=PACKER_BUILD_NAME=$(subst _podman,,$*) \
-		-v $(_TEMPDIR)/.cache/$*:/var/cache/dnf:Z \
-		-v $(_TEMPDIR)/.cache/$*:/var/cache/apt:Z \
+		--build-arg=IMG_SFX=$(_IMG_SFX) \
+		--build-arg=CIRRUS_TASK_ID=$(CIRRUS_TASK_ID) \
+		--build-arg=GIT_HEAD=$(call err_if_empty,GIT_HEAD) \
 		-f podman/Containerfile .
-	rm -f $@
-	podman save --quiet -o $@ $*_podman:$(_IMG_SFX)
 .PHONY: skopeo_cidev
-skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.tar ## Build Skopeo development and CI container
-$(_TEMPDIR)/skopeo_cidev.tar: $(wildcard skopeo_base/*) $(_TEMPDIR)/.cache/fedora
+skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.iid ## Build Skopeo development and CI container
+$(_TEMPDIR)/skopeo_cidev.iid: $(_TEMPDIR) $(wildcard skopeo_base/*)
 	podman build -t skopeo_cidev:$(call err_if_empty,_IMG_SFX) \
+		--iidfile=$@ \
 		--security-opt seccomp=unconfined \
 		--build-arg=BASE_TAG=$(FEDORA_RELEASE) \
-		-v $(_TEMPDIR)/.cache/fedora:/var/cache/dnf:Z \
 		skopeo_cidev
-	rm -f $@
-	podman save --quiet -o $@ skopeo_cidev:$(_IMG_SFX)
-# TODO: Temporarily force F36 due to:
-# https://github.com/aio-libs/aiohttp/issues/6600
 .PHONY: ccia
-ccia: $(_TEMPDIR)/ccia.tar ## Build the Cirrus-CI Artifacts container image
-$(_TEMPDIR)/ccia.tar: ccia/Containerfile
-	podman build -t ccia:$(call err_if_empty,_IMG_SFX) \
-		--security-opt seccomp=unconfined \
-		--build-arg=BASE_TAG=36 \
-		ccia
-	rm -f $@
-	podman save --quiet -o $@ ccia:$(_IMG_SFX)
+ccia: $(_TEMPDIR)/ccia.iid ## Build the Cirrus-CI Artifacts container image
+$(_TEMPDIR)/ccia.iid: ccia/Containerfile $(_TEMPDIR)
+	$(call podman_build,$@,ccia:$(call err_if_empty,_IMG_SFX),ccia)
+# Note: This target only builds imgts:c$(_IMG_SFX) it does not push it to
+#       any container registry which may be required for targets which
+#       depend on it as a base-image. In CI, pushing is handled automatically
+#       by the 'ci/make_container_images.sh' script.
 .PHONY: imgts
-imgts: $(_TEMPDIR)/imgts.tar ## Build the VM image time-stamping container image
-$(_TEMPDIR)/imgts.tar: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR)/.cache/centos
-	$(call podman_build,$@,imgts:$(call err_if_empty,_IMG_SFX),imgts,centos)
+imgts: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR) ## Build the VM image time-stamping container image
+	$(call podman_build,/dev/null,imgts:$(call err_if_empty,_IMG_SFX),imgts)
+	-rm $(_TEMPDIR)/$@.iid
-# Helper function to build images which depend on imgts:latest base image
+# N/B: There is no make dependency resolution on imgts.iid on purpose,
+#      imgts:c$(_IMG_SFX) is assumed to have already been pushed to quay.
+#      See imgts target above.
 define imgts_base_podman_build
-	podman load -i $(_TEMPDIR)/imgts.tar
-	podman tag imgts:$(call err_if_empty,_IMG_SFX) imgts:latest
-	$(call podman_build,$@,$(1):$(call err_if_empty,_IMG_SFX),$(1),centos)
+	podman image exists $(_IMGTS_FQIN) || podman pull $(_IMGTS_FQIN)
+	podman image exists imgts:latest || podman tag $(_IMGTS_FQIN) imgts:latest
+	$(call podman_build,$@,$(1):$(call err_if_empty,_IMG_SFX),$(1))
 endef
 .PHONY: imgobsolete
-imgobsolete: $(_TEMPDIR)/imgobsolete.tar ## Build the VM Image obsoleting container image
-$(_TEMPDIR)/imgobsolete.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)/.cache/centos
+imgobsolete: $(_TEMPDIR)/imgobsolete.iid ## Build the VM Image obsoleting container image
+$(_TEMPDIR)/imgobsolete.iid: imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)
 	$(call imgts_base_podman_build,imgobsolete)
 .PHONY: imgprune
-imgprune: $(_TEMPDIR)/imgprune.tar ## Build the VM Image pruning container image
-$(_TEMPDIR)/imgprune.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)/.cache/centos
+imgprune: $(_TEMPDIR)/imgprune.iid ## Build the VM Image pruning container image
+$(_TEMPDIR)/imgprune.iid: imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)
 	$(call imgts_base_podman_build,imgprune)
 .PHONY: gcsupld
-gcsupld: $(_TEMPDIR)/gcsupld.tar ## Build the GCS Upload container image
-$(_TEMPDIR)/gcsupld.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)/.cache/centos
+gcsupld: $(_TEMPDIR)/gcsupld.iid ## Build the GCS Upload container image
+$(_TEMPDIR)/gcsupld.iid: imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)
 	$(call imgts_base_podman_build,gcsupld)
 .PHONY: orphanvms
-orphanvms: $(_TEMPDIR)/orphanvms.tar ## Build the Orphaned VM container image
-$(_TEMPDIR)/orphanvms.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)/.cache/centos
+orphanvms: $(_TEMPDIR)/orphanvms.iid ## Build the Orphaned VM container image
+$(_TEMPDIR)/orphanvms.iid: imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)
 	$(call imgts_base_podman_build,orphanvms)
 .PHONY: .get_ci_vm
-get_ci_vm: $(_TEMPDIR)/get_ci_vm.tar ## Build the get_ci_vm container image
-$(_TEMPDIR)/get_ci_vm.tar: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
-	podman build -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile .
-	rm -f $@
-	podman save --quiet -o $@ get_ci_vm:$(_IMG_SFX)
+get_ci_vm: $(_TEMPDIR)/get_ci_vm.iid ## Build the get_ci_vm container image
+$(_TEMPDIR)/get_ci_vm.iid: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
+	podman build --iidfile=$@ -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile ./
 .PHONY: clean
 clean: ## Remove all generated files referenced in this Makefile
 	-rm -rf $(_TEMPDIR)
 	-rm -f image_builder/*.json
 	-rm -f *_images/{*.json,cidata*,*-data}
-	-rm -f ci_debug.tar
+	-podman rmi imgts:latest
+	-podman rmi $(_IMGTS_FQIN)

README-simplified.md (new file, 108 lines)
@@ -0,0 +1,108 @@
The README here is waaaaaay too complicated for Ed. So here is a
simplified version of the typical things you need to do.
Super Duper Simplest Case
=========================
This is by far the most common case, and the simplest to understand.
You do this when you want to build VMs with newer package versions than
whatever VMs are currently set up in CI. You really need to
understand this before you get into anything more complicated.
```
$ git checkout -b lets-see-what-happens
$ make IMG_SFX
$ git commit -asm"Let's just see what happens"
```
...and push that as a PR.
If you're lucky, in about an hour you will get an email from `github-actions[bot]`
with a nice table of base and cache images, with links. I strongly encourage you
to try to get Ed's
[cirrus-vm-get-versions](https://github.com/edsantiago/containertools/tree/main/cirrus-vm-get-versions)
script working, because this will give you a very quick easy reliable
list of what packages have changed. You don't need this, but life will be painful
for you without it.
(If you're not lucky, the build will break. There are infinite ways for
this to happen, so you're on your own here. Ask for help! This is a great
team, and one or more people may quickly realize the problem.)
Once you have new VMs built, **test in an actual project**! Usually podman
and buildah, but you may want the varks too:
```
$ cd ~/src/github/containers/podman    # or wherever
$ git checkout -b test-new-vms
$ vim .cirrus.yml
[ search for "c202", and replace with your new IMG_SFX.]
[ Don't forget the leading "c"! ]
$ git commit -as
[ Please include a link to the automation_images PR! ]
```
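The search-and-replace step above can be scripted; a sketch, assuming image suffixes look like `c<YYYYMMDD>t<HHMMSS>z` (verify that against your repo's actual `IMG_SFX` before trusting the regex — `bump_sfx` is a hypothetical helper, not part of this repo):

```shell
# bump_sfx FILE NEW_SFX: rewrite every existing image suffix in FILE to NEW_SFX.
# Refuses obviously malformed suffixes so a typo can't mangle the file.
bump_sfx() {
    local file=$1 new=$2
    [[ $new =~ ^c[0-9]{8}t[0-9]{6}z$ ]] || { echo "bad suffix: $new" >&2; return 1; }
    sed -i -E "s/c[0-9]{8}t[0-9]{6}z/${new}/g" "$file"
}

# e.g.: bump_sfx .cirrus.yml "c$(cat ../automation_images/IMG_SFX)"
```

The leading `c` is baked into the validation, so you can't forget it.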
Push this PR and see what happens. If you're very lucky, it will
pass on this and other repos. Get your podman/buildah/vark PRs
reviewed and merged, and then review-merge the automation_images one.
Pushing (har har) Your Luck
---------------------------
Feel lucky? Tag this VM build, so `dependabot` will create PRs
on all the myriad container repos:
```
$ git tag $(<IMG_SFX)
$ git push --no-verify upstream $(<IMG_SFX)
```
Within a few hours you'll see a ton of PRs. It is very likely that
something will go wrong in one or two, and if so, it's impossible to
cover all possibilities. As above, ask for help.
More Complicated Cases
======================
These are the next two most common.
Bumping One Package
-------------------
Quite often we need an emergency bump of only one package that
is not yet stable. Here are examples of the two most typical
cases,
[crun](https://github.com/containers/automation_images/pull/386/files) and
[pasta](https://github.com/containers/automation_images/pull/383/files).
Note the `timebomb` directives. Please use these: the time you save
may be your own, one future day. And please use 2-6 week times.
A timebomb that expires in a year is going to be hard to understand
when it goes off.
Bumping Distros
---------------
Like Fedora 40 to 41. Edit `Makefile`. Change `FEDORA`, `PRIOR_FEDORA`,
and `RAWHIDE`, then proceed with Simple Case.
There is almost zero chance that this will work on the first try.
Sorry, that's just the way it is. See the
[F40 to F41 PR](https://github.com/containers/automation_images/pull/392/files)
for a not-atypical example.
STRONG RECOMMENDATION
=====================
Read [check-imgsfx.sh](check-imgsfx.sh) and follow its instructions. Ed
likes to copy that to `.git/hooks/pre-push`, Chris likes using some
external tool that Ed doesn't trust. Use your judgment.
The reason for this is that you are going to forget to `make IMG_SFX`
one day, and then you're going to `git push --force` an update and walk
away, and come back to a failed run because `IMG_SFX` must always
always always be brand new.
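The copy-to-hook setup is a one-liner, run from the repo root (git executes `.git/hooks/pre-push` before every push, and a non-zero exit aborts the push):

```shell
# Install the repo's pre-push check as a git hook.
# install(1) copies the file and sets the executable bit in one step.
if [ -f check-imgsfx.sh ]; then
    install -m 0755 check-imgsfx.sh .git/hooks/pre-push
fi
```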
Weak Recommendation
-------------------
Ed likes to fiddle with `IMG_SFX`, zeroing out to the nearest
quarter hour. Absolutely unnecessary, but easier on the eyes
when trying to see which VMs are in use or when comparing
diffs.
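That fiddling can be scripted too; a sketch that rounds a stamp down to the quarter hour, assuming `IMG_SFX` holds a UTC stamp of the form `YYYYMMDDtHHMMSSz` (an assumption — check your file's actual format):

```shell
# round_sfx STAMP: zero the seconds and round minutes down to :00/:15/:30/:45.
round_sfx() {
    local stamp=$1
    local datepart=${stamp:0:9}   # "YYYYMMDDt"
    local hh=${stamp:9:2} mm=${stamp:11:2}
    printf '%s%s%02d00z\n' "$datepart" "$hh" $(( 10#$mm / 15 * 15 ))
}
```

For example, `round_sfx 20250101t123456z` yields `20250101t123000z`.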

@@ -52,7 +52,7 @@ However, all steps are listed below for completeness.
 For more information on the overall process of importing custom GCE VM
 Images, please [refer to the documentation](https://cloud.google.com/compute/docs/import/import-existing-image). For references to the latest pre-build AWS
 EC2 Fedora AMI's see [the
-upstream cloud page](https://alt.fedoraproject.org/cloud/).
+upstream cloud page](https://fedoraproject.org/cloud/download).
 For more information on the primary tool (*packer*) used for this process,
 please [see it's documentation page](https://www.packer.io/docs).
@@ -264,13 +264,11 @@ then automatically pushed to:
 * https://quay.io/repository/libpod/fedora_podman
 * https://quay.io/repository/libpod/prior-fedora_podman
-* https://quay.io/repository/libpod/debian_podman
 The meaning of *prior* and *current*, is defined by the contents of
-the `*_release` files within the `podman` subdirectory. This is
-necessary to support the Makefile target being used manually
-(e.g. debugging). These files must be updated manually when introducing
-a new VM image version.
+the `*_RELEASE` values in the `Makefile`. The images will be tagged
+with the value within the `IMG_SFX` file. Additionally, the most
+recently merged PR on this repo will tag its images `latest`.
 ### Tooling
@@ -292,8 +290,7 @@ the following are built:
 In all cases, when automation runs on a branch (i.e. after a PR is merged)
 the actual image tagged `latest` will be pushed. When running in a PR,
-only validation and test images are produced. This behavior is controled
-by a combination of the `$PUSH_LATEST` and `$CIRRUS_PR` variables.
+only validation and test images are produced.
 ## The Base Images (overview step 3)
@@ -377,10 +374,11 @@ infinite-growth of the VM image count.
 # Debugging / Locally driving VM Image production
-Because the entire automated build process is containerized, it may easily be
-performed locally on your laptop/workstation. However, this process will
+Much of the CI and image-build process is containerized, so it may be debugged
+locally on your laptop/workstation. However, this process will
 still involve interfacing with GCE and AWS. Therefore, you must be in possession
-of a *Google Application Credentials* (GAC) JSON and AWS credentials INI file.
+of a *Google Application Credentials* (GAC) JSON and
+[AWS credentials INI file](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds).
 The GAC JSON file should represent a service account (contrasted to a user account,
 which always uses OAuth2). The name of the service account doesn't matter,
@@ -401,44 +399,52 @@ one the following (custom) IAM policies enabled:
 Somebody familiar with Google and AWS IAM will need to provide you with the
 credential files and ensure correct account configuration. Having these files
 stored *in your home directory* on your laptop/workstation, the process of
-producing images proceeds as follows:
+building and entering the debug containers is as follows:
 1. Ensure you have podman installed, and lots of available network and CPU
    resources (i.e. turn off YouTube, shut down background VMs and other hungry
-   tasks). Build the image-builder container image, by executing
+   tasks).
+2. Build and enter either the `ci_debug` or the `image_builder_debug` container
+   image, by executing:
    ```
-   make image_builder_debug GAC_FILEPATH=</home/path/to/gac.json> \
-       AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
+   make <ci_debug|image_builder_debug> \
+       GAC_FILEPATH=</home/path/to/gac.json> \
       AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
   ```
-2. You will be dropped into a debugging container, inside a volume-mount of
-   the repository root. This container is practically identical to the VM
-   produced and used in *overview step 1*. If changes are made, the container
-   image should be re-built to reflect them.
-3. If you wish to build only a subset of available images, list the names
-   you want as comma-separated values of the `PACKER_BUILDS` variable. Be
-   sure you *export* this variable so that `make` has access to it. For
-   example, `export PACKER_BUILDS=debian,prior-fedora`.
-4. Still within the container, again ensure you have plenty of network and CPU
+   * The `ci_debug` image is significantly smaller, and only intended for rudimentary
+     cases, for example running the scripts under the `ci` subdirectory.
+   * The `image_builder_debug` image is larger, and has KVM virtualization enabled.
+     It's needed for more extensive debugging of the packer-based image builds.
+3. Both containers will place you in the default shell, inside a volume-mount of
+   the repository root. This environment is practically identical to what is
+   used in Cirrus-CI.
+4. For the `image_builder_debug` container, If you wish to build only a subset
+   of available images, list the names you want as comma-separated values of the
+   `PACKER_BUILDS` variable. Be sure you *export* this variable so that `make`
+   has access to it. For example, `export PACKER_BUILDS=debian,prior-fedora`.
+5. Still within the container, again ensure you have plenty of network and CPU
   resources available. Build the VM Base images by executing the command
   ``make base_images``. This is the equivalent operation as documented by
   *overview step 2*. ***N/B*** The GCS -> GCE image conversion can take
   some time, be patient. Packer may not produce any output for several minutes
   while the conversion is happening.
-5. When successful, the names of the produced images will all be referenced
+6. When successful, the names of the produced images will all be referenced
   in the `base_images/manifest.json` file. If there are problems, fix them
   and remove the `manifest.json` file. Then re-run the same *make* command
   as before, packer will force-overwrite any broken/partially created
   images automatically.
-6. Produce the VM Cache Images, equivalent to the operations outlined
+7. Produce the VM Cache Images, equivalent to the operations outlined
   in *overview step 3*. Execute the following command (still within the
   debug image-builder container): ``make cache_images``.
-7. Again when successful, you will find the image names are written into
+8. Again when successful, you will find the image names are written into
   the `cache_images/manifest.json` file. If there is a problem, remove
   this file, fix the problem, and re-run the `make` command. No cleanup
   is necessary, leftover/disused images will be automatically cleaned up

@@ -26,8 +26,6 @@ variables: # Empty value means it must be passed in on command-line
   PRIOR_FEDORA_IMAGE_URL: "{{env `PRIOR_FEDORA_IMAGE_URL`}}"
   PRIOR_FEDORA_CSUM_URL: "{{env `PRIOR_FEDORA_CSUM_URL`}}"
-  FEDORA_IMPORT_IMG_SFX: "{{env `FEDORA_IMPORT_IMG_SFX`}}"
   DEBIAN_RELEASE: "{{env `DEBIAN_RELEASE`}}"
   DEBIAN_BASE_FAMILY: "{{env `DEBIAN_BASE_FAMILY`}}"
@@ -63,6 +61,7 @@ builders:
   type: 'qemu'
   accelerator: "kvm"
   qemu_binary: '/usr/libexec/qemu-kvm'  # Unique to CentOS, not fedora :(
+  memory: 12288
   iso_url: '{{user `FEDORA_IMAGE_URL`}}'
   disk_image: true
   format: "raw"
@@ -75,12 +74,12 @@ builders:
   headless: true
   # qemu_binary: "/usr/libexec/qemu-kvm"
   qemuargs:  # List-of-list format required to override packer-generated args
-    - - "-m"
-      - "1024"
+    - - "-display"
+      - "none"
     - - "-device"
       - "virtio-rng-pci"
     - - "-chardev"
-      - "tty,id=pts,path={{user `TTYDEV`}}"
+      - "file,id=pts,path={{user `TTYDEV`}}"
     - - "-device"
       - "isa-serial,chardev=pts"
     - - "-netdev"
@@ -108,20 +107,18 @@ builders:
 - &fedora-aws
   name: 'fedora-aws'
   type: 'amazon-ebs'
-  source_ami_filter:  # Will fail if >1 or no AMI found
+  source_ami_filter:
+    # Many of these search filter values (like account ID and name) aren't publicized
+    # anywhere. They were found by examining AWS EC2 AMIs published/referenced from
+    # the AWS sections on https://fedoraproject.org/cloud/download
     owners:
-      # Docs are wrong, specifying the Account ID required to make AMIs private.
-      # The Account ID is hard-coded here out of expediency, since passing in
-      # more packer args from the command-line (in Makefile) is non-trivial.
-      - &accountid '449134212816'
-    # It's necessary to 'search' for the base-image by these criteria. If
-    # more than one image is found, Packer will fail the build (and display
-    # the conflicting AMI IDs).
+      - &fedora_accountid 125523088429
+    most_recent: true  # Required b/c >1 search result likely to be returned
     filters: &ami_filters
       architecture: 'x86_64'
       image-type: 'machine'
-      is-public: 'false'
-      name: '{{build_name}}-i{{user `FEDORA_IMPORT_IMG_SFX`}}'
+      is-public: 'true'
+      name: 'Fedora-Cloud-Base*-{{user `FEDORA_RELEASE`}}-*'
       root-device-type: 'ebs'
       state: 'available'
       virtualization-type: 'hvm'
@@ -145,7 +142,6 @@ builders:
       volume_type: 'gp2'
       delete_on_termination: true
-  # These are critical and used by security-polciy to enforce instance launch limits.
   tags: &awstags
     <<: *imgcpylabels
     # EC2 expects "Name" to be capitalized
@@ -159,7 +155,7 @@ builders:
   # This is necessary for security - The CI service accounts are not permitted
   # to use AMI's from any other account, including public ones.
   ami_users:
-    - *accountid
+    - &accountid '449134212816'
   ssh_username: 'fedora'
   ssh_clear_authorized_keys: true
   # N/B: Required Packer >= 1.8.0
@@ -170,7 +166,8 @@ builders:
   name: 'fedora-aws-arm64'
   source_ami_filter:
     owners:
-      - *accountid
+      - *fedora_accountid
+    most_recent: true  # Required b/c >1 search result likely to be returned
     filters:
       <<: *ami_filters
       architecture: 'arm64'
@@ -187,23 +184,23 @@ provisioners: # Debian images come bundled with GCE integrations provisioned
 - type: 'shell'
   inline:
     - 'set -e'
-    - 'mkdir -p /tmp/automation_images'
+    - 'mkdir -p /var/tmp/automation_images'
 - type: 'file'
   source: '{{ pwd }}/'
-  destination: '/tmp/automation_images/'
+  destination: '/var/tmp/automation_images/'
 - except: ['debian']
   type: 'shell'
   inline:
     - 'set -e'
-    - '/bin/bash /tmp/automation_images/base_images/fedora_base-setup.sh'
+    - '/bin/bash /var/tmp/automation_images/base_images/fedora_base-setup.sh'
 - only: ['debian']
   type: 'shell'
   inline:
     - 'set -e'
-    - '/bin/bash /tmp/automation_images/base_images/debian_base-setup.sh'
+    - 'env DEBIAN_FRONTEND=noninteractive DEBIAN_RELEASE={{user `DEBIAN_RELEASE`}} /bin/bash /var/tmp/automation_images/base_images/debian_base-setup.sh'
 post-processors:
 # Must be double-nested to guarantee execution order

@@ -16,8 +16,17 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
 # shellcheck source=./lib.sh
 source "$REPO_DIRPATH/lib.sh"
-# Switch to Debian Unstable (SID)
-cat << EOF | sudo tee /etc/apt/sources.list
+# Cloud-networking in general can sometimes be flaky.
+# Increase Apt's tolerance levels.
+cat << EOF | $SUDO tee -a /etc/apt/apt.conf.d/99timeouts
+// Added during CI VM image build
+Acquire::Retries "3";
+Acquire::http::timeout "300";
+Acquire::https::timeout "300";
+EOF
+echo "Switch sources to Debian Unstable (SID)"
+cat << EOF | $SUDO tee /etc/apt/sources.list
 deb http://deb.debian.org/debian/ unstable main
 deb-src http://deb.debian.org/debian/ unstable main
 EOF
@@ -28,7 +37,6 @@ PKGS=( \
     curl
     cloud-init
     gawk
-    git
     openssh-client
     openssh-server
     rng-tools5
@@ -36,40 +44,46 @@ PKGS=( \
 )
 echo "Updating package source lists"
-$SUDO apt-get -qq -y update
+( set -x; $SUDO apt-get -q -y update; )
+# Only deps for automation tooling
+( set -x; $SUDO apt-get -q -y install git )
+install_automation_tooling
+# Ensure automation library is loaded
+source "$REPO_DIRPATH/lib.sh"
+# Workaround 12->13 forward-incompatible change in grub scripts.
+# Without this, updating to the SID kernel may fail.
+echo "Upgrading grub-common"
+( set -x; $SUDO apt-get -q -y upgrade grub-common; )
 echo "Upgrading to SID"
-$SUDO apt-get -qq -y full-upgrade
+( set -x; $SUDO apt-get -q -y full-upgrade; )
 echo "Installing basic, necessary packages."
-$SUDO apt-get -qq -y install "${PKGS[@]}"
+( set -x; $SUDO apt-get -q -y install "${PKGS[@]}"; )
 # compatibility / usefullness of all automated scripting (which is bash-centric)
-$SUDO DEBCONF_DB_OVERRIDE='File{'$SCRIPT_DIRPATH/no_dash.dat'}' \
-    dpkg-reconfigure dash
+( set -x; $SUDO DEBCONF_DB_OVERRIDE='File{'$SCRIPT_DIRPATH/no_dash.dat'}' \
+    dpkg-reconfigure dash; )
 # Ref: https://wiki.debian.org/DebianReleases
-# CI automation needs a *sortable* OS version/release number to select/perform/apply
+# CI automation needs an OS version/release number for a variety of uses.
# runtime configuration and workarounds. Since switching to Unstable/SID, a # However, After switching to Unstable/SID, the value from the usual source
# numeric release version is not available. While an imperfect solution, # is not available. Simply use the value passed through packer by the Makefile.
# base an artificial version off the 'base-files' package version, right-padded with req_env_vars DEBIAN_RELEASE
# zeros to ensure sortability (i.e. "12.02" < "12.13"). # shellcheck disable=SC2154
base_files_version=$(dpkg -s base-files | awk '/Version:/{print $2}') warn "Setting '$DEBIAN_RELEASE' as the release number for CI-automation purposes."
base_major=$(cut -d. -f 1 <<<"$base_files_version") ( set -x; echo "VERSION_ID=\"$DEBIAN_RELEASE\"" | \
base_minor=$(cut -d. -f 2 <<<"$base_files_version") $SUDO tee -a /etc/os-release; )
sortable_version=$(printf "%02d.%02d" $base_major $base_minor)
echo "WARN: This is NOT an official version number. It's for CI-automation purposes only."
echo "VERSION_ID=\"$sortable_version\"" | \
$SUDO tee -a /etc/os-release
install_automation_tooling
if ! ((CONTAINER)); then if ! ((CONTAINER)); then
custom_cloud_init custom_cloud_init
$SUDO systemctl enable rngd ( set -x; $SUDO systemctl enable rngd; )
# Cloud-config fails to enable this for some reason or another # Cloud-config fails to enable this for some reason or another
$SUDO sed -i -r \ ( set -x; $SUDO sed -i -r \
-e 's/^PermitRootLogin no/PermitRootLogin prohibit-password/' \ -e 's/^PermitRootLogin no/PermitRootLogin prohibit-password/' \
/etc/ssh/sshd_config /etc/ssh/sshd_config; )
fi fi
finalize finalize
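Aside (not part of the diff): the removed comment's sortability concern can be demonstrated directly. This is an illustrative sketch of why the old scheme zero-padded the minor field before writing `VERSION_ID`:

```shell
# Lexical sorting mis-orders unpadded dotted versions:
printf '12.2\n12.13\n' | sort | head -n1    # prints 12.13, although 12.2 is numerically smaller

# Zero-padding the minor field, as the removed code did, restores lexical sortability:
printf '%02d.%02d\n' 12 2                   # prints 12.02, which sorts before 12.13
```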


@@ -18,7 +18,6 @@ source "$REPO_DIRPATH/lib.sh"
 declare -a PKGS
 PKGS=(rng-tools git coreutils cloud-init)
-XARGS=--disablerepo=updates

 if ! ((CONTAINER)); then
     # Packer defines this automatically for us
     # shellcheck disable=SC2154
@@ -30,20 +29,28 @@ if ! ((CONTAINER)); then
         if ((OS_RELEASE_VER<35)); then
             PKGS+=(google-compute-engine-tools)
         else
-            PKGS+=(google-compute-engine-guest-configs)
+            PKGS+=(google-compute-engine-guest-configs google-guest-agent)
         fi
     fi
 fi

-# Due to https://bugzilla.redhat.com/show_bug.cgi?id=1907030
-# updates cannot be installed or even looked at during this stage.
-# Pawn the problem off to the cache-image stage where more memory
-# is available and debugging is also easier. Try to save some more
-# memory by pre-populating repo metadata prior to any transactions.
-$SUDO dnf makecache $XARGS
-# Updates disable, see comment above
-# $SUDO dnf -y update $XARGS
-$SUDO dnf -y install $XARGS "${PKGS[@]}"
+# The Fedora CI VM base images are built using nested-virt with
+# limited resources available. Further, cloud-networking in
+# general can sometimes be flaky. Increase DNF's tolerance
+# levels.
+cat << EOF | $SUDO tee -a /etc/dnf/dnf.conf
+# Added during CI VM image build
+minrate=100
+timeout=60
+EOF
+
+$SUDO dnf makecache
+$SUDO dnf -y update
+$SUDO dnf -y install "${PKGS[@]}"
+
+# Occasionally following an install, there are more updates available.
+# This may be due to activation of suggested/recommended dependency resolution.
+$SUDO dnf -y update

 if ! ((CONTAINER)); then
     $SUDO systemctl enable rngd
@@ -83,7 +90,9 @@ if ! ((CONTAINER)); then
     # This is necessary to prevent permission-denied errors on service-start
     # and also on the off-chance the package gets updated and context reset.
     $SUDO semanage fcontext --add --type bin_t /usr/bin/cloud-init
-    $SUDO restorecon -v /usr/bin/cloud-init
+    # This used restorecon before so we don't have to specify the file_contexts.local
+    # manually, however with f42 that stopped working: https://bugzilla.redhat.com/show_bug.cgi?id=2360183
+    $SUDO setfiles -v /etc/selinux/targeted/contexts/files/file_contexts.local /usr/bin/cloud-init
 else  # GCP Image
     echo "Setting GCP startup service (for Cirrus-CI agent) SELinux unconfined"
     # ref: https://cloud.google.com/compute/docs/startupscript
@@ -95,10 +104,4 @@ if ! ((CONTAINER)); then
         /lib/$METADATA_SERVICE_PATH | $SUDO tee -a /etc/$METADATA_SERVICE_PATH
 fi

-if [[ "$OS_RELEASE_ID" == "fedora" ]] && ((OS_RELEASE_VER>=33)); then
-    # Ref: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783509
-    echo "Disabling automatic /tmp (tmpfs) mount"
-    $SUDO systemctl mask tmp.mount
-fi
-
 finalize


@@ -1,26 +0,0 @@
#!/bin/bash
# This script is intended to be used from two places only:
# 1) When building the build-push VM image, to install the scripts as-is
# in a PR in order for CI testing to operate on them.
# 2) From the autoupdate.sh script, when $BUILDPUSHAUTOUPDATED is unset
# or '0'. This clones the latest repository to install (possibly)
# updated scripts.
#
# WARNING: Use under any other circumstances will probably screw things up.
if [[ -z "$BUILDPUSHAUTOUPDATED" ]];
then
echo "This script must only be run under Packer or autoupdate.sh"
exit 1
fi
source /etc/automation_environment
source "$AUTOMATION_LIB_PATH/common_lib.sh"
#shellcheck disable=SC2154
cd $(dirname "$SCRIPT_FILEPATH") || exit 1
# Must be installed into $AUTOMATION_LIB_PATH/../bin which is on $PATH
cp ./bin/* $AUTOMATION_LIB_PATH/../bin/
cp ./lib/* $AUTOMATION_LIB_PATH/
chmod +x $AUTOMATION_LIB_PATH/../bin/*


@@ -1,5 +0,0 @@
# DO NOT USE
This directory contains scripts/data used by the Cirrus-CI
`test_build-push` task. It is not intended to be used otherwise
and may cause harm.


@@ -1,175 +0,0 @@
#!/bin/bash
# This script is not intended for humans. It should be run by automation
# at the branch-level in automation for the skopeo, buildah, and podman
# repositories. It's purpose is to produce a multi-arch container image
# based on the contents of context subdirectory. At runtime, $PWD is assumed
# to be the root of the cloned git repository.
#
# The first argument to the script, should be the URL of the git repository
# in question. Though at this time, this is only used for labeling the
# resulting image.
#
# The second argument to this script is the relative path to the build context
# subdirectory. The basename of this subdirectory may indicates the
# image flavor (i.e. `upstream`, `testing`, or `stable`). Depending
# on this value, the image may be pushed to multiple container registries
# under slightly different rules (see the next option).
#
# If the basename of the context directory (second argument) does NOT reflect
# the image flavor, this name may be passed in as a third argument. Handling
# of this argument may be repository-specific, so check the actual code below
# to understand it's behavior.
set -eo pipefail
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
#shellcheck disable=SC1090,SC2154
source "$AUTOMATION_LIB_PATH/common_lib.sh"
#shellcheck source=../lib/autoupdate.sh
source "$AUTOMATION_LIB_PATH/autoupdate.sh"
else
echo "Expecting to find automation common library installed."
exit 1
fi
# Careful: Changing the error message below could break auto-update test.
if [[ "$#" -lt 2 ]]; then
#shellcheck disable=SC2145
die "Must be called with at least two arguments, got '$@'"
fi
if [[ -z $(type -P build-push.sh) ]]; then
die "It does not appear that build-push.sh is installed properly"
fi
if ! [[ -d "$PWD/.git" ]]; then
die "The current directory ($PWD) does not appear to be the root of a git repo."
fi
# Assume transitive debugging state for build-push.sh if set
if [[ "$(automation_version | cut -d '.' -f 1)" -ge 4 ]]; then
# Valid for version 4.0.0 and above only
export A_DEBUG
else
export DEBUG
fi
# Arches to build by default - may be overridden for testing
ARCHES="${ARCHES:-amd64,ppc64le,s390x,arm64}"
# First arg (REPO_URL) is the clone URL for repository for informational purposes
REPO_URL="$1"
REPO_NAME=$(basename "${REPO_URL%.git}")
# Second arg (CTX_SUB) is the context subdirectory relative to the clone path
CTX_SUB="$2"
# Historically, the basename of second arg set the image flavor(i.e. `upstream`,
# `testing`, or `stable`). For cases where this convention doesn't fit,
# it's possible to pass the flavor-name as the third argument. Both methods
# will populate a "FLAVOR" build-arg value.
if [[ "$#" -lt 3 ]]; then
FLAVOR_NAME=$(basename "$CTX_SUB")
elif [[ "$#" -ge 3 ]]; then
FLAVOR_NAME="$3" # An empty-value is valid
else
die "Expecting a non-empty third argument indicating the FLAVOR build-arg value."
fi
_REG="quay.io"
if [[ "$REPO_NAME" =~ testing ]]; then
_REG="example.com"
fi
REPO_FQIN="$_REG/$REPO_NAME/$FLAVOR_NAME"
req_env_vars REPO_URL REPO_NAME CTX_SUB FLAVOR_NAME
# Common library defines SCRIPT_FILENAME
# shellcheck disable=SC2154
dbg "$SCRIPT_FILENAME operating constants:
REPO_URL=$REPO_URL
REPO_NAME=$REPO_NAME
CTX_SUB=$CTX_SUB
FLAVOR_NAME=$FLAVOR_NAME
REPO_FQIN=$REPO_FQIN
"
# Set non-zero to avoid actually executing build-push, simply print
# the command-line that would have been executed
DRYRUN=${DRYRUN:-0}
_DRNOPUSH=""
if ((DRYRUN)); then
_DRNOPUSH="--nopush"
warn "Operating in dry-run mode with $_DRNOPUSH"
fi
### MAIN
declare -a build_args
if [[ -n "$FLAVOR_NAME" ]]; then
build_args=(--build-arg "FLAVOR=$FLAVOR_NAME")
fi
head_sha=$(git rev-parse HEAD)
dbg "HEAD is $head_sha"
# Labels to add to all images
# N/B: These won't show up in the manifest-list itself, only it's constituents.
lblargs="\
--label=org.opencontainers.image.source=$REPO_URL \
--label=org.opencontainers.image.revision=$head_sha \
--label=org.opencontainers.image.created=$(date -u --iso-8601=seconds)"
dbg "lblargs=$lblargs"
modcmdarg="tag_version.sh $FLAVOR_NAME"
# For stable images, the version number of the command is needed for tagging.
if [[ "$FLAVOR_NAME" == "stable" ]]; then
# only native arch is needed to extract the version
dbg "Building local-arch image to extract stable version number"
podman build -t $REPO_FQIN "${build_args[@]}" ./$CTX_SUB
case "$REPO_NAME" in
skopeo) version_cmd="--version" ;;
buildah) version_cmd="buildah --version" ;;
podman) version_cmd="podman --version" ;;
testing) version_cmd="cat FAKE_VERSION" ;;
*) die "Unknown/unsupported repo '$REPO_NAME'" ;;
esac
pvcmd="podman run -i --rm $REPO_FQIN $version_cmd"
dbg "Extracting version with command: $pvcmd"
version_output=$($pvcmd)
dbg "version output:
$version_output
"
img_cmd_version=$(awk -r -e '/^.+ version /{print $3}' <<<"$version_output")
dbg "parsed version: $img_cmd_version"
test -n "$img_cmd_version"
lblargs="$lblargs --label=org.opencontainers.image.version=$img_cmd_version"
# Prevent temporary build colliding with multi-arch manifest list (built next)
# but preserve image (by ID) for use as cache.
dbg "Un-tagging $REPO_FQIN"
podman untag $REPO_FQIN
# tag-version.sh expects this arg. when FLAVOR_NAME=stable
modcmdarg+=" $img_cmd_version"
# Stable images get pushed to 'containers' namespace as latest & version-tagged
build-push.sh \
$_DRNOPUSH \
--arches=$ARCHES \
--modcmd="$modcmdarg" \
$_REG/containers/$REPO_NAME \
./$CTX_SUB \
$lblargs \
"${build_args[@]}"
fi
# All images are pushed to quay.io/<reponame>, both
# latest and version-tagged (if available).
build-push.sh \
$_DRNOPUSH \
--arches=$ARCHES \
--modcmd="$modcmdarg" \
$REPO_FQIN \
./$CTX_SUB \
$lblargs \
"${build_args[@]}"


@@ -1,69 +0,0 @@
#!/bin/bash
# This script is not intended for humans. It should only be referenced
# as an argument to the build-push.sh `--modcmd` option. It's purpose
# is to ensure stable images are re-tagged with a verison-number
# cooresponding to the included tool's version.
set -eo pipefail
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
#shellcheck disable=SC1090,SC2154
source "$AUTOMATION_LIB_PATH/common_lib.sh"
else
echo "Unexpected operating environment"
exit 1
fi
# Vars defined by build-push.sh spec. for mod scripts
req_env_vars SCRIPT_FILENAME SCRIPT_FILEPATH RUNTIME PLATFORMOS FQIN CONTEXT \
PUSH ARCHES REGSERVER NAMESPACE IMGNAME MODCMD
if [[ "$#" -ge 1 ]]; then
FLAVOR_NAME="$1" # upstream, testing, or stable
fi
if [[ "$#" -ge 2 ]]; then
# Enforce all version-tags start with a 'v'
VERSION="v${2#v}" # output of $version_cmd
fi
if [[ -z "$FLAVOR_NAME" ]]; then
# Defined by common_lib.sh
# shellcheck disable=SC2154
warn "$SCRIPT_FILENAME passed empty flavor-name argument (optional)."
elif [[ -z "$VERSION" ]]; then
warn "$SCRIPT_FILENAME received empty version argument (req. for FLAVOR_NAME=stable)."
fi
# shellcheck disable=SC2154
dbg "Mod-command operating on $FQIN in '$FLAVOR_NAME' flavor"
if [[ "$FLAVOR_NAME" == "stable" ]]; then
# Stable images must all be tagged with a version number.
# Confirm this value is passed in by caller.
req_env_vars VERSION
VERSION=v${VERSION#v}
if egrep -q '^v[0-9]+\.[0-9]+\.[0-9]+'<<<"$VERSION"; then
msg "Found image command version '$VERSION'"
else
die "Encountered unexpected/non-conforming version '$VERSION'"
fi
# shellcheck disable=SC2154
$RUNTIME tag $FQIN:latest $FQIN:$VERSION
msg "Successfully tagged $FQIN:$VERSION"
# Tag as x.y to provide a consistent tag even for a future z+1
xy_ver=$(awk -F '.' '{print $1"."$2}'<<<"$VERSION")
$RUNTIME tag $FQIN:latest $FQIN:$xy_ver
msg "Successfully tagged $FQIN:$xy_ver"
# Tag as x to provide consistent tag even for a future y+1
x_ver=$(awk -F '.' '{print $1}'<<<"$xy_ver")
$RUNTIME tag $FQIN:latest $FQIN:$x_ver
msg "Successfully tagged $FQIN:$x_ver"
else
warn "$SCRIPT_FILENAME not version-tagging for '$FLAVOR_NAME' stage of '$FQIN'"
fi
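Aside (not part of the diff): the removed script's x.y and x tag derivation can be checked in isolation. A sketch with an example version value:

```shell
VERSION="v1.14.3"                                      # example value only
VERSION="v${VERSION#v}"                                # normalize: exactly one leading 'v'
xy_ver=$(awk -F '.' '{print $1"."$2}' <<<"$VERSION")   # x.y tag, stable across future z+1
x_ver=$(awk -F '.' '{print $1}' <<<"$xy_ver")          # x tag, stable across future y+1
echo "$xy_ver $x_ver"                                  # prints: v1.14 v1
```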


@@ -1,36 +0,0 @@
# This script is not intended for humans. It should only be sourced by
# main.sh. If BUILDPUSHAUTOUPDATED!=0 this it will be a no-op. Otherwise,
# it will download the latest version of the build-push scripts and re-exec
# main.sh. This allows the scripts to be updated without requiring new VM
# images to be composed and deployed.
#
# WARNING: Changes to this script _do_ require new VM images as auto-updating
# the auto-update script would be complex and hard to test.
# Must be exported - .install.sh checks this is set.
export BUILDPUSHAUTOUPDATED="${BUILDPUSHAUTOUPDATED:-0}"
if ! ((BUILDPUSHAUTOUPDATED)); then
msg "Auto-updating build-push operational scripts..."
#shellcheck disable=SC2154
GITTMP=$(mktemp -p '' -d "$MKTEMP_FORMAT")
trap "rm -rf $GITTMP" EXIT
msg "Obtaining latest version..."
git clone --quiet --depth=1 \
https://github.com/containers/automation_images.git \
"$GITTMP"
msg "Installing..."
cd $GITTMP/build-push || exit 1
bash ./.install.sh
# Important: Return to directory main.sh was started from
cd - || exit 1
rm -rf "$GITTMP"
#shellcheck disable=SC2145
msg "Re-executing main.sh $@..."
export BUILDPUSHAUTOUPDATED=1
exec main.sh "$@" # guaranteed on $PATH
fi
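Aside (not part of the diff): the removed autoupdate.sh relies on an env-var guard plus `exec` so a freshly installed copy of main.sh replaces the running one exactly once. A hypothetical standalone demo of that pattern (the real script downloads updated copies before re-exec'ing; that step is omitted here):

```shell
# Write a throw-away script using the same guard + re-exec idiom.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/bash
export BUILDPUSHAUTOUPDATED="${BUILDPUSHAUTOUPDATED:-0}"
if ! ((BUILDPUSHAUTOUPDATED)); then
    # ...download/install the newest scripts here...
    export BUILDPUSHAUTOUPDATED=1   # guard prevents an exec loop
    exec "$0" "$@"                  # re-run the (now updated) script
fi
echo "updated run: $*"
EOF
chmod +x "$demo"
out=$("$demo" build push)   # first pass re-execs itself; second pass prints
echo "$out"
rm -f "$demo"
```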


@@ -1,200 +0,0 @@
# DO NOT USE - This script is intended to be called by the Cirrus-CI
# `test_build-push` task. It is not intended to be used otherwise
# and may cause harm. It's purpose is to confirm the 'main.sh' script
# behaves in an expected way, given a local test repository as input.
set -eo pipefail
SCRIPT_DIRPATH=$(dirname $(realpath "${BASH_SOURCE[0]}"))
source $SCRIPT_DIRPATH/../lib.sh
req_env_vars CIRRUS_CI
# No need to test if image wasn't built
if TARGET_NAME=build-push skip_on_pr_label; then exit 0; fi
# Architectures to test with (golang standard names)
TESTARCHES="amd64 arm64"
# main.sh is sensitive to this value
ARCHES=$(tr " " ","<<<"$TESTARCHES")
export ARCHES
# Contrived "version" for testing purposes
FAKE_VER_X=$RANDOM
FAKE_VER_Y=$RANDOM
FAKE_VER_Z=$RANDOM
FAKE_VERSION="$FAKE_VER_X.$FAKE_VER_Y.$FAKE_VER_Z"
# Contrived source repository for testing
SRC_TMP=$(mktemp -p '' -d tmp-build-push-test-XXXX)
# Do not change, main.sh is sensitive to the 'testing' name
TEST_FQIN=example.com/testing/stable
# Stable build should result in manifest list tagged this
TEST_FQIN2=example.com/containers/testing
# Don't allow main.sh or tag_version.sh to auto-update at runtime
export BUILDPUSHAUTOUPDATED=1
trap "rm -rf $SRC_TMP" EXIT
# main.sh expects $PWD to be a git repository.
msg "
##### Constructing local test repository #####"
cd $SRC_TMP
showrun git init -b main testing
cd testing
git config --local user.name "Testy McTestface"
git config --local user.email "test@example.com"
git config --local advice.detachedHead "false"
git config --local commit.gpgsign "false"
# The following paths match the style of sub-dir in the actual
# skopeo/buildah/podman repositories. Only the 'stable' flavor
# is tested here, since it involves the most complex workflow.
mkdir -vp "contrib/testimage/stable"
cd "contrib/testimage/stable"
echo "build-push-test version v$FAKE_VERSION" | tee "FAKE_VERSION"
cat <<EOF | tee "Containerfile"
FROM registry.fedoraproject.org/fedora:latest
ARG FLAVOR
ADD /FAKE_VERSION /
RUN echo "FLAVOUR=\$FLAVOR" > /FLAVOUR
EOF
# As an additional test, build and check images when pasing
# the 'stable' flavor name as a command-line arg instead
# of using the subdirectory dirname (old method).
cd $SRC_TMP/testing/contrib/testimage
cp stable/* ./
cd $SRC_TMP/testing
# The images will have the repo & commit ID set as labels
git add --all
git commit -m 'test repo initial commit'
TEST_REVISION=$(git rev-parse HEAD)
# Given the flavor-name as the first argument, verify built image
# expectations. For 'stable' image, verify that main.sh will properly
# version-tagged both FQINs. For other flavors, verify expected labels
# on the `latest` tagged FQINs.
verify_built_images() {
local _fqin _arch xy_ver x_ver img_ver img_src img_rev _fltr
local _test_tag expected_flavor _test_fqins
expected_flavor="$1"
msg "
##### Testing execution of '$expected_flavor' images for arches $TESTARCHES #####"
podman --version
req_env_vars TESTARCHES FAKE_VERSION TEST_FQIN TEST_FQIN2
declare -a _test_fqins
_test_fqins=("${TEST_FQIN%stable}$expected_flavor")
if [[ "$expected_flavor" == "stable" ]]; then
_test_fqins+=("$TEST_FQIN2")
test_tag="v$FAKE_VERSION"
xy_ver="v$FAKE_VER_X.$FAKE_VER_Y"
x_ver="v$FAKE_VER_X"
else
test_tag="latest"
xy_ver="latest"
x_ver="latest"
fi
for _fqin in "${_test_fqins[@]}"; do
for _arch in $TESTARCHES; do
msg "Testing container can execute '/bin/true'"
showrun podman run -i --arch=$_arch --rm "$_fqin:$test_tag" /bin/true
msg "Testing container FLAVOR build-arg passed correctly"
showrun podman run -i --arch=$_arch --rm "$_fqin:$test_tag" \
cat /FLAVOUR | tee /dev/stderr | fgrep -xq "FLAVOUR=$expected_flavor"
if [[ "$expected_flavor" == "stable" ]]; then
msg "Testing tag '$xy_ver'"
if ! showrun podman manifest exists $_fqin:$xy_ver; then
die "Failed to find manifest-list tagged '$xy_ver'"
fi
msg "Testing tag '$x_ver'"
if ! showrun podman manifest exists $_fqin:$x_ver; then
die "Failed to find manifest-list tagged '$x_ver'"
fi
fi
done
if [[ "$expected_flavor" == "stable" ]]; then
msg "Testing image $_fqin:$test_tag version label"
_fltr='.[].Config.Labels."org.opencontainers.image.version"'
img_ver=$(podman inspect $_fqin:$test_tag | jq -r -e "$_fltr")
showrun test "$img_ver" == "v$FAKE_VERSION"
fi
msg "Testing image $_fqin:$test_tag source label"
_fltr='.[].Config.Labels."org.opencontainers.image.source"'
img_src=$(podman inspect $_fqin:$test_tag | jq -r -e "$_fltr")
showrun test "$img_src" == "git://testing"
msg "Testing image $_fqin:$test_tag source revision"
_fltr='.[].Config.Labels."org.opencontainers.image.revision"'
img_rev=$(podman inspect $_fqin:$test_tag | jq -r -e "$_fltr")
showrun test "$img_rev" == "$TEST_REVISION"
done
}
remove_built_images() {
buildah --version
for _fqin in $TEST_FQIN $TEST_FQIN2; do
for tag in latest v$FAKE_VERSION v$FAKE_VER_X.$FAKE_VER_Y v$FAKE_VER_X; do
# Don't care if this fails
podman manifest rm $_fqin:$tag || true
done
done
}
msg "
##### Testing build-push subdir-flavor run of '$TEST_FQIN' & '$TEST_FQIN2' #####"
cd $SRC_TMP/testing
export DRYRUN=1 # Force main.sh not to push anything
req_env_vars ARCHES DRYRUN
# main.sh is sensitive to 'testing' value.
# Also confirms main.sh is on $PATH
env A_DEBUG=1 main.sh git://testing contrib/testimage/stable
verify_built_images stable
msg "
##### Testing build-push flavour-arg run for '$TEST_FQIN' & '$TEST_FQIN2' #####"
remove_built_images
env A_DEBUG=1 main.sh git://testing contrib/testimage foobarbaz
verify_built_images foobarbaz
# This script verifies it's only/ever running inside CI. Use a fake
# main.sh to verify it auto-updates itself w/o actually performing
# a build. N/B: This test must be run last, in a throw-away environment,
# it _WILL_ modify on-disk contents!
msg "
##### Testing auto-update capability #####"
cd $SRC_TMP
#shellcheck disable=SC2154
cat >main.sh<< EOF
#!/bin/bash
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
source "$AUTOMATION_LIB_PATH/common_lib.sh"
source "$AUTOMATION_LIB_PATH/autoupdate.sh"
EOF
chmod +x main.sh
# Back to where we were
cd -
# Expect the real main.sh to bark one of two error messages
# and exit non-zero.
EXP_RX1="Must.be.called.with.at.least.two.arguments"
EXP_RX2="does.not.appear.to.be.the.root.of.a.git.repo"
if output=$(env --ignore-environment \
BUILDPUSHAUTOUPDATED=0 \
AUTOMATION_LIB_PATH=$AUTOMATION_LIB_PATH \
$SRC_TMP/main.sh 2>&1); then
die "Fail. Expected main.sh to exit non-zero"
else
if [[ "$output" =~ $EXP_RX1 ]] || [[ "$output" =~ $EXP_RX2 ]]; then
echo "PASS"
else
die "Fail. Expecting match to '$EXP_RX1' or '$EXP_RX2', got:
$output"
fi
fi
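Aside (not part of the diff): the expected-error patterns above (`EXP_RX1`, `EXP_RX2`) use `.` where the message contains spaces. A minimal demonstration of why that works with bash's unquoted `=~` operand:

```shell
# '.' matches any single character, so each dot also matches a literal space.
# This keeps the pattern variable free of whitespace and quoting pitfalls.
EXP_RX="Must.be.called.with.at.least.two.arguments"   # same style as the test above
output="Error: Must be called with at least two arguments, got ''"
if [[ "$output" =~ $EXP_RX ]]; then
    echo "matched"
fi
```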


@@ -27,8 +27,10 @@ INSTALL_PACKAGES=(\
     git
     jq
     podman
+    python3-pip
     qemu-user-static
     skopeo
+    unzip
 )

 echo "Installing general build/test dependencies"
@@ -37,11 +39,7 @@ bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
 # It was observed in F33, dnf install doesn't always get you the latest/greatest
 lilto $SUDO dnf update -y

-# Re-install with the 'build-push' component
-install_automation_tooling build-push
-
-# Install main scripts into directory on $PATH
-cd $REPO_DIRPATH/build-push
-set -x
-# Do not auto-update to allow testing inside a PR
-$SUDO env BUILDPUSHAUTOUPDATED=1 bash ./.install.sh
+# Re-install would append to this, making a mess.
+$SUDO rm -f /etc/automation_environment
+# Re-install the latest version with the 'build-push' component
+install_automation_tooling latest build-push


@@ -19,6 +19,7 @@ variables:  # Empty value means it must be passed in on command-line
   # See Makefile for definitions
   FEDORA_RELEASE: "{{env `FEDORA_RELEASE`}}"
   PRIOR_FEDORA_RELEASE: "{{env `PRIOR_FEDORA_RELEASE`}}"
+  RAWHIDE_RELEASE: "{{env `RAWHIDE_RELEASE`}}"
   DEBIAN_RELEASE: "{{env `DEBIAN_RELEASE`}}"

 builders:
@@ -48,6 +49,15 @@ builders:
     # Permit running nested VM's to support specialized testing
     image_licenses: ["projects/vm-options/global/licenses/enable-vmx"]

+  - <<: *gce_hosted_image
+    name: 'rawhide'
+    # The latest fedora base image will be "upgraded" to rawhide
+    source_image: 'fedora-b{{user `IMG_SFX`}}'
+    labels:
+      <<: *gce_labels
+      src: 'fedora-b{{user `IMG_SFX` }}'
+      release: 'rawhide-{{user `RAWHIDE_RELEASE`}}'
+
   - <<: *gce_hosted_image
     name: 'fedora'
     labels: &fedora_gce_labels
@@ -65,9 +75,6 @@ builders:
     source_image_family: 'fedora-base'
     labels: *fedora_gce_labels

-  - <<: *aux_fed_img
-    name: 'fedora-podman-py'
-
   - <<: *aux_fed_img
     name: 'fedora-netavark'
@@ -173,23 +180,30 @@ provisioners:
   - type: 'shell'
     inline:
       - 'set -e'
-      - 'mkdir -p /tmp/automation_images'
+      - 'mkdir -p /var/tmp/automation_images'

   - type: 'file'
     source: '{{ pwd }}/'
-    destination: "/tmp/automation_images"
+    destination: "/var/tmp/automation_images"

+  - only: ['rawhide']
+    type: 'shell'
+    expect_disconnect: true  # VM will be rebooted at end of script
+    inline:
+      - 'set -e'
+      - '/bin/bash /var/tmp/automation_images/cache_images/rawhide_setup.sh'
+
   - except: ['debian']
     type: 'shell'
     inline:
       - 'set -e'
-      - '/bin/bash /tmp/automation_images/cache_images/fedora_setup.sh'
+      - '/bin/bash /var/tmp/automation_images/cache_images/fedora_setup.sh'

   - only: ['debian']
     type: 'shell'
     inline:
       - 'set -e'
-      - '/bin/bash /tmp/automation_images/cache_images/debian_setup.sh'
+      - 'env DEBIAN_FRONTEND=noninteractive /bin/bash /var/tmp/automation_images/cache_images/debian_setup.sh'

 post-processors:
   # This is critical for human-interaction. Copntents will be used


@@ -14,12 +14,9 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
 # shellcheck source=./lib.sh
 source "$REPO_DIRPATH/lib.sh"

-echo "Updating/Installing repos and packages for $OS_REL_VER"
-
-lilto ooe.sh $SUDO apt-get -qq -y update
-bigto ooe.sh $SUDO apt-get -qq -y upgrade
-
-echo "Configuring additional package repositories"
+msg "Updating/Installing repos and packages for $OS_REL_VER"
+lilto ooe.sh $SUDO apt-get -q -y update
+bigto ooe.sh $SUDO apt-get -q -y upgrade

 INSTALL_PACKAGES=(\
     apache2-utils
@@ -42,12 +39,12 @@ INSTALL_PACKAGES=(\
     crun
     dnsmasq
     e2fslibs-dev
-    emacs-nox
     file
     fuse3
+    fuse-overlayfs
     gcc
     gettext
-    git-daemon-run
+    git
     gnupg2
     go-md2man
     golang
@@ -62,7 +59,6 @@ INSTALL_PACKAGES=(\
     libdevmapper-dev
     libdevmapper1.02.1
     libfuse-dev
-    libfuse2
     libfuse3-dev
     libglib2.0-dev
     libgpgme11-dev
@@ -70,6 +66,7 @@ INSTALL_PACKAGES=(\
     libnet1
     libnet1-dev
     libnl-3-dev
+    libostree-dev
     libprotobuf-c-dev
     libprotobuf-dev
     libseccomp-dev
@@ -84,8 +81,8 @@ INSTALL_PACKAGES=(\
     ncat
     openssl
     parallel
-    pkg-config
     passt
+    pkg-config
     podman
     protobuf-c-compiler
     protobuf-compiler
@@ -96,7 +93,8 @@ INSTALL_PACKAGES=(\
     python3-pip
     python3-protobuf
     python3-psutil
-    python3-pytoml
+    python3-toml
+    python3-tomli
     python3-requests
     python3-setuptools
     rsync
@@ -105,22 +103,31 @@ INSTALL_PACKAGES=(\
     skopeo
     slirp4netns
     socat
+    libsqlite3-0
+    libsqlite3-dev
     systemd-container
     sudo
     time
     unzip
     vim
     wget
-    xfsprogs
     xz-utils
     zip
     zlib1g-dev
     zstd
 )

-# Necessary to update cache of newly added repos
-lilto $SUDO apt-get -q -y update
-
-echo "Installing general build/testing dependencies"
+# bpftrace is only needed on the host as containers cannot run ebpf
+# programs anyway and it is very big so we should not bloat the container
+# images unnecessarily.
+if ! ((CONTAINER)); then
+    INSTALL_PACKAGES+=( \
+        bpftrace
+    )
+fi
+
+msg "Installing general build/testing dependencies"
 bigto $SUDO apt-get -q -y install "${INSTALL_PACKAGES[@]}"

 # The nc installed by default is missing many required options
@@ -145,10 +152,9 @@ curl --fail --silent --location \
     $SUDO tee /etc/apt/trusted.gpg.d/docker_com.gpg &> /dev/null

 # Buildah CI does conformance testing vs the most recent Docker version.
-# However, there is no Docker release for SID, so just use latest stable
-# release for Docker, whatever debian release that cooresponds to.
-# Ref: https://wiki.debian.org/DebianReleases
-docker_debian_release=bullseye
+# FIXME: As of 7-2023, there is no 'trixie' dist for docker. Fix the next lines once that changes.
+#docker_debian_release=$(source /etc/os-release; echo "$VERSION_CODENAME")
+docker_debian_release="bookworm"
 echo "deb https://download.docker.com/linux/debian $docker_debian_release stable" | \
     ooe.sh $SUDO tee /etc/apt/sources.list.d/docker.list &> /dev/null
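Aside (not part of the diff): the commented-out `$(source /etc/os-release; ...)` line above uses a subshell so the sourced variables never pollute the caller. A sketch with a stand-in os-release file (path and values illustrative only):

```shell
# Create a stand-in os-release file.
osr=$(mktemp)
printf 'ID=debian\nVERSION_CODENAME=trixie\n' > "$osr"
# Sourcing inside $() happens in a subshell, so VERSION_CODENAME
# is readable here but never leaks into the caller's environment.
codename=$(source "$osr"; echo "$VERSION_CODENAME")
echo "$codename"   # prints: trixie
rm -f "$osr"
```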


@ -17,14 +17,44 @@ fi
# shellcheck source=./lib.sh # shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh" source "$REPO_DIRPATH/lib.sh"
# Generate en_US.UTF-8 locale as this is required for a podman test (https://github.com/containers/podman/pull/19635).
$SUDO sed -i '/en_US.UTF-8/s/^#//g' /etc/locale.gen
$SUDO locale-gen
# Debian doesn't mount tmpfs on /tmp as default but we want this to speed tests up so
# they don't have to write to persistent disk.
# https://github.com/containers/podman/pull/22533
$SUDO mkdir -p /etc/systemd/system/local-fs.target.wants/
cat <<EOF | $SUDO tee /etc/systemd/system/tmp.mount
[Unit]
Description=Temporary Directory /tmp
ConditionPathIsSymbolicLink=!/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=size=75%%,mode=1777
EOF
# enable the unit by default
$SUDO ln -s ../tmp.mount /etc/systemd/system/local-fs.target.wants/tmp.mount
req_env_vars PACKER_BUILD_NAME
bash $SCRIPT_DIRPATH/debian_packaging.sh
# dnsmasq is set to bind 0.0.0.0:53, that will conflict with our dns tests.
# We don't need a local resolver.
$SUDO systemctl disable dnsmasq.service
$SUDO systemctl mask dnsmasq.service
if ! ((CONTAINER)); then
warn "Making Debian kernel enable cgroup swap accounting"
-warn "Forcing CgroupsV1"
-SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=0"/'
+SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/'
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub.d/*
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub
ooe.sh $SUDO update-grub
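The GRUB rewrite can be sanity-checked in isolation by piping a sample `GRUB_CMDLINE_LINUX` line through the same extended-regex substitution; this is only a sketch, and the sample value `"quiet"` is made up for the demo:

```shell
# Sketch: exercise the same sed expression on a sample line instead of
# editing the real /etc/default/grub in place.
SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/'
result=$(echo 'GRUB_CMDLINE_LINUX="quiet"' | sed -re "$SEDCMD")
echo "$result"
# → GRUB_CMDLINE_LINUX="quiet cgroup_enable=memory swapaccount=1"
```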
@ -32,6 +62,10 @@ fi
nm_ignore_cni
if ! ((CONTAINER)); then
initialize_local_cache_registry
fi
finalize
echo "SUCCESS!"


@ -88,8 +88,9 @@ if [[ $(uname -m) == "aarch64" ]]; then
$SUDO env PATH=$PATH CARGO_HOME=$CARGO_HOME rustup target add aarch64-unknown-linux-gnu
fi
-msg "Install mandown to generate man pages"
-$SUDO env PATH=$PATH CARGO_HOME=$CARGO_HOME cargo install mandown
+msg "Install tool to generate man pages"
+$SUDO go install github.com/cpuguy83/go-md2man/v2@latest
+$SUDO install /root/go/bin/go-md2man /usr/local/bin/
# Downstream users of this image are specifically testing netavark & aardvark-dns
# code changes. We want to start with using the RPMs because they deal with any


@ -1,98 +0,0 @@
#!/bin/bash
# This script is called from fedora_setup.sh and various Dockerfiles.
# It's not intended to be used outside of those contexts. It assumes the lib.sh
# library has already been sourced, and that all "ground-up" package-related activity
# needs to be done, including repository setup and initial update.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# shellcheck disable=SC2154
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
msg "Updating/Installing repos and packages for $OS_REL_VER"
bigto ooe.sh $SUDO dnf update -y
INSTALL_PACKAGES=(\
bash-completion
bridge-utils
buildah
bzip2
curl
findutils
fuse3
gcc
git
git-daemon
glib2-devel
glibc-devel
hostname
httpd-tools
iproute
iptables
jq
libtool
lsof
make
nmap-ncat
openssl
openssl-devel
pkgconfig
podman
policycoreutils
protobuf
protobuf-devel
python-pip-wheel
python-setuptools-wheel
python-toml
python-wheel-wheel
python3-PyYAML
python3-coverage
python3-dateutil
python3-docker
python3-fixtures
python3-libselinux
python3-libsemanage
python3-libvirt
python3-pip
python3-psutil
python3-pylint
python3-pytest
python3-pyxdg
python3-requests
python3-requests-mock
python3-virtualenv
python3.6
python3.8
python3.9
redhat-rpm-config
rsync
sed
skopeo
socat
tar
time
tox
unzip
vim
wget
xz
zip
zstd
)
echo "Installing general build/test dependencies"
bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
# It was observed in F33, dnf install doesn't always get you the latest/greatest
lilto $SUDO dnf update -y


@ -18,21 +18,17 @@ source "$REPO_DIRPATH/lib.sh"
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
-# Do not enable updates-testing on the 'prior' Fedora release images
+# Only enable updates-testing on all 'latest' Fedora images (except rawhide)
# as a matter of general policy. Historically there have been many
# problems with non-uniform behavior when both supported Fedora releases
# receive container-related dependency updates at the same time. Since
# the 'prior' release has the shortest support lifetime, keep its behavior
# stable by only using released updates.
# shellcheck disable=SC2154
-if [[ ! "$PACKER_BUILD_NAME" =~ prior ]]; then
+if [[ "$PACKER_BUILD_NAME" == "fedora" ]] && [[ ! "$PACKER_BUILD_NAME" =~ "prior" ]]; then
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
-lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
-# Could be on prior-fedora also, but copr isn't installed by default
-warn "Enabling sbrivio/passt repo. for passt packages"
-$SUDO dnf copr enable -y sbrivio/passt
+lilto ooe.sh $SUDO dnf config-manager setopt updates-testing.enabled=1
else
warn "NOT enabling updates-testing repository for $PACKER_BUILD_NAME"
fi
@ -60,7 +56,7 @@ INSTALL_PACKAGES=(\
curl
device-mapper-devel
dnsmasq
-docker-compose
+docker-distribution
e2fsprogs-devel
emacs-nox
fakeroot
@ -69,10 +65,12 @@ INSTALL_PACKAGES=(\
fuse3
fuse3-devel
gcc
gh
git
git-daemon
glib2-devel
glibc-devel
glibc-langpack-en
glibc-static
gnupg
go-md2man
@ -85,6 +83,7 @@ INSTALL_PACKAGES=(\
iproute
iptables
jq
koji
krb5-workstation
libassuan
libassuan-devel
@ -104,7 +103,7 @@ INSTALL_PACKAGES=(\
libxslt-devel
lsof
make
-mlocate
+man-db
msitools
nfs-utils
nmap-ncat
@ -114,42 +113,31 @@ INSTALL_PACKAGES=(\
pandoc
parallel
passt
perl-Clone
perl-FindBin
pigz
pkgconfig
podman
podman-remote
pre-commit
procps-ng
protobuf
protobuf-c
protobuf-c-devel
protobuf-devel
-python-pip-wheel
+python3-fedora-distro-aliases
-python-setuptools-wheel
+python3-koji-cli-plugins
python-toml
python-wheel-wheel
python2
python3-PyYAML
python3-coverage
python3-dateutil
python3-devel
python3-docker
python3-fixtures
python3-libselinux
python3-libsemanage
python3-libvirt
python3-pip
python3-psutil
python3-pylint
python3-pyxdg
python3-requests
python3-requests-mock
redhat-rpm-config
rpcbind
rsync
runc
sed
ShellCheck
skopeo
slirp4netns
socat
sqlite-libs
sqlite-devel
squashfs-tools
tar
time
@ -163,44 +151,77 @@ INSTALL_PACKAGES=(\
zstd
)
-# Test with CNI in Fedora N-1
-EXARG=""
-if [[ "$PACKER_BUILD_NAME" =~ prior ]]; then
-EXARG="--exclude=netavark --exclude=aardvark-dns"
-fi
+# Rawhide images don't need these packages
+if [[ "$PACKER_BUILD_NAME" =~ fedora ]]; then
+INSTALL_PACKAGES+=( \
+python-pip-wheel
+python-setuptools-wheel
+python-toml
+python-wheel-wheel
+python3-PyYAML
+python3-coverage
+python3-dateutil
+python3-devel
+python3-docker
+python3-fixtures
+python3-libselinux
+python3-libsemanage
+python3-libvirt
+python3-pip
+python3-psutil
+python3-pylint
+python3-pyxdg
+python3-requests
+python3-requests-mock
+)
+else
+# podman-sequoia is only available in Rawhide
+timebomb 20251101 "Also install the package in future Fedora releases, and enable Sequoia support in users of the images."
+INSTALL_PACKAGES+=( \
+podman-sequoia
+)
+fi
# Workaround: Around the time of this commit, the `criu` package
# was found to be missing a recommends-dependency on criu-libs.
# Until a fixed rpm lands in the Fedora repositories, manually
# include it here. This workaround should be removed once the
# package is corrected (likely > 3.17.1-3).
INSTALL_PACKAGES+=(criu-libs)
# When installing during a container-build, having this present
# will seriously screw up future dnf operations in very non-obvious ways.
# bpftrace is only needed on the host as containers cannot run ebpf
# programs anyway and it is very big so we should not bloat the container
# images unnecessarily.
if ! ((CONTAINER)); then
INSTALL_PACKAGES+=( \
bpftrace
composefs
container-selinux
fuse-overlayfs
libguestfs-tools
selinux-policy-devel
policycoreutils
)
# Extra packages needed by podman-machine-os
INSTALL_PACKAGES+=( \
podman-machine
osbuild
osbuild-tools
osbuild-ostree
xfsprogs
e2fsprogs
)
fi
# Download these package files, but don't install them; Any tests
# wishing to, may install them using their native tools at runtime.
DOWNLOAD_PACKAGES=(\
oci-umount
parallel
podman-docker
-podman-plugins
+python3-devel
python3-pip
python3-pytest
python3-virtualenv
)
msg "Installing general build/test dependencies"
-bigto $SUDO dnf install -y $EXARG "${INSTALL_PACKAGES[@]}"
+bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
msg "Downloading packages for optional installation at runtime, as needed."
$SUDO mkdir -p "$PACKAGE_DOWNLOAD_DIR"
@ -214,6 +235,6 @@ $SUDO curl --fail --silent --location -O \
https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
cd -
# Occasionally following an install, there are more updates available.
-# It was observed in F33, dnf install doesn't always get you the latest/greatest
+# This may be due to activation of suggested/recommended dependency resolution.
lilto $SUDO dnf update -y


@ -17,6 +17,12 @@ fi
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# Make /tmp tmpfs bigger, by default we only get 50%. Bump it to 75% so the tests have more storage.
# Do not use 100% so we do not run out of memory for the process itself if tests start leaking big
# files on /tmp.
$SUDO mkdir -p /etc/systemd/system/tmp.mount.d
echo -e "[Mount]\nOptions=size=75%%,mode=1777\n" | $SUDO tee /etc/systemd/system/tmp.mount.d/override.conf
# packer and/or a --build-arg define this envar value uniformly
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
@ -24,17 +30,10 @@ req_env_vars PACKER_BUILD_NAME
# shellcheck disable=SC2154
if [[ "$PACKER_BUILD_NAME" =~ "netavark" ]]; then
bash $SCRIPT_DIRPATH/fedora-netavark_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "podman-py" ]]; then
bash $SCRIPT_DIRPATH/fedora-podman-py_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "build-push" ]]; then
bash $SCRIPT_DIRPATH/build-push_packaging.sh
# Registers qemu emulation for non-native execution
$SUDO systemctl enable systemd-binfmt
for arch in amd64 s390x ppc64le arm64; do
msg "Caching latest $arch fedora image..."
$SUDO podman pull --quiet --arch=$arch \
registry.fedoraproject.org/fedora:$OS_RELEASE_VER
done
else
bash $SCRIPT_DIRPATH/fedora_packaging.sh
fi
@ -48,6 +47,8 @@ if ! ((CONTAINER)); then
else
msg "Enabling cgroup management from containers"
ooe.sh $SUDO setsebool -P container_manage_cgroup true
initialize_local_cache_registry
fi
fi

cache_images/local-cache-registry Executable file

@ -0,0 +1,345 @@
#! /bin/bash
#
# local-cache-registry - set up and manage a local registry with cached images
#
# Used in containers CI, to reduce exposure to registry flakes.
#
# We start with the docker registry image. Pull it, extract the registry
# binary and config, tweak the config, and create a systemd unit file that
# will start the registry at boot.
#
# We also populate that registry with a (hardcoded) list of container
# images used in CI tests. That way a CI VM comes up already ready,
# and CI tests do not need to do remote pulls. The image list is
# hardcoded right here in this script file, in the automation_images
# repo. See below for reasons.
#
ME=$(basename $0)
###############################################################################
# BEGIN defaults
# FQIN of registry image. From this image, we extract the registry to run.
PODMAN_REGISTRY_IMAGE=quay.io/libpod/registry:2.8.2
# Fixed path to registry setup. This is the directory used by the registry.
PODMAN_REGISTRY_WORKDIR=/var/cache/local-registry
# Fixed port on which registry listens. This is hardcoded and must be
# shared knowledge among all CI repos that use this registry.
REGISTRY_PORT=60333
# Podman binary to run
PODMAN=${PODMAN:-/usr/bin/podman}
# Temporary directories for podman, so we don't clobber any system files.
# Wipe them upon script exit.
PODMAN_TMPROOT=$(mktemp -d --tmpdir $ME.XXXXXXX)
trap 'status=$?; rm -rf $PODMAN_TMPROOT && exit $status' 0
# Images to cache. Default prefix is "quay.io/libpod/"
#
# It seems evil to hardcode this list as part of the script itself
# instead of a separate file or resource but there's a good reason:
# keeping code and data together in one place makes it possible for
# a podman (and some day other repo?) developer to run a single
# command, contrib/cirrus/get-local-registry-script, which will
# fetch this script and allow the dev to run it to start a local
# registry on their system.
#
# As of 2024-07-02 this list includes podman and buildah images
#
# FIXME: periodically run this to look for no-longer-needed images:
#
# for i in $(sed -ne '/IMAGELIST=/,/^[^ ]/p' <cache_images/local-cache-registry | sed -ne 's/^ *//p');do grep -q -R $i ../podman/test ../buildah/tests || echo "unused $i";done
#
declare -a IMAGELIST=(
alpine:3.10.2
alpine:latest
alpine_healthcheck:latest
alpine_nginx:latest
alpine@sha256:634a8f35b5f16dcf4aaa0822adc0b1964bb786fca12f6831de8ddc45e5986a00
alpine@sha256:f270dcd11e64b85919c3bab66886e59d677cf657528ac0e4805d3c71e458e525
alpine@sha256:fa93b01658e3a5a1686dc3ae55f170d8de487006fb53a28efcd12ab0710a2e5f
autoupdatebroken:latest
badhealthcheck:latest
busybox:1.30.1
busybox:glibc
busybox:latest
busybox:musl
cirros:latest
fedora/python-311:latest
healthcheck:config-only
k8s-pause:3.5
podman_python:latest
redis:alpine
registry:2.8.2
registry:volume_omitted
systemd-image:20240124
testartifact:20250206-single
testartifact:20250206-multi
testartifact:20250206-multi-no-title
testartifact:20250206-evil
testdigest_v2s2
testdigest_v2s2:20200210
testimage:00000000
testimage:00000004
testimage:20221018
testimage:20241011
testimage:multiimage
testimage@sha256:1385ce282f3a959d0d6baf45636efe686c1e14c3e7240eb31907436f7bc531fa
testdigest_v2s2:20200210
testdigest_v2s2@sha256:755f4d90b3716e2bf57060d249e2cd61c9ac089b1233465c5c2cb2d7ee550fdb
volume-plugin-test-img:20220623
podman/stable:v4.3.1
podman/stable:v4.8.0
skopeo/stable:latest
ubuntu:latest
)
# END defaults
###############################################################################
# BEGIN help messages
missing=" argument is missing; see $ME -h for details"
usage="Usage: $ME [options] [initialize | cache IMAGE...]
$ME manages a local instance of a container registry.
When called to initialize a registry, $ME will pull
this image into a local temporary directory:
$PODMAN_REGISTRY_IMAGE
...then extract the registry binary and config, tweak the config,
start the registry, and populate it with a list of images needed by tests:
\$ $ME initialize
To fetch individual images into the cache:
\$ $ME cache libpod/testimage:21120101
Override the default image and/or port with:
-i IMAGE registry image to pull (default: $PODMAN_REGISTRY_IMAGE)
-P PORT port to bind to (on 127.0.0.1) (default: $REGISTRY_PORT)
Other options:
-h display usage message
"
die () {
echo "$ME: $*" >&2
exit 1
}
# END help messages
###############################################################################
# BEGIN option processing
while getopts "i:P:hv" opt; do
case "$opt" in
i) PODMAN_REGISTRY_IMAGE=$OPTARG ;;
P) REGISTRY_PORT=$OPTARG ;;
h) echo "$usage"; exit 0;;
v) verbose=1 ;;
\?) echo "Run '$ME -h' for help" >&2; exit 1;;
esac
done
shift $((OPTIND-1))
# END option processing
###############################################################################
# BEGIN helper functions
function podman() {
${PODMAN} --root ${PODMAN_TMPROOT}/root \
--runroot ${PODMAN_TMPROOT}/runroot \
--tmpdir ${PODMAN_TMPROOT}/tmp \
"$@"
}
###############
# must_pass # Run a command quietly; abort with error on failure
###############
function must_pass() {
local log=${PODMAN_TMPROOT}/log
"$@" &> $log
if [ $? -ne 0 ]; then
echo "$ME: Command failed: $*" >&2
cat $log >&2
# If we ever get here, it's a given that the registry is not running.
exit 1
fi
}
###################
# wait_for_port # Returns once port is available on localhost
###################
function wait_for_port() {
local port=$1 # Numeric port
local host=127.0.0.1
local _timeout=5
# Wait
while [ $_timeout -gt 0 ]; do
{ exec {unused_fd}<> /dev/tcp/$host/$port; } &>/dev/null && return
sleep 1
_timeout=$(( $_timeout - 1 ))
done
die "Timed out waiting for port $port"
}
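The probe inside `wait_for_port` relies on bash's built-in `/dev/tcp/HOST/PORT` redirection, which succeeds only when something is listening. A minimal standalone sketch of the same idea, pointed at a short-lived local listener (the `python3 -m http.server` process and port 60334 are arbitrary choices for the demo, not part of the real script):

```shell
#!/bin/bash
# Start a throwaway listener to probe against.
python3 -m http.server 60334 --bind 127.0.0.1 &>/dev/null &
srv=$!

port_open() {
    local port=$1
    # Opening a read/write fd on /dev/tcp succeeds only if something listens
    { exec {fd}<> /dev/tcp/127.0.0.1/$port; } 2>/dev/null || return 1
    exec {fd}>&-    # close the probe fd again
}

reachable=0
for _ in $(seq 1 10); do
    port_open 60334 && { reachable=1; break; }
    sleep 1
done
kill $srv 2>/dev/null
echo "reachable=$reachable"
```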
#################
# cache_image # (singular) fetch one remote image
#################
function cache_image() {
local img=$1
# Almost all our images are under libpod; no need to repeat that part
if ! expr "$img" : "^\(.*\)/" >/dev/null; then
img="libpod/$img"
fi
# Almost all our images are from quay.io, but "domain.tld" prefix overrides
registry=$(expr "$img" : "^\([^/.]\+\.[^/]\+\)/" || true)
if [[ -n "$registry" ]]; then
img=$(expr "$img" : "[^/]\+/\(.*\)")
else
registry=quay.io
fi
echo
echo "...caching: $registry / $img"
# FIXME: inspect, and only pull if missing?
for retry in 1 2 3 0;do
skopeo --registries-conf /dev/null \
copy --all --dest-tls-verify=false \
docker://$registry/$img \
docker://127.0.0.1:${REGISTRY_PORT}/$img \
&& return
sleep $((retry * 30))
done
die "Too many retries; unable to cache $registry/$img"
}
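The name normalization above is easy to misread, so here is a hypothetical standalone rendering of the same `expr` logic: bare names gain a `libpod/` namespace, and a leading `domain.tld/` component overrides the default `quay.io` registry (the `resolve_image` helper and its sample inputs are made up for illustration):

```shell
#!/bin/bash
# Sketch of cache_image's name resolution, without any network access.
resolve_image() {
    local img=$1 registry
    # No "/" at all -> assume the libpod namespace
    if ! expr "$img" : "^\(.*\)/" >/dev/null; then
        img="libpod/$img"
    fi
    # A leading "domain.tld/" component names an explicit registry
    registry=$(expr "$img" : "^\([^/.]\+\.[^/]\+\)/" || true)
    if [[ -n "$registry" ]]; then
        img=$(expr "$img" : "[^/]\+/\(.*\)")
    else
        registry=quay.io
    fi
    echo "$registry/$img"
}

resolve_image "alpine:latest"                        # → quay.io/libpod/alpine:latest
resolve_image "podman/stable:v4.3.1"                 # → quay.io/podman/stable:v4.3.1
resolve_image "registry.fedoraproject.org/fedora:40" # → registry.fedoraproject.org/fedora:40
```

Note the `\+` repetition operator is a GNU `expr` extension of basic regular expressions.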
##################
# cache_images # (plural) fetch all remote images
##################
function cache_images() {
for img in "${IMAGELIST[@]}"; do
cache_image "$img"
done
}
# END helper functions
###############################################################################
# BEGIN action processing
###################
# do_initialize # Start, then cache images
###################
#
# Intended to be run only from automation_images repo, or by developer
# on local workstation. This should never be run from podman/buildah/etc
# because it defeats the entire purpose of the cache -- a dead registry
# will cause this to fail.
#
function do_initialize() {
# This action can only be run as root
if [[ "$(id -u)" != "0" ]]; then
die "this script must be run as root"
fi
# For the next few commands, die on any error
set -e
mkdir -p ${PODMAN_REGISTRY_WORKDIR}
# Copy of this script
if ! [[ $0 =~ ${PODMAN_REGISTRY_WORKDIR} ]]; then
rm -f ${PODMAN_REGISTRY_WORKDIR}/$ME
cp $0 ${PODMAN_REGISTRY_WORKDIR}/$ME
fi
# Give it three tries, to compensate for flakes
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
must_pass podman pull ${PODMAN_REGISTRY_IMAGE}
# Mount the registry image...
registry_root=$(podman image mount ${PODMAN_REGISTRY_IMAGE})
# ...copy the registry binary into our own bin...
cp ${registry_root}/bin/registry /usr/bin/docker-registry
# ...and copy the config, making a few adjustments to it.
sed -e "s;/var/lib/registry;${PODMAN_REGISTRY_WORKDIR};" \
-e "s;:5000;127.0.0.1:${REGISTRY_PORT};" \
< ${registry_root}/etc/docker/registry/config.yml \
> /etc/local-registry.yml
podman image umount -a
# Create a systemd unit file. Enable it (so it starts at boot)
# and also start it --now.
cat > /etc/systemd/system/$ME.service <<EOF
[Unit]
Description=Local Cache Registry for CI tests
[Service]
ExecStart=/usr/bin/docker-registry serve /etc/local-registry.yml
Type=exec
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now $ME.service
wait_for_port ${REGISTRY_PORT}
cache_images
}
##############
# do_cache # Cache one or more images
##############
function do_cache() {
if [[ -z "$*" ]]; then
die "missing args to 'cache'"
fi
for img in "$@"; do
cache_image "$img"
done
}
# END action processing
###############################################################################
# BEGIN command-line processing
# First command-line arg must be an action
action=${1?ACTION$missing}
shift
case "$action" in
init|initialize) do_initialize ;;
cache) do_cache "$@" ;;
*) die "Unknown action '$action'; must be init | cache IMAGE" ;;
esac
# END command-line processing
###############################################################################
exit 0


@ -0,0 +1,38 @@
#!/bin/bash
# This script is called by packer on the rawhide VM, to update and reboot using
# the rawhide kernel. It's not intended to be used outside of this context.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# packer and/or a --build-arg define this envar value uniformly
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
warn "Upgrading Fedora '$OS_RELEASE_VER' to rawhide, this might break."
# shellcheck disable=SC2154
warn "If so, this script may be found in the repo. as '$SCRIPT_DIRPATH/$SCRIPT_FILENAME'."
# Show what's happening
set -x
# Rawhide often has GPG issues, don't bother checking
$SUDO sed -i -r -e 's/^gpgcheck=.+/gpgcheck=False/' /etc/dnf/dnf.conf
$SUDO sed -i -r -e 's/^gpgcheck=.+/gpgcheck=0/' /etc/yum.repos.d/*.repo
# Called as `dnf5` here to confirm "old" dnf has been replaced.
$SUDO dnf5 -y distro-sync --releasever=rawhide --allowerasing
$SUDO dnf5 upgrade -y
# A shared fedora_packaging.sh script is called next that doesn't always support dnf5
$SUDO ln -s $(type -P dnf5) /usr/local/bin/dnf
# Packer will try to run 'cache_images/fedora_setup.sh' next, make sure the system
# is actually running rawhide (and verify it boots).
$SUDO reboot


@ -1,10 +1,14 @@
ARG BASE_NAME=registry.fedoraproject.org/fedora-minimal
-ARG BASE_TAG=latest
+# FIXME FIXME FIXME! 2023-11-16: revert "38" to "latest"
# ...38 is because as of this moment, latest is 39, which
# has python-3.12, which causes something to barf:
# aiohttp/_websocket.c:3744:45: error: PyLongObject {aka struct _longobject} has no member named ob_digit
# Possible cause: https://github.com/cython/cython/issues/5238
ARG BASE_TAG=38
FROM ${BASE_NAME}:${BASE_TAG} as updated_base
-RUN microdnf update -y && \
-microdnf clean all && \
-rm -rf /var/cache/dnf
+RUN microdnf upgrade -y && \
+microdnf clean all
ENV _RUNTIME_DEPS="bash python3"
ENV _BUILD_DEPS="coreutils curl git python3 python3-pip python3-virtualenv python3-devel gcc g++"
@ -15,17 +19,18 @@ FROM updated_base as builder
RUN microdnf install -y ${_RUNTIME_DEPS} ${_BUILD_DEPS} && \
export INSTALL_PREFIX=/usr/share && \
curl -sL \
-https://raw.githubusercontent.com/containers/automation/master/bin/install_automation.sh | \
+https://raw.githubusercontent.com/containers/automation/main/bin/install_automation.sh | \
bash -s latest cirrus-ci_artifacts
FROM updated_base as final
-RUN microdnf install -y ${_BUILD_DEPS} && \
-microdnf clean all && \
-rm -rf /var/cache/dnf
+RUN microdnf install -y ${_RUNTIME_DEPS} && \
+microdnf clean all
COPY --from=builder /usr/share/automation /usr/share/automation
COPY --from=builder /etc/automation_environment /etc/automation_environment
# Env. is used by test.sh script.
ENV CCIABIN=/usr/share/automation/bin/cirrus-ci_artifacts
ENTRYPOINT ["/usr/share/automation/bin/cirrus-ci_artifacts"]


@ -1,17 +0,0 @@
{
"builds": [
{
"name": "fedora-podman-py",
"builder_type": "googlecompute",
"build_time": 1658176090,
"files": null,
"artifact_id": "fedora-podman-py-c5419329914142720",
"packer_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272",
"custom_data": {
"IMG_SFX": "5419329914142720",
"STAGE": "cache"
}
}
],
"last_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272"
}

check-imgsfx.sh Executable file

@ -0,0 +1,36 @@
#!/bin/bash
#
# 2024-01-25 esm
# 2024-06-28 cevich
#
# This script is intended to be used by the `pre-commit` utility, or it may
# be manually copied (or symlinked) as local `.git/hooks/pre-push` file.
# Its purpose is to keep track of image-suffix values which have already
# been pushed, to avoid them being immediately rejected by CI validation.
# To use it with the `pre-commit` utility, simply add something like this
# to your `.pre-commit-config.yaml`:
#
# ---
# repos:
# - repo: https://github.com/containers/automation_images.git
# rev: <tag or commit sha>
# hooks:
# - id: check-imgsfx
set -eo pipefail
# Ensure CWD is the repo root
cd $(dirname "${BASH_SOURCE[0]}")
imgsfx=$(<IMG_SFX)
imgsfx_history=".git/hooks/imgsfx.history"
if [[ -e $imgsfx_history ]]; then
if grep -q "$imgsfx" $imgsfx_history; then
echo "FATAL: $imgsfx has already been used" >&2
echo "Please rerun 'make IMG_SFX'" >&2
exit 1
fi
fi
echo $imgsfx >>$imgsfx_history
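The history-file logic above can be exercised without a git checkout; this sketch uses a throwaway temp file in place of `.git/hooks/imgsfx.history`, and the suffix values are invented for the demo:

```shell
#!/bin/bash
# Stand-in for .git/hooks/imgsfx.history
hist=$(mktemp)
echo "20240101t120000z-f39f38d13" >> "$hist"   # a previously-used suffix

check() {
    # Reject a suffix already in the history, otherwise record it
    if grep -q "$1" "$hist"; then
        echo "already used"
    else
        echo "$1" >> "$hist"
        echo "recorded"
    fi
}

first=$(check "20240102t090000z-f39f38d13")    # → recorded
second=$(check "20240102t090000z-f39f38d13")   # → already used
echo "$first / $second"
rm -f "$hist"
```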


@ -1,4 +1,4 @@
-# This dockerfile defines the environment for Cirrus-CI when
+# This Containerfile defines the environment for Cirrus-CI when
# running automated checks and tests. It may also be used
# for development/debugging or manually building most
# Makefile targets.
@ -8,17 +8,17 @@ FROM registry.fedoraproject.org/fedora:${FEDORA_RELEASE}
ARG PACKER_VERSION
MAINTAINER https://github.com/containers/automation_images/ci
-ENV CIRRUS_WORKING_DIR=/tmp/automation_images \
+ENV CIRRUS_WORKING_DIR=/var/tmp/automation_images \
PACKER_INSTALL_DIR=/usr/local/bin \
PACKER_VERSION=$PACKER_VERSION \
CONTAINER=1
-# When using the dockerfile-as-ci feature of Cirrus-CI, it's unsafe
+# When using the containerfile-as-ci feature of Cirrus-CI, it's unsafe
# to rely on COPY or ADD instructions. See documentation for warning.
RUN test -n "$PACKER_VERSION"
RUN dnf update -y && \
-dnf mark remove $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
-dnf install -y --exclude selinux-policy-targeted \
+dnf -y mark dependency $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
+dnf install -y \
ShellCheck \
bash-completion \
coreutils \
@ -38,7 +38,7 @@ RUN dnf update -y && \
util-linux \
unzip \
&& \
-dnf mark install dnf sudo $_ && \
+dnf -y mark user dnf sudo $_ && \
dnf autoremove -y && \
dnf clean all


@ -35,6 +35,14 @@ if [[ -n "$AWS_INI" ]]; then
set_aws_filepath
fi
id
# FIXME: ssh-keygen seems to fail to create keys with Permission denied
# in the base_images make target, I have no idea why but all CI jobs are
# broken because of this. Let's try without selinux.
if [[ "$(getenforce)" == "Enforcing" ]]; then
setenforce 0
fi
set -x
cd "$REPO_DIRPATH"
export IMG_SFX=$IMG_SFX


@ -44,13 +44,6 @@ SRC_FQIN="$TARGET_NAME:$IMG_SFX"
make "$TARGET_NAME" IMG_SFX=$IMG_SFX
# Prevent pushing 'latest' images from PRs, only branches and tags
# shellcheck disable=SC2154
if [[ $PUSH_LATEST -eq 1 ]] && [[ -n "$CIRRUS_PR" ]]; then
echo -e "\nWarning: Refusing to push 'latest' images when testing from a PR.\n"
PUSH_LATEST=0
fi
# Don't leave credential file sticking around anywhere
trap "podman logout --all" EXIT INT CONT
set +x # protect username/password values
@ -64,9 +57,3 @@ set -x # Easier than echo'ing out status for everything
# shellcheck disable=SC2154
podman tag "$SRC_FQIN" "$DEST_FQIN"
podman push "$DEST_FQIN"
if ((PUSH_LATEST)); then
LATEST_FQIN="${DEST_FQIN%:*}:latest"
podman tag "$SRC_FQIN" "$LATEST_FQIN"
podman push "$LATEST_FQIN"
fi

ci/tag_latest.sh Executable file

@ -0,0 +1,36 @@
#!/bin/bash

set -eo pipefail

if [[ -z "$CI" ]] || [[ "$CI" != "true" ]] || [[ -z "$IMG_SFX" ]]; then
    echo "This script is intended to be run by CI and nowhere else."
    exit 1
fi

# This envar is set by the CI system
# shellcheck disable=SC2154
if [[ "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:DOCS.* ]]; then
    echo "This script must never tag anything after a [CI:DOCS] PR merge"
    exit 0
fi

# Ensure no secrets leak via debugging var expansion
set +x
# This secret envar is set by the CI system
# shellcheck disable=SC2154
echo "$REG_PASSWORD" | \
    skopeo login --password-stdin --username "$REG_USERNAME" "$REGPFX"

declare -a imgnames
imgnames=( imgts imgobsolete imgprune gcsupld get_ci_vm orphanvms ccia )
# A [CI:TOOLING] build doesn't produce CI VM images
if [[ ! "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:TOOLING.* ]]; then
    imgnames+=( skopeo_cidev fedora_podman prior-fedora_podman )
fi

for imgname in "${imgnames[@]}"; do
    echo "##### Tagging $imgname -> latest"
    # IMG_SFX is defined by CI system
    # shellcheck disable=SC2154
    skopeo copy "docker://$REGPFX/$imgname:c${IMG_SFX}" "docker://$REGPFX/${imgname}:latest"
done

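The retagging loop above boils down to copying each image's suffix-stamped tag to `:latest` in place. A minimal sketch of how the source and destination references are assembled, with hypothetical `REGPFX` and `IMG_SFX` values standing in for the CI-provided settings (a real run would invoke `skopeo copy` on each pair):

```bash
#!/bin/bash
set -eo pipefail

# Hypothetical stand-ins for the CI-provided values
REGPFX="quay.io/example"                 # registry prefix (assumption)
IMG_SFX="20240424t123456z-f39f38d13"     # current image suffix (assumption)

imgnames=( imgts imgobsolete imgprune )
for imgname in "${imgnames[@]}"; do
    src="docker://$REGPFX/$imgname:c${IMG_SFX}"
    dst="docker://$REGPFX/${imgname}:latest"
    # A real run would do: skopeo copy "$src" "$dst"
    echo "$src -> $dst"
done
```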
@@ -13,12 +13,24 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
 # shellcheck source=./lib.sh
 source "$REPO_DIRPATH/lib.sh"

-req_env_vars CIRRUS_PR CIRRUS_BASE_SHA CIRRUS_CHANGE_TITLE
-show_env_vars
+req_env_vars CIRRUS_PR CIRRUS_PR_TITLE CIRRUS_USER_PERMISSION CIRRUS_BASE_BRANCH

 # die() will add a reference to this file and line number.
 [[ "$CIRRUS_CI" == "true" ]] || \
     die "This script is only/ever intended to be run by Cirrus-CI."

+# This is imperfect security-wise, but attempt to catch an accidental
+# change in Cirrus-CI Repository settings.  Namely the hard-to-read
+# "slider" that enables non-contributors to run jobs.  We don't want
+# that on this repo, ever, because there are sensitive secrets in use.
+# This variable is set by CI and validated non-empty above.
+# shellcheck disable=SC2154
+if [[ "$CIRRUS_USER_PERMISSION" != "write" ]] && [[ "$CIRRUS_USER_PERMISSION" != "admin" ]]; then
+    die "CI Execution not supported with permission level '$CIRRUS_USER_PERMISSION'"
+fi
+
 for target in image_builder/gce.json base_images/cloud.json \
               cache_images/cloud.json win_images/win-server-wsl.json; do
     if ! make $target; then
@@ -32,18 +44,28 @@ if [[ -z "$CIRRUS_PR" ]]; then
     exit 0
 fi

+# For Docs-only PRs, no further checks are needed.
 # Variable is defined by Cirrus-CI at runtime
 # shellcheck disable=SC2154
-if [[ ! "$CIRRUS_CHANGE_TITLE" =~ CI:DOCS ]] && \
-   ! git diff --name-only ${CIRRUS_BASE_SHA}..HEAD | grep -q IMG_SFX; then
+if [[ "$CIRRUS_PR_TITLE" =~ CI:DOCS ]]; then
+    msg "This looks like a docs-only PR, skipping further validation checks."
+    exit 0
+fi
+
+# Fix "Not a valid object name main" error from Cirrus's incomplete checkout.
+git remote update origin
+
+# Determine where the PR branched off of $CIRRUS_BASE_BRANCH
+# shellcheck disable=SC2154
+base_sha=$(git merge-base origin/${CIRRUS_BASE_BRANCH:-main} HEAD)
+
+if ! git diff --name-only ${base_sha}..HEAD | grep -q IMG_SFX; then
     die "Every PR that builds images must include an updated IMG_SFX file.
 Simply run 'make IMG_SFX', commit the result, and re-push."
 else
     IMG_SFX="$(<./IMG_SFX)"
     # IMG_SFX was modified vs PR's base-branch, confirm version moved forward
-    # shellcheck disable=SC2154
-    v_prev=$(git show ${CIRRUS_BASE_SHA}:IMG_SFX 2>&1 || true)
+    v_prev=$(git show ${base_sha}:IMG_SFX 2>&1 || true)
     # Verify new IMG_SFX value always version-sorts later than previous value.
     # This prevents screwups due to local timezone, bad, or unset clocks, etc.
     new_img_ver=$(awk -F 't' '{print $1"."$2}'<<<"$IMG_SFX" | cut -dz -f1)

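The version-forward check above can be exercised in isolation. This sketch assumes the `IMG_SFX` timestamp format `YYYYMMDDtHHMMSSz-<suffix>` implied by the awk/cut pipeline; `sort -V` then guarantees the new value orders after the old one regardless of local clock quirks:

```bash
#!/bin/bash
set -eo pipefail

# Hypothetical previous and new IMG_SFX values
v_prev="20240301t101112z-f39f38d13"
IMG_SFX="20240424t123456z-f39f38d13"

# Convert e.g. 20240424t123456z-... into 20240424.123456 for version comparison
new_img_ver=$(awk -F 't' '{print $1"."$2}' <<<"$IMG_SFX" | cut -dz -f1)
prev_img_ver=$(awk -F 't' '{print $1"."$2}' <<<"$v_prev" | cut -dz -f1)

# sort -V puts the later timestamp last; the new value must sort after the old
latest=$(printf '%s\n%s\n' "$prev_img_ver" "$new_img_ver" | sort -V | tail -1)
if [[ "$latest" == "$new_img_ver" && "$new_img_ver" != "$prev_img_ver" ]]; then
    echo "IMG_SFX moved forward"
else
    echo "IMG_SFX did not move forward" >&2
    exit 1
fi
```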
(new file)

@@ -0,0 +1,43 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-symlinks
      - id: mixed-line-ending
      - id: no-commit-to-branch
        args: [--branch, main]
  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
    hooks:
      - id: codespell
        args: [--config, .codespellrc]
  - repo: https://github.com/jumanjihouse/pre-commit-hooks
    rev: 3.0.0
    hooks:
      - id: forbid-binary
        exclude: >
          (?x)^(
            get_ci_vm/good_repo_test/dot_git.tar.gz
          )$
      - id: script-must-have-extension
      - id: shellcheck
        # These come from ci/shellcheck.sh
        args:
          - --color=always
          - --format=tty
          - --shell=bash
          - --external-sources
          - --enable=add-default-case,avoid-nullary-conditions,check-unassigned-uppercase
          - --exclude=SC2046,SC2034,SC2090,SC2064
          - --wiki-link-count=0
          - --severity=warning
  - repo: https://github.com/containers/automation_images.git
    rev: 2e5a2acfe21cc4b13511b453733b8875e592ad9c
    hooks:
      - id: check-imgsfx

@@ -1,14 +1,13 @@
-# This is a listing of GCP Project IDs which use images produced by
-# this repo.  It's used by the "Orphan VMs" github action to monitor
-# for any leftover/lost VMs.
+# This is a listing of Google Cloud Platform Project IDs for
+# orphan VM monitoring and possibly other automation tasks.
+# Note: CI VM images produced by this repo are all stored within
+# the libpod-218412 project (in addition to some AWS EC2)

 buildah
 conmon-222014
 containers-build-source-image
-dnsname-8675309
 libpod-218412
 netavark-2021
 oci-seccomp-bpf-hook
-podman-py
 skopeo
 storage-240716
 udica-247612

@@ -19,8 +19,7 @@ RUN bash ./get_ci_vm/setup.sh
 # conflicts.
 ADD /get_ci_vm/entrypoint.sh ./get_ci_vm/

-# Add this late to optomize cache effecacy for development workflows
-ENTRYPOINT ["/bin/bash", "/usr/src/automation_images/get_ci_vm/entrypoint.sh"]
+ENTRYPOINT ["/usr/bin/ssh-agent", "/bin/bash", "/usr/src/automation_images/get_ci_vm/entrypoint.sh"]

 WORKDIR "/root"
 ENV HOME="/root" \
     SRCDIR="" \

@@ -5,6 +5,36 @@ This directory contains the source for building [the
 `hack/get_ci_vm.sh` script.  This image is used by many containers-org repos.
 It is not intended to be called via any other mechanism.

+In general/high-level terms, the architecture and operation is:
+
+1. [containers/automation hosts cirrus-ci_env](https://github.com/containers/automation/tree/main/cirrus-ci_env),
+   a python mini-implementation of a `.cirrus.yml` parser.  Its only job is to extract all required envars,
+   given a task name (including from a matrix element).  It's highly dependent on
+   [certain YAML formatting requirements](README.md#downstream-repository-cirrusyml-requirements).  If the target
+   repo doesn't follow those standards, nasty/ugly python errors will vomit forth.  Mainly this has to do with
+   Cirrus-CI's use of a non-standard YAML parser, allowing things like certain duplicate dictionary keys.
+1. [containers/automation_images hosts get_ci_vm](https://github.com/containers/automation_images/tree/main/get_ci_vm),
+   a bundling of the `cirrus-ci_env` python script with an `entrypoint.sh` script inside a container image.
+1. When a user runs `hack/get_ci_vm.sh` inside a target repo, the container image is entered, and `.cirrus.yml`
+   is parsed based on the CLI task-name.  A VM is then provisioned based on specific envars (see the "Env. Vars."
+   entries in the [APIv1](README.md#env-vars) and [APIv2](README.md#env-vars-1) sections below).
+   This is the most complex part of the process.
+1. The remote system will not have **any** of the otherwise automatic Cirrus-CI operations performed (like "clone"),
+   nor any magic CI variables defined.  Once the VM is ready, the container entrypoint script transfers a copy of
+   the local repo (including any uncommitted changes).
+1. The container entrypoint script then performs **_remote_** execution of the `hack/get_ci_vm.sh` script,
+   including the magic `--setup` parameter.  Though it varies by repo, typically this will establish everything
+   necessary to simulate a CI environment, via a call to the repo's own `setup.sh` or equivalent.  Typically
+   the repo's setup scripts will persist any required envars into `/etc/ci_environment` or similar, though
+   this isn't universal.
+1. Lastly, the user is dropped into a shell on the VM, inside the repo copy, with all envars defined and
+   ready to start running tests.
+
+_Note_: If there are any envars found to be missing, they must be defined by updating either the repo's normal
+CI setup scripts (preferred), or the `hack/get_ci_vm.sh` `--setup` section.
+
 # Building

 Example build (from repository root):
 ```bash

@@ -66,9 +66,9 @@ delvm() {
 }

 image_hints() {
-    _BIS=$(egrep -m 1 '_BUILT_IMAGE_SUFFIX:[[:space:]+"[[:print:]]+"' \
+    _BIS=$(grep -E -m 1 '_BUILT_IMAGE_SUFFIX:[[:space:]+"[[:print:]]+"' \
            "$SECCOMPHOOKROOT/.cirrus.yml" | cut -d: -f 2 | tr -d '"[:blank:]')
-    egrep '[[:space:]]+[[:alnum:]].+_CACHE_IMAGE_NAME:[[:space:]+"[[:print:]]+"' \
+    grep -E '[[:space:]]+[[:alnum:]].+_CACHE_IMAGE_NAME:[[:space:]+"[[:print:]]+"' \
         "$SECCOMPHOOKROOT/.cirrus.yml" | cut -d: -f 2 | tr -d '"[:blank:]' | \
         sed -r -e "s/\\\$[{]_BUILT_IMAGE_SUFFIX[}]/$_BIS/" | sort -u
 }
@@ -141,7 +141,7 @@ cd $SECCOMPHOOKROOT
 # Attempt to determine if named 'oci-seccomp-bpf-hook' gcloud configuration exists
 showrun $PGCLOUD info > $TMPDIR/gcloud-info
-if egrep -q "Account:.*None" $TMPDIR/gcloud-info
+if grep -E -q "Account:.*None" $TMPDIR/gcloud-info
 then
     echo -e "\n${YEL}WARNING: Can't find gcloud configuration for 'oci-seccomp-bpf-hook', running init.${NOR}"
     echo -e "         ${RED}Please choose '#1: Re-initialize' and 'login' if asked.${NOR}"
@@ -151,7 +151,7 @@ then
     # Verify it worked (account name == someone@example.com)
     $PGCLOUD info > $TMPDIR/gcloud-info-after-init
-    if egrep -q "Account:.*None" $TMPDIR/gcloud-info-after-init
+    if grep -E -q "Account:.*None" $TMPDIR/gcloud-info-after-init
     then
         echo -e "${RED}ERROR: Could not initialize 'oci-seccomp-bpf-hook' configuration in gcloud.${NOR}"
         exit 5

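The `egrep` to `grep -E` substitutions throughout this series are behavior-preserving: `egrep` has long been a deprecated alias for `grep -E` (extended regular expressions) in GNU grep, and recent releases warn about it. A quick demonstration of the equivalent spelling:

```bash
#!/bin/bash
# Matches the same extended-regex pattern egrep would; only the spelling differs.
printf 'fedora_podman\nprior-fedora_podman\nskopeo_cidev\n' \
    | grep -E --only-matching 'fedora_[[:alnum:]]+'
# prints "fedora_podman" twice (once per matching line)
```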
@@ -235,7 +235,7 @@ has_valid_aws_credentials() {
     _awsoutput=$($AWSCLI configure list 2>&1 || true)
     dbg "$AWSCLI configure list"
     dbg "$_awsoutput"
-    if egrep -qx 'The config profile.+could not be found'<<<"$_awsoutput"; then
+    if grep -E -qx 'The config profile.+could not be found'<<<"$_awsoutput"; then
         dbg "AWS config/credentials are missing"
         return 1
     elif [[ ! -r "$EC2_SSH_KEY" ]] || [[ ! -r "${EC2_SSH_KEY}.pub" ]]; then
@@ -413,6 +413,9 @@ make_setup_tarball() {
     status "Preparing setup tarball for instance."
     req_env_vars DESTDIR _TMPDIR SRCDIR UPSTREAM_REPO
     mkdir -p "${_TMPDIR}$DESTDIR"
+    # Mark the volume-mounted source repo as safe system-wide (w/in the container)
+    git config --global --add safe.directory "$SRCDIR"
+    git config --global --add safe.directory "$SRCDIR/.git"
     # We have no way of knowing what state or configuration the user's
     # local repository is in.  Work from a local clone, so we can
     # specify our own setup and prevent unexpected script breakage.
@@ -515,8 +518,8 @@ init_gcevm() {
     DNS_NAME=$INST_NAME  # gcloud compute ssh wrapper will resolve this
     GCLOUD="${GCLOUD:-gcloud} --configuration=$GCLOUD_CFG --project=$GCLOUD_PROJECT"
     _args="--force-key-file-overwrite --strict-host-key-checking=no --zone=$GCLOUD_ZONE"
-    SSH_CMD="$GCLOUD compute ssh $_args root@$DNS_NAME --"
-    SCP_CMD="$GCLOUD compute scp $_args"
+    SSH_CMD="$GCLOUD compute ssh --ssh-flag=-o=AddKeysToAgent=yes $_args root@$DNS_NAME --"
+    SCP_CMD="$GCLOUD compute scp --scp-flag=-o=AddKeysToAgent=yes $_args"
     CREATE_CMD="$GCLOUD compute instances create \
         --zone=$GCLOUD_ZONE --image-project=$GCLOUD_IMGPROJECT \
         --image=$INST_IMAGE --custom-cpu=$GCLOUD_CPUS \
@@ -533,9 +536,6 @@ init_gcevm() {
 Can't find valid GCP credentials, attempting to (re)initialize.
 If asked, please choose '#1: Re-initialize', 'login', and a nearby
 GCLOUD_ZONE, otherwise simply follow the prompts.
-
-Note: If asked to set a SSH-key passphrase, DO NOT SET ONE, it
-will make your life miserable! Set an empty password for the key.
 "
         $GCLOUD init --project=$GCLOUD_PROJECT --console-only --skip-diagnostics
         if ! has_valid_gcp_credentials; then

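The new `safe.directory` entries work around git's "dubious ownership" refusal when the entrypoint (running as root inside the container) reads the volume-mounted source repo owned by the invoking user. A standalone sketch using a throwaway `HOME` and a hypothetical mount path:

```bash
#!/bin/bash
set -eo pipefail

export HOME=$(mktemp -d)        # isolate the --global gitconfig for this demo
SRCDIR=/var/tmp/demo_src        # hypothetical volume-mount path (assumption)

git config --global --add safe.directory "$SRCDIR"
git config --global --add safe.directory "$SRCDIR/.git"
# Lists both entries just written
git config --global --get-all safe.directory
```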
@@ -2,9 +2,9 @@

 # This script is intended to be executed as part of the container
 # image build process.  Using it under any other context is virtually
-# guarantied to cause you much pain and suffering.
+# guaranteed to cause you much pain and suffering.

-set -eo pipefail
+set -xeo pipefail

 SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
 SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
@@ -14,6 +14,7 @@ source "$REPO_DIRPATH/lib.sh"

 declare -a PKGS
 PKGS=( \
+    aws-cli
     coreutils
     curl
     gawk
@@ -30,9 +31,7 @@ apk upgrade
 apk add --no-cache "${PKGS[@]}"
 rm -rf /var/cache/apk/*

-pip3 install --upgrade pip
-pip3 install --no-cache-dir awscli
-aws --version  # Confirm it actually runs
+aws --version  # Confirm that aws actually runs

 install_automation_tooling cirrus-ci_env

@@ -78,7 +78,7 @@ testf() {
         echo "# $@" > /dev/stderr
     fi

-    # Using egrep vs file safer than shell builtin test
+    # Using grep -E vs file safer than shell builtin test
     local a_out_f
     local a_exit=0
     a_out_f=$(mktemp -p '' "tmp_${FUNCNAME[0]}_XXXXXXXX")
@@ -109,7 +109,7 @@ testf() {
     if ((TEST_DEBUG)); then
         echo "Received $(wc -l $a_out_f | awk '{print $1}') output lines of $(wc -c $a_out_f | awk '{print $1}') bytes total"
     fi
-    if egrep -q "$e_out_re" "${a_out_f}.oneline"; then
+    if grep -E -q "$e_out_re" "${a_out_f}.oneline"; then
         _test_report "Command $1 exited as expected with expected output" "0" "$a_out_f"
     else
         _test_report "Expecting regex '$e_out_re' match to (whitespace-squashed) output" "1" "$a_out_f"

@@ -67,7 +67,7 @@ else
 fi

 # Support both '.CHECKSUM' and '-CHECKSUM' at the end
-filename=$(egrep -i -m 1 -- "$extension$" <<<"$by_arch" || true)
+filename=$(grep -E -i -m 1 -- "$extension$" <<<"$by_arch" || true)
 [[ -n "$filename" ]] || \
     die "No '$extension' targets among $by_arch"

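The `grep -E -i -m 1 --` invocation above anchors the extension at end-of-line, matches case-insensitively, stops at the first hit, and uses `--` so an extension beginning with `-` (like `-CHECKSUM`) is not parsed as an option. A sketch with hypothetical release-asset names:

```bash
#!/bin/bash
set -eo pipefail

extension="-CHECKSUM"           # hypothetical value; '.CHECKSUM' also works
by_arch='tool-linux-amd64
tool-linux-amd64-checksum
tool-linux-arm64-CHECKSUM'

filename=$(grep -E -i -m 1 -- "$extension$" <<<"$by_arch" || true)
echo "$filename"    # first case-insensitive match: tool-linux-amd64-checksum
```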
@@ -4,7 +4,7 @@
 # at the root of this repository.  It should be built with
 # the repository root as the context directory.

-ARG CENTOS_STREAM_RELEASE=8
+ARG CENTOS_STREAM_RELEASE=9
 FROM quay.io/centos/centos:stream${CENTOS_STREAM_RELEASE}
 ARG PACKER_VERSION
 MAINTAINER https://github.com/containers/automation_images/image_builder

@@ -45,16 +45,16 @@ provisioners:
 - type: 'shell'
   inline:
     - 'set -e'
-    - 'mkdir -p /tmp/automation_images'
+    - 'mkdir -p /var/tmp/automation_images'

 - type: 'file'
   source: '{{ pwd }}/'
-  destination: '/tmp/automation_images/'
+  destination: '/var/tmp/automation_images/'

 - type: 'shell'
   inline:
     - 'set -e'
-    - '/bin/bash /tmp/automation_images/image_builder/setup.sh'
+    - '/bin/bash /var/tmp/automation_images/image_builder/setup.sh'

 post-processors:
     # Must be double-nested to guarantee execution order

@@ -1,16 +1,9 @@
-[google-compute-engine]
-name=Google Compute Engine
-baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el8-x86_64-stable
-enabled=1
-gpgcheck=1
-repo_gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
-       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
-
-[google-cloud-sdk]
-name=Google Cloud SDK
-baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
-enabled=1
-gpgcheck=1
-repo_gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
-       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+# Copy-pasted from https://cloud.google.com/sdk/docs/install#red-hatfedoracentos
+[google-cloud-cli]
+name=Google Cloud CLI
+baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el9-x86_64
+enabled=1
+gpgcheck=1
+repo_gpgcheck=0
+gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

@@ -23,6 +23,19 @@ source "$REPO_DIRPATH/lib.sh"

 dnf update -y
 dnf -y install epel-release
-dnf install -y $(<"$INST_PKGS_FP")
+# Allow erasing the pre-installed curl-minimal package
+dnf install -y --allowerasing $(<"$INST_PKGS_FP")
+
+# As of 2024-04-24 installing the EPEL `awscli` package results in error:
+#   nothing provides python3.9dist(docutils) >= 0.10
+# Grab the binary directly from amazon instead
+# https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
+AWSURL="https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
+cd /tmp
+curl --fail --location -O "${AWSURL}"
+# There's little reason to see every single file extracted
+unzip -q awscli*.zip
+./aws/install -i /usr/local/share/aws-cli -b /usr/local/bin
+rm -rf awscli*.zip ./aws

 install_automation_tooling

@@ -1,4 +1,3 @@
-awscli
 buildah
 bash-completion
 curl
@@ -6,12 +5,13 @@ findutils
 gawk
 genisoimage
 git
-google-cloud-sdk
+google-cloud-cli
 jq
 libvirt
 libvirt-admin
 libvirt-client
 libvirt-daemon
+libxcrypt-compat
 make
 openssh
 openssl
@@ -24,6 +24,7 @@ rng-tools
 rootfiles
 rsync
 sed
+skopeo
 tar
 unzip
 util-linux

@@ -11,13 +11,13 @@ set -eo pipefail
 # shellcheck source=imgts/lib_entrypoint.sh
 source /usr/local/bin/lib_entrypoint.sh

-req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI
+req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
 gcloud_init

 # Set this to 1 for testing
 DRY_RUN="${DRY_RUN:-0}"
-OBSOLETE_LIMIT=10
+OBSOLETE_LIMIT=50
 THEFUTURE=$(date --date='+1 hour' +%s)
 TOO_OLD_DAYS='30'
 TOO_OLD_DESC="$TOO_OLD_DAYS days ago"
@@ -40,8 +40,8 @@ $GCLOUD compute images list --format="$FORMAT" --filter="$FILTER" | \
     count_image
     reason=""
     created_ymd=$(date --date=$creationTimestamp --iso-8601=date)
-    permanent=$(egrep --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
-    last_used=$(egrep --only-matching --max-count=1 'last-used=[[:digit:]]+' <<< $labels || true)
+    permanent=$(grep -E --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
+    last_used=$(grep -E --only-matching --max-count=1 'last-used=[[:digit:]]+' <<< $labels || true)
     LABELSFX="labels: '$labels'"
@@ -54,6 +54,14 @@ $GCLOUD compute images list --format="$FORMAT" --filter="$FILTER" | \
         continue
     fi

+    # Any image matching the currently in-use IMG_SFX must always be preserved
+    # Value is defined in cirrus.yml
+    # shellcheck disable=SC2154
+    if [[ "$name" =~ $IMG_SFX ]]; then
+        msg "Retaining current (latest) image $name | $labels"
+        continue
+    fi
+
     # No label was set
     if [[ -z "$last_used" ]]
     then  # image lacks any tracking labels
@@ -91,7 +99,7 @@ aws_init
 # The AWS cli returns a huge blob of data we mostly don't need.
 # Use query statement to simplify the results.  N/B: The get_tag_value()
 # function expects to find a "TAGS" item w/ list value.
-ami_query='Images[*].{ID:ImageId,CREATED:CreationDate,STATE:State,TAGS:Tags}'
+ami_query='Images[*].{ID:ImageId,CREATED:CreationDate,STATE:State,TAGS:Tags,DEP:DeprecationTime}'
 all_amis=$($AWS ec2 describe-images --owners self --query "$ami_query")
 nr_amis=$(jq -r -e length<<<"$all_amis")
@@ -109,15 +117,16 @@ lltcmd=(\
 req_env_vars all_amis nr_amis
 for (( i=nr_amis ; i ; i-- )); do
-    unset ami ami_id state created created_ymd name name_tag
+    unset ami ami_id state created created_ymd name name_tag dep
     ami=$(jq -r -e ".[$((i-1))]"<<<"$all_amis")
     ami_id=$(jq -r -e ".ID"<<<"$ami")
     state=$(jq -r -e ".STATE"<<<"$ami")
     created=$(jq -r -e ".CREATED"<<<"$ami")
     created_ymd=$(date --date="$created" --iso-8601=date)
+    dep=$(jq -r -e ".DEP"<<<"$ami")

     unset tags
-    # The name-tag is easier on human eys if on is set.
+    # The name-tag is easier on human eyes if one is set.
     name="$ami_id"
     if name_tag=$(get_tag_value "Name" "$ami"); then
         name="$name_tag"
@@ -138,13 +147,23 @@ for (( i=nr_amis ; i ; i-- )); do
     done

     unset automation permanent reason
-    automation=$(egrep --only-matching --max-count=1 \
+    automation=$(grep -E --only-matching --max-count=1 \
                  --ignore-case 'automation=true' <<< $tags || true)
-    permanent=$(egrep --only-matching --max-count=1 \
+    permanent=$(grep -E --only-matching --max-count=1 \
                 --ignore-case 'permanent=true' <<< $tags || true)

     if [[ -n "$permanent" ]]; then
         msg "Retaining forever $name | $tags"
+        # Permanent AMIs should never ever have a deprecation date set
+        $AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
+        continue
+    fi
+
+    # Any image matching the currently in-use IMG_SFX
+    # must always be preserved.  Values are defined in cirrus.yml
+    # shellcheck disable=SC2154
+    if [[ "$name" =~ $IMG_SFX ]]; then
+        msg "Retaining current (latest) image $name | $tags"
         continue
     fi
@@ -173,18 +192,24 @@ for (( i=nr_amis ; i ; i-- )); do
         continue
     else
         msg "Retaining $ami_id | $created_ymd | $state | $tags"
+        if [[ "$dep" != "null" ]]; then
+            msg "    Removing previously set AMI deprecation timestamp: $dep"
+            # Ignore confirmation output.
+            $AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
+        fi
     fi
 done

 COUNT=$(<"$IMGCOUNT")
+CANDIDATES=$(wc -l <$TOOBSOLETE)
 msg "########################################################################"
-msg "Obsoleting $OBSOLETE_LIMIT random images of $COUNT examined:"
+msg "Obsoleting $OBSOLETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"

 # Require a minimum number of images to exist.  Also if there is some
 # horrible scripting accident, this limits the blast-radius.
-if [[ "$COUNT" -lt $OBSOLETE_LIMIT ]]
+if [[ "$CANDIDATES" -lt $OBSOLETE_LIMIT ]]
 then
-    die 0 "Safety-net Insufficient images ($COUNT) to process ($OBSOLETE_LIMIT required)"
+    die 0 "Safety-net: Insufficient images ($CANDIDATES) to process ($OBSOLETE_LIMIT required)"
 fi

 # Don't let one bad apple ruin the whole bunch

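The retention guard added above is a plain bash regex match: because `IMG_SFX` contains only timestamp digits, a `t`/`z` separator, and a hex suffix, using it unquoted on the right-hand side of `=~` effectively matches it as a literal substring of the image name. A sketch with hypothetical values:

```bash
#!/bin/bash
IMG_SFX="20240424t123456z-f39f38d13"    # hypothetical in-use suffix

for name in "fedora-c${IMG_SFX}" "fedora-c20230101t000000z-deadbeef"; do
    if [[ "$name" =~ $IMG_SFX ]]; then
        echo "retain $name"
    else
        echo "candidate $name"
    fi
done
```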
View File

@ -11,14 +11,14 @@ set -e
# shellcheck source=imgts/lib_entrypoint.sh # shellcheck source=imgts/lib_entrypoint.sh
source /usr/local/bin/lib_entrypoint.sh source /usr/local/bin/lib_entrypoint.sh
req_env_vars GCPJSON GCPNAME GCPPROJECT req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
gcloud_init gcloud_init
# Set this to 1 for testing # Set this to 1 for testing
DRY_RUN="${DRY_RUN:-0}" DRY_RUN="${DRY_RUN:-0}"
# For safety's sake limit nr deletions # For safety's sake limit nr deletions
DELETE_LIMIT=10 DELETE_LIMIT=50
ABOUTNOW=$(date --iso-8601=date) # precision is not needed for this use ABOUTNOW=$(date --iso-8601=date) # precision is not needed for this use
# Format Ref: https://cloud.google.com/sdk/gcloud/reference/topic/formats # Format Ref: https://cloud.google.com/sdk/gcloud/reference/topic/formats
# Field list from `gcloud compute images list --limit=1 --format=text` # Field list from `gcloud compute images list --limit=1 --format=text`
@ -31,7 +31,7 @@ PROJRE="/v1/projects/$GCPPROJECT/global/"
FILTER="selfLink~$PROJRE AND deprecated.state=OBSOLETE AND deprecated.deleted<$ABOUTNOW" FILTER="selfLink~$PROJRE AND deprecated.state=OBSOLETE AND deprecated.deleted<$ABOUTNOW"
TODELETE=$(mktemp -p '' todelete.XXXXXX) TODELETE=$(mktemp -p '' todelete.XXXXXX)
msg "Searching for obsolete images using filter:${NOR} $FILTER" msg "Searching for obsolete GCP images using filter:${NOR} $FILTER"
# Ref: https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#deprecating_an_image # Ref: https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#deprecating_an_image
$GCLOUD compute images list --show-deprecated \ $GCLOUD compute images list --show-deprecated \
--format="$FORMAT" --filter="$FILTER" | \ --format="$FORMAT" --filter="$FILTER" | \
@ -39,32 +39,108 @@ $GCLOUD compute images list --show-deprecated \
do do
count_image count_image
reason="" reason=""
permanent=$(egrep --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true) permanent=$(grep -E --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
[[ -z "$permanent" ]] || \ [[ -z "$permanent" ]] || \
die 1 "Refusing to delete a deprecated image labeled permanent=true. Please use gcloud utility to set image active, then research the cause of deprecation." die 1 "Refusing to delete a deprecated image labeled permanent=true. Please use gcloud utility to set image active, then research the cause of deprecation."
[[ "$dep_state" == "OBSOLETE" ]] || \ [[ "$dep_state" == "OBSOLETE" ]] || \
die 1 "Unexpected depreciation-state encountered for $name: $dep_state; labels: $labels" die 1 "Unexpected depreciation-state encountered for $name: $dep_state; labels: $labels"
# Any image matching the currently in-use IMG_SFX must always be preserved.
# Values are defined in cirrus.yml
# shellcheck disable=SC2154
if [[ "$name" =~ $IMG_SFX ]]; then
msg " Skipping current (latest) image $name"
continue
fi
reason="Obsolete as of $del_date; labels: $labels" reason="Obsolete as of $del_date; labels: $labels"
echo "$name $reason" >> $TODELETE echo "GCP $name $reason" >> $TODELETE
done done
msg "Searching for deprecated EC2 images prior to${NOR} $ABOUTNOW"
aws_init
# The AWS cli returns a huge blob of data we mostly don't need.
# # Use query statement to simplify the results. N/B: The get_tag_value()
# # function expects to find a "TAGS" item w/ list value.
ami_query='Images[*].{ID:ImageId,TAGS:Tags,DEP:DeprecationTime,SNAP:BlockDeviceMappings[0].Ebs.SnapshotId}'
all_amis=$($AWS ec2 describe-images --owners self --query "$ami_query")
nr_amis=$(jq -r -e length<<<"$all_amis")
req_env_vars all_amis nr_amis
for (( i=nr_amis ; i ; i-- )); do
count_image
unset ami ami_id dep snap permanent
ami=$(jq -r -e ".[$((i-1))]"<<<"$all_amis")
ami_id=$(jq -r -e ".ID"<<<"$ami")
dep=$(jq -r -e ".DEP"<<<"$ami")
if [[ "$dep" == null ]] || [[ -z "$dep" ]]; then continue; fi
dep_ymd=$(date --date="$dep" --iso-8601=date)
snap=$(jq -r -e ".SNAP"<<<$ami)
if permanent=$(get_tag_value "permanent" "$ami") && \
[[ "$permanent" == "true" ]]
then
warn 0 "Found permanent image '$ami_id' with deprecation '$dep_ymd'. Clearing deprecation date."
$AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
continue
fi
unset name
if ! name=$(get_tag_value "Name" "$ami"); then
warn 0 " EC2 AMI ID '$ami_id' is missing a 'Name' tag"
fi
# Any image matching the currently in-use IMG_SFX
# must always be preserved.
if [[ "$name" =~ $IMG_SFX ]]; then
warn 0 " Retaining current (latest) image $name id $ami_id"
$AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
continue
fi
if [[ $(echo -e "$ABOUTNOW\n$dep_ymd" | sort | tail -1) == "$ABOUTNOW" ]]; then
reason="Obsolete as of '$dep_ymd'; snap=$snap"
echo "EC2 $ami_id $reason" >> $TODELETE
fi
done
COUNT=$(<"$IMGCOUNT")
CANDIDATES=$(wc -l <$TODELETE)
msg "########################################################################"
msg "Deleting up to $DELETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"
# Require a minimum number of images to exist
if [[ "$CANDIDATES" -lt $DELETE_LIMIT ]]
then
die 0 "Safety-net: Insufficient candidates ($CANDIDATES) to process deletions ($DELETE_LIMIT required)"
fi
sort --random-sort $TODELETE | tail -$DELETE_LIMIT | \
while read -r cloud image_name reason; do
msg "Deleting $cloud $image_name:${NOR} $reason"
if ((DRY_RUN)); then
msg "Dry-run: No changes made"
elif [[ "$cloud" == "GCP" ]]; then
$GCLOUD compute images delete $image_name
elif [[ "$cloud" == "EC2" ]]; then
# Snapshot ID's always start with 'snap-' followed by a hexadecimal string
snap_id=$(echo "$reason" | sed -r -e 's/.* snap=(snap-[a-f0-9]+).*/\1/')
[[ -n "$snap_id" ]] || \
die 1 "Failed to parse EC2 snapshot ID for '$image_name' from string: '$reason'"
# The aws CLI tries to be as helpful and useful as possible, so not all
# failure conditions result in a non-zero exit >:(
unset output
output=$($AWS ec2 deregister-image --image-id "$image_name")
[[ ! "$output" =~ An\ error\ occurred ]] || \
die 1 "$output"
msg " ...deleting snapshot $snap_id:${NOR} (formerly used by $image_name)"
output=$($AWS ec2 delete-snapshot --snapshot-id "$snap_id")
[[ ! "$output" =~ An\ error\ occurred ]] || \
die 1 "$output"
else
die 1 "Unknown/Unsupported cloud '$cloud' record encountered in \$TODELETE file"
fi
done
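The EC2 branch above recovers the snapshot ID from the free-text `$reason` field with `sed`. That parse can be exercised in isolation; a sketch using a fabricated record that mirrors the `snap=...` convention:

```shell
# Fabricated example record; real ones are written by the pruning loop above
reason="Obsolete as of '2023-12-01'; snap=snap-0123abcd"
snap_id=$(echo "$reason" | sed -r -e 's/.* snap=(snap-[a-f0-9]+).*/\1/')
echo "$snap_id"  # → snap-0123abcd
```

Note that if the regex fails to match, `sed` passes the input through unchanged rather than emitting an empty string, which is why the script also validates the result before calling `delete-snapshot`.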


@ -1,19 +1,18 @@
ARG CENTOS_STREAM_RELEASE=9
FROM quay.io/centos/centos:stream${CENTOS_STREAM_RELEASE}
# Only needed for installing build-time dependencies
COPY /imgts/google-cloud-sdk.repo /etc/yum.repos.d/google-cloud-sdk.repo
RUN dnf -y update && \
    dnf -y install epel-release && \
    dnf -y install python3 jq libxcrypt-compat && \
    dnf -y install google-cloud-sdk && \
    dnf clean all
# https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
ARG AWSURL="https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
RUN dnf -y install unzip glibc groff-base less && \
    dnf clean all && \
    cd /tmp && \
    curl --fail --location -O "${AWSURL}" && \
    unzip awscli*.zip && \


@ -181,6 +181,10 @@ if [[ -n "$EC2IMGNAMES" ]]; then
else
msg "${DRPREFIX}Updated image $image ($amiid) metadata."
fi
# Ensure image wasn't previously marked as deprecated. Ignore
# confirmation output.
$AWS ec2 disable-image-deprecation --image-id "$amiid" > /dev/null
done
fi


@ -1,19 +1,9 @@
# Copy-pasted from https://cloud.google.com/sdk/docs/install#red-hatfedoracentos
[google-cloud-cli]
name=Google Cloud CLI
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el9-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


@ -5,7 +5,7 @@ set -e
RED="\e[1;31m"
YEL="\e[1;33m"
NOR="\e[0m"
SENTINEL="__unknown__" # default set in Containerfile
# Disable all input prompts
# https://cloud.google.com/sdk/docs/scripting-gcloud
GCLOUD="gcloud --quiet"
@ -55,7 +55,7 @@ gcloud_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .XXXXXXXX)
trap "rm -f $TMPF &> /dev/null" EXIT
# Required variable must be set by caller
# shellcheck disable=SC2154
@ -77,7 +77,7 @@ aws_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .XXXXXXXX)
fi
# shellcheck disable=SC2154
echo "$AWSINI" > $TMPF


@ -1,94 +0,0 @@
# Semi-manual image imports
## Overview
[Due to a bug in
packer](https://github.com/hashicorp/packer-plugin-amazon/issues/264) and
the sheer complexity of EC2 image imports, this process is impractical to
automate fully. It nearly always requires human supervision:
* There are multiple failure-points, some are not well reported to
the user by tools here or by AWS itself.
* The upload of the image to S3 can be unreliable, silently corrupting
  image data.
* The import-process is managed by a hosted AWS service which can be slow
and is occasionally unreliable.
* Failure often results in one or more leftover/incomplete resources
(s3 objects, EC2 snapshots, and AMIs)
## Requirements
* You're generally familiar with the (manual)
[EC2 snapshot import process](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html).
* You are in possession of an AWS EC2 account, with the [IAM policy
`vmimport`](https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html#vmimport-role) attached.
* Both "Access Key" and "Secret Access Key" values set in [a credentials
file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
* Podman is installed and functional
* At least 10 GB of free space under `/tmp`; more if there are failures / multiple runs.
* *Network bandwidth sufficient for downloading and uploading many GBs of
data, potentially multiple times.*
## Process
Unless there is a problem with the current contents or age of the
imported images, this process does not need to be followed. The
normal PR-based build workflow can simply be followed as usual.
This process is only needed to bring newly updated Fedora images into
AWS to build CI images from. For example, due to a new Beta or GA release.
***Note:*** Most of the steps below will happen within a container environment.
Any exceptions are noted in the individual steps below with *[HOST]*
1. *[HOST]* Edit the `Makefile`, update the Fedora release numbers
under the section
`##### Important image release and source details #####`
1. *[HOST]* Run
```bash
$ make image_builder_debug \
IMG_SFX=$(date +%s) \
GAC_FILEPATH=/dev/null \
AWS_SHARED_CREDENTIALS_FILE=/path/to/.aws/credentials
```
1. Run `make import_images` (or `make --jobs=4 import_images` if you're brave).
1. The following steps should all occur successfully for each imported image.
1. Image is downloaded.
1. Image checksum is downloaded.
1. Image is verified against the checksum.
1. Image is converted to `VHDX` format.
1. The `VHDX` image is uploaded to the `packer-image-import` S3 bucket.
1. AWS `import-snapshot` process is started (uses AWS vmimport service)
1. Progress of snapshot import is monitored until completion or failure.
1. The imported snapshot is converted into an AMI
1. Essential tags are added to the AMI
1. An ASCII table of details about the new AMI is printed on success.
1. Assuming all image imports were successful, a final success message will be
printed by `make` with instructions for updating the `Makefile`.
1. *[HOST]* Update the `Makefile` as instructed, commit the
changes and push to a PR. The automated image building process
takes over and runs as usual.
## Failure responses
This list is not exhaustive, and only represents common/likely failures.
Normally there is no need to exit the build container.
* If image download fails, double-check any error output, run `make clean`
and retry.
* If checksum validation fails, run `make clean`, then retry
  `make import_images`.
* If the S3 upload fails, confirm service availability, then retry
  `make import_images`.
* If snapshot import fails with a `Disk validation failed` error, retry
  `make import_images`.
* If snapshot import fails with a non-validation error, find the snapshot
  in EC2 and delete it manually. Retry `make import_images`.
* If AMI registration fails, remove any conflicting AMIs *and* snapshots.
  Retry `make import_images`.
* If import was successful but AMI tagging failed, manually add the
  required tags to the AMI: `automation=false` and `Name=<name>-i${IMG_SFX}`,
  where `<name>` is `fedora-aws` or `fedora-aws-arm64`.


@ -1,45 +0,0 @@
#!/bin/bash
# This script is intended to be run by packer, usage under any other
# environment may behave badly. Its purpose is to download a VM
# image and a checksum file. Verify the image's checksum matches.
# If it does, convert the downloaded image into the format indicated
# by the first argument's `.extension`.
#
# The first argument is the file path and name for the output image,
# the second argument is the image download URL (ending in a filename).
# The third argument is the download URL for a checksum file containing
# the details needed to verify the image file named in the download URL.
set -eo pipefail
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
[[ "$#" -eq 3 ]] || \
die "Expected to be called with three arguments, not: $#"
# Packer needs to provide the desired filename as it's unable to parse
# a filename out of the URL or interpret output from this script.
dest_dirpath=$(dirname "$1")
dest_filename=$(basename "$1")
dest_format=$(cut -d. -f2<<<"$dest_filename")
src_url="$2"
src_filename=$(basename "$src_url")
cs_url="$3"
req_env_vars dest_dirpath dest_filename dest_format src_url src_filename cs_url
mkdir -p "$dest_dirpath"
cd "$dest_dirpath"
[[ -r "$src_filename" ]] || \
curl --fail --location -O "$src_url"
echo "Downloading & verifying checksums in $cs_url"
curl --fail --location "$cs_url" -o - | \
sha256sum --ignore-missing --check -
echo "Converting '$src_filename' to ($dest_format format) '$dest_filename'"
qemu-img convert "$src_filename" -O "$dest_format" "${dest_filename}"
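The destination format is taken from the output filename's extension via `cut`. A quick sketch of that parse (note `-f2` takes the second dot-delimited field, so a multi-dot name such as `foo.bar.vhdx` would yield `bar`, not the extension):

```shell
# Single-dot filename, as packer passes in practice
dest_filename="fedora-aws.vhdx"
dest_format=$(echo "$dest_filename" | cut -d. -f2)
echo "$dest_format"  # → vhdx
```

The resulting string is handed directly to `qemu-img convert -O`, so it must be a format name qemu-img recognizes (e.g. `qcow2`, `vhdx`, `raw`).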


@ -1,31 +0,0 @@
{
"builds": [
{
"name": "fedora-aws",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-i@@@IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
},
{
"name": "fedora-aws-arm64",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-arm64-i@@@IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
}
],
"last_run_uuid": "00000000-0000-0000-0000-000000000000"
}
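The `@@@…@@@` tokens in this stub manifest are placeholders, presumably filled in with real values by the build tooling before use. A hypothetical substitution sketch with `sed` (the values shown are made up):

```shell
# Hypothetical values; the real ones come from the build environment
IMG_SFX=1690000000
line='"IMG_SFX": "fedora-aws-i@@@IMG_SFX@@@"'
echo "$line" | sed "s/@@@IMG_SFX@@@/$IMG_SFX/"
# → "IMG_SFX": "fedora-aws-i1690000000"
```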


@ -1,18 +0,0 @@
{
"Name": "@@@NAME@@@-i@@@IMG_SFX@@@",
"VirtualizationType": "hvm",
"Architecture": "@@@ARCH@@@",
"EnaSupport": true,
"RootDeviceName": "/dev/sda1",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "@@@SNAPSHOT_ID@@@",
"VolumeSize": 10,
"VolumeType": "gp2"
}
}
]
}


@ -1,84 +0,0 @@
#!/bin/bash
# This script is intended to be called by the main Makefile
# to wait for and confirm successful import and conversion
# of an uploaded image object from S3 into EC2. It expects
# the path to a file containing the import task ID as the
# first argument.
#
# If the import is successful, the snapshot ID is written
# to stdout. Otherwise, all output goes to stderr, and
# the script exits non-zero on failure or timeout. On
# failure, the file containing the import task ID will
# be removed.
set -eo pipefail
AWS="${AWS:-aws --output json --region us-east-1}"
# The import/conversion process can take a LONG time, have observed
# > 10 minutes on occasion. Normally, takes 2-5 minutes.
SLEEP_SECONDS=10
TIMEOUT_SECONDS=720
TASK_ID_FILE="$1"
tmpfile=$(mktemp -p '' tmp.$(basename ${BASH_SOURCE[0]}).XXXX)
die() { echo "ERROR: ${1:-No error message provided}" > /dev/stderr; exit 1; }
msg() { echo "${1:-No error message provided}" > /dev/stderr; }
unset snapshot_id
handle_exit() {
set +e
rm -f "$tmpfile" &> /dev/null
if [[ -n "$snapshot_id" ]]; then
msg "Success ($task_id): $snapshot_id"
echo -n "$snapshot_id" > /dev/stdout
return 0
fi
rm -f "$TASK_ID_FILE"
die "Timeout or other error reported while waiting for snapshot import"
}
trap handle_exit EXIT
[[ -n "$AWS_SHARED_CREDENTIALS_FILE" ]] || \
die "\$AWS_SHARED_CREDENTIALS_FILE must not be unset/empty."
[[ -r "$1" ]] || \
die "Can't read task id from file '$TASK_ID_FILE'"
task_id=$(<$TASK_ID_FILE)
msg "Waiting up to $TIMEOUT_SECONDS seconds for '$task_id' import. Checking progress every $SLEEP_SECONDS seconds."
for (( i=$TIMEOUT_SECONDS ; i ; i=i-$SLEEP_SECONDS )); do \
# Sleep first, to give AWS time to start meaningful work.
sleep ${SLEEP_SECONDS}s
$AWS ec2 describe-import-snapshot-tasks \
--import-task-ids $task_id > $tmpfile
if ! st_msg=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.StatusMessage?' $tmpfile) && \
[[ -n $st_msg ]] && \
[[ ! "$st_msg" =~ null ]]
then
die "Unexpected result: $st_msg"
elif egrep -iq '(error)|(fail)' <<<"$st_msg"; then
die "$task_id: $st_msg"
fi
msg "$task_id: $st_msg (${i}s remaining)"
# Why AWS you use StatusMessage && Status? Bad names! WHY!?!?!?!
if status=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.Status?' $tmpfile) && \
[[ "$status" == "completed" ]] && \
snapshot_id=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId?' $tmpfile)
then
msg "Import complete to: $snapshot_id"
break
else
unset snapshot_id
fi
done
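The error detection in the loop above keys on substrings of the status message; the same check in isolation (using `grep -E`, the modern spelling of `egrep`), against a fabricated message:

```shell
# Fabricated status message of the kind AWS returns on a failed import
st_msg="ClientError: Disk validation failed"
if printf '%s' "$st_msg" | grep -Eiq '(error)|(fail)'; then
    echo "import failed: $st_msg"
fi
```

Matching case-insensitively on both "error" and "fail" catches the common failure strings without needing to enumerate every status AWS might report.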

lib.sh

@ -19,26 +19,11 @@ OS_REL_VER="$OS_RELEASE_ID-$OS_RELEASE_VER"
# This location is checked by automation in other repos, please do not change.
PACKAGE_DOWNLOAD_DIR=/var/cache/download
# N/B: This is managed by renovate
INSTALL_AUTOMATION_VERSION="5.0.1"
PUSH_LATEST="${PUSH_LATEST:-0}"
# Mask secrets in show_env_vars() from automation library
SECRET_ENV_RE='(^PATH$)|(^BASH_FUNC)|(^_.*)|(.*PASSWORD.*)|(.*TOKEN.*)|(.*SECRET.*)|(.*ACCOUNT.*)|(.+_JSON)|(AWS.+)|(.*SSH.*)|(.*GCP.*)'
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment
@ -55,13 +40,28 @@ else # Automation common library not installed yet
bigto() { die "Automation library not installed; Required for bigto()"; }
fi
# Setting noninteractive is critical, apt-get can hang w/o it.
# N/B: Must be done _after_ potential loading of automation libraries
export SUDO="env DEBIAN_FRONTEND=noninteractive"
if [[ "$UID" -ne 0 ]]; then
export SUDO="sudo env DEBIAN_FRONTEND=noninteractive"
fi
install_automation_tooling() {
local version_arg
version_arg="$INSTALL_AUTOMATION_VERSION"
if [[ "$1" == "latest" ]]; then
version_arg="latest"
shift
fi
# This script supports installing all current and previous versions
local installer_url="https://raw.githubusercontent.com/containers/automation/master/bin/install_automation.sh"
curl --silent --show-error --location \
--url "$installer_url" | \
$SUDO env INSTALL_PREFIX=/usr/share /bin/bash -s - \
"$version_arg" "$@"
# This defines AUTOMATION_LIB_PATH
source /usr/share/automation/environment
#shellcheck disable=SC1090
@ -168,9 +168,13 @@ skip_on_pr_label() {
# print a space-separated list of labels when run under Cirrus-CI for a PR
get_pr_labels() {
req_env_vars CIRRUS_CI CIRRUS_REPO_CLONE_TOKEN
req_env_vars CIRRUS_REPO_OWNER CIRRUS_REPO_NAME
# Empty for non-PRs
# shellcheck disable=SC2154
[[ -n "$CIRRUS_PR" ]] || return 0
local query h_accept h_content api result fltrpfx
local filter labels h_auth h_accept h_content
@ -234,7 +238,7 @@ remove_netavark_aardvark_files() {
do
# Sub-directories may contain unrelated/valuable stuff
if [[ -d "$fullpath" ]]; then continue; fi
$SUDO rm -vf "$fullpath"
done
}
@ -282,6 +286,16 @@ unmanaged-devices=interface-name:*podman*;interface-name:veth*
EOF
}
# Create a local registry, seed it with remote images
initialize_local_cache_registry() {
msg "Initializing local cache registry"
#shellcheck disable=SC2154
$SUDO ${SCRIPT_DIRPATH}/local-cache-registry initialize
msg "du -sh /var/cache/local-registry"
du -sh /var/cache/local-registry
}
common_finalize() {
set -x # extra detail is no-longer necessary
cd /
@ -294,7 +308,7 @@ common_finalize() {
$SUDO rm -rf /var/lib/cloud/instanc*
$SUDO rm -rf /root/.ssh/*
$SUDO rm -rf /etc/ssh/*key*
$SUDO rm -rf /tmp/* /var/tmp/automation_images
$SUDO rm -rf /tmp/.??*
echo -n "" | $SUDO tee /etc/machine-id
$SUDO sync
@ -316,7 +330,10 @@ rh_finalize() {
# Packaging cache is preserved across builds of container images
$SUDO rm -f /etc/udev/rules.d/*-persistent-*.rules
$SUDO touch /.unconfigured # force firstboot to run
echo
echo "# PACKAGE LIST"
rpm -qa | sort
}
# Called during VM Image setup, not intended for general use.
@ -332,7 +349,9 @@ debian_finalize() {
fi
set -x
# Packaging cache is preserved across builds of container images
# pipe-cat is not a NOP! It prevents using $PAGER and then hanging
echo "# PACKAGE LIST"
dpkg -l | cat
}
finalize() { finalize() {
@ -345,4 +364,6 @@ finalize() {
else
die "Unknown/Unsupported Distro '$OS_RELEASE_ID'"
fi
common_finalize
}


@ -40,8 +40,10 @@ fi
# I don't expect there will ever be more than maybe 0-20 instances at any time.
for instance_index in $(seq 1 $(jq -e 'length'<<<"$simple_inst_list")); do
instance=$(jq -e ".[$instance_index - 1]"<<<"$simple_inst_list")
# aws commands require an instance ID
instid=$(jq -r ".ID"<<<"$instance")
# A Name-tag isn't guaranteed, default to stupid, unreadable, generated ID
name=$instid
if name_tag=$(get_tag_value "Name" "$instance"); then
# This is MUCH more human-friendly and easier to find in the WebUI.
# If it was an instance leaked by Cirrus-CI, it may even include the
@ -69,6 +71,7 @@ for instance_index in $(seq 1 $(jq -e 'length'<<<"$simple_inst_list")); do
continue
fi
# First part of the status line item to append in the e-mail
line="* VM $name running $age_days days"
# It would be nice to list all the tags like we do for GCE VMs,
@ -76,7 +79,39 @@ for instance_index in $(seq 1 $(jq -e 'length'<<<"$simple_inst_list")); do
# Only print this handy-one (set by get_ci_vm) if it's there.
if inuseby_tag=$(get_tag_value "in-use-by" "$instance"); then
dbg "Found instance '$name' tagged in-use-by=$inuseby_tag."
line+="; likely get_ci_vm, in-use-by=$inuseby_tag"
elif ((DRY_RUN==0)); then # NOT a persistent or a get_ci_vm instance
# Around Jun/Jul '23 an annoyingly steady stream of EC2 orphans was
# reported to Cirrus-support. They've taken actions to resolve it,
# but the failure-modes are many and complex. Since most of the EC2
# instances are rather expensive to keep needlessly running, and manual
# cleanup is annoying, try to terminate them automatically.
dbg "Attempting to terminate instance '$name'"
# Operation runs asynchronously, no error reported for already terminated instance.
# Any stdout/stderr here would make the eventual e-mail unreadable.
if ! termout=$(aws ec2 terminate-instances --no-paginate --output json --instance-ids "$instid" 2>&1)
then
echo "::error::Auto-term. of '$instid' failed, 'aws' output: $termout" > /dev/stderr
# Catch rare TOCTOU race, instance was running, terminated, and pruned while looping.
# (terminated instances stick around for a while until purged automatically)
if [[ "$termout" =~ InvalidInstanceID ]]; then
line+="; auto-term. failed, instance vanished"
else # Something else horrible broke, let the operators know.
line+="; auto-term. failed, see GHA workflow log"
fi
else
dbg "Successful term. command output: '$termout'"
# At this point, the script could sit around in a poll-loop, waiting to confirm
# the `$termout` JSON contains `CurrentState: { Code: 48, Name: terminated }`.
# However this could take _minutes_, and there may be a LOT of instances left
# to process. Do the next best thing: Hope the termination eventually works,
# but also let the operator know an attempt was made.
line+="; probably successful auto-termination"
fi
else # no in-use-by tag, DRY_RUN==1
dbg "DRY_RUN: Would normally have tried to terminate instance '$name' (ID $instid)"
fi
echo "$line" >> "$OUTPUT"


@ -18,7 +18,9 @@ req_env_vars GCPJSON GCPNAME GCPPROJECT GCPPROJECTS AWSINI
NOW=$(date +%s)
TOO_OLD='3 days ago' # Detect Friday Orphans on Monday
EVERYTHING=${EVERYTHING:-0} # set to '1' for testing
DRY_RUN=${DRY_RUN:-0}
if ((EVERYTHING)); then
DRY_RUN=1
TOO_OLD="3 seconds ago"
fi
# Anything older than this is "too old" # Anything older than this is "too old"


@ -15,6 +15,16 @@ ARG PACKER_BUILD_NAME=
ENV AI_PATH=/usr/src/automation_images \
    CONTAINER=1
ARG IMG_SFX=
ARG CIRRUS_TASK_ID=
ARG GIT_HEAD=
# Ref: https://github.com/opencontainers/image-spec/blob/main/annotations.md
LABEL org.opencontainers.image.url="https://cirrus-ci.com/task/${CIRRUS_TASK_ID}"
LABEL org.opencontainers.image.documentation="https://github.com/containers/automation_images/blob/${GIT_HEAD}/README.md#container-images-overview-step-2"
LABEL org.opencontainers.image.source="https://github.com/containers/automation_images/blob/${GIT_HEAD}/podman/Containerfile"
LABEL org.opencontainers.image.version="${IMG_SFX}"
LABEL org.opencontainers.image.revision="${GIT_HEAD}"
# Only add needed files to avoid invalidating build cache
ADD /lib.sh "$AI_PATH/"
ADD /podman/* "$AI_PATH/podman/"


@ -12,7 +12,6 @@ RUN dnf -y update && \
dnf clean all
ENV REG_REPO="https://github.com/docker/distribution.git" \
    REG_COMMIT_SCHEMA1="ec87e9b6971d831f0eff752ddb54fb64693e51cd" \
    OSO_REPO="https://github.com/openshift/origin.git" \
    OSO_TAG="v1.5.0-alpha.3"


@ -9,7 +9,6 @@ set -e
declare -a req_vars
req_vars=(\
REG_REPO
REG_COMMIT_SCHEMA1
OSO_REPO
OSO_TAG
@ -43,12 +42,6 @@ cd "$REG_GOSRC"
(
# This is required to be set like this by the build system
export GOPATH="$PWD/Godeps/_workspace:$GOPATH"
# This comes in from the Containerfile
# shellcheck disable=SC2154
git checkout -q "$REG_COMMIT_SCHEMA1"
@ -68,6 +61,10 @@ sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' ./hack/common.sh
# 8 characters long. This can happen if/when systemd-resolved adds 'trust-ad'.
sed -i '/== "attempts:"/s/ 8 / 9 /' vendor/github.com/miekg/dns/clientconfig.go
# Backport https://github.com/ugorji/go/commit/8286c2dc986535d23e3fad8d3e816b9dd1e5aea6
# Go ≥ 1.22 panics with a base64 encoding using duplicated characters.
sed -i -e 's,"encoding/base64","encoding/base32", ; s,base64.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789__"),base32.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef"),' vendor/github.com/ugorji/go/codec/gen.go
make build
make all WHAT=cmd/dockerregistry
cp -a ./_output/local/bin/linux/*/* /usr/local/bin/


@ -7,11 +7,12 @@
set +e # Not all of these exist on every platform
# Setting noninteractive is critical, apt-get can hang w/o it.
if [[ "$UID" -ne 0 ]]; then
export SUDO="sudo env DEBIAN_FRONTEND=noninteractive"
fi
EVIL_UNITS="cron crond atd apt-daily-upgrade apt-daily fstrim motd-news systemd-tmpfiles-clean update-notifier-download mlocate-updatedb plocate-updatedb"
if [[ "$1" == "--list" ]]
then
@ -41,3 +42,44 @@ if [[ -d "$EAAD" ]]; then
echo "Checking/Patching $filename"
$SUDO sed -i -r -e "s/$PERIODIC_APT_RE/"'\10"\;/' "$EAAD/$filename"; done
fi
# Early 2023: https://github.com/containers/podman/issues/16973
#
# We see countless instances of "lookup cdn03.quay.io" flakes.
# Disabling the systemd resolver (Podman #17505) seems to have almost
# eliminated those -- the exceptions are early-on steps that run
# before that happens.
#
# Opinions differ on the merits of systemd-resolve, but the fact is
# it breaks our CI testing. Here we disable it for all VMs.
# shellcheck disable=SC2154
if ! ((CONTAINER)); then
nsswitch=/etc/authselect/nsswitch.conf
if [[ -e $nsswitch ]]; then
if grep -q -E 'hosts:.*resolve' $nsswitch; then
echo "Disabling systemd-resolved"
$SUDO sed -i -e 's/^\(hosts: *\).*/\1files dns myhostname/' $nsswitch
$SUDO systemctl disable --now systemd-resolved
$SUDO rm -f /etc/resolv.conf
# NetworkManager may already be running, or it may not....
$SUDO systemctl start NetworkManager
sleep 1
$SUDO systemctl restart NetworkManager
# ...and it may create resolv.conf upon start/restart, or it
# may not. Keep restarting until it does. (Yes, I realize
# this is cargocult thinking. Don't care. Not worth the effort
# to diagnose and solve properly.)
retries=10
while ! test -e /etc/resolv.conf;do
retries=$((retries - 1))
if [[ $retries -eq 0 ]]; then
die "Timed out waiting for resolv.conf"
fi
$SUDO systemctl restart NetworkManager
sleep 5
done
fi
fi
fi


@ -0,0 +1,4 @@
<powershell>
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
</powershell>

win_images/win-lib.ps1

@ -0,0 +1,50 @@
$ErrorActionPreference = "stop"
Set-ExecutionPolicy Bypass -Scope Process -Force
function Check-Exit {
    param(
        [parameter(ValueFromRemainingArguments = $true)]
        [string[]] $codes = @(0)
    )
    if ($LASTEXITCODE -eq $null) {
        return
    }
    foreach ($code in $codes) {
        if ($LASTEXITCODE -eq $code) {
            return
        }
    }
    Exit $LASTEXITCODE
}
# Retry installation on failure or 5-minute timeout (for all packages)
function retryInstall {
    param([Parameter(ValueFromRemainingArguments)] [string[]] $pkgs)
    foreach ($pkg in $pkgs) {
        for ($retries = 0; ; $retries++) {
            if ($retries -gt 5) {
                throw "Could not install package $pkg"
            }
            if ($pkg -match '(.[^\@]+)@(.+)') {
                $pkg = @("--version", $Matches.2, $Matches.1)
            }
            # Chocolatey best practices as of 2024-04:
            # https://docs.chocolatey.org/en-us/choco/commands/#scripting-integration-best-practices-style-guide
            # Some of those are suboptimal, e.g., using "upgrade" to mean "install",
            # hardcoding a specific API URL. We choose to reject those.
            choco install $pkg -y --allow-downgrade --execution-timeout=300
            if ($LASTEXITCODE -eq 0) {
                break
            }
            Write-Host "Error installing, waiting before retry..."
            Start-Sleep -Seconds 6
        }
    }
}


@@ -17,24 +17,29 @@ builders:
       most_recent: true
     owners:
       - amazon
   # While this image should run on metal, we can build it on smaller/cheaper systems
   instance_type: t3.large
   force_deregister: true  # Remove AMI with same name if exists
   force_delete_snapshot: true  # Also remove snapshots of force-removed AMI
   # Note that we do not set shutdown_behavior to terminate, as a clean shutdown is required
   # for windows provisioning to complete successfully.
   communicator: winrm
   winrm_username: Administrator  # AWS provisions Administrator, unlike GCE
   winrm_insecure: true
   winrm_use_ssl: true
   winrm_timeout: 25m
   # Script that runs on server start, needed to prep and enable winrm
   user_data_file: '{{template_dir}}/bootstrap.ps1'
   # Required for network access, must be the 'default' group used by Cirrus-CI
   security_group_id: "sg-042c75677872ef81c"
   ami_name: &ami_name '{{build_name}}-c{{user `IMG_SFX`}}'
   ami_description: 'Built in https://cirrus-ci.com/task/{{user `CIRRUS_TASK_ID`}}'
+  launch_block_device_mappings:
+    - device_name: '/dev/sda1'
+      volume_size: 200
+      volume_type: 'gp3'
+      iops: 6000
+      delete_on_termination: true
   # These are critical and used by security-policy to enforce instance launch limits.
   tags: &awstags
     # EC2 expects "Name" to be capitalized
@@ -53,18 +58,22 @@ builders:
 provisioners:
 - type: powershell
-  script: '{{template_dir}}/win_packaging.ps1'
+  inline:
+    - '$ErrorActionPreference = "stop"'
+    - 'New-Item -Path "c:\" -Name "temp" -ItemType "directory" -Force'
+    - 'New-Item -Path "c:\temp" -Name "automation_images" -ItemType "directory" -Force'
+- type: 'file'
+  source: '{{ pwd }}/'
+  destination: "c:\\temp\\automation_images\\"
+- type: powershell
+  inline:
+    - 'c:\temp\automation_images\win_images\win_packaging.ps1'
+# Several installed items require a reboot, do that now in case it would
+# cause a problem with final image preparations.
 - type: windows-restart
 - type: powershell
   inline:
-    # Disable WinRM as a security precaution (cirrus launches an agent from user-data, so we don't need it)
-    - Set-Service winrm -StartupType Disabled
-    # Also disable RDP (can be enabled via user-data manually)
-    - Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 1
-    - Disable-NetFirewallRule -DisplayGroup "Remote Desktop"
-# Setup Autologon and reset, must be last, due to pw change
-- type: powershell
-  script: '{{template_dir}}/auto_logon.ps1'
+    - 'c:\temp\automation_images\win_images\win_finalization.ps1'
 post-processors:
@@ -75,4 +84,3 @@ post-processors:
     IMG_SFX: '{{ user `IMG_SFX` }}'
     STAGE: cache
     TASK: '{{user `CIRRUS_TASK_ID`}}'


@@ -1,6 +1,13 @@
-$ErrorActionPreference = "stop"
-$username = "Administrator"
+. $PSScriptRoot\win-lib.ps1
+# Disable WinRM as a security precaution (cirrus launches an agent from user-data, so we don't need it)
+Set-Service winrm -StartupType Disabled
+# Also disable RDP (can be enabled via user-data manually)
+Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 1
+Disable-NetFirewallRule -DisplayGroup "Remote Desktop"
+$username = "Administrator"
 # Temporary random password to allow autologon that will be replaced
 # before the instance is put into service.
 $syms = [char[]]([char]'a'..[char]'z' `
@@ -15,8 +22,8 @@ $encPass = ConvertTo-SecureString $password -AsPlainText -Force
 Set-LocalUser -Name $username -Password $encPass
 $winLogon= "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
 Set-ItemProperty $winLogon "AutoAdminLogon" -Value "1" -type String
 Set-ItemProperty $winLogon "DefaultUsername" -Value $username -type String
 Set-ItemProperty $winLogon "DefaultPassword" -Value $password -type String
 # Lock the screen immediately, even though it's unattended, just in case
@@ -28,6 +35,6 @@ Set-ItemProperty `
 # NOTE: For now, we do not run sysprep, since initialization with reboots
 # are exceptionally slow on metal nodes, which these target to run. This
 # will lead to a duplicate machine id, which is not ideal, but allows
-# instances to start instantly. So, instead of sysprep, trigger a reset so
-# that the admin password reset, and activation rerun on boot
+# instances to start quickly. So, instead of sysprep, trigger a reset so
+# that the admin password reset, and activation rerun on boot.
 & 'C:\Program Files\Amazon\EC2Launch\ec2launch' reset --block


@@ -1,36 +1,36 @@
-function CheckExit {
-    param(
-        [parameter(ValueFromRemainingArguments = $true)]
-        [string[]] $codes = @(0)
-    )
-    if ($LASTEXITCODE -eq $null) {
-        return
-    }
-    foreach ($code in $codes) {
-        if ($LASTEXITCODE -eq $code) {
-            return
-        }
-    }
-    Exit $LASTEXITCODE
-}
+. $PSScriptRoot\win-lib.ps1
 # Disables runtime process virus scanning, which is not necessary
 Set-MpPreference -DisableRealtimeMonitoring 1
-$ErrorActionPreference = "stop"
-Set-ExecutionPolicy Bypass -Scope Process -Force
 [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
 iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
-# Install Git, BZ2 archive support, Go, and the MingW (GCC for Win) compiler for CGO support
-# Add pstools to workaround sess 0 WSL bug
-choco install -y git mingw archiver psexec; CheckExit
-choco install golang --version 1.19.2 -y; CheckExit
+# Install basic required tooling.
+# psexec needed to workaround session 0 WSL bug
+retryInstall 7zip git archiver psexec golang mingw StrawberryPerl zstandard; Check-Exit
+# Update service is required for dotnet
+Set-Service -Name wuauserv -StartupType "Manual"; Check-Exit
+# Install dotnet as that's the best way to install WiX 4+
+# Choco does not support installing anything over WiX 3.14
+Invoke-WebRequest -Uri https://dotnet.microsoft.com/download/dotnet/scripts/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1
+.\dotnet-install.ps1 -InstallDir 'C:\Program Files\dotnet'
+# Configure NuGet sources for dotnet to fetch wix (and other packages) from
+& 'C:\Program Files\dotnet\dotnet.exe' nuget add source https://api.nuget.org/v3/index.json -n nuget.org
+# Install wix
+& 'C:\Program Files\dotnet\dotnet.exe' tool install --global wix
+# Install Hyper-V
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell -All -NoRestart
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients -All -NoRestart
 # Install WSL, and capture text output which is not normally visible
-$x = wsl --install; CheckExit 0 1 # wsl returns 1 on reboot required
-Write-Output $x
+$x = wsl --install; Check-Exit 0 1 # wsl returns 1 on reboot required
+Write-Host $x
 Exit 0