Compare commits

...

241 Commits

Author SHA1 Message Date
Paul Holzinger a0b436c123
Merge pull request #411 from mtrmac/podman-sequoia
WIP: Install podman-sequoia in rawhide images
2025-08-19 20:31:41 +02:00
Miloslav Trmač d8d2fc4c90 Install podman-sequoia in rawhide images
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-08-12 19:33:06 +02:00
Miloslav Trmač 2c9f480248 Update the IMG_SFX rules to work on macOS
- (date --utc) is not supported
- The $(file ) make function is not supported
- macOS sed has no \+ in basic regular expressions, use
  the extended format
- (quote arguments to [ ] to avoid confusing error messages if an earlier sed fails)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-07-30 20:55:44 +02:00
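The portability points in the commit above can be sketched in shell (illustrative commands, not the repo's actual Makefile rules; the example suffix is taken from this log's date format):

```shell
# GNU date accepts --utc; BSD/macOS date spells it -u, which GNU also accepts.
ts=$(date -u +%Y%m%dt%H%M%Sz)

# macOS sed has no \+ in basic regular expressions; -E (extended) works on both.
sfx=$(echo "img-20250730t205544z" | sed -E 's/^img-([0-9]+t[0-9]+z)$/\1/')

# Quote arguments to [ ] so an empty variable yields a clean test failure
# instead of a confusing syntax error.
[ -n "$ts" ] && [ "$sfx" = "20250730t205544z" ] && echo "portable"
```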
Miloslav Trmač 34add92ba5
Merge pull request #410 from lsm5/skopeo-registry
skopeo_cidev: Depend on docker-distribution
2025-07-23 19:08:48 +02:00
Lokesh Mandvekar 3c73fc4fa8
skopeo / fedora cache_image: Install docker-distribution
Having the registry binary named `registry-v2` causes trouble for
`make test-integration-local`. The registry binary provided by the
docker-distribution package is just `/usr/bin/registry`.

Depending on docker-distribution should make things simpler, more
consistent, and usable regardless of the CI / testing environment.

In skopeo cirrus jobs, the integration tests are run on the host itself
but a lot of the binaries are copied from the skopeo_cidev container.
So, in this case docker-distribution is directly installed on the host
environment and the registry-v2 build is removed from the skopeo_cidev
image.

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2025-07-21 14:11:23 -04:00
Paul Holzinger 0e1497cd77
Merge pull request #408 from Luap99/podman-py-rm
remove podman-py
2025-07-01 10:14:23 +02:00
Paul Holzinger 08a78fef72
new image build 2025-06-27
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:52:11 +02:00
Paul Holzinger 6489ad88d4
remove podman-py
It only uses tmt now, not cirrus anymore, so delete all the image
build infra for it.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-27 17:51:05 +02:00
Paul Holzinger 6b776d0590
Merge pull request #407 from timcoding1988/feat/add-gh-to-fedora
Feat/add gh to fedora
2025-06-24 11:57:40 +02:00
timcoding1988 5f27145d64 1. adding gh 2. remove 4.0 timebomb check
Signed-off-by: Tim Zhou <tzhou@redhat.com>
2025-06-18 10:39:18 -04:00
Paul Holzinger 699dbfbcc1
Merge pull request #404 from Luap99/packages
update to Fedora 42 and add some packages
2025-04-23 11:21:52 +02:00
Paul Holzinger 56b6c5c1f8
update IMG_SFX 2025-04-22
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:26 +02:00
Paul Holzinger 1a7005b4ea
ci: work around build issue
All the base image jobs are failing with:

ssh-keygen -f /tmp/cirrus-ci-build_tmp/cidata.ssh -P "" -q -t ed25519
Saving key "/tmp/cirrus-ci-build_tmp/cidata.ssh" failed: Permission denied
make: *** [Makefile:216: /tmp/cirrus-ci-build_tmp/cidata.ssh] Error 1

I have no idea what happened, but let's try without selinux in case
selinux is blocking file access.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 15:08:20 +02:00
Paul Holzinger e960222013
f42: force newer criu
To fix broken checkpoint tests.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-22 11:58:46 +02:00
Paul Holzinger 087a6c4b24
AWS fedora: work around selinux bug
On f42 restorecon no longer applies the new label:
https://bugzilla.redhat.com/show_bug.cgi?id=2360183

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-16 16:35:42 +02:00
Paul Holzinger 12c503fb07
fedora: remove python3.8
The package has been removed in f42.

https://fedoraproject.org/wiki/Changes/RetirePython3.8

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 20:11:14 +02:00
Paul Holzinger 96f688b0e3
update to Fedora 42
It has been released.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:53 +02:00
Paul Holzinger 632e4b16f8
.github: check_cirrus_cron work around github bug
So I wondered why our email workflow only reported things for podman...

It seems the secrets: inherit is broken and no longer working, I see all
jobs on all repos failing with:

Error when evaluating 'secrets'. .github/workflows/check_cirrus_cron.yml (Line: 19, Col: 11): Secret SECRET_CIRRUS_API_KEY is required, but not provided while calling.

This makes no sense to me; I double-checked the names, nothing changed
on our side, and it is consistent for all projects. Interestingly, the
same thing passed on March 10 and 11 (on all repos) but failed both
before and after.

Per [1] we are not alone; anyway, let's try to get this working again
even if it means more duplication.

[1] https://github.com/actions/runner/issues/2709

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
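A hedged sketch of the duplication the commit accepts: instead of `secrets: inherit` on the reusable-workflow call, the secret is forwarded explicitly. The secret name comes from the error message above; the job name and workflow path are illustrative assumptions.

```yaml
jobs:
  cron_failures:
    uses: ./.github/workflows/check_cirrus_cron.yml
    secrets:
      # Forwarded explicitly as a workaround for the broken `secrets: inherit`.
      SECRET_CIRRUS_API_KEY: ${{ secrets.SECRET_CIRRUS_API_KEY }}
```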
Paul Holzinger ea0295744e
github: use thollander/actions-comment-pull-request
jungwinter/comment doesn't seem actively maintained and makes use of
the deprecated set-output command[1].

[1] https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 18:13:02 +02:00
Paul Holzinger e073d1b16d
debian: disable dnsmasq service
This conflicts with aardvark-dns, which binds the same port (53).

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-01 11:20:18 +02:00
Paul Holzinger af87d70dce
add sqlite3 lib/dev packages
I'd like to dynamically link sqlite3 in podman builds to make the
binaries smaller.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-31 14:31:52 +02:00
Lokesh Mandvekar 879a69260c
Fedora cache image: install koji and fedora-distro-aliases
Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2025-03-31 14:23:09 +02:00
Paul Holzinger 564840b6bc
Merge pull request #402 from Luap99/new-images
new images 2025-03-24
2025-03-24 14:59:33 +01:00
Paul Holzinger 6c11ff7257
new images 2025-03-24
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-24 12:19:25 +01:00
Daniel J Walsh fe4e4f3cd7
Merge pull request #401 from Luap99/new-images
new images 20250312
2025-03-12 16:58:26 -04:00
Paul Holzinger 617fe85f37
new images 20250312
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-12 17:54:25 +01:00
Paul Holzinger 3319c260ad
Merge pull request #400 from Luap99/artifacts
add new testartifacts in the cache registry
2025-02-11 20:33:21 +01:00
Paul Holzinger 1a185cfb81
new images
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:08:49 +01:00
Paul Holzinger 3f7b07de69
debian: remove tar work around
Thanks to Reinhard for patching the debian package to no longer trigger
the bug.

https://salsa.debian.org/debian/tar/-/merge_requests/6

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:06:24 +01:00
Paul Holzinger d2652b1135
add new testartifact to image cache
This is needed by https://github.com/containers/podman/pull/25238

To avoid flakes we need to have the test artifacts in the cache
registry.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-10 17:02:56 +01:00
Paul Holzinger 4b32b8267d
Merge pull request #399 from Luap99/new-images
new images 2025-01-31
2025-02-03 16:04:16 +01:00
Paul Holzinger 4756da479a
new images 2025-01-31
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-31 13:19:19 +01:00
Paul Holzinger ed0f37f1bd
Merge pull request #398 from Luap99/new-images
new images
2025-01-07 18:46:23 +01:00
Paul Holzinger e5a1016f08
new images
Removed two timebombs that no longer apply: composefs is installed in
the main package list, and the pasta version is in stable now.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:24:36 +01:00
Paul Holzinger 8c6d4bb0bf
debian: remove git-daemon-run
The package no longer exists[1] in sid. Per a quick search it just
contained a simple script, not something we actually use. We need the
git daemon command, and that is already part of the main git package
AFAICS.

[1] 2de766588e

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-01-07 14:04:39 +01:00
Paul Holzinger 21cebe3fec
Merge pull request #397 from baude/add7z
Add 7zip Windows compression utility
2025-01-06 15:31:32 +01:00
Brent Baude 856110c78d Add 7zip Windows compression utility
The Fedora images used to test libhvee are now being shipped with xz
compression.  Because the golang xz decompression is extremely slow, I'm
proposing to use this command line utility.

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-12-18 09:52:12 -06:00
Paul Holzinger 46c3bf5c93
Merge pull request #396 from Luap99/podman-machine-os
add packages needed by podman-machine-os
2024-12-13 15:23:22 +01:00
Paul Holzinger d317246fd6
build new images
- remove old pasta bump and add new bump for rawhide issue
  https://github.com/containers/podman/issues/24804
- bump debian tar timebomb, it still has the same broken version

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-12 13:25:24 +01:00
Paul Holzinger 006e5b1db8
add packages needed by podman-machine-os
So that we do not have to deal with dnf install issues over there at
runtime.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-12-05 13:45:56 +01:00
Ed Santiago 99e20928ad
Merge pull request #394 from edsantiago/bump-systemd
Bump. Let's see if we pick up a new systemd.
2024-11-20 08:03:34 -07:00
Ed Santiago 7c285acaaa Bump. Let's see if we pick up a new systemd.
Desperate attempt to look into podman issue 24220, the
missing-logs-and-events flake. I noticed on 1mt that
rawhide is on systemd-257~rc1, which is what's on
debian, and we haven't seen 24220 on debian. F41
is still on 256.7.

Let's see what this PR brings in. If we get systemd-257
on rawhide, let's hammer at it on podman and see what
happens with 24220.

Also, fix a big duh on my part. My new README-simplified
had a line beginning with the word "timebomb", which
'make timebomb-check' interpreted as an actual timebomb
directive, which caused the check to fail. Workaround
is to shuffle words; a more proper solution might be
to exclude READMEs, or look only in *.sh files, or
some other smart filter.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-18 06:06:17 -07:00
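The README false positive described above can be reproduced with a trivial scan; the real `make timebomb-check` logic is not shown here, and this grep is only an illustration of how prose starting with the word "timebomb" trips a naive match.

```shell
workdir=$(mktemp -d)
# A real directive in a script...
printf 'timebomb 20241201 workaround for podman issue 24220\n' > "$workdir/setup.sh"
# ...and innocent prose in a README that happens to begin with the same word.
printf 'timebomb entries must include an expiry date\n' > "$workdir/README.md"

# A naive scan over all files flags both:
hits=$(grep -rl '^timebomb' "$workdir" | wc -l)
echo "naive scan: $hits files flagged"

# Restricting the scan to *.sh files avoids the false positive:
sh_hits=$(grep -l '^timebomb' "$workdir"/*.sh | wc -l)
echo "sh-only scan: $sh_hits files flagged"
```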
Paul Holzinger 454288919f
Merge pull request #393 from edsantiago/lets-see
Another bump, to pick up 6.11.6 kernel
2024-11-11 14:20:37 +01:00
Ed Santiago 2b3a418d3e Another bump, to get 6.11.6 kernel
Also, bump pasta on f40 just to eliminate all chances
of podman flake 24219.

Also, add a simplified README explaining the usual-case
actions in this repo.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-07 13:58:15 -07:00
Paul Holzinger f4bbaabf94
Merge pull request #392 from edsantiago/f41-clean
VMs: bump to f41
2024-11-07 19:23:52 +01:00
Ed Santiago 4b297585c3 bump IMG_SFX
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:35:17 -07:00
Ed Santiago 4839366e72 Installed packages: make them work again
Changes necessary to get working VM images. I can't remember
why all of these are necessary. I think the docker-compose
change is because that package started bringing in too many
unwanted dependencies that conflict with podman. Anyhow,
this works.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:32:10 -07:00
Ed Santiago aef024bab7 Changes needed for new dnf
Lots of things seem to have changed in dnf-land. These are the
changes that get us working again.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:30:06 -07:00
Ed Santiago 4a12d4e3bd Fedora AWS query: strip the us-east-1
Something has changed in Fedora images on AWS. The us-east-1 suffix
no longer exists. Remove it.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:26:07 -07:00
Ed Santiago 4392650a1c Fedora 41 is stable. Bump.
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-06 09:24:29 -07:00
Paul Holzinger 7ef71ffbbd
Merge pull request #389 from edsantiago/testimage-20241011
cache registry: add testimage:20241011
2024-10-17 13:47:08 +02:00
Ed Santiago 57ebb34516 cache registry: add testimage:20241011
Needed by podman for debugging a pasta flake and, more
importantly, supporting infrastructure changes (buildah 5595)
that break APIv2 test assumptions. Fixing these failures
will silence red-herring test failures in our ongoing
testing of zstd:chunked.

The 20240123 image is not used anywhere other than podman,
so it is safe to remove.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-16 08:46:44 -06:00
Ed Santiago a478e68664
Merge pull request #376 from inknos/update-python-versions-and-packages
Remove unused packages and update python versions
2024-10-15 08:36:03 -06:00
Nicola Sella 9301643309 Remove unused packages and update python versions
python-xdg was removed as a dependency
8d1020ecc8

tests are currently done for py12
330cf0b501

Signed-off-by: Nicola Sella <nsella@redhat.com>
2024-10-15 10:55:18 +02:00
Ed Santiago d8ee5ceae2
Merge pull request #387 from Luap99/win-zstd
Add zstd on windows
2024-10-10 11:54:35 -06:00
Paul Holzinger ef2c8f2e71
Build new images
Bump the debian tar timebomb, remove the manual crun install as the
package is stable now, and most importantly remove the IMA workaround
as the issue[1] is supposedly resolved; we will see if that is true.

[1] https://github.com/containers/podman/issues/18543

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:55:59 +02:00
Paul Holzinger aa36f713ee
windows: add zstandard package
Windows does not have zstd by default, so we need to install it. In
particular I am looking at switching the repo archive to zstd as this
makes things much faster (over 1min in podman)[1], but the windows
testing is unable to extract that. While archiver added zstd support a
while back, it is not in the version available on chocolatey, which
seems a bit out of date.

[1] https://github.com/containers/podman/pull/24120

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-10 12:42:38 +02:00
Ed Santiago 456905c2ed
Merge pull request #386 from edsantiago/test-crun-17
Build images with crun 1.17
2024-09-17 18:08:11 -06:00
Ed Santiago b5c7d46947 Build images with crun 1.17
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-09-11 09:09:35 -06:00
Lokesh Mandvekar 90ac9fc314
Merge pull request #385 from Luap99/ShellCheck
Add ShellCheck to fedora images
2024-09-11 19:12:00 +05:30
Paul Holzinger 2c858e70b9
Add ShellCheck to fedora images
It is installed at runtime in podman, which is not good[1]. Install it
here so we can drop the dnf install there.

Also update some timebombs: pasta is in stable now, tar is still broken
in debian, and the IMA bug is also still not fixed in podman.

[1] f22f4cfe50/contrib/cirrus/prebuild.sh (L54)

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-06 17:34:23 +02:00
Ed Santiago 454f7be018
Merge pull request #383 from edsantiago/main
Build new VMs
2024-08-26 13:01:35 -06:00
Ed Santiago 3bc493fe31 Build new VMs
Timebomb pasta 08-14 on f39. See how/if this works in podman.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-08-21 11:14:47 -06:00
Chris Evich 9f437cb621
Merge pull request #382 from cevich/fix_debug_test_flake
[CI:TOOLING] Fix test_debug_task passing/failing by chance
2024-08-20 19:06:07 -04:00
Chris Evich 5edc6ba963
Fix test_debug_task passing/failing by chance
There's no guarantee of nested-virt support with the standard
"pick first available" VM type done by the `&ibi_vm` alias.
However, nested-virt is required for `image_builder_debug`
matrix element of `test_debug_task`.  Switch to the alias
purpose-built to supply a nested-virt capable VM.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-20 16:24:49 -04:00
Chris Evich fc75a1a84a
Merge pull request #380 from cevich/faster_simpler_tooling_builds
[CI:TOOLING] Track image IDs instead of tar exports
2024-08-19 15:01:45 -04:00
Chris Evich 8b60787478
Update debugging docs
Clarify the difference between `ci_debug` and `image_builder_debug`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 9400efd805
Add tests for debug targets
Previously, if either debugging target broke in some way, nobody would
know.  Fix this by adding simple CI tests that confirm they build and
run a basic command.

Also, quiet down the unzipping of AWS cli tools.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 4958aa2422
Track image IDs instead of tar exports
Previously all container builds run by the Makefile managed them based
on presence/absence of a docker-archive tar file.  Producing these
exports is time-consuming and ultimately unnecessary extra work.  The
tar files are never actually consumed in a meaningful way by any other
targets.  Further, most of the container builds in CI run in parallel,
simply throwing away the tar when finished.

Fix this by switching to management based on image-ID files instead.
The only exception is the `imgts` image and images which are based on
it.  For those, some special handling is required (already done by the
CI build script), so some comments were added to assist.

Also, remove the `bench_stuff` target entirely as this has long since
been retired.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-19 12:49:57 -04:00
Chris Evich 217ff7ed3e
Merge pull request #379 from cevich/gcp_update
[CI:DOCS] Retire oversight of dnsname project
2024-08-19 10:12:22 -04:00
Chris Evich 4cd328ddfa
Minor: Update/clarify comment
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:19:16 -04:00
Chris Evich 1e2bebe9b0
Retire oversight of dnsname project
This github repo has been archived, CI disabled, and the GCE project
deleted.  Stop tracking it in automation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-16 10:15:42 -04:00
Chris Evich 3db41a4702
Merge pull request #375 from cevich/bigger_fedora_vms
Catch Fedora-base image update problems early
2024-08-12 16:36:07 -04:00
Chris Evich 46c104b403
Catch Fedora-base image update problems early
Previously updates were disabled due to the cloud VM only having 2-gig
and the nested-VM only having 1-gig of memory.  Allow Fedora base-image
package updates by increasing the available resources.  Enabling
base-level (esp. kernel) package updates early supports spotting
fundamental image problems early.  Otherwise they may not be found until
a set of images is deployed downstream.

Also, update a few comments relating to followup package update.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:43:29 -04:00
Chris Evich b162196e68
Merge pull request #374 from cevich/rm_network_flakes
Reduce impact of networking slowdowns
2024-08-12 13:39:40 -04:00
Chris Evich 0a1e3dbfff
Reduce impact of networking slowdowns
Previously if a repository server, the internet, or the execution
environment experienced some kind of networking slowdown, it could lead
to a package install or update timeout failure.  Increase resiliency in
these situations with additional retries, timeouts, and lowered minimum
rates.  Also increase the timeout on the related Cirrus-CI tasks.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 10:59:43 -04:00
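A hedged sketch of what such package-manager hardening can look like in `/etc/dnf/dnf.conf`; the exact values this commit uses are not shown in the log, so the numbers below are illustrative only.

```ini
[main]
# Retry each download a few extra times before giving up.
retries=5
# Allow slow mirrors more time per transfer.
timeout=60
# Lower the minimum acceptable transfer rate (bytes/sec)
# before a download is considered stalled.
minrate=100
```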
Ed Santiago 83c9b1661c
Merge pull request #371 from Luap99/ebpf
add bpftrace for CI debugging
2024-08-06 10:16:57 -06:00
Paul Holzinger 13b68fe5aa
new image IDs
Bump timebomb to Sep 1st, the podman issue is still not fixed and I
haven't looked at the debian bug but I assume it is also still not
fixed.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:32:30 +02:00
Paul Holzinger 5d99e6aed4
add bpftrace for CI debugging
I'd like to run a bpftrace-based program in CI to collect better logs
for specific processes not observed in normal testing, such as the
podman container cleanup command.

Given that you need full privileges to run eBPF, and that the package
pulls in an entire toolchain that is almost 500MB in install size, we
do not add it to the container images, to avoid bloating them without
reason.

https://github.com/containers/podman/pull/23487

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-08-05 19:05:24 +02:00
Ed Santiago 798e83dba9
Merge pull request #357 from edsantiago/local-cache-registry
Create a local registry
2024-07-22 05:42:13 -06:00
Ed Santiago 7e977eee41 Create a local registry
...to minimize hiccups. RUN-2091 in Jira. Network registries
are too unreliable; they cause too many flakes in CI. Here
we set up a registry running on each VM, prepopulated with
all container images used in podman and buildah tests.

Related PRs:
   https://github.com/containers/podman/pull/22726
   https://github.com/containers/buildah/pull/5584

Once those merge, podman and buildah CI tests will fetch
images from this local registry.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-08 09:26:55 -06:00
Chris Evich e1662886ab
Merge pull request #370 from cevich/increase_image_rm_rate
[CI:TOOLING] Increase obsolete image flagging and pruning
2024-07-08 11:23:37 -04:00
Chris Evich f67769a6ff
Increase obsolete image flagging and pruning
It was observed in the Cirrus-CI cron logs that only the total number
of images scanned is reported.  Fix this by giving more useful info,
like the number of candidates for obsoletion/pruning.

Relatedly, the restriction of `10` obsolete/prune images was
originally put in place when only a few repos utilized Cirrus-CI VMs
and image building was substantially less frequent.  The limit exists
to prevent a potential catastrophe should the `meta` time-stamp
updating tasks have a bug or some other related failure occur.
Increase the limit to `50` so deletions may proceed much more rapidly.

*Note:* "Obsolete" images still live w/in a 30-day window where they can
be recovered if need be.  It's simply that any attempted use by CI will
fail, putting someone on notice that image recovery may be necessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-08 09:56:36 -04:00
Chris Evich a86360dc58
Remove ref to missing tool
The `uuidgen` tool has long since been removed from the tooling images.
For whatever reason one call to it still existed.  Remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:48:44 -04:00
Chris Evich dd546e9037
Merge pull request #369 from cevich/aws_creds_docs
[CI:DOCS] Add link to AWS credentials file format
2024-07-05 11:23:35 -04:00
Chris Evich b0f018152e
[CI:DOCS] Add link to AWS credentials file format
Previously this was available in `import_images/README.md` which was
recently removed.  Since this page is difficult to find in the AWS docs,
link it directly into the main README.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-05 11:21:10 -04:00
Lokesh Mandvekar faf62c81b7
Merge pull request #354 from lsm5/dotnet
Windows: install dotnet and latest wix
2024-07-02 15:52:13 -04:00
Chris Evich b1864a66e9
Merge pull request #368 from cevich/fix_renovate_lib
[CI:DOCS] Fix renovate updating lib.sh
2024-07-02 14:38:03 -04:00
Chris Evich 07a870aa8e
Fix renovate updating lib.sh
Previously Renovate was failing in a multi-line search for an anchored
pattern in `lib.sh`.  This resulted in it completely ignoring the custom
regex manager for that file, as observed in the debug logs.  Fix this by
removing the regex anchors.

Also remove the filename anchors referenced in the `lib.sh` package rule
as they're unnecessary.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 14:28:53 -04:00
Chris Evich 419d61271c
Merge pull request #367 from cevich/fix_update_renovate_config
[CI:DOCS] Reformat renovate config + other minor updates
2024-07-02 14:18:23 -04:00
Lokesh Mandvekar 84304ec159
Windows: install dotnet and latest wix
wix3 is EOL and choco doesn't support installing wix > 3.14.

So, this commit installs the `dotnet` runtime and uses dotnet to install
the latest wix in the windows image.

Also remove pasta package timebomb from debian packaging.

Resolves: RUN-2055

Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2024-07-02 14:07:13 -04:00
Chris Evich 8319550d63
Reformat renovate config + other minor updates
Previously the Renovate configuration was using an older format no longer
supported by the bot.  Apply automatic fixes proposed by the bot,
re-adding/adjusting the old comments as needed.

Also:

* Drop automatic assignment of Renovate PRs to `cevich`
* Reference the GHCR registry container image
* Simplify CI VM update warning message conditions & text.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 13:59:09 -04:00
Ed Santiago 6b9b9f9f08
Merge pull request #366 from cevich/do_not_use_cirrus_base_sha
[CI:DOCS] Remove broken CIRRUS_BASE_SHA usage
2024-07-02 09:10:57 -06:00
Ed Santiago 38e7c58ee6
Merge pull request #363 from cevich/rm_import_images
Use fedoraproject published EC2 images
2024-07-02 09:10:13 -06:00
Chris Evich 03802c1e7a
Remove broken CIRRUS_BASE_SHA usage
Unfortunately this value doesn't properly reflect the current branch
point of a PR.  Replace it with a call to `git merge-base` instead.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-02 10:01:42 -04:00
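The replacement can be demonstrated in a throwaway repository; the branch name and committer identity below are illustrative, and this is only a sketch of the `git merge-base` behavior the commit relies on.

```shell
# Build a tiny history: one commit on the base branch, one on a PR branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
base_sha=$(git rev-parse HEAD)
git checkout -q -b pr-branch
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m change

# git merge-base reliably recovers the branch point of the PR branch.
[ "$(git merge-base "$base_sha" HEAD)" = "$base_sha" ] && echo "branch point found"
```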
Chris Evich 6ec9ceecf3
Merge pull request #365 from cevich/example_pre-commit-config
[CI:DOCS] Add example pre-commit config
2024-07-01 15:49:57 -04:00
Chris Evich fcf08a3e5a
Add example pre-commit config
Add suggested/example `pre-commit` configuration for this repo. To use
as-is, simply symlink to `.pre-commit-config.yaml`.  Otherwise it can
be a basis for a custom configuration.

Fix all findings from the example pre-commit hooks.

Also include codespell config w/ repo-specific dictionary extension.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 15:48:59 -04:00
Chris Evich 29014788ac
Use fedoraproject published EC2 images
Previously a very complex, manual, and failure-prone `import_images`
stage was required to bring raw images into EC2.  Primarily this was
necessary because beta images aren't published on EC2 by the
fedoraproject.  However, since the original implementation, CI
operations against rawhide have largely supplanted the need to support
testing against the beta images.  This means the 'import_images' stage
can be completely dropped, and the 'base_images' stage can simply source
images (including `rawhide` if necessary) published by the Fedora
project.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Chris Evich 108ec30605
Remove Debian pasta apparmor workaround
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:52:11 -04:00
Ed Santiago cfc18f05da
Merge pull request #364 from cevich/imgsfx_history
[CI:DOCS] Add pre-commit (app) hook to check IMGSFX
2024-07-01 09:35:03 -06:00
Chris Evich 2e5a2acfe2
Add pre-commit (app) hook to check IMGSFX
Intended for use by [the pre-commit
app](https://pre-commit.com/#intro), this hook keeps track of all IMG_SFX
values pushed, failing when any duplicate is found.  In the case of
pushing to PRs that don't build CI VM images, the hook failure must be
manually bypassed.  Example `.pre-commit-config.yaml`:

```yaml
---
repos:
  - repo: https://github.com/containers/automation_images.git
    rev: <tag or commit sha>
    hooks:
      - id: check-imgsfx
```

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-01 11:32:03 -04:00
Chris Evich 014b518abf
Merge pull request #362 from cevich/get_ci_vm_docs
[CI:DOCS] Improve get_ci_vm container docs
2024-06-24 15:55:23 -04:00
Chris Evich 03d55b684b
Improve get_ci_vm container docs
The readme contained a lot of technical/implementation details, but
lacked an overview of the architecture/operations.  Fix this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-24 11:16:43 -04:00
Paul Holzinger 8a55408a27
Merge pull request #361 from edsantiago/bump
Semiregular VM catchup
2024-06-21 14:32:56 +02:00
Ed Santiago 79bf8749af Semiregular VM catchup
- rawhide now includes rpm-plugin-ima, which breaks rootless
  podman pods. Add a timebomb'ed workaround until there's a
  more definitive solution in podman or its containers-* libraries

- bug fix for Makefile, handle indented timebombs

- install composefs in rawhide

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-06-20 09:31:27 -06:00
Ed Santiago 91846357a1
Merge pull request #360 from cevich/only_after_merge
[CI:DOCS] Stop tagging during cron runs
2024-05-29 14:51:03 -06:00
Chris Evich f7bdd130a7
Merge pull request #338 from edsantiago/debian_cgroups_v2
Debian: remove force-cgroups-v1 code
2024-05-29 14:38:18 -04:00
Chris Evich 7c1ecb657b
Stop tagging during cron runs
Previously the `tag_latest_images` task was executing during the daily
'lifecycle' Cirrus-cron job.  This was unintentional; the task should
only run after a merge onto the default branch.  Fix the condition.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 10:55:22 -04:00
Miloslav Trmač 1e2559b4af Backport a patch to avoid a panic when compiled with Go >= 1.22
> panic: encoding alphabet includes duplicate symbols

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:17:28 -06:00
Miloslav Trmač 564b76cfe1 Also stop plocate-updatedb
plocate is the default locate implementation in Fedora.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Miloslav Trmač 6cbfbbac05 Stop installing mlocate
It has been retired in Rawhide, and it's unclear whether
we need it at all.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-05-29 08:11:00 -06:00
Ed Santiago e50990987f Debian: remove force-cgroups-v1 code
Per discussion in 2024-03-20 Planning meeting, we will no
longer be testing runc in CI. And cgroups V1 is dead too.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-05-29 08:10:58 -06:00
Ed Santiago e48dc5d37e
Merge pull request #359 from cevich/fix_uuidgen
[CI:TOOLING] Fix missing uuidgen tool
2024-05-29 08:09:59 -06:00
Chris Evich aae598a48a
Fix missing uuidgen tool
Previously this tool was used by a few container images as a
half-hearted attempt at thwarting guesses of the credentials filename.
For whatever reason the `uuidgen` command is no longer present in the
latest base images, but this measure was unnecessary and not very
effective anyway, so remove it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 08:58:12 -04:00
Chris Evich 9acf75b6f5
Merge pull request #358 from cevich/fix_tag_latest
Fix tagging latest after [CI:TOOLING] PR merge
2024-05-29 08:55:57 -04:00
Chris Evich c63d02bec2
Fix tagging latest after [CI:TOOLING] PR merge
After a PR merges, a branch-level job runs to tag the new container
images.  However, there is a special case when a magic string is
present in the PR title: no Fedora/Skopeo images were built, so they
should not be tagged and must be ignored.

Prior to this commit, this special case wasn't handled correctly,
because `CIRRUS_CHANGE_TITLE` only contains the first line of the HEAD
commit.
When executing on a branch, after a PR merge, this would be something
like:

`Merge pull request #FOO from some/thing`

Therefore not matching the intended magic string.  Fix this by switching
to a check against `CIRRUS_CHANGE_MESSAGE` which includes the entire
message.  Importantly, when merged using the github UI, the second line
of the commit message should contain the PR description and thus the
magic string.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-28 16:30:28 -04:00
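The difference can be sketched with shell pattern matching. The `[CI:TOOLING]` magic string and PR #380's title come from this log; treating them as the matched string here is otherwise an assumption about the check's exact form.

```shell
# After a GitHub UI merge, the first line is the generic merge message;
# the PR title/description only appears on a later line of the message.
CIRRUS_CHANGE_TITLE='Merge pull request #380 from cevich/faster_simpler_tooling_builds'
CIRRUS_CHANGE_MESSAGE='Merge pull request #380 from cevich/faster_simpler_tooling_builds

[CI:TOOLING] Track image IDs instead of tar exports'

case "$CIRRUS_CHANGE_TITLE" in
  *"[CI:TOOLING]"*) echo "title matched" ;;
  *)                echo "title missed"  ;;
esac
case "$CIRRUS_CHANGE_MESSAGE" in
  *"[CI:TOOLING]"*) echo "message matched" ;;
esac
```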
Chris Evich afe1ced362
Merge pull request #356 from cevich/fix_get_ci_vm_test
[CI:TOOLING] Fix get_ci_vm test and new git safety checks
2024-05-23 15:02:29 -04:00
Chris Evich 499c24d856
Fix get_ci_vm test and new git safety checks
Previously, likely due to some git update, the following error was
produced:

```
Testing: Verify mock 'gcevm' flavor main() workflow produces expected
output
fail - Expected exit-code 0 but received 128 while executing
mock_gcevm_workflow (output follows)
Winning lottery-number checksum: 0
gcloud --configuration=automation_images --project=automation_images
compute instances create --zone=us-central1-a
--image-project=automation_images --image=test-image-name --custom-cpu=0
--custom-memory=0Gb --boot-disk-size=0 --labels=in-use-by=foobar
foobar-test-image-name
gcloud --configuration=automation_images --project=automation_images
compute ssh --ssh-flag=-o=AddKeysToAgent=yes --force-key-file-overwrite
--strict-host-key-checking=no --zone=us-central1-a
root@foobar-test-image-name -- true
Cloning into '/tmp/get_ci_vm_hRxAoX.tmp/var/tmp/automation_images'...
fatal: detected dubious ownership in repository at
'/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git'
To add an exception for this directory, call:

  git config --global --add safe.directory
/tmp/cirrus-ci-build/get_ci_vm/good_repo_test/.git
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```

Fix this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-23 14:32:42 -04:00
Ed Santiago b7395d11fe
Merge pull request #351 from Luap99/debian-tmpfs
debian: use tmpfs on /tmp + bump /tmp size on fedora
2024-05-13 13:15:32 -06:00
Paul Holzinger 09161bf540
bump image IMG_SFX
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-13 16:01:35 +02:00
Paul Holzinger aa79d45352
Update pasta apparmor profile
Now that we use /tmp we do not have to include the changes for /var/tmp.
However, we need r (read) access to /tmp, as pasta opens the path for
reading.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-13 15:37:19 +02:00
Paul Holzinger 663384815d
fedora: increase /tmp tmpfs size
By default tmpfs only gets 50% of all memory.  Given our programs don't
use that much, we should instead allow more /tmp space in case we have
to store more images.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-08 11:41:24 +02:00
Paul Holzinger a2d4af6eff
debian: use tmpfs on /tmp
To make tests faster, set up a tmpfs on /tmp like Fedora does, so that
tests do not have to write everything onto persistent disk.

Fixes #350

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-05-08 11:41:24 +02:00
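The two tmpfs changes above boil down to mount configuration; a hypothetical sketch (the size value is illustrative, not the one actually shipped):

```
# /etc/fstab sketch: give /tmp a tmpfs with an explicit size cap,
# overriding the kernel's default of 50% of RAM
tmpfs  /tmp  tmpfs  rw,nosuid,nodev,size=75%  0  0
```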
Ed Santiago 560a8f5db7
Merge pull request #349 from cevich/fedora40
Fedora40
2024-05-07 19:12:53 -06:00
Chris Evich ed4f43488b
Add debian pasta apparmor workaround
Ref:
https://github.com/containers/automation_images/pull/349#issuecomment-2090494124

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 26f0a720ed
Simplify setting Debian release version
Previously a convoluted system was used to add a "fake" release number
into `/etc/os-release` for CI/automation purposes.  It forced a
two-component version to satisfy some legacy automation-library needs.

Since the release number is also specified in the Makefile, and passed
into the packer call, it's trivial to simply provide this value to the
`debian_base-setup.sh` script.

This reduces complexity and avoids duplication.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 8fe782be13
Remove D12->13 grub timebomb
Previously this was necessary because simply updating the D12
grub-common package was no longer sufficient.  Importantly, make sure
the workaround/restriction on an update to tar is in place prior to
upgrading grub-common (which has a dependency on it).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich da749c4c9a
Bump debian base-image tar workaround timebomb
The version hasn't changed, continue using the "old" version of tar.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich cd783f07c3
Remove Fedora passt timebomb
There was a lot of churn in this area causing many problems in CI.
Remove the workaround to see if problems have settled out with the most
recent packages.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 4958b8a6b7
Bump up to CentOS Stream 9 tooling images
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich f9e42ece82
Bump CI VMs to F40 & F39
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-06 09:30:18 -04:00
Chris Evich 078da3cb58
Merge pull request #353 from cevich/no_build_push
[CI:DOCS] Fix test_build-push failing w/ no_build-push label
2024-05-03 14:10:02 -04:00
Chris Evich a6ab11b389
Fix test_build-push failing w/ no_build-push label
Previously the build-push task was much more sophisticated and able to
run even if a new CI VM image was not produced.  This situation has now
changed, and the testing task requires some additional "smarts" to not
run when its image wasn't built.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-03 14:03:13 -04:00
Chris Evich 64e25fa32b
Merge pull request #348 from cevich/bump_automation_lib
Bump automation library version
2024-04-24 12:54:37 -04:00
Chris Evich c3a0ca1aba
Bump container build timeout
Many/most of the container image builds rely on pulling packages from
repos that are sometimes slow/busy.  Give the tasks a bit of extra time
in case it's needed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:42:16 -04:00
Chris Evich 82ac450b89
Simplify build-push test
Previously this task depended on executing a downstream test script
intended for exercising an orthogonal orchestration script (which
happens to call `build-push.sh`).  Having upstream CI VM image builds
depend on a downstream script is very much not ideal.  Replace this with
a very quick/dirty test that simply confirms a multi-arch build
can function.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:39:53 -04:00
Chris Evich d9d87f33d6
Bump automation library version
Importantly, this contains a necessary fix for `build_push.sh` needed to
stop immutable-image existence-check failing on build (c/image_build
cron job).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-24 10:39:50 -04:00
Ed Santiago cf72ba2655
Merge pull request #340 from edsantiago/tmp-should-be-tmpfs
Revert /tmp to tmpfs
2024-04-24 07:02:41 -06:00
Ed Santiago b2adc260a8 Revert /tmp to tmpfs
Podman *really* needs /tmp to be tmpfs, to detect and
handle reboots. Although there are (at this time) no
reboots involved in CI testing, it's still important
for CI hosts to reflect something close to a real-world
environment. And, there is work underway to check /tmp:

  https://github.com/containers/podman/pull/22141

This PR removes special-case Fedora code that was
disabling a tmpfs /tmp mount. History dates back to
PR #30 back in 2020.

Some of the image-build code in this repo performs
reboots and relies on persistent tmp files, so you'll
note a flurry of /tmp -> /var/tmp changes.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-04-11 06:49:18 -06:00
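An aside on why tmpfs matters here: a minimal sketch of reboot detection via a sentinel file (the path is hypothetical and a temporary directory stands in for /tmp; this is not Podman's actual mechanism):

```shell
# A file under tmpfs-backed /tmp does not survive a reboot, so its
# absence on startup signals a fresh boot.  A mktemp directory stands
# in for /tmp to keep the sketch self-contained.
tmpdir=$(mktemp -d)
runfile="$tmpdir/reboot-sentinel"
if [ ! -e "$runfile" ]; then
    echo "fresh boot detected"
    touch "$runfile"
fi
```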
Ed Santiago fe0936e168
Merge pull request #346 from baude/validatepr
Add pre-commit to podman image
2024-04-11 06:46:33 -06:00
Ed Santiago 11f3c2a954
Merge branch 'main' into validatepr 2024-04-11 06:45:53 -06:00
Ed Santiago fc4b863bab
Merge pull request #347 from cevich/podman_oci_labels
Add OCI standard labels to podman images
2024-04-10 14:27:22 -06:00
Brent Baude 42fe503a39 Add PR validation packages to fedora image
In support of containers/podman/#22260, we need additional packages in
the podman fedora container:

* pre-commit
* man-db

Signed-off-by: Brent Baude <bbaude@redhat.com>
2024-04-10 15:06:40 -05:00
Chris Evich 5c66e14eca
Add OCI standard labels to podman images
Given a local container image, the OCI labels are very useful in tracking
down the source and revision from whence it came.  Tooling like Renovate
is also able to make use of these labels to suggest when newer versions
are available.

Note: The current OCI spec. references defining these as annotations,
however in practice, virtually nobody uses them.  Simple labels are much
more accessible to both humans and tooling (like Renovate).

Update the podman container images README section to reflect the
present-day reality.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-10 14:41:52 -04:00
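For reference, a hypothetical Containerfile sketch of the standard OCI keys applied as plain labels (all values illustrative):

```
FROM registry.fedoraproject.org/fedora:latest
# Standard OCI keys, applied as labels rather than annotations so both
# humans and tooling like Renovate can trace source and revision
LABEL org.opencontainers.image.source="https://github.com/containers/automation_images" \
      org.opencontainers.image.revision="<git commit sha>" \
      org.opencontainers.image.created="2024-04-10T00:00:00Z"
```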
Ed Santiago b38b5cf397
Merge pull request #345 from edsantiago/ya-new-pasta
Yet another new Pasta (04-05)
2024-04-10 06:39:08 -06:00
Ed Santiago 0ac3346842 Yet another new Pasta (04-05)
This one fixes a user-reported bug that we don't see in CI.
It's in bodhi for rawhide but no others. We want to test anyway.

Also, small changes to the Windows Chocolatey install command
to conform to (some of) a best-practices document. Link to it,
and explain why I disregard some of what they call "best".

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-04-09 13:25:47 -06:00
Ed Santiago 45282613ab
Merge pull request #344 from cevich/tag_latest_fedora_container
Tag latest fedora container
2024-04-09 10:23:23 -06:00
Chris Evich 8e0c9f3a52
Simplify latest image tagging
Previously when a PR was merged, another build ran for all the critical
container images, along with tagging them 'latest'.  This is not ideal,
because the content can change from the time the PR built and tested the
images until it was merged.  There is also an anticipated future
need to access the `fedora_podman` and `prior-fedora_podman` images via a
"latest" tag.

* Update image-build tasks to only run in PRs
* Simplify `ci/make_container_images.sh` to no-longer require/use a
  magic `$PUSH_LATEST` value.
* Deduplicate all FQIN references to reuse a common prefix in `$REGPFX`
* Add a new `tag_latest_images_task` that only runs on branches, and
  simply adds a `latest` tag to all container images based on the
  (as-merged) value of `$IMG_SFX` (from IMG_SFX file)

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-09 10:53:11 -04:00
Chris Evich 619c79f716
Merge pull request #343 from cevich/build_push_updates
Build-push CI VM: Stop caching fedora
2024-04-05 10:12:32 -04:00
Chris Evich eb80bb9c30
Build-push CI VM: Stop caching fedora
Pulling the latest Fedora images needed to build P|B|S images was
previously done at CI VM build time.  However this causes some problems
in containers/image_build automation relating to the last pulled
architecture not matching the local system.  Since CI VM images can
stick around for a number of months sometimes, caching the "latest"
Fedora image becomes less and less impactful.  Simply stop the practice.

Also add the `unzip` package to support future image_build automation
and bump several timebomb statements.  Remove the debian grub timebomb
as that issue has been fixed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-04 16:53:18 -04:00
Chris Evich 138d12e6e6
Merge pull request #342 from edsantiago/pasta-0326
Bump to pasta 03-26
2024-03-29 13:46:45 -04:00
Ed Santiago 0e56ce4e24 Bump to pasta 03-26
...and deal with broken grub on Debian. Switch to new better
debian blocking way, where we explicitly block broken versions
but allow future upgrades

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-03-28 08:09:26 -06:00
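The "new better debian blocking way" described above is typically done with apt pinning; a hypothetical sketch (package and version values are illustrative, not the ones actually used):

```
# /etc/apt/preferences.d/block-broken-grub (illustrative sketch):
# a negative Pin-Priority forbids exactly this version while leaving
# any later upload free to install
Package: grub-common
Pin: version 2.12-1
Pin-Priority: -1
```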
Chris Evich 791fd657c6
Merge pull request #339 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.12.0
2024-03-25 14:24:43 -04:00
renovate[bot] 6b9521f3d4
[skip-ci] Update dawidd6/action-send-mail action to v3.12.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-03-23 21:07:00 +00:00
Chris Evich 657b6acc75
Merge pull request #337 from edsantiago/howsitgoing
New VM build, just to see how things are
2024-03-20 15:29:11 -04:00
Ed Santiago c41f36a60f New VM build, just to see how things are
New pasta (03-20). And whatever else comes in.

Also: install StrawberryPerl on Windows, see:

  https://github.com/containers/podman/pull/21991

First CI-detected problem:

    debian: The following packages have unmet dependencies:
    debian:  libfuse2t64 : Breaks: libfuse2 (< 2.9.9-8.1)

Solution attempted: remove libfuse2 from INSTALL_PACKAGES

And, bump expired Debian timebombs

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-03-20 09:39:54 -06:00
Chris Evich a3e4099c72
Merge pull request #335 from cevich/migrate_build-push
Migrate build script from c/automation_images
2024-03-08 16:24:54 -05:00
Chris Evich ce9fbf2d1a
Migrate build script from c/automation_images
Ref: https://github.com/containers/image_build/pull/12

Previously the build-push scripts were run by automation in this repo.
That has since changed, with a migration over to the
containers/image_build repo.  However, while automation there uses the
most recent build-push VM image, that image is produced in this repo.
Arrange to test the latest script against just-produced VM images to
ensure the environment is always supportive for the script.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-08 14:18:58 -05:00
Chris Evich 53eea3160a
Push back rc6 kernel timebomb
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-07 14:30:21 -05:00
Chris Evich 7111e7a5e8
Merge pull request #334 from cevich/move_quayimages
[CI:DOCS] Migrate quay.io container image build
2024-03-05 15:43:46 -05:00
Chris Evich e256fc30e4
[CI:DOCS] Migrate quay.io container image build
Moved to: https://github.com/containers/image_build

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-05 15:04:19 -05:00
Chris Evich 930bf6b852
Merge pull request #333 from cevich/build_push_bug_fix
Build-push bug fix
2024-02-29 14:52:13 -05:00
Chris Evich b006128ff9
Minor: Additional build-push debugging statements
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-29 12:21:19 -05:00
Chris Evich 0c597a7ef3
Fix bug introduced by #332
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-29 12:21:19 -05:00
Chris Evich 059c4c608c
Merge pull request #332 from cevich/build_push_dot
Support no-clone build-push mode
2024-02-28 14:38:14 -05:00
Chris Evich 565d822329
Support no-clone build-push mode
As originally conceived, the build context for each image lives in the
respective podman, buildah, and skopeo repositories.  A future set of
PRs will move both the source and build automation into the
new containers/image_build repository.  This is needed to support
images that are point-in-time rebuildable and run test-builds on
image context changes.

Add a magic 2nd argument prefix ('.'), and conditionals to prevent
cloning the build context repo. This will allow for an interim period
where build automation can run from both the current and new repository
until the context repos can be moved.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-28 14:02:27 -05:00
Chris Evich 447529fcae
Merge pull request #331 from edsantiago/rc6
Bump. Hoping to get rc6 kernel in rawhide
2024-02-28 13:55:06 -05:00
Ed Santiago d1c008a1d1 Bump. Hoping to get rc6 kernel in rawhide
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-27 05:58:15 -07:00
Chris Evich 4f989daed5
Merge pull request #329 from edsantiago/ditch_f38
New VMs again, keeping f38
2024-02-22 16:01:08 -05:00
Ed Santiago c625377c36 New VMs yet again
Need new pasta 2024-02-20 to fix hanging-tests problem.

Pasta 2024-02-20 is not yet stable on all fedorae, so add
a timebombed force-install.

Also: podman-plugins is obsolete and does not exist in rawhide.
Ditch it.

Also: jobs are occasionally timing out. Bump up timeouts.

Also: fix broken timebomb check in Makefile

Also: bump up expired Debian timebombs

Also: sideload pasta 02-20 for Debian

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-22 07:30:08 -07:00
Chris Evich 7547b67e33
Merge pull request #323 from cevich/use_library_timebomb
Utilize the new library timebomb() function.
2024-02-16 10:50:51 -05:00
Chris Evich 7d010362a1
Utilize the new library timebomb() function
N/B: This new automation library version includes a significant update
to stdio redirection for all functions.  Careful testing of these images
is highly recommended.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-15 12:00:32 -05:00
Chris Evich 5af77ad53a
Merge pull request #327 from edsantiago/new_netavark
New VMs: we need netavark 1.10.3
2024-02-15 11:57:01 -05:00
Ed Santiago 15fe9709bb New VMs: we need netavark 1.10.2-1.fc40
Also, add "rpm -qa" (fedora) and "dpkg -l" (debian) so Ed's
package-version script can get better data. It would be nice
if we could save those to an artifact file, but we can't.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-12 05:21:20 -07:00
Chris Evich 5dfa6aebfa
Merge pull request #325 from edsantiago/no_more_cni
New VMs: include netavark in prior-fedora
2024-02-01 17:08:05 -05:00
Ed Santiago 8c0332d2a8 New VMs: include netavark in prior-fedora
CNI is deprecated, and will no longer be tested in CI (Podman
PR 21410).

We've been force-removing netavark from prior-fedora. Remove
this special case so now all fedorae have netavark.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-01 07:31:05 -07:00
Chris Evich d1ce228ced
Merge pull request #326 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.11.0
2024-01-31 14:02:24 -05:00
renovate[bot] 7f8ae66fb5
[skip-ci] Update dawidd6/action-send-mail action to v3.11.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-30 21:26:27 +00:00
Ed Santiago c6ce03e4a1
Merge pull request #324 from edsantiago/new-vms
Let's see what we pick up this time
2024-01-29 07:37:27 -07:00
Ed Santiago 71dcd869a5 Let's see what we pick up this time
Results: debian tar is still broken, and I didn't check grub
but it's safe to assume that's still broken too, so, bump
up both timebombs.

...and:

  - add new timebomb-check target to prevent me from
    submitting a guaranteed-to-fail-CI job

  - get_ci_vm: use apk, not pip, to install aws-cli
    because our base image now whines about pip:

       This environment is externally managed

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-01-25 11:42:11 -07:00
Chris Evich 55f939df9f
Merge pull request #320 from edsantiago/new-vms
new vms
2024-01-16 11:06:17 -05:00
Chris Evich dc21540194
Merge pull request #321 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.10.0
2024-01-08 16:16:31 -05:00
Chris Evich f768dd484d
Merge pull request #322 from cevich/email_subj_fix
[CI:DOCS] Minor fix to fix orphan-vm e-mail subject
2024-01-08 11:07:45 -05:00
Chris Evich 9b3f9aa275
Minor fix to fix orphan-vm e-mail subject
It's been checking GCP and AWS clouds for a long time now.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-08 11:06:21 -05:00
renovate[bot] 1a940444ad
[skip-ci] Update dawidd6/action-send-mail action to v3.10.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-05 19:30:49 +00:00
Ed Santiago fed97ac56a new vms
Try to pick up new pasta.

Also, add perl-Clone, needed by the manpage/helpmsg xref script

Also, remove one timebomb (crun) and extend another (grub on debian):
crun is now 1.12-1 on all VMs.

And, finally, a seemingly innocuous change: google-cloud-sdk -> -cli
I have no idea what's going on here, but making this change gets
builds to pass. Without this change, one of the early image-build
CI steps fails because of a dnf conflict. What seems to be happening
is that in old builds (Dec 2023), 'dnf upgrade' upgraded only -sdk.
In new builds (Jan 2024) it wants to bring in both -sdk and -cli,
and the two can't coexist.

Oh, one more: block debian upgrade of tar. The version in debian
right now is broken. Add a timebomb.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-01-02 14:27:28 -07:00
Chris Evich 61ad7cf83a
Merge pull request #319 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4
2023-12-14 15:57:32 -05:00
renovate[bot] da81c99493
[skip-ci] Update actions/upload-artifact action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-14 20:31:54 +00:00
Chris Evich 3aba7d7eaf
Merge pull request #318 from n1hility/update-win-storage
Move win instance to faster storage and 6k iops
2023-12-12 09:44:51 -05:00
Jason T. Greene 1155207686 Move win instance to faster storage and 6k iops
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-08 13:39:12 -06:00
Chris Evich 4ffbee0218
Merge pull request #316 from n1hility/add-mgmt
Add hyperv management tools to Windows image
2023-12-08 11:15:09 -05:00
Chris Evich 2d41ea4849
Merge pull request #317 from cevich/docs_update
[CI:DOCS] Minor readme update
2023-12-07 12:10:36 -05:00
Chris Evich 8765d190c4
Minor readme update
Modern versions of the AWS cli allow all these options to exist in the
`credentials` file.  But for completeness, and to add in the region
default, best mention them.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-07 11:31:37 -05:00
Jason T. Greene 6d57972c89 Increase volume size to 200gb
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-06 16:59:23 -06:00
Jason T. Greene ae25083be1 Add hyperv management tools to Windows image
Extend timebomb on cache_images

Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-12-05 21:18:30 -06:00
Chris Evich dfe3b9d73c
Attempt to fix URL in notification mail
The docs are not specific enough to know for sure `run_id` is the
correct value to use.  When browsing to a job, there are two numbers
present in the URL; I cannot find a reference for one of them. :S
Hopefully `run_id` is correct and the second number isn't needed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 10:41:21 -05:00
Chris Evich 416e87b605
Merge pull request #312 from edsantiago/f39_released
New f39 (official, not beta) image
2023-11-16 15:57:23 -05:00
Ed Santiago d16ced38be New f39 (official, not beta) image
First step: create new base images:

  1minutetip$ make IMPORT_IMG_SFX
  1minutetip$ make image_builder_debug ....

Second step:

  home$ make IMG_SFX

Commit and push. Subsequent emergency management steps:

  1) Change "-qq" to "-q" in debian apt-get, so we have some
     hope of figuring out what is failing.

  2) debian update of grub no longer works. Try a new way.
     (We can no longer update grub-common, due to dependency
     error. Old grub fails with a "version_find_latest" error.
     So, new solution is to provide version_find_latest).

     2a) New timebomb() function will ensure that temporary
         workarounds like this one do not accumulate.

  3) force-update crun on f38 so we get 1.11.2.
     3a) use new timebomb(), see 2a above.

  4) ccia is failing due to cython issue in newer Fedora.
     Force using f38, which works. Cannot timebomb().

  5) fedora-aws build kept timing out. Discover and add
     AWS_SOMETHING envariables to .cirrus.yml

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-11-16 10:44:21 -07:00
Chris Evich b1b966eb7c
Merge pull request #308 from containers/renovate/dawidd6-action-send-mail-3.x
[skip-ci] Update dawidd6/action-send-mail action to v3.9.0
2023-10-17 12:26:41 -04:00
Chris Evich 03994f80e4
Merge pull request #309 from cevich/update_rawhide_crun
Update windows CI VMs for hyper-v machine testing
2023-10-05 12:09:58 -04:00
Chris Evich 2ee0d88384
Update windows CI VMs for hyper-v machine testing
In addition to updating mingw and golang, this moves the
installation of .Net and wixtoolset here instead of at CI runtime.
The windows packer-configuration was updated to operate more
consistently with how things are done in Linux WRT calling scripts.
Along with some file renames and other cosmetic changes, the windows
build timeout was increased since the extra packages seem to
place it right on the edge of the former value.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-10-04 15:53:22 -04:00
Chris Evich 1cfc6d352f
Remove temp. workarounds
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-10-03 16:56:03 -04:00
Chris Evich f68fc63aa8
Merge pull request #305 from cevich/docs_update
[CI:DOCS] Improve import-image docs
2023-09-29 14:23:48 -04:00
Chris Evich 6f157ff28e
Improve import-image docs
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-29 14:22:00 -04:00
Chris Evich c22ef2b398
Merge pull request #302 from edsantiago/f39
Bump to Fedora 39
2023-09-28 14:47:20 -04:00
Ed Santiago 80f5d3fd60 Bump to Fedora 39
Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-09-27 18:45:59 -06:00
Ed Santiago ea2dc8bd8b Housekeeping: egrep is deprecated
Replace with grep -E

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-09-27 12:20:45 -06:00
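For reference, the mechanical swap (same extended-regex syntax, different invocation):

```shell
# egrep now emits an obsolescence warning in GNU grep; grep -E accepts
# identical extended regular expressions
printf 'fedora\ndebian\nwindows\n' | grep -E '^(fedora|debian)$'
# -> fedora
# -> debian
```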
Chris Evich e5de95d40e
Merge pull request #307 from n1hility/add-hyperv
Add hyperv to windows image
2023-09-27 11:04:19 -04:00
renovate[bot] 60f03d91f3
[skip-ci] Update dawidd6/action-send-mail action to v3.9.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-27 11:42:56 +00:00
Jason T. Greene f5884c1b03 Add Hyper-V
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-09-26 11:54:44 -05:00
Jason T. Greene 2028aa50d0 Add example userdata for reenabling RDP
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
2023-09-26 11:50:14 -05:00
Chris Evich 5b9f617e7d
Merge pull request #304 from cevich/fix_jq_null_iteration
Latest common automation library on build-push VM
2023-09-26 11:18:57 -04:00
Chris Evich 99a28fad77
Use latest common library + show version
The automation common library is version-pinned (in `lib.sh`) and
updates are carefully managed by renovate.  This is by design, so
breaking changes don't impact important CI environments.

However, on more than one occasion, there's been a need to update the
podman/buildah/skopeo image building scripts rapidly.  Since the
latest build-push VM image is always used, its production doesn't need
to be tied down in the same way, mainly because there's extensive
testing of it from CI in this repo.

Make the necessary changes to allow installing the latest version of the
common automation library, along with the `build_push.sh` script,
specifically in the build-push VM image.

Also, add a debug message for the library version installed (will include
commit sha) to assist any future debugging.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-21 10:57:58 -04:00
Chris Evich 0582a0cc22
Minor: Fix documentation URL
Previous value was missing `$head_sha` and for some containers-org repos
would point at the wrong path.  Fix this by confirming the existence of
the README file, then using the location in the docs URL.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 17:16:17 -04:00
Chris Evich b86aea0acd
Merge pull request #298 from cevich/update_automation_lib
Update automation-library
2023-09-20 17:12:41 -04:00
Chris Evich 13f4ad1ca3
Workaround failure to update SID kernel
Without this, during package setup this error is emitted:

```
Setting up linux-image-6.5.0-1-cloud-amd64 (6.5.3-1) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-6.5.0-1-cloud-amd64
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
/etc/grub.d/10_linux: 1: version_find_latest: not found
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
dpkg: error processing package linux-image-6.5.0-1-cloud-amd64 (--configure):
```

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 15:58:51 -04:00
Chris Evich a2f2f472a4
Drop ZFS CI Support in Debian SID
Maintaining this is a PITA and it seems to break very frequently with
errors similar to:

```
Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-6.5.0-1-cloud-amd64.postinst line 11.
dpkg: error processing package linux-headers-6.5.0-1-cloud-amd64 (--configure):
 installed linux-headers-6.5.0-1-cloud-amd64 package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of linux-headers-cloud-amd64:
 linux-headers-cloud-amd64 depends on linux-headers-6.5.0-1-cloud-amd64 (= 6.5.3-1); however:
  Package linux-headers-6.5.0-1-cloud-amd64 is not configured yet.

dpkg: error processing package linux-headers-cloud-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-headers-amd64:
 linux-headers-amd64 depends on linux-headers-6.5.0-1-amd64 (= 6.5.3-1); however:
  Package linux-headers-6.5.0-1-amd64 is not configured yet.

dpkg: error processing package linux-headers-amd64 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of zfs-zed:
 zfs-zed depends on zfs-modules | zfs-dkms; however:
  Package zfs-modules is not installed.
  Package zfs-dkms which provides zfs-modules is not configured yet.
  Package zfs-dkms is not configured yet.

dpkg: error processing package zfs-zed (--configure):
 dependency problems - leaving unconfigured
```

The fact is ZFS is completely unsupported by those who pay our bills,
a best-effort package in Debian, and an almost constant headache.  It's
only needed by the containers/storage CI, and nowhere else.  It's not
fair for CI in all the other repos to wait due to Debian+ZFS build
problems.  This commit removes ZFS support on all Debian images.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 15:58:27 -04:00
Chris Evich dd2f6bda56
Increase cache-image build timeout
On several occasions this job has hit the 45m wall due (probably) to
networking slowness (somewhere) downloading packages.  Bump it up to use
the default 60m timeout.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich 0de4a8bf4a
Update automation-library
Significantly, this version defines a `passthrough_envars()` function to
replace the two duplicate definitions in podman and buildah CI.  When
incorporating the new images into those environments, the duplicates
should be removed.

Also included is an important update to the build-push script that
improves debugging in cases where the `--nopush` argument is used.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich 26a61a6523
Remove emacs from debian SID
This was added as a developer-friendly package, but as of this commit
there are dependency problems in SID.  Remove it, if it's really still
needed somebody can add it back.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 14:08:28 -04:00
Chris Evich d83bbbe01e
Fix image build/push repo. arg. check
Likely a variable-name typo; it was always intended to check against
the full URL, not just the name.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-20 11:16:07 -04:00
Chris Evich cdab3b2497
Merge pull request #301 from cevich/multiarch_builds
Implement quay.io container image build and push
2023-09-20 09:50:27 -04:00
Chris Evich fd0eaecf09
Minor tweaks to multi-arch images
* After confirming the image source repository comes from github,
  point the source label/annotation directly using the exact commit.

* Add quay-specific expiration labels for 'testing' and 'upstream'
  images.  This way if builds stop or fail for some reason, any use
of rapidly irrelevant images is blocked.

* Update tests

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 16:25:31 -04:00
Chris Evich e53f780ac4
Validate Cirrus-CI Repository settings in PRs
There's a critical little "slider" on the webpage that's somewhat
difficult to tell if it's enabled or not.  Make a somewhat weak attempt
to catch if its state ever changes.  This is better than not checking
at all.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:26:17 -04:00
Chris Evich 96f616e440
Always show the repo. clone details
Otherwise, outside of a debugging environment, it's hard to tell in the
log what was cloned.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:25:05 -04:00
Chris Evich abcfe96b58
Implement quay.io container image build and push
This job used to be performed from the individual repositories' CI,
however there was a major problem:

https://github.com/containers/podman/discussions/19796

Reinstate the build jobs in this repo, since its secrets are secure and
builds are safe from general-public meddling.

Also, slightly alter the existing cirrus-cron triggered tasks such that
they only respond to a specific job name.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-19 10:24:41 -04:00
Chris Evich 09ae91d04c
Merge pull request #300 from cevich/multiarch_mulligan
Improve & rename main build-push script
2023-09-18 15:49:06 -04:00
Chris Evich 0226d63d3f
Improve & rename main build-push script.
This script orchestrates running the actual `build_push.sh` script
on behalf of various github containers-org repos.  Rename it to better
reflect that purpose.

Change behavior WRT first argument (git repo. URL) to shallow-clone the
repo into a temporary directory.

Remove the auto-update library in anticipation of executing builds from
Cirrus-cron in this (automation_images) repo, given encrypted secrets
are protected by execution context and actor.

Update labeling to also annotate the images, since newer tooling prefers
annotations but older tools only support labels.

Remove wait-for-copr from build-push VM image since it's not needed.  An
alternate build system was put in place.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-18 14:35:28 -04:00
Chris Evich 71cc3691c4
Revert "[CI:TOOLING] Fix wrong SHA in revision label"
This reverts commit 874da1b703.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-18 11:41:17 -04:00
Chris Evich d2a1ea8cc4
Replace quay.io robot credentials.
Removed out of an abundance of caution, ref:
https://github.com/containers/podman/discussions/19796

Double-checked Cirrus-CI 'Decrypt Credentials' setting for this repo.
is: Collaborators, Bots, and Users with Write permission.

Double-checked Github collaboration settings.  It's limited to specific
github users only.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-15 10:15:11 -04:00
Chris Evich 7482f50592
Merge pull request #299 from containers/renovate/actions-checkout-4.x
[skip-ci] Update actions/checkout action to v4
2023-09-05 09:57:20 -04:00
renovate[bot] c893f90c7e
[skip-ci] Update actions/checkout action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-04 15:11:25 +00:00
Chris Evich ac93ef9bef
Merge pull request #297 from cevich/test_new_build_table
Test PR #296
2023-08-23 11:39:03 -04:00
Chris Evich f3dace1baa
Test PR #296
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 14:59:01 -04:00
Chris Evich 4288dfa701
Merge pull request #296 from cevich/only_the_bs
Obscure non-cache image IDs in pr-comment table
2023-08-22 14:57:20 -04:00
Chris Evich f248d99329
Update image suffix value
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:24:15 -04:00
Chris Evich 12065df676
Update code style using the `black` tool
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:20:26 -04:00
Chris Evich 6714c86834
Obscure non-cache image IDs in pr-comment table
All built images are included in the build-table added as a PR comment
to be helpful for reference and possible debugging.  However, it's
unhelpful if a human accidentally tries to deploy a non-cache image ID
into CI somewhere.  Those images are never to be used outside of very
special-case situations.  Obscure non-cache image IDs in the table to
prevent accidents.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-22 13:20:08 -04:00
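The obscuring logic described in the commit above amounts to a one-line guard over each build-table row; a minimal self-contained sketch (the field names mirror the items in the build-data JSON, but the sample rows below are invented):

```python
# Sketch of the "obscure non-cache image IDs" rule from the commit above.
# Only cache-stage images are safe to deploy into CI; all other stages
# get their suffix replaced so nobody copies them by accident.
def obscured_suffix(item: dict) -> str:
    if item["stage"] != "cache":
        return "do-not-use"
    return item["image_suffix"]

# Illustrative rows only; real entries come from built_images.json.
rows = [
    {"stage": "base", "name": "fedora", "image_suffix": "b20230822t132008z"},
    {"stage": "cache", "name": "fedora", "image_suffix": "c20230822t132008z"},
]
print([obscured_suffix(r) for r in rows])  # → ['do-not-use', 'c20230822t132008z']
```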
Chris Evich 979faa40bc
Merge pull request #295 from Luap99/debian-locale
provide en_US.UTF-8 locale
2023-08-21 13:27:55 -04:00
Paul Holzinger ce66b7ec98
fedora: add glibc-langpack-en
Make sure the en_US.UTF-8 LANG is installed and can be used by podman
tests, see https://github.com/containers/podman/pull/19635.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-08-21 17:37:42 +02:00
Paul Holzinger b913d24a76
debian: generate en_US.UTF-8 locale
A podman test depends on that locale so we need to make sure it is
installed in the image, see https://github.com/containers/podman/pull/19635.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-08-17 13:20:00 +02:00
76 changed files with 1400 additions and 1532 deletions


@ -6,7 +6,6 @@ load("cirrus", "fs")
def main():
return {
"env": {
"IMG_SFX": fs.read("IMG_SFX").strip(),
"IMPORT_IMG_SFX": fs.read("IMPORT_IMG_SFX").strip()
"IMG_SFX": fs.read("IMG_SFX").strip()
},
}


@ -10,6 +10,8 @@ env:
CIRRUS_CLONE_DEPTH: 50
# Version of packer to use when building images
PACKER_VERSION: &PACKER_VERSION "1.8.3"
# Registry/namespace prefix where container images live
REGPFX: "quay.io/libpod"
#IMG_SFX = <See IMG_SFX file and .cirrus.star script>
#IMPORT_IMG_SFX = <See IMPORT_IMG_SFX file and .cirrus.star script>
@ -45,7 +47,7 @@ image_builder_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 40m
timeout_in: 50m
container:
dockerfile: "image_builder/Containerfile"
cpu: 2
@ -69,7 +71,7 @@ container_images_task: &container_images
skip: *ci_docs_tooling
depends_on:
- image_builder
timeout_in: 30m
timeout_in: &cntr_timeout 40m
gce_instance: &ibi_vm
image_project: "libpod-218412"
# Trust whatever was built most recently is functional
@ -81,7 +83,7 @@ container_images_task: &container_images
env:
TARGET_NAME: 'fedora_podman'
# Add a 'c' to the tag for consistency with VM Image names
DEST_FQIN: &fqin 'quay.io/libpod/${TARGET_NAME}:c$IMG_SFX'
DEST_FQIN: &fqin '${REGPFX}/${TARGET_NAME}:c$IMG_SFX'
- name: *name
env:
TARGET_NAME: 'prior-fedora_podman'
@ -97,12 +99,12 @@ container_images_task: &container_images
# TARGET_NAME: 'debian'
# DEST_FQIN: *fqin
env: &image_env
# For quay.io/libpod namespace
REG_USERNAME: ENCRYPTED[de755aef351c501ee480231c24eae25b15e2b2a2b7c629f477c1d427fc5269e360bb358a53bd8914605bae588e99b52a]
REG_PASSWORD: ENCRYPTED[52268944bb0d6642c33efb1c5d7fb82d0c40f9e6988448de35827f9be2cc547c1383db13e8b21516dbd7a0a69a7ae536]
# For $REGPFX namespace, select FQINs only.
REG_USERNAME: ENCRYPTED[df4efe530b9a6a731cfea19233e395a5206d24dfac25e84329de035393d191e94ead8c39b373a0391fa025cab15470f8]
REG_PASSWORD: ENCRYPTED[255ec05057707c20237a6c7d15b213422779c534f74fe019b8ca565f635dba0e11035a034e533a6f39e146e7435d87b5]
script: ci/make_container_images.sh;
package_cache: &package_cache
folder: "/tmp/automation_images_tmp/.cache/**"
folder: "/var/tmp/automation_images_tmp/.cache/**"
fingerprint_key: "${TARGET_NAME}-cache-version-1"
@ -111,31 +113,27 @@ container_images_task: &container_images
imgts_build_task:
alias: imgts_build
name: 'Build IMGTS image'
only_if: $CIRRUS_CRON == ''
only_if: *is_pr
skip: &ci_docs $CIRRUS_CHANGE_TITLE =~ '.*CI:DOCS.*'
depends_on:
- image_builder
timeout_in: 20m
timeout_in: *cntr_timeout
gce_instance: *ibi_vm
env:
<<: *image_env
PUSH_LATEST: 1 # scripts force to 0 if $CIRRUS_PR
env: *image_env
script: |
export TARGET_NAME=imgts
export DEST_FQIN="quay.io/libpod/${TARGET_NAME}:c${IMG_SFX}";
export PUSH_LATEST=0
[[ -n "$CIRRUS_PR" ]] || export PUSH_LATEST=1
export DEST_FQIN="${REGPFX}/${TARGET_NAME}:c${IMG_SFX}";
ci/make_container_images.sh;
tooling_images_task:
alias: tooling_images
name: 'Build Tooling image ${TARGET_NAME}'
only_if: $CIRRUS_CRON == ''
only_if: *is_pr
skip: *ci_docs
depends_on:
- imgts_build
timeout_in: 30m
timeout_in: *cntr_timeout
gce_instance: *ibi_vm
env: *image_env
matrix:
@ -152,9 +150,7 @@ tooling_images_task:
- env:
TARGET_NAME: ccia
script: |
export DEST_FQIN="quay.io/libpod/${TARGET_NAME}:c${IMG_SFX}";
export PUSH_LATEST=0
[[ -n "$CIRRUS_PR" ]] || export PUSH_LATEST=1
export DEST_FQIN="${REGPFX}/${TARGET_NAME}:c${IMG_SFX}";
ci/make_container_images.sh;
base_images_task:
@ -168,20 +164,21 @@ base_images_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 45m
# Cannot use a container for this task, virt required for fedora image conversion
gce_instance:
<<: *ibi_vm
# Nested-virt is required, need Intel Haswell or better CPU
enable_nested_virtualization: true
type: "n2-standard-2"
scopes: ["cloud-platform"]
timeout_in: 70m
gce_instance: *ibi_vm
matrix:
- &base_image
name: "${PACKER_BUILDS} Base Image"
gce_instance: &nested_virt_vm
<<: *ibi_vm
# Nested-virt is required, need Intel Haswell or better CPU
enable_nested_virtualization: true
type: "n2-standard-16"
scopes: ["cloud-platform"]
env:
PACKER_BUILDS: "fedora"
- <<: *base_image
gce_instance: *nested_virt_vm
env:
PACKER_BUILDS: "prior-fedora"
- <<: *base_image
@ -196,6 +193,8 @@ base_images_task:
env:
GAC_JSON: &gac_json ENCRYPTED[7fba7fb26ab568ae39f799ab58a476123206576b0135b3d1019117c6d682391370c801e149f29324ff4b50133012aed9]
AWS_INI: &aws_ini ENCRYPTED[4cd69097cd29a9899e51acf3bbacceeb83cb5c907d272ca1e2a8ccd515b03f2368a0680870c0d120fc32bc578bb0a930]
AWS_MAX_ATTEMPTS: 300
AWS_TIMEOUT_SECONDS: 3000
script: "ci/make.sh base_images"
manifest_artifacts:
path: base_images/manifest.json
@ -213,7 +212,7 @@ cache_images_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 45m
timeout_in: 90m
container:
dockerfile: "image_builder/Containerfile"
cpu: 2
@ -234,9 +233,6 @@ cache_images_task:
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-netavark"
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-podman-py"
- <<: *cache_image
env:
PACKER_BUILDS: "fedora-aws"
@ -255,6 +251,8 @@ cache_images_task:
env:
GAC_JSON: *gac_json
AWS_INI: *aws_ini
AWS_MAX_ATTEMPTS: 300
AWS_TIMEOUT_SECONDS: 3000
script: "ci/make.sh cache_images"
manifest_artifacts:
path: cache_images/manifest.json
@ -273,7 +271,6 @@ win_images_task:
# Packer needs time to clean up partially created VM images
auto_cancellation: $CI != "true"
stateful: true
timeout_in: 45m
# Packer WinRM communicator is not reliable on container tasks
gce_instance:
<<: *ibi_vm
@ -286,19 +283,39 @@ win_images_task:
path: win_images/manifest.json
type: application/json
# These targets are intended for humans, make sure they build and function on a basic level
test_debug_task:
name: "Test ${TARGET} make target"
alias: test_debug
only_if: *is_pr
skip: *ci_docs
depends_on:
- validate
gce_instance: *nested_virt_vm
matrix:
- env:
TARGET: ci_debug
- env:
TARGET: image_builder_debug
env:
HOME: "/root"
GAC_FILEPATH: "/dev/null"
AWS_SHARED_CREDENTIALS_FILE: "/dev/null"
DBG_TEST_CMD: "true"
script: make ${TARGET}
# Test metadata addition to images (built or not) to ensure container functions
test_imgts_task: &imgts
name: "Test image timestamp/metadata updates"
alias: test_imgts
only_if: $CIRRUS_CRON == ''
only_if: *is_pr
skip: *ci_docs
depends_on: &imgts_deps
- base_images
- cache_images
- imgts_build
container:
image: 'quay.io/libpod/imgts:c$IMG_SFX'
image: '${REGPFX}/imgts:c$IMG_SFX'
cpu: 2
memory: '2G'
env: &imgts_env
@ -320,7 +337,6 @@ test_imgts_task: &imgts
fedora-c${IMG_SFX}
prior-fedora-c${IMG_SFX}
fedora-netavark-c${IMG_SFX}
fedora-podman-py-c${IMG_SFX}
rawhide-c${IMG_SFX}
debian-c${IMG_SFX}
build-push-c${IMG_SFX}
@ -360,13 +376,13 @@ imgts_task:
test_imgobsolete_task: &lifecycle_test
name: "Test obsolete image detection"
alias: test_imgobsolete
only_if: &only_prs $CIRRUS_PR != ''
only_if: *is_pr
skip: *ci_docs
depends_on:
- tooling_images
- imgts
container:
image: 'quay.io/libpod/imgobsolete:c$IMG_SFX'
image: '${REGPFX}/imgobsolete:c$IMG_SFX'
cpu: 2
memory: '2G'
env: &lifecycle_env
@ -385,9 +401,8 @@ test_orphanvms_task:
<<: *lifecycle_test
name: "Test orphan VMs detection"
alias: test_orphanvms
skip: *ci_docs
container:
image: 'quay.io/libpod/orphanvms:c$IMG_SFX'
image: '$REGPFX/orphanvms:c$IMG_SFX'
cpu: 2
memory: '2G'
env:
@ -405,24 +420,23 @@ test_imgprune_task:
<<: *lifecycle_test
name: "Test obsolete image removal"
alias: test_imgprune
skip: *ci_docs
depends_on:
- tooling_images
- imgts
container:
image: 'quay.io/libpod/imgprune:c$IMG_SFX'
image: '$REGPFX/imgprune:c$IMG_SFX'
test_gcsupld_task:
name: "Test uploading to GCS"
alias: test_gcsupld
only_if: *only_prs
only_if: *is_pr
skip: *ci_docs
depends_on:
- tooling_images
- imgts
container:
image: 'quay.io/libpod/gcsupld:c$IMG_SFX'
image: '$REGPFX/gcsupld:c$IMG_SFX'
cpu: 2
memory: '2G'
env:
@ -435,13 +449,13 @@ test_gcsupld_task:
test_get_ci_vm_task:
name: "Test get_ci_vm entrypoint"
alias: test_get_ci_vm
only_if: *only_prs
only_if: *is_pr
skip: *ci_docs
depends_on:
- tooling_images
- imgts
container:
image: 'quay.io/libpod/get_ci_vm:c$IMG_SFX'
image: '$REGPFX/get_ci_vm:c$IMG_SFX'
cpu: 2
memory: '2G'
env:
@ -452,12 +466,12 @@ test_get_ci_vm_task:
test_ccia_task:
name: "Test ccia entrypoint"
alias: test_ccia
only_if: *only_prs
only_if: *is_pr
skip: *ci_docs
depends_on:
- tooling_images
container:
image: 'quay.io/libpod/ccia:c$IMG_SFX'
image: '$REGPFX/ccia:c$IMG_SFX'
cpu: 2
memory: '2G'
test_script: ./ccia/test.sh
@ -466,27 +480,45 @@ test_ccia_task:
test_build-push_task:
name: "Test build-push VM functions"
alias: test_build-push
only_if: *only_prs
only_if: |
$CIRRUS_PR != '' &&
$CIRRUS_PR_LABELS !=~ ".*no_build-push.*"
skip: *ci_docs_tooling
depends_on:
- cache_images
gce_instance:
image_project: "libpod-218412"
image_family: 'build-push-cache'
image_name: build-push-c${IMG_SFX}
zone: "us-central1-a"
disk: 200
# More muscle to emulate multi-arch
type: "n2-standard-4"
script: bash ./build-push/test.sh
script: |
mkdir /tmp/context
echo -e "FROM scratch\nENV foo=bar\n" > /tmp/context/Containerfile
source /etc/automation_environment
A_DEBUG=1 build-push.sh --nopush --arches=amd64,arm64,s390x,ppc64le example.com/foo/bar /tmp/context
tag_latest_images_task:
alias: tag_latest_images
name: "Tag latest built container images."
only_if: |
$CIRRUS_CRON == '' &&
$CIRRUS_BRANCH == $CIRRUS_DEFAULT_BRANCH
skip: *ci_docs
gce_instance: *ibi_vm
env: *image_env
script: ci/tag_latest.sh
# N/B: "latest" image produced after PR-merge (branch-push)
cron_imgobsolete_task: &lifecycle_cron
name: "Periodically mark old images obsolete"
alias: cron_imgobsolete
only_if: $CIRRUS_PR == '' && $CIRRUS_CRON != ''
only_if: $CIRRUS_CRON == 'lifecycle'
container:
image: 'quay.io/libpod/imgobsolete:latest'
image: '$REGPFX/imgobsolete:latest'
cpu: 2
memory: '2G'
env:
@ -502,7 +534,7 @@ cron_imgprune_task:
depends_on:
- cron_imgobsolete
container:
image: 'quay.io/libpod/imgprune:latest'
image: '$REGPFX/imgprune:latest'
success_task:
@ -516,6 +548,7 @@ success_task:
- base_images
- cache_images
- win_images
- test_debug
- test_imgts
- imgts
- test_imgobsolete

.codespelldict Normal file (2 changed lines)

@ -0,0 +1,2 @@
IMGSFX,IMG-SFX->IMG_SFX
Dockerfile->Containerfile

.codespellignore Normal file (0 changed lines)

.codespellrc Normal file (4 changed lines)

@ -0,0 +1,4 @@
[codespell]
ignore-words = .codespellignore
dictionary = .codespelldict
quiet-level = 3


@ -13,9 +13,9 @@ import sys
def msg(msg, newline=True):
"""Print msg to stderr with optional newline."""
nl = ''
nl = ""
if newline:
nl = '\n'
nl = "\n"
sys.stderr.write(f"{msg}{nl}")
sys.stderr.flush()
@ -23,13 +23,13 @@ def msg(msg, newline=True):
def stage_sort(item):
"""Return sorting-key for build-image-json item."""
if item["stage"] == "import":
return str("0010"+item["name"])
return str("0010" + item["name"])
elif item["stage"] == "base":
return str("0020"+item["name"])
return str("0020" + item["name"])
elif item["stage"] == "cache":
return str("0030"+item["name"])
return str("0030" + item["name"])
else:
return str("0100"+item["name"])
return str("0100" + item["name"])
if "GITHUB_ENV" not in os.environ:
@ -40,46 +40,58 @@ github_workspace = os.environ.get("GITHUB_WORKSPACE", ".")
# File written by a previous workflow step
with open(f"{github_workspace}/built_images.json") as bij:
msg(f"Reading image build data from {bij.name}:")
data = []
for build in json.load(bij): # list of build data maps
stage = build.get("stage", False)
name = build.get("name", False)
sfx = build.get("sfx", False)
task = build.get("task", False)
if bool(stage) and bool(name) and bool(sfx) and bool(task):
image_suffix = f'{stage[0]}{sfx}'
data.append(dict(stage=stage, name=name,
image_suffix=image_suffix, task=task))
if cirrus_ci_build_id is None:
cirrus_ci_build_id = sfx
msg(f"Including '{stage}' stage build '{name}' for task '{task}'.")
else:
msg(f"Skipping '{stage}' stage build '{name}' for task '{task}'.")
msg(f"Reading image build data from {bij.name}:")
data = []
for build in json.load(bij): # list of build data maps
stage = build.get("stage", False)
name = build.get("name", False)
sfx = build.get("sfx", False)
task = build.get("task", False)
if bool(stage) and bool(name) and bool(sfx) and bool(task):
image_suffix = f"{stage[0]}{sfx}"
data.append(
dict(stage=stage, name=name, image_suffix=image_suffix, task=task)
)
if cirrus_ci_build_id is None:
cirrus_ci_build_id = sfx
msg(f"Including '{stage}' stage build '{name}' for task '{task}'.")
else:
msg(f"Skipping '{stage}' stage build '{name}' for task '{task}'.")
url = 'https://cirrus-ci.com/task'
url = "https://cirrus-ci.com/task"
lines = []
data.sort(key=stage_sort)
for item in data:
lines.append('|*{0}*|[{1}]({2})|`{3}`|\n'.format(item['stage'],
item['name'], '{0}/{1}'.format(url, item['task']),
item['image_suffix']))
image_suffix = item["image_suffix"]
# Base-images should never actually be used, but it may be helpful
# to have them in the list in case some debugging is needed.
if item["stage"] != "cache":
image_suffix = "do-not-use"
lines.append(
"|*{0}*|[{1}]({2})|`{3}`|\n".format(
item["stage"],
item["name"],
"{0}/{1}".format(url, item["task"]),
image_suffix,
)
)
# This is the mechanism required to set a multi-line env. var.
# value to be consumed by future workflow steps.
with open(os.environ["GITHUB_ENV"], "a") as ghenv, \
open(f'{github_workspace}/images.md', "w") as mdfile, \
open(f'{github_workspace}/images.json', "w") as images_json:
with open(os.environ["GITHUB_ENV"], "a") as ghenv, open(
f"{github_workspace}/images.md", "w"
) as mdfile, open(f"{github_workspace}/images.json", "w") as images_json:
env_header = ("IMAGE_TABLE<<EOF\n")
header = (f"[Cirrus CI build](https://cirrus-ci.com/build/{cirrus_ci_build_id})"
" successful. [Found built image names and"
f' IDs](https://github.com/{os.environ["GITHUB_REPOSITORY"]}'
f'/actions/runs/{os.environ["GITHUB_RUN_ID"]}):\n'
"\n")
c_head = ("|*Stage*|**Image Name**|`IMAGE_SUFFIX`|\n"
"|---|---|---|\n")
env_header = "IMAGE_TABLE<<EOF\n"
header = (
f"[Cirrus CI build](https://cirrus-ci.com/build/{cirrus_ci_build_id})"
" successful. [Found built image names and"
f' IDs](https://github.com/{os.environ["GITHUB_REPOSITORY"]}'
f'/actions/runs/{os.environ["GITHUB_RUN_ID"]}):\n'
"\n"
)
c_head = "|*Stage*|**Image Name**|`IMAGE_SUFFIX`|\n" "|---|---|---|\n"
# Different output destinations get slightly different content
for dst in [ghenv, mdfile, sys.stderr]:
if dst == ghenv:
@ -92,5 +104,7 @@ with open(os.environ["GITHUB_ENV"], "a") as ghenv, \
dst.write("EOF\n\n")
json.dump(data, images_json, indent=4, sort_keys=True)
msg(f"Wrote github env file '{ghenv.name}', md-file '{mdfile.name}',"
f" and json-file '{images_json.name}'")
msg(
f"Wrote github env file '{ghenv.name}', md-file '{mdfile.name}',"
f" and json-file '{images_json.name}'"
)


@ -1,20 +1,12 @@
/*
Renovate is a service similar to GitHub Dependabot, but with
(fantastically) more configuration options. So many options
in fact, if you're new I recommend glossing over this cheat-sheet
prior to the official documentation:
Renovate is a service similar to GitHub Dependabot.
https://www.augmentedmind.de/2021/07/25/renovate-bot-cheat-sheet
Configuration Update/Change Procedure:
1. Make changes
2. Manually validate changes (from repo-root):
Please Manually validate any changes to this file with:
podman run -it \
-v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \
docker.io/renovate/renovate:latest \
ghcr.io/renovatebot/renovate:latest \
renovate-config-validator
3. Commit.
Configuration Reference:
https://docs.renovatebot.com/configuration-options/
@ -22,11 +14,9 @@
Monitoring Dashboard:
https://app.renovatebot.com/dashboard#github/containers
Note: The Renovate bot will create/manage it's business on
branches named 'renovate/*'. Otherwise, and by
default, the only the copy of this file that matters
is the one on the `main` branch. No other branches
will be monitored or touched in any way.
Note: The Renovate bot will create/manage its business on
branches named 'renovate/*'. The only copy of this
file that matters is the one on the `main` branch.
*/
{
@ -44,55 +34,45 @@
// This repo builds images, don't try to manage them.
"docker:disable"
],
/*************************************************
*** Repository-specific configuration options ***
*************************************************/
// Don't leave dep. update. PRs "hanging", assign them to people.
"assignees": ["cevich"],
// Don't build CI VM images for dep. update PRs (by default)
commitMessagePrefix: "[CI:DOCS]",
"commitMessagePrefix": "[CI:DOCS]",
"regexManagers": [
"customManagers": [
// Manage updates to the common automation library version
{
"customType": "regex",
"fileMatch": "^lib.sh$",
"matchStrings": ["^INSTALL_AUTOMATION_VERSION=\"(?<currentValue>.+)\""],
"matchStrings": ["INSTALL_AUTOMATION_VERSION=\"(?<currentValue>.+)\""],
"depNameTemplate": "containers/automation",
"datasourceTemplate": "github-tags",
"versioningTemplate": "semver-coerced",
// "v" included in tag, but should not be used in lib.sh
"extractVersionTemplate": "v(?<version>.+)",
},
"extractVersionTemplate": "^v(?<version>.+)$"
}
],
// N/B: LAST MATCHING RULE WINS, match statements are ANDed together.
// https://docs.renovatebot.com/configuration-options/#packagerules
"packageRules": [
// When automation library version updated, full CI VM image build
// is needed, along with some other overrides not required in
// (for example) github-action updates.
{
"matchManagers": ["regex"],
"matchFiles": ["lib.sh"], // full-path exact-match
// Don't wait, roll out CI VM Updates immediately
"matchManagers": ["custom.regex"],
"matchFileNames": ["lib.sh"],
"schedule": ["at any time"],
// Override default `[CI:DOCS]`, DO build new CI VM images.
commitMessagePrefix: null,
// Frequently, library updates require adjustments to build-scripts
"commitMessagePrefix": null,
"draftPR": true,
"reviewers": ["cevich"],
"prBodyNotes": [
// handlebar conditionals don't have logical operators, and renovate
// does not provide an 'isMinor' template field
"\
"\
{{#if isMajor}}\
:warning: Changes are **likely** required for build-scripts \
and/or downstream CI VM image users. Please check very carefully. :warning:\
{{/if}}\
{{#if isPatch}}\
:warning: Changes are **likely** required for build-scripts and/or downstream CI VM \
image users. Please check very carefully. :warning:\
{{else}}\
:warning: Changes *might be* required for build-scripts \
and/or downstream CI VM image users. Please double-check. :warning:\
{{/if}}\
"
],
:warning: Changes may be required for build-scripts and/or downstream CI VM \
image users. Please double-check. :warning:\
{{/if}}"
]
}
]
}


@ -14,4 +14,9 @@ jobs:
# Ref: https://docs.github.com/en/actions/using-workflows/reusing-workflows
call_cron_failures:
uses: containers/podman/.github/workflows/check_cirrus_cron.yml@main
secrets: inherit
secrets:
SECRET_CIRRUS_API_KEY: ${{secrets.SECRET_CIRRUS_API_KEY}}
ACTION_MAIL_SERVER: ${{secrets.ACTION_MAIL_SERVER}}
ACTION_MAIL_USERNAME: ${{secrets.ACTION_MAIL_USERNAME}}
ACTION_MAIL_PASSWORD: ${{secrets.ACTION_MAIL_PASSWORD}}
ACTION_MAIL_SENDER: ${{secrets.ACTION_MAIL_SENDER}}


@ -25,12 +25,12 @@ jobs:
orphan_vms:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
persist-credentials: false
# Avoid duplicating cron-fail_addrs.csv
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
repository: containers/podman
path: '_podman'
@ -44,14 +44,14 @@ jobs:
GCPPROJECT: 'libpod-218412'
run: |
export GCPNAME GCPJSON AWSINI GCPPROJECT
export GCPPROJECTS=$(egrep -vx '^#+.*$' $GITHUB_WORKSPACE/gcpprojects.txt | tr -s '[:space:]' ' ')
export GCPPROJECTS=$(grep -E -vx '^#+.*$' $GITHUB_WORKSPACE/gcpprojects.txt | tr -s '[:space:]' ' ')
podman run --rm \
-e GCPNAME -e GCPJSON -e AWSINI -e GCPPROJECT -e GCPPROJECTS \
quay.io/libpod/orphanvms:latest \
> /tmp/orphanvms_output.txt
- if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: orphanvms_output
path: /tmp/orphanvms_output.txt
@ -59,7 +59,7 @@ jobs:
- name: Count number of orphaned VMs
id: orphans
run: |
count=$(egrep -x '\* VM .+' /tmp/orphanvms_output.txt | wc -l)
count=$(grep -E -x '\* VM .+' /tmp/orphanvms_output.txt | wc -l)
# Assist with debugging job (step-outputs are otherwise hidden)
printf "Orphan VMs count:%d\n" $count
if [[ "$count" =~ ^[0-9]+$ ]]; then
@ -86,20 +86,20 @@ jobs:
- if: steps.orphans.outputs.count > 0
name: Send orphan notification e-mail
# Ref: https://github.com/dawidd6/action-send-mail
uses: dawidd6/action-send-mail@v3.8.0
uses: dawidd6/action-send-mail@v3.12.0
with:
server_address: ${{ secrets.ACTION_MAIL_SERVER }}
server_port: 465
username: ${{ secrets.ACTION_MAIL_USERNAME }}
password: ${{ secrets.ACTION_MAIL_PASSWORD }}
subject: Orphaned GCP VMs
subject: Orphaned CI VMs detected
to: ${{env.RCPTCSV}}
from: ${{ secrets.ACTION_MAIL_SENDER }}
body: file:///tmp/email_body.txt
- if: failure()
name: Send error notification e-mail
uses: dawidd6/action-send-mail@v3.8.0
uses: dawidd6/action-send-mail@v3.12.0
with:
server_address: ${{secrets.ACTION_MAIL_SERVER}}
server_port: 465
@ -108,4 +108,4 @@ jobs:
subject: Github workflow error on ${{github.repository}}
to: ${{env.RCPTCSV}}
from: ${{secrets.ACTION_MAIL_SENDER}}
body: "Job failed: https://github.com/${{github.repository}}/runs/${{github.job}}?check_suite_focus=true"
body: "Job failed: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}"


@ -58,7 +58,7 @@ jobs:
fi
- if: steps.retro.outputs.is_pr == 'true'
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
persist-credentials: false
@ -132,12 +132,10 @@ jobs:
- if: steps.manifests.outputs.count > 0
name: Post PR comment with image name/id table
uses: jungwinter/comment@v1.1.0
uses: thollander/actions-comment-pull-request@v3
with:
issue_number: '${{ steps.retro.outputs.prn }}'
type: 'create'
token: '${{ secrets.GITHUB_TOKEN }}'
body: |
pr-number: '${{ steps.retro.outputs.prn }}'
message: |
${{ env.IMAGE_TABLE }}
# Ref: https://github.com/marketplace/actions/deploy-to-gist

.gitignore vendored (1 changed line)

@ -1,2 +1,3 @@
*/*.json
/.cache
.pre-commit-config.yaml

.pre-commit-hooks.yaml Normal file (20 changed lines)

@ -0,0 +1,20 @@
---
# Ref: https://pre-commit.com/#creating-new-hooks
- id: check-imgsfx
name: Check IMG_SFX for accidental reuse.
description: |
Every PR intended to produce CI VM or container images must update
the `IMG_SFX` file via `make IMG_SFX`. The exact value will be
validated against global suffix usage (encoded as tags on the
`imgts` container image). This pre-commit hook verifies, on every
push, that the IMG_SFX file's value has not been pushed previously.
It's intended as a simple/imperfect way to save developers' time
by avoiding force-pushes that will most certainly fail validation.
entry: ./check-imgsfx.sh
language: system
exclude: '.*' # Not examining any specific file/dir/link
always_run: true # ignore no matching files
fail_fast: true
pass_filenames: false
stages: ["pre-push"]
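The hook's script body is not part of this diff; purely as an illustration (the function name and sample values below are invented), the core reuse check amounts to comparing the candidate suffix against previously seen ones:

```shell
#!/usr/bin/env bash
# Illustrative only: reject an IMG_SFX value that matches a previously
# pushed one.  The real check-imgsfx.sh validates against global suffix
# usage (tags on the imgts image); this sketch compares literal values.
imgsfx_is_new() {
    local candidate=$1; shift
    local seen
    for seen in "$@"; do
        [[ "$candidate" == "$seen" ]] && return 1
    done
    return 0
}

if imgsfx_is_new "20250812t173301z-f42f41d13" "20230816t191118z-f38f37d13"; then
    echo "IMG_SFX is new"
else
    echo "FATAL: IMG_SFX reuse detected" >&2
fi
```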


@ -1 +1 @@
20230816t191118z-f38f37d13
20250812t173301z-f42f41d13
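The new value above follows the date-based pattern generated by the Makefile's `IMG_SFX` target: a UTC timestamp plus Fedora/Debian release digits. A sketch of that rule (release numbers hard-coded here as assumptions; the real ones come from the Makefile):

```shell
# Sketch of the IMG_SFX naming rule.  Release numbers below are
# illustrative assumptions matching the current IMG_SFX value.
FEDORA_RELEASE=42
PRIOR_FEDORA_RELEASE=41
DEBIAN_RELEASE=13
sfx="$(date -u +%Y%m%dt%H%M%Sz)-f${FEDORA_RELEASE}f${PRIOR_FEDORA_RELEASE}d${DEBIAN_RELEASE}"
# GCE image names only allow lowercase a-z, 0-9 and '-', max 63 chars.
echo "$sfx"
```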


@ -1 +0,0 @@
20230816t191121z-f38f37d13

Makefile (260 changed lines)

@ -1,4 +1,7 @@
# Default is sh, which has scripting limitations
SHELL := $(shell command -v bash;)
##### Functions #####
# Evaluates to $(1) if $(1) non-empty, otherwise evaluates to $(2)
@ -15,16 +18,15 @@ if_ci_else = $(if $(findstring true,$(CI)),$(1),$(2))
##### Important image release and source details #####
export CENTOS_STREAM_RELEASE = 8
export CENTOS_STREAM_RELEASE = 9
export FEDORA_RELEASE = 38
export PRIOR_FEDORA_RELEASE = 37
# Warning: Beta Fedora releases are not supported. Verify EC2 AMI availability
# here: https://fedoraproject.org/cloud/download
export FEDORA_RELEASE = 42
export PRIOR_FEDORA_RELEASE = 41
# This should always be one-greater than $FEDORA_RELEASE (assuming it's actually the latest)
export RAWHIDE_RELEASE = 39
# See import_images/README.md
export FEDORA_IMPORT_IMG_SFX = $(_IMPORT_IMG_SFX)
export RAWHIDE_RELEASE = 43
# Automation assumes the actual release number (after SID upgrade)
# is always one-greater than the latest DEBIAN_BASE_FAMILY (GCE image).
@ -101,7 +103,6 @@ override _HLPFMT = "%-20s %s\n"
# Suffix value for any images built from this make execution
_IMG_SFX ?= $(file <IMG_SFX)
_IMPORT_IMG_SFX ?= $(file <IMPORT_IMG_SFX)
# Env. vars needed by packer
export CHECKPOINT_DISABLE = 1 # Disable hashicorp phone-home
@ -110,6 +111,12 @@ export PACKER_CACHE_DIR = $(call err_if_empty,_TEMPDIR)
# AWS CLI default, in case caller needs to override
export AWS := aws --output json --region us-east-1
# Needed for container-image builds
GIT_HEAD = $(shell git rev-parse HEAD)
# Save some typing
_IMGTS_FQIN := quay.io/libpod/imgts:c$(_IMG_SFX)
##### Targets #####
# N/B: The double-# after targets is gawk'd out as the target description
@ -124,17 +131,39 @@ help: ## Default target, parses special in-line comments as documentation.
# There are length/character limitations (a-z, 0-9, -) in GCE for image
# names and a max-length of 63.
.PHONY: IMG_SFX
IMG_SFX: ## Generate a new date-based image suffix, store in the file IMG_SFX
$(file >$@,$(shell date --utc +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE)))
@echo "$(file <IMG_SFX)"
IMG_SFX: timebomb-check ## Generate a new date-based image suffix, store in the file IMG_SFX
@echo "$$(date -u +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE))" > "$@"
@cat IMG_SFX
.PHONY: IMPORT_IMG_SFX
IMPORT_IMG_SFX: ## Generate a new date-based import-image suffix, store in the file IMPORT_IMG_SFX
$(file >$@,$(shell date --utc +%Y%m%dt%H%M%Sz)-f$(FEDORA_RELEASE)f$(PRIOR_FEDORA_RELEASE)d$(subst .,,$(DEBIAN_RELEASE)))
@echo "$(file <IMPORT_IMG_SFX)"
# Prevent us from wasting CI time when we have expired timebombs
.PHONY: timebomb-check
timebomb-check:
@now=$$(date -u +%Y%m%d); \
found=; \
while read -r bomb; do \
when=$$(echo "$$bomb" | sed -E -e 's/^.*timebomb ([0-9]+).*/\1/'); \
if [ "$$when" -le "$$now" ]; then \
echo "$$bomb"; \
found=found; \
fi; \
done < <(git grep --line-number '^[ ]*timebomb '); \
if [[ -n "$$found" ]]; then \
echo ""; \
echo "****** FATAL: Please check/fix expired timebomb(s) ^^^^^^"; \
false; \
fi
# Given the path to a file containing 'sha256:<image id>' return <image id>
# or throw error if empty.
define imageid
$(if $(file < $(1)),$(subst sha256:,,$(file < $(1))),$(error Container IID file $(1) doesn't exist or is empty))
endef
# This is intended for use by humans, to debug the image_builder_task in .cirrus.yml
# as well as the scripts under the ci subdirectory. See the `image_builder_debug`
# target if debugging of the packer builds is necessary.
.PHONY: ci_debug
ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks
ci_debug: $(_TEMPDIR)/ci_debug.iid ## Build and enter container for local development/debugging of container-based Cirrus-CI tasks
/usr/bin/podman run -it --rm \
--security-opt label=disable \
-v $(_MKFILE_DIR):$(_MKFILE_DIR) -w $(_MKFILE_DIR) \
@ -146,19 +175,18 @@ ci_debug: $(_TEMPDIR)/ci_debug.tar ## Build and enter container for local develo
-e GAC_FILEPATH=$(GAC_FILEPATH) \
-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
-e TEMPDIR=$(_TEMPDIR) \
docker-archive:$<
$(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD),)
# Takes 3 arguments: export filepath, FQIN, context dir
# Takes 3 arguments: IID filepath, FQIN, context dir
define podman_build
podman build -t $(2) \
--iidfile=$(1) \
--build-arg CENTOS_STREAM_RELEASE=$(CENTOS_STREAM_RELEASE) \
--build-arg PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \
-f $(3)/Containerfile .
rm -f $(1)
podman save --quiet -o $(1) $(2)
endef
$(_TEMPDIR)/ci_debug.tar: $(_TEMPDIR) $(wildcard ci/*)
$(_TEMPDIR)/ci_debug.iid: $(_TEMPDIR) $(wildcard ci/*)
$(call podman_build,$@,ci_debug,ci)
$(_TEMPDIR):
@ -201,7 +229,7 @@ $(_TEMPDIR)/user-data: $(_TEMPDIR) $(_TEMPDIR)/cidata.ssh.pub $(_TEMPDIR)/cidata
cidata: $(_TEMPDIR)/user-data $(_TEMPDIR)/meta-data
define build_podman_container
$(MAKE) $(_TEMPDIR)/$(1).tar BASE_TAG=$(2)
$(MAKE) $(_TEMPDIR)/$(1).iid BASE_TAG=$(2)
endef
# First argument is the path to the template JSON
@ -229,14 +257,17 @@ image_builder: image_builder/manifest.json ## Create image-building image and im
image_builder/manifest.json: image_builder/gce.json image_builder/setup.sh lib.sh systemd_banish.sh $(PACKER_INSTALL_DIR)/packer
$(call packer_build,image_builder/gce.json)
# Note: We assume this repo is checked out somewhere under the caller's
# home-dir for bind-mounting purposes. Otherwise possibly necessary
# files/directories like $HOME/.gitconfig or $HOME/.ssh/ won't be available
# from inside the debugging container.
# Note: It's assumed there are important files in the callers $HOME
# needed for debugging (.gitconfig, .ssh keys, etc.). It's unsafe
# to assume $(_MKFILE_DIR) is also under $HOME. Both are mounted
# for good measure.
.PHONY: image_builder_debug
image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter container for local development/debugging of targets requiring packer + virtualization
image_builder_debug: $(_TEMPDIR)/image_builder_debug.iid ## Build and enter container for local development/debugging of targets requiring packer + virtualization
/usr/bin/podman run -it --rm \
--security-opt label=disable -v $$HOME:$$HOME -w $(_MKFILE_DIR) \
--security-opt label=disable \
-v $$HOME:$$HOME \
-v $(_MKFILE_DIR):$(_MKFILE_DIR) \
-w $(_MKFILE_DIR) \
-v $(_TEMPDIR):$(_TEMPDIR) \
-v $(call err_if_empty,GAC_FILEPATH):$(GAC_FILEPATH) \
-v $(call err_if_empty,AWS_SHARED_CREDENTIALS_FILE):$(AWS_SHARED_CREDENTIALS_FILE) \
@ -244,113 +275,13 @@ image_builder_debug: $(_TEMPDIR)/image_builder_debug.tar ## Build and enter cont
-e PACKER_INSTALL_DIR=/usr/local/bin \
-e PACKER_VERSION=$(call err_if_empty,PACKER_VERSION) \
-e IMG_SFX=$(call err_if_empty,_IMG_SFX) \
-e IMPORT_IMG_SFX=$(call err_if_empty,_IMPORT_IMG_SFX) \
-e GAC_FILEPATH=$(GAC_FILEPATH) \
-e AWS_SHARED_CREDENTIALS_FILE=$(AWS_SHARED_CREDENTIALS_FILE) \
docker-archive:$<
$(call imageid,$<) $(if $(DBG_TEST_CMD),$(DBG_TEST_CMD))
$(_TEMPDIR)/image_builder_debug.tar: $(_TEMPDIR) $(wildcard image_builder/*)
$(_TEMPDIR)/image_builder_debug.iid: $(_TEMPDIR) $(wildcard image_builder/*)
$(call podman_build,$@,image_builder_debug,image_builder)
# Avoid re-downloading unnecessarily
# Ref: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#Special-Targets
.PRECIOUS: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT)
$(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
bash import_images/handle_image.sh \
$@ \
$(call err_if_empty,FEDORA_IMAGE_URL) \
$(call err_if_empty,FEDORA_CSUM_URL)
$(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).$(IMPORT_FORMAT): $(_TEMPDIR)
bash import_images/handle_image.sh \
$@ \
$(call err_if_empty,FEDORA_ARM64_IMAGE_URL) \
$(call err_if_empty,FEDORA_ARM64_CSUM_URL)
$(_TEMPDIR)/%.md5: $(_TEMPDIR)/%.$(IMPORT_FORMAT)
openssl md5 -binary $< | base64 > $@.tmp
mv $@.tmp $@
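AWS validates the S3 upload against the `--content-md5` value, which must be the base64 encoding of the *binary* MD5 digest rather than the usual hex string. A standalone sketch of the `openssl md5 -binary | base64` step (input data is illustrative):

```shell
# Base64-of-binary-MD5, as required by S3's Content-MD5 header.
# The familiar hex digest from `md5sum` would be rejected by AWS.
content_md5=$(printf '%s' 'hello' | openssl md5 -binary | base64)
echo "$content_md5"   # XUFAKrxLKna5cZ2REBfFkg==
```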
# MD5 metadata value checked by AWS after upload + 5 retries.
# Cache disabled to avoid sync. issues w/ vmimport service if
# image re-uploaded.
# TODO: Use sha256 from ..._CSUM_URL file instead of recalculating
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
# Avoid re-uploading unnecessarily
.SECONDARY: $(_TEMPDIR)/%.uploaded
$(_TEMPDIR)/%.uploaded: $(_TEMPDIR)/%.$(IMPORT_FORMAT) $(_TEMPDIR)/%.md5
-$(AWS) s3 rm --quiet s3://packer-image-import/$*.$(IMPORT_FORMAT)
$(AWS) s3api put-object \
--content-md5 "$(file < $(_TEMPDIR)/$*.md5)" \
--content-encoding binary/octet-stream \
--cache-control no-cache \
--bucket packer-image-import \
--key $*.$(IMPORT_FORMAT) \
--body $(_TEMPDIR)/$*.$(IMPORT_FORMAT) > $@.tmp
mv $@.tmp $@
# For whatever reason, the 'Format' value must be all upper-case.
# Avoid creating unnecessary/duplicate import tasks
.SECONDARY: $(_TEMPDIR)/%.import_task_id
$(_TEMPDIR)/%.import_task_id: $(_TEMPDIR)/%.uploaded
$(AWS) ec2 import-snapshot \
--disk-container Format=$(shell tr '[:lower:]' '[:upper:]'<<<"$(IMPORT_FORMAT)"),UserBucket="{S3Bucket=packer-image-import,S3Key=$*.$(IMPORT_FORMAT)}" > $@.tmp.json
@cat $@.tmp.json
jq -r -e .ImportTaskId $@.tmp.json > $@.tmp
mv $@.tmp $@
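The upper-casing noted in the comment above can be exercised on its own; `vhdx` is used here only as an illustrative `IMPORT_FORMAT` value:

```shell
# EC2's ImportSnapshot API requires the disk-container Format in upper case;
# the Makefile produces it with this same tr invocation.
IMPORT_FORMAT=vhdx
fmt=$(printf '%s' "$IMPORT_FORMAT" | tr '[:lower:]' '[:upper:]')
echo "$fmt"   # VHDX
```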
# Avoid importing multiple snapshots for the same image
.PRECIOUS: $(_TEMPDIR)/%.snapshot_id
$(_TEMPDIR)/%.snapshot_id: $(_TEMPDIR)/%.import_task_id
bash import_images/wait_import_task.sh "$<" > $@.tmp
mv $@.tmp $@
define _register_sed
sed -r \
-e 's/@@@NAME@@@/$(1)/' \
-e 's/@@@IMPORT_IMG_SFX@@@/$(_IMPORT_IMG_SFX)/' \
-e 's/@@@ARCH@@@/$(2)/' \
-e 's/@@@SNAPSHOT_ID@@@/$(3)/' \
import_images/register.json.in \
> $(4)
endef
$(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).snapshot_id import_images/register.json.in
$(call _register_sed,fedora-aws,x86_64,$(file <$<),$@)
$(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).register.json: $(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).snapshot_id import_images/register.json.in
$(call _register_sed,fedora-aws-arm64,arm64,$(file <$<),$@)
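The `@@@...@@@` placeholder substitution can be tested in isolation; the template line below is a simplified stand-in for `import_images/register.json.in`, and the substituted values are illustrative:

```shell
# Simplified stand-in for import_images/register.json.in
template='{"Name": "@@@NAME@@@-i@@@IMPORT_IMG_SFX@@@", "Architecture": "@@@ARCH@@@"}'

# Same sed -e chain as the _register_sed define, minus the snapshot ID.
rendered=$(printf '%s' "$template" | sed -r \
    -e 's/@@@NAME@@@/fedora-aws/' \
    -e 's/@@@IMPORT_IMG_SFX@@@/1234567890/' \
    -e 's/@@@ARCH@@@/x86_64/')
echo "$rendered"
```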
# Avoid multiple registrations for the same image
.PRECIOUS: $(_TEMPDIR)/%.ami.id
$(_TEMPDIR)/%.ami.id: $(_TEMPDIR)/%.register.json
$(AWS) ec2 register-image --cli-input-json "$$(<$<)" > $@.tmp.json
cat $@.tmp.json
jq -r -e .ImageId $@.tmp.json > $@.tmp
mv $@.tmp $@
$(_TEMPDIR)/%.ami.name: $(_TEMPDIR)/%.register.json
jq -r -e .Name $< > $@.tmp
mv $@.tmp $@
$(_TEMPDIR)/%.ami.json: $(_TEMPDIR)/%.ami.id $(_TEMPDIR)/%.ami.name
$(AWS) ec2 create-tags \
--resources "$$(<$(_TEMPDIR)/$*.ami.id)" \
--tags \
Key=Name,Value=$$(<$(_TEMPDIR)/$*.ami.name) \
Key=automation,Value=false
$(AWS) --output table ec2 describe-images --image-ids "$$(<$(_TEMPDIR)/$*.ami.id)" \
| tee $@
.PHONY: import_images
import_images: $(_TEMPDIR)/fedora-aws-$(_IMPORT_IMG_SFX).ami.json $(_TEMPDIR)/fedora-aws-arm64-$(_IMPORT_IMG_SFX).ami.json import_images/manifest.json.in ## Import generic Fedora cloud images into AWS EC2.
sed -r \
-e 's/@@@IMG_SFX@@@/$(_IMPORT_IMG_SFX)/' \
-e 's/@@@CIRRUS_TASK_ID@@@/$(CIRRUS_TASK_ID)/' \
import_images/manifest.json.in \
> import_images/manifest.json
@echo "Image import(s) successful!"
.PHONY: base_images
# This needs to run in a virt/nested-virt capable environment
base_images: base_images/manifest.json ## Create, prepare, and import base-level images into GCE.
@ -377,77 +308,80 @@ fedora_podman: ## Build Fedora podman development container
prior-fedora_podman: ## Build Prior-Fedora podman development container
$(call build_podman_container,$@,$(PRIOR_FEDORA_RELEASE))
$(_TEMPDIR)/%_podman.tar: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(_TEMPDIR) $(wildcard cache_images/*.sh)
$(_TEMPDIR)/%_podman.iid: podman/Containerfile podman/setup.sh $(wildcard base_images/*.sh) $(_TEMPDIR) $(wildcard cache_images/*.sh)
podman build -t $*_podman:$(call err_if_empty,_IMG_SFX) \
--security-opt seccomp=unconfined \
--iidfile=$@ \
--build-arg=BASE_NAME=$(subst prior-,,$*) \
--build-arg=BASE_TAG=$(call err_if_empty,BASE_TAG) \
--build-arg=PACKER_BUILD_NAME=$(subst _podman,,$*) \
--build-arg=IMG_SFX=$(_IMG_SFX) \
--build-arg=CIRRUS_TASK_ID=$(CIRRUS_TASK_ID) \
--build-arg=GIT_HEAD=$(call err_if_empty,GIT_HEAD) \
-f podman/Containerfile .
rm -f $@
podman save --quiet -o $@ $*_podman:$(_IMG_SFX)
.PHONY: skopeo_cidev
skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.tar ## Build Skopeo development and CI container
$(_TEMPDIR)/skopeo_cidev.tar: $(_TEMPDIR) $(wildcard skopeo_base/*)
skopeo_cidev: $(_TEMPDIR)/skopeo_cidev.iid ## Build Skopeo development and CI container
$(_TEMPDIR)/skopeo_cidev.iid: $(_TEMPDIR) $(wildcard skopeo_base/*)
podman build -t skopeo_cidev:$(call err_if_empty,_IMG_SFX) \
--iidfile=$@ \
--security-opt seccomp=unconfined \
--build-arg=BASE_TAG=$(FEDORA_RELEASE) \
skopeo_cidev
rm -f $@
podman save --quiet -o $@ skopeo_cidev:$(_IMG_SFX)
.PHONY: ccia
ccia: $(_TEMPDIR)/ccia.tar ## Build the Cirrus-CI Artifacts container image
$(_TEMPDIR)/ccia.tar: ccia/Containerfile $(_TEMPDIR)
ccia: $(_TEMPDIR)/ccia.iid ## Build the Cirrus-CI Artifacts container image
$(_TEMPDIR)/ccia.iid: ccia/Containerfile $(_TEMPDIR)
$(call podman_build,$@,ccia:$(call err_if_empty,_IMG_SFX),ccia)
.PHONY: bench_stuff
bench_stuff: $(_TEMPDIR)/bench_stuff.tar ## Build the bench_stuff container image
$(_TEMPDIR)/bench_stuff.tar: bench_stuff/Containerfile $(_TEMPDIR)
$(call podman_build,$@,bench_stuff:$(call err_if_empty,_IMG_SFX),bench_stuff)
# Note: This target only builds imgts:c$(_IMG_SFX) it does not push it to
# any container registry which may be required for targets which
# depend on it as a base-image. In CI, pushing is handled automatically
# by the 'ci/make_container_images.sh' script.
.PHONY: imgts
imgts: $(_TEMPDIR)/imgts.tar ## Build the VM image time-stamping container image
$(_TEMPDIR)/imgts.tar: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR)
$(call podman_build,$@,imgts:$(call err_if_empty,_IMG_SFX),imgts)
imgts: imgts/Containerfile imgts/entrypoint.sh imgts/google-cloud-sdk.repo imgts/lib_entrypoint.sh $(_TEMPDIR) ## Build the VM image time-stamping container image
$(call podman_build,/dev/null,imgts:$(call err_if_empty,_IMG_SFX),imgts)
-rm $(_TEMPDIR)/$@.iid
# Helper function to build images which depend on imgts:latest base image
# N/B: There is no make dependency resolution on imgts.iid on purpose,
# imgts:c$(_IMG_SFX) is assumed to have already been pushed to quay.
# See imgts target above.
define imgts_base_podman_build
podman load -i $(_TEMPDIR)/imgts.tar
podman tag imgts:$(call err_if_empty,_IMG_SFX) imgts:latest
podman image exists $(_IMGTS_FQIN) || podman pull $(_IMGTS_FQIN)
podman image exists imgts:latest || podman tag $(_IMGTS_FQIN) imgts:latest
$(call podman_build,$@,$(1):$(call err_if_empty,_IMG_SFX),$(1))
endef
.PHONY: imgobsolete
imgobsolete: $(_TEMPDIR)/imgobsolete.tar ## Build the VM Image obsoleting container image
$(_TEMPDIR)/imgobsolete.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)
imgobsolete: $(_TEMPDIR)/imgobsolete.iid ## Build the VM Image obsoleting container image
$(_TEMPDIR)/imgobsolete.iid: imgts/lib_entrypoint.sh imgobsolete/Containerfile imgobsolete/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,imgobsolete)
.PHONY: imgprune
imgprune: $(_TEMPDIR)/imgprune.tar ## Build the VM Image pruning container image
$(_TEMPDIR)/imgprune.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)
imgprune: $(_TEMPDIR)/imgprune.iid ## Build the VM Image pruning container image
$(_TEMPDIR)/imgprune.iid: imgts/lib_entrypoint.sh imgprune/Containerfile imgprune/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,imgprune)
.PHONY: gcsupld
gcsupld: $(_TEMPDIR)/gcsupld.tar ## Build the GCS Upload container image
$(_TEMPDIR)/gcsupld.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)
gcsupld: $(_TEMPDIR)/gcsupld.iid ## Build the GCS Upload container image
$(_TEMPDIR)/gcsupld.iid: imgts/lib_entrypoint.sh gcsupld/Containerfile gcsupld/entrypoint.sh $(_TEMPDIR)
$(call imgts_base_podman_build,gcsupld)
.PHONY: orphanvms
orphanvms: $(_TEMPDIR)/orphanvms.tar ## Build the Orphaned VM container image
$(_TEMPDIR)/orphanvms.tar: $(_TEMPDIR)/imgts.tar imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)
orphanvms: $(_TEMPDIR)/orphanvms.iid ## Build the Orphaned VM container image
$(_TEMPDIR)/orphanvms.iid: imgts/lib_entrypoint.sh orphanvms/Containerfile orphanvms/entrypoint.sh orphanvms/_gce orphanvms/_ec2 $(_TEMPDIR)
$(call imgts_base_podman_build,orphanvms)
.PHONY: get_ci_vm
get_ci_vm: $(_TEMPDIR)/get_ci_vm.tar ## Build the get_ci_vm container image
$(_TEMPDIR)/get_ci_vm.tar: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
podman build -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile .
rm -f $@
podman save --quiet -o $@ get_ci_vm:$(_IMG_SFX)
get_ci_vm: $(_TEMPDIR)/get_ci_vm.iid ## Build the get_ci_vm container image
$(_TEMPDIR)/get_ci_vm.iid: lib.sh get_ci_vm/Containerfile get_ci_vm/entrypoint.sh get_ci_vm/setup.sh $(_TEMPDIR)
podman build --iidfile=$@ -t get_ci_vm:$(call err_if_empty,_IMG_SFX) -f get_ci_vm/Containerfile ./
.PHONY: clean
clean: ## Remove all generated files referenced in this Makefile
-rm -rf $(_TEMPDIR)
-rm -f image_builder/*.json
-rm -f *_images/{*.json,cidata*,*-data}
-rm -f ci_debug.tar
-podman rmi imgts:latest
-podman rmi $(_IMGTS_FQIN)

README-simplified.md (new file, 108 lines)

@ -0,0 +1,108 @@
The README here is waaaaaay too complicated for Ed. So here is a
simplified version of the typical things you need to do.
Super Duper Simplest Case
=========================
This is by far the most common case, and the simplest to understand.
You do this when you want to build VMs with newer package versions than
whatever VMs are currently set up in CI. You really need to
understand this before you get into anything more complicated.
```
$ git checkout -b lets-see-what-happens
$ make IMG_SFX
$ git commit -asm"Let's just see what happens"
```
...and push that as a PR.
If you're lucky, in about an hour you will get an email from `github-actions[bot]`
with a nice table of base and cache images, with links. I strongly encourage you
to try to get Ed's
[cirrus-vm-get-versions](https://github.com/edsantiago/containertools/tree/main/cirrus-vm-get-versions)
script working, because this will give you a very quick, easy, and reliable
list of what packages have changed. You don't need this, but life will be painful
for you without it.
(If you're not lucky, the build will break. There are infinite ways for
this to happen, so you're on your own here. Ask for help! This is a great
team, and one or more people may quickly realize the problem.)
Once you have new VMs built, **test in an actual project**! Usually podman
and buildah, but you may want the varks too:
```
$ cd ~/src/github/containers/podman   # or wherever
$ git checkout -b test-new-vms
$ vim .cirrus.yml
[ search for "c202", and replace with your new IMG_SFX.]
[ Don't forget the leading "c"! ]
$ git commit -as
[ Please include a link to the automation_images PR! ]
```
Push this PR and see what happens. If you're very lucky, it will
pass on this and other repos. Get your podman/buildah/vark PRs
reviewed and merged, and then review-merge the automation_images one.
Pushing (har har) Your Luck
---------------------------
Feel lucky? Tag this VM build, so `dependabot` will create PRs
on all the myriad container repos:
```
$ git tag $(<IMG_SFX)
$ git push --no-verify upstream $(<IMG_SFX)
```
Within a few hours you'll see a ton of PRs. It is very likely that
something will go wrong in one or two, and if so, it's impossible to
cover all possibilities. As above, ask for help.
More Complicated Cases
======================
These are the next two most common.
Bumping One Package
-------------------
Quite often we need an emergency bump of only one package that
is not yet stable. Here are examples of the two most typical
cases,
[crun](https://github.com/containers/automation_images/pull/386/files) and
[pasta](https://github.com/containers/automation_images/pull/383/files).
Note the `timebomb` directives. Please use these: the time you save
may be your own, one future day. And please use 2-6 week times.
A timebomb that expires in a year is going to be hard to understand
when it goes off.
Bumping Distros
---------------
Like Fedora 40 to 41. Edit `Makefile`. Change `FEDORA`, `PRIOR_FEDORA`,
and `RAWHIDE`, then proceed with Simple Case.
There is almost zero chance that this will work on the first try.
Sorry, that's just the way it is. See the
[F40 to F41 PR](https://github.com/containers/automation_images/pull/392/files)
for a not-atypical example.
STRONG RECOMMENDATION
=====================
Read [check-imgsfx.sh](check-imgsfx.sh) and follow its instructions. Ed
likes to copy that to `.git/hooks/pre-push`, Chris likes using some
external tool that Ed doesn't trust. Use your judgment.
The reason for this is that you are going to forget to `make IMG_SFX`
one day, and then you're going to `git push --force` an update and walk
away, and come back to a failed run because `IMG_SFX` must always
always always be brand new.
Weak Recommendation
-------------------
Ed likes to fiddle with `IMG_SFX`, zeroing out to the nearest
quarter hour. Absolutely unnecessary, but easier on the eyes
when trying to see which VMs are in use or when comparing
diffs.
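One way to do that rounding (the `yyyymmddtHHMMSSz` timestamp layout shown here is an assumption based on the `c202...` values mentioned above):

```shell
# Round the minutes field down to the nearest quarter hour,
# e.g. 20250801t174738z -> 20250801t174500z (hypothetical IMG_SFX stamp).
stamp=20250801t174738z
minute=${stamp:11:2}                  # "47"
rounded=$(( 10#$minute / 15 * 15 ))   # 45 (10# guards against octal "08"/"09")
printf -v stamp '%s%02d00z' "${stamp:0:11}" "$rounded"
echo "$stamp"   # 20250801t174500z
```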


@ -52,7 +52,7 @@ However, all steps are listed below for completeness.
For more information on the overall process of importing custom GCE VM
Images, please [refer to the documentation](https://cloud.google.com/compute/docs/import/import-existing-image). For references to the latest pre-built AWS
EC2 Fedora AMIs, see [the
upstream cloud page](https://alt.fedoraproject.org/cloud/).
upstream cloud page](https://fedoraproject.org/cloud/download).
For more information on the primary tool (*packer*) used for this process,
please [see its documentation page](https://www.packer.io/docs).
@ -264,13 +264,11 @@ then automatically pushed to:
* https://quay.io/repository/libpod/fedora_podman
* https://quay.io/repository/libpod/prior-fedora_podman
* https://quay.io/repository/libpod/debian_podman
The meaning of *prior* and *current*, is defined by the contents of
the `*_release` files within the `podman` subdirectory. This is
necessary to support the Makefile target being used manually
(e.g. debugging). These files must be updated manually when introducing
a new VM image version.
the `*_RELEASE` values in the `Makefile`. The images will be tagged
with the value within the `IMG_SFX` file. Additionally, the most
recently merged PR on this repo will tag its images `latest`.
### Tooling
@ -292,8 +290,7 @@ the following are built:
In all cases, when automation runs on a branch (i.e. after a PR is merged)
the actual image tagged `latest` will be pushed. When running in a PR,
only validation and test images are produced. This behavior is controlled
by a combination of the `$PUSH_LATEST` and `$CIRRUS_PR` variables.
only validation and test images are produced.
## The Base Images (overview step 3)
@ -377,10 +374,11 @@ infinite-growth of the VM image count.
# Debugging / Locally driving VM Image production
Because the entire automated build process is containerized, it may easily be
performed locally on your laptop/workstation. However, this process will
Much of the CI and image-build process is containerized, so it may be debugged
locally on your laptop/workstation. However, this process will
still involve interfacing with GCE and AWS. Therefore, you must be in possession
of a *Google Application Credentials* (GAC) JSON and AWS credentials INI file.
of a *Google Application Credentials* (GAC) JSON and
[AWS credentials INI file](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds).
The GAC JSON file should represent a service account (contrasted to a user account,
which always uses OAuth2). The name of the service account doesn't matter,
@ -401,44 +399,52 @@ one the following (custom) IAM policies enabled:
Somebody familiar with Google and AWS IAM will need to provide you with the
credential files and ensure correct account configuration. Having these files
stored *in your home directory* on your laptop/workstation, the process of
producing images proceeds as follows:
building and entering the debug containers is as follows:
1. Ensure you have podman installed, and lots of available network and CPU
resources (i.e. turn off YouTube, shut down background VMs and other hungry
tasks). Build the image-builder container image, by executing
tasks).
2. Build and enter either the `ci_debug` or the `image_builder_debug` container
image, by executing:
```
make image_builder_debug GAC_FILEPATH=</home/path/to/gac.json> \
AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
make <ci_debug|image_builder_debug> \
GAC_FILEPATH=</home/path/to/gac.json> \
AWS_SHARED_CREDENTIALS_FILE=</path/to/credentials>
```
2. You will be dropped into a debugging container, inside a volume-mount of
the repository root. This container is practically identical to the VM
produced and used in *overview step 1*. If changes are made, the container
image should be re-built to reflect them.
* The `ci_debug` image is significantly smaller, and only intended for rudimentary
cases, for example running the scripts under the `ci` subdirectory.
* The `image_builder_debug` image is larger, and has KVM virtualization enabled.
It's needed for more extensive debugging of the packer-based image builds.
3. If you wish to build only a subset of available images, list the names
you want as comma-separated values of the `PACKER_BUILDS` variable. Be
sure you *export* this variable so that `make` has access to it. For
example, `export PACKER_BUILDS=debian,prior-fedora`.
3. Both containers will place you in the default shell, inside a volume-mount of
the repository root. This environment is practically identical to what is
used in Cirrus-CI.
4. Still within the container, again ensure you have plenty of network and CPU
4. For the `image_builder_debug` container, if you wish to build only a subset
of available images, list the names you want as comma-separated values of the
`PACKER_BUILDS` variable. Be sure you *export* this variable so that `make`
has access to it. For example, `export PACKER_BUILDS=debian,prior-fedora`.
5. Still within the container, again ensure you have plenty of network and CPU
resources available. Build the VM Base images by executing the command
``make base_images``. This is equivalent to the operation documented in
*overview step 2*. ***N/B*** The GCS -> GCE image conversion can take
some time, be patient. Packer may not produce any output for several minutes
while the conversion is happening.
5. When successful, the names of the produced images will all be referenced
6. When successful, the names of the produced images will all be referenced
in the `base_images/manifest.json` file. If there are problems, fix them
and remove the `manifest.json` file. Then re-run the same *make* command
as before, packer will force-overwrite any broken/partially created
images automatically.
6. Produce the VM Cache Images, equivalent to the operations outlined
7. Produce the VM Cache Images, equivalent to the operations outlined
in *overview step 3*. Execute the following command (still within the
debug image-builder container): ``make cache_images``.
7. Again when successful, you will find the image names are written into
8. Again when successful, you will find the image names are written into
the `cache_images/manifest.json` file. If there is a problem, remove
this file, fix the problem, and re-run the `make` command. No cleanup
is necessary, leftover/disused images will be automatically cleaned up


@ -26,8 +26,6 @@ variables: # Empty value means it must be passed in on command-line
PRIOR_FEDORA_IMAGE_URL: "{{env `PRIOR_FEDORA_IMAGE_URL`}}"
PRIOR_FEDORA_CSUM_URL: "{{env `PRIOR_FEDORA_CSUM_URL`}}"
FEDORA_IMPORT_IMG_SFX: "{{env `FEDORA_IMPORT_IMG_SFX`}}"
DEBIAN_RELEASE: "{{env `DEBIAN_RELEASE`}}"
DEBIAN_BASE_FAMILY: "{{env `DEBIAN_BASE_FAMILY`}}"
@ -63,6 +61,7 @@ builders:
type: 'qemu'
accelerator: "kvm"
qemu_binary: '/usr/libexec/qemu-kvm' # Unique to CentOS, not fedora :(
memory: 12288
iso_url: '{{user `FEDORA_IMAGE_URL`}}'
disk_image: true
format: "raw"
@ -75,12 +74,12 @@ builders:
headless: true
# qemu_binary: "/usr/libexec/qemu-kvm"
qemuargs: # List-of-list format required to override packer-generated args
- - "-m"
- "1024"
- - "-display"
- "none"
- - "-device"
- "virtio-rng-pci"
- - "-chardev"
- "tty,id=pts,path={{user `TTYDEV`}}"
- "file,id=pts,path={{user `TTYDEV`}}"
- - "-device"
- "isa-serial,chardev=pts"
- - "-netdev"
@ -108,20 +107,18 @@ builders:
- &fedora-aws
name: 'fedora-aws'
type: 'amazon-ebs'
source_ami_filter: # Will fail if >1 or no AMI found
source_ami_filter:
# Many of these search filter values (like account ID and name) aren't publicized
# anywhere. They were found by examining AWS EC2 AMIs published/referenced from
# the AWS sections on https://fedoraproject.org/cloud/download
owners:
# Docs are wrong: specifying the Account ID is required to make AMIs private.
# The Account ID is hard-coded here out of expediency, since passing in
# more packer args from the command-line (in Makefile) is non-trivial.
- &accountid '449134212816'
# It's necessary to 'search' for the base-image by these criteria. If
# more than one image is found, Packer will fail the build (and display
# the conflicting AMI IDs).
- &fedora_accountid 125523088429
most_recent: true # Required b/c >1 search result likely to be returned
filters: &ami_filters
architecture: 'x86_64'
image-type: 'machine'
is-public: 'false'
name: '{{build_name}}-i{{user `FEDORA_IMPORT_IMG_SFX`}}'
is-public: 'true'
name: 'Fedora-Cloud-Base*-{{user `FEDORA_RELEASE`}}-*'
root-device-type: 'ebs'
state: 'available'
virtualization-type: 'hvm'
@ -145,7 +142,6 @@ builders:
volume_type: 'gp2'
delete_on_termination: true
# These are critical and used by security-policy to enforce instance launch limits.
tags: &awstags
<<: *imgcpylabels
# EC2 expects "Name" to be capitalized
@ -159,7 +155,7 @@ builders:
# This is necessary for security - The CI service accounts are not permitted
# to use AMIs from any other account, including public ones.
ami_users:
- *accountid
- &accountid '449134212816'
ssh_username: 'fedora'
ssh_clear_authorized_keys: true
# N/B: Required Packer >= 1.8.0
@ -170,7 +166,8 @@ builders:
name: 'fedora-aws-arm64'
source_ami_filter:
owners:
- *accountid
- *fedora_accountid
most_recent: true # Required b/c >1 search result likely to be returned
filters:
<<: *ami_filters
architecture: 'arm64'
@ -187,23 +184,23 @@ provisioners: # Debian images come bundled with GCE integrations provisioned
- type: 'shell'
inline:
- 'set -e'
- 'mkdir -p /tmp/automation_images'
- 'mkdir -p /var/tmp/automation_images'
- type: 'file'
source: '{{ pwd }}/'
destination: '/tmp/automation_images/'
destination: '/var/tmp/automation_images/'
- except: ['debian']
type: 'shell'
inline:
- 'set -e'
- '/bin/bash /tmp/automation_images/base_images/fedora_base-setup.sh'
- '/bin/bash /var/tmp/automation_images/base_images/fedora_base-setup.sh'
- only: ['debian']
type: 'shell'
inline:
- 'set -e'
- 'env DEBIAN_FRONTEND=noninteractive /bin/bash /tmp/automation_images/base_images/debian_base-setup.sh'
- 'env DEBIAN_FRONTEND=noninteractive DEBIAN_RELEASE={{user `DEBIAN_RELEASE`}} /bin/bash /var/tmp/automation_images/base_images/debian_base-setup.sh'
post-processors:
# Must be double-nested to guarantee execution order


@ -16,6 +16,15 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# Cloud-networking in general can sometimes be flaky.
# Increase Apt's tolerance levels.
cat << EOF | $SUDO tee -a /etc/apt/apt.conf.d/99timeouts
// Added during CI VM image build
Acquire::Retries "3";
Acquire::http::timeout "300";
Acquire::https::timeout "300";
EOF
echo "Switch sources to Debian Unstable (SID)"
cat << EOF | $SUDO tee /etc/apt/sources.list
deb http://deb.debian.org/debian/ unstable main
@ -28,7 +37,6 @@ PKGS=( \
curl
cloud-init
gawk
git
openssh-client
openssh-server
rng-tools5
@ -36,9 +44,21 @@ PKGS=( \
)
echo "Updating package source lists"
( set -x; $SUDO apt-get -qq -y update; )
( set -x; $SUDO apt-get -q -y update; )
# Only deps for automation tooling
( set -x; $SUDO apt-get -q -y install git )
install_automation_tooling
# Ensure automation library is loaded
source "$REPO_DIRPATH/lib.sh"
# Workaround 12->13 forward-incompatible change in grub scripts.
# Without this, updating to the SID kernel may fail.
echo "Upgrading grub-common"
( set -x; $SUDO apt-get -q -y upgrade grub-common; )
echo "Upgrading to SID"
( set -x; $SUDO apt-get -qq -y full-upgrade; )
( set -x; $SUDO apt-get -q -y full-upgrade; )
echo "Installing basic, necessary packages."
( set -x; $SUDO apt-get -q -y install "${PKGS[@]}"; )
@ -47,21 +67,15 @@ echo "Installing basic, necessary packages."
dpkg-reconfigure dash; )
# Ref: https://wiki.debian.org/DebianReleases
# CI automation needs a *sortable* OS version/release number to select/perform/apply
# runtime configuration and workarounds. Since switching to Unstable/SID, a
# numeric release version is not available. While an imperfect solution,
# base an artificial version off the 'base-files' package version, right-padded with
# zeros to ensure sortability (i.e. "12.02" < "12.13").
base_files_version=$(dpkg -s base-files | awk '/Version:/{print $2}')
base_major=$(cut -d. -f 1 <<<"$base_files_version")
base_minor=$(cut -d. -f 2 <<<"$base_files_version")
sortable_version=$(printf "%02d.%02d" $base_major $base_minor)
echo "WARN: This is NOT an official version number. It's for CI-automation purposes only."
( set -x; echo "VERSION_ID=\"$sortable_version\"" | \
# CI automation needs an OS version/release number for a variety of uses.
# However, After switching to Unstable/SID, the value from the usual source
# is not available. Simply use the value passed through packer by the Makefile.
req_env_vars DEBIAN_RELEASE
# shellcheck disable=SC2154
warn "Setting '$DEBIAN_RELEASE' as the release number for CI-automation purposes."
( set -x; echo "VERSION_ID=\"$DEBIAN_RELEASE\"" | \
$SUDO tee -a /etc/os-release; )
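The zero-padding trick from the removed `base-files` branch, in isolation (the version value is illustrative, and `base_files_version` would really come from `dpkg -s base-files`):

```shell
# Zero-pad each version component to two digits so plain string comparison
# sorts correctly: "12.02" < "12.13", whereas unpadded "12.2" > "12.13".
base_files_version="12.2"
base_major=$(cut -d. -f1 <<<"$base_files_version")
base_minor=$(cut -d. -f2 <<<"$base_files_version")
sortable_version=$(printf '%02d.%02d' "$base_major" "$base_minor")
echo "$sortable_version"   # 12.02
```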
install_automation_tooling
if ! ((CONTAINER)); then
custom_cloud_init
( set -x; $SUDO systemctl enable rngd; )


@ -18,7 +18,6 @@ source "$REPO_DIRPATH/lib.sh"
declare -a PKGS
PKGS=(rng-tools git coreutils cloud-init)
XARGS=--disablerepo=updates
if ! ((CONTAINER)); then
# Packer defines this automatically for us
# shellcheck disable=SC2154
@ -30,20 +29,28 @@ if ! ((CONTAINER)); then
if ((OS_RELEASE_VER<35)); then
PKGS+=(google-compute-engine-tools)
else
PKGS+=(google-compute-engine-guest-configs)
PKGS+=(google-compute-engine-guest-configs google-guest-agent)
fi
fi
fi
# Due to https://bugzilla.redhat.com/show_bug.cgi?id=1907030
# updates cannot be installed or even looked at during this stage.
# Pawn the problem off to the cache-image stage where more memory
# is available and debugging is also easier. Try to save some more
# memory by pre-populating repo metadata prior to any transactions.
$SUDO dnf makecache $XARGS
# Updates disable, see comment above
# $SUDO dnf -y update $XARGS
$SUDO dnf -y install $XARGS "${PKGS[@]}"
# The Fedora CI VM base images are built using nested-virt with
# limited resources available. Further, cloud-networking in
# general can sometimes be flaky. Increase DNF's tolerance
# levels.
cat << EOF | $SUDO tee -a /etc/dnf/dnf.conf
# Added during CI VM image build
minrate=100
timeout=60
EOF
$SUDO dnf makecache
$SUDO dnf -y update
$SUDO dnf -y install "${PKGS[@]}"
# Occasionally following an install, there are more updates available.
# This may be due to activation of suggested/recommended dependency resolution.
$SUDO dnf -y update
if ! ((CONTAINER)); then
$SUDO systemctl enable rngd
@ -83,7 +90,9 @@ if ! ((CONTAINER)); then
# This is necessary to prevent permission-denied errors on service-start
# and also on the off-chance the package gets updated and context reset.
$SUDO semanage fcontext --add --type bin_t /usr/bin/cloud-init
$SUDO restorecon -v /usr/bin/cloud-init
# This used restorecon before so we don't have to specify the file_contexts.local
# manually, however with f42 that stopped working: https://bugzilla.redhat.com/show_bug.cgi?id=2360183
$SUDO setfiles -v /etc/selinux/targeted/contexts/files/file_contexts.local /usr/bin/cloud-init
else # GCP Image
echo "Setting GCP startup service (for Cirrus-CI agent) SELinux unconfined"
# ref: https://cloud.google.com/compute/docs/startupscript
@ -95,10 +104,4 @@ if ! ((CONTAINER)); then
/lib/$METADATA_SERVICE_PATH | $SUDO tee -a /etc/$METADATA_SERVICE_PATH
fi
if [[ "$OS_RELEASE_ID" == "fedora" ]] && ((OS_RELEASE_VER>=33)); then
# Ref: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783509
echo "Disabling automatic /tmp (tmpfs) mount"
$SUDO systemctl mask tmp.mount
fi
finalize

View File

@ -1,26 +0,0 @@
#!/bin/bash
# This script is intended to be used from two places only:
# 1) When building the build-push VM image, to install the scripts as-is
# in a PR in order for CI testing to operate on them.
# 2) From the autoupdate.sh script, when $BUILDPUSHAUTOUPDATED is unset
# or '0'. This clones the latest repository to install (possibly)
# updated scripts.
#
# WARNING: Use under any other circumstances will probably screw things up.
if [[ -z "$BUILDPUSHAUTOUPDATED" ]];
then
echo "This script must only be run under Packer or autoupdate.sh"
exit 1
fi
source /etc/automation_environment
source "$AUTOMATION_LIB_PATH/common_lib.sh"
#shellcheck disable=SC2154
cd $(dirname "$SCRIPT_FILEPATH") || exit 1
# Must be installed into $AUTOMATION_LIB_PATH/../bin which is on $PATH
cp ./bin/* $AUTOMATION_LIB_PATH/../bin/
cp ./lib/* $AUTOMATION_LIB_PATH/
chmod +x $AUTOMATION_LIB_PATH/../bin/*

View File

@ -1,5 +0,0 @@
# DO NOT USE
This directory contains scripts/data used by the Cirrus-CI
`test_build-push` task. It is not intended to be used otherwise
and may cause harm.

View File

@ -1,172 +0,0 @@
#!/bin/bash
# This script is not intended for humans. It should be run by automation
# at the branch-level in automation for the skopeo, buildah, and podman
# repositories. Its purpose is to produce a multi-arch container image
# based on the contents of the context subdirectory. At runtime, $PWD is assumed
# to be the root of the cloned git repository.
#
# The first argument to the script should be the URL of the git repository
# in question. Though at this time, this is only used for labeling the
# resulting image.
#
# The second argument to this script is the relative path to the build context
# subdirectory. The basename of this subdirectory may indicate the
# image flavor (i.e. `upstream`, `testing`, or `stable`). Depending
# on this value, the image may be pushed to multiple container registries
# under slightly different rules (see the next option).
#
# If the basename of the context directory (second argument) does NOT reflect
# the image flavor, this name may be passed in as a third argument. Handling
# of this argument may be repository-specific, so check the actual code below
# to understand its behavior.
set -eo pipefail
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
#shellcheck disable=SC1090,SC2154
source "$AUTOMATION_LIB_PATH/common_lib.sh"
#shellcheck source=../lib/autoupdate.sh
source "$AUTOMATION_LIB_PATH/autoupdate.sh"
else
echo "Expecting to find automation common library installed."
exit 1
fi
# Careful: Changing the error message below could break auto-update test.
if [[ "$#" -lt 2 ]]; then
#shellcheck disable=SC2145
die "Must be called with at least two arguments, got '$@'"
fi
if [[ -z $(type -P build-push.sh) ]]; then
die "It does not appear that build-push.sh is installed properly"
fi
if ! [[ -d "$PWD/.git" ]]; then
die "The current directory ($PWD) does not appear to be the root of a git repo."
fi
# Assume transitive debugging state for build-push.sh if set
if [[ "$(automation_version | cut -d '.' -f 1)" -ge 4 ]]; then
# Valid for version 4.0.0 and above only
export A_DEBUG
else
export DEBUG
fi
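The major-version probe above decides which debug variable gets exported. A minimal sketch of the `cut` extraction, using a hypothetical stub in place of the real `automation_version` command:

```shell
# Hypothetical stub standing in for the real automation_version command:
automation_version() { echo '4.2.1'; }
major=$(automation_version | cut -d '.' -f 1)
echo "$major"   # -> 4
```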
# Arches to build by default - may be overridden for testing
ARCHES="${ARCHES:-amd64,ppc64le,s390x,arm64}"
# First arg (REPO_URL) is the clone URL for repository for informational purposes
REPO_URL="$1"
REPO_NAME=$(basename "${REPO_URL%.git}")
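The `${REPO_URL%.git}` expansion strips a trailing `.git` suffix before `basename` extracts the repository name; a standalone sketch:

```shell
# %.git removes the suffix if present; basename then yields the repo name.
REPO_URL='https://github.com/containers/podman.git'
REPO_NAME=$(basename "${REPO_URL%.git}")
echo "$REPO_NAME"   # -> podman
```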
# Second arg (CTX_SUB) is the context subdirectory relative to the clone path
CTX_SUB="$2"
# Historically, the basename of the second arg set the image flavor (i.e. `upstream`,
# `testing`, or `stable`). For cases where this convention doesn't fit,
# it's possible to pass the flavor-name as the third argument. Both methods
# will populate a "FLAVOR" build-arg value.
if [[ "$#" -lt 3 ]]; then
FLAVOR_NAME=$(basename "$CTX_SUB")
elif [[ "$#" -ge 3 ]]; then
FLAVOR_NAME="$3" # An empty-value is valid
else
die "Expecting a non-empty third argument indicating the FLAVOR build-arg value."
fi
_REG="quay.io"
if [[ "$REPO_NAME" =~ testing ]]; then
_REG="example.com"
fi
REPO_FQIN="$_REG/$REPO_NAME/$FLAVOR_NAME"
req_env_vars REPO_URL REPO_NAME CTX_SUB FLAVOR_NAME
# Common library defines SCRIPT_FILENAME
# shellcheck disable=SC2154
dbg "$SCRIPT_FILENAME operating constants:
REPO_URL=$REPO_URL
REPO_NAME=$REPO_NAME
CTX_SUB=$CTX_SUB
FLAVOR_NAME=$FLAVOR_NAME
REPO_FQIN=$REPO_FQIN
"
# Set non-zero to avoid actually executing build-push, simply print
# the command-line that would have been executed
DRYRUN=${DRYRUN:-0}
_DRNOPUSH=""
if ((DRYRUN)); then
_DRNOPUSH="--nopush"
warn "Operating in dry-run mode with $_DRNOPUSH"
fi
### MAIN
declare -a build_args
if [[ -n "$FLAVOR_NAME" ]]; then
build_args=(--build-arg "FLAVOR=$FLAVOR_NAME")
fi
# Labels to add to all images
# N/B: These won't show up in the manifest-list itself, only its constituents.
lblargs="\
--label=org.opencontainers.image.source=$REPO_URL \
--label=org.opencontainers.image.created=$(date -u --iso-8601=seconds)"
dbg "lblargs=$lblargs"
modcmdarg="tag_version.sh $FLAVOR_NAME"
# For stable images, the version number of the command is needed for tagging.
if [[ "$FLAVOR_NAME" == "stable" ]]; then
# only native arch is needed to extract the version
dbg "Building local-arch image to extract stable version number"
podman build -t $REPO_FQIN "${build_args[@]}" ./$CTX_SUB
case "$REPO_NAME" in
skopeo) version_cmd="--version" ;;
buildah) version_cmd="buildah --version" ;;
podman) version_cmd="podman --version" ;;
testing) version_cmd="cat FAKE_VERSION" ;;
*) die "Unknown/unsupported repo '$REPO_NAME'" ;;
esac
pvcmd="podman run -i --rm $REPO_FQIN $version_cmd"
dbg "Extracting version with command: $pvcmd"
version_output=$($pvcmd)
dbg "version output:
$version_output
"
img_cmd_version=$(awk -r -e '/^.+ version /{print $3}' <<<"$version_output")
dbg "parsed version: $img_cmd_version"
test -n "$img_cmd_version"
lblargs="$lblargs --label=org.opencontainers.image.version=$img_cmd_version"
# Prevent temporary build colliding with multi-arch manifest list (built next)
# but preserve image (by ID) for use as cache.
dbg "Un-tagging $REPO_FQIN"
podman untag $REPO_FQIN
# tag_version.sh expects this arg when FLAVOR_NAME=stable
modcmdarg+=" $img_cmd_version"
# Stable images get pushed to 'containers' namespace as latest & version-tagged
build-push.sh \
$_DRNOPUSH \
--arches=$ARCHES \
--modcmd="$modcmdarg" \
$_REG/containers/$REPO_NAME \
./$CTX_SUB \
$lblargs \
"${build_args[@]}"
fi
# All images are pushed to quay.io/<reponame>, both
# latest and version-tagged (if available).
build-push.sh \
$_DRNOPUSH \
--arches=$ARCHES \
--modcmd="$modcmdarg" \
$REPO_FQIN \
./$CTX_SUB \
$lblargs \
"${build_args[@]}"

View File

@ -1,69 +0,0 @@
#!/bin/bash
# This script is not intended for humans. It should only be referenced
# as an argument to the build-push.sh `--modcmd` option. Its purpose
# is to ensure stable images are re-tagged with a version-number
# corresponding to the included tool's version.
set -eo pipefail
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
#shellcheck disable=SC1090,SC2154
source "$AUTOMATION_LIB_PATH/common_lib.sh"
else
echo "Unexpected operating environment"
exit 1
fi
# Vars defined by build-push.sh spec. for mod scripts
req_env_vars SCRIPT_FILENAME SCRIPT_FILEPATH RUNTIME PLATFORMOS FQIN CONTEXT \
PUSH ARCHES REGSERVER NAMESPACE IMGNAME MODCMD
if [[ "$#" -ge 1 ]]; then
FLAVOR_NAME="$1" # upstream, testing, or stable
fi
if [[ "$#" -ge 2 ]]; then
# Enforce all version-tags start with a 'v'
VERSION="v${2#v}" # output of $version_cmd
fi
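The `v${2#v}` idiom normalizes the version tag whether or not the caller already included the leading `v`; a minimal sketch (the `normalize_version` helper name is hypothetical):

```shell
# ${1#v} strips one leading 'v' if present, then 'v' is re-added,
# so "1.2.3" and "v1.2.3" both normalize to "v1.2.3".
normalize_version() { printf 'v%s\n' "${1#v}"; }
normalize_version 1.2.3    # -> v1.2.3
normalize_version v1.2.3   # -> v1.2.3
```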
if [[ -z "$FLAVOR_NAME" ]]; then
# Defined by common_lib.sh
# shellcheck disable=SC2154
warn "$SCRIPT_FILENAME passed empty flavor-name argument (optional)."
elif [[ -z "$VERSION" ]]; then
warn "$SCRIPT_FILENAME received empty version argument (req. for FLAVOR_NAME=stable)."
fi
# shellcheck disable=SC2154
dbg "Mod-command operating on $FQIN in '$FLAVOR_NAME' flavor"
if [[ "$FLAVOR_NAME" == "stable" ]]; then
# Stable images must all be tagged with a version number.
# Confirm this value is passed in by caller.
req_env_vars VERSION
VERSION=v${VERSION#v}
if grep -E -q '^v[0-9]+\.[0-9]+\.[0-9]+'<<<"$VERSION"; then
msg "Found image command version '$VERSION'"
else
die "Encountered unexpected/non-conforming version '$VERSION'"
fi
# shellcheck disable=SC2154
$RUNTIME tag $FQIN:latest $FQIN:$VERSION
msg "Successfully tagged $FQIN:$VERSION"
# Tag as x.y to provide a consistent tag even for a future z+1
xy_ver=$(awk -F '.' '{print $1"."$2}'<<<"$VERSION")
$RUNTIME tag $FQIN:latest $FQIN:$xy_ver
msg "Successfully tagged $FQIN:$xy_ver"
# Tag as x to provide consistent tag even for a future y+1
x_ver=$(awk -F '.' '{print $1}'<<<"$xy_ver")
$RUNTIME tag $FQIN:latest $FQIN:$x_ver
msg "Successfully tagged $FQIN:$x_ver"
else
warn "$SCRIPT_FILENAME not version-tagging for '$FLAVOR_NAME' stage of '$FQIN'"
fi
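The `awk` field splits above derive the shorter `x.y` and `x` tags from a full `vX.Y.Z` version; in isolation:

```shell
# Split on '.' and keep the first two, then the first, fields.
VERSION='v1.2.3'
xy_ver=$(echo "$VERSION" | awk -F '.' '{print $1"."$2}')
x_ver=$(echo "$xy_ver" | awk -F '.' '{print $1}')
echo "$xy_ver $x_ver"   # -> v1.2 v1
```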

View File

@ -1,36 +0,0 @@
# This script is not intended for humans. It should only be sourced by
# main.sh. If BUILDPUSHAUTOUPDATED!=0, it will be a no-op. Otherwise,
# it will download the latest version of the build-push scripts and re-exec
# main.sh. This allows the scripts to be updated without requiring new VM
# images to be composed and deployed.
#
# WARNING: Changes to this script _do_ require new VM images as auto-updating
# the auto-update script would be complex and hard to test.
# Must be exported - .install.sh checks this is set.
export BUILDPUSHAUTOUPDATED="${BUILDPUSHAUTOUPDATED:-0}"
if ! ((BUILDPUSHAUTOUPDATED)); then
msg "Auto-updating build-push operational scripts..."
#shellcheck disable=SC2154
GITTMP=$(mktemp -p '' -d "$MKTEMP_FORMAT")
trap "rm -rf $GITTMP" EXIT
msg "Obtaining latest version..."
git clone --quiet --depth=1 \
https://github.com/containers/automation_images.git \
"$GITTMP"
cd $GITTMP/build-push || exit 1
msg "Replacing build-push scripts from containers/automation_images commit $(git rev-parse --short=8 HEAD)..."
bash ./.install.sh
# Important: Return to directory main.sh was started from
cd - || exit 1
rm -rf "$GITTMP"
#shellcheck disable=SC2145
msg "Re-executing main.sh $@..."
export BUILDPUSHAUTOUPDATED=1
exec main.sh "$@" # guaranteed on $PATH
fi
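The `((BUILDPUSHAUTOUPDATED))` guard relies on bash arithmetic truthiness: an unset or `0` value evaluates false, any other integer true. A sketch (the `is_enabled` helper name is hypothetical; bash-only syntax):

```shell
# ((expr)) succeeds when expr is non-zero; ${1:-0} supplies a default.
is_enabled() { ((${1:-0})) && echo yes || echo no; }
is_enabled     # -> no
is_enabled 0   # -> no
is_enabled 1   # -> yes
```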

View File

@ -1,195 +0,0 @@
# DO NOT USE - This script is intended to be called by the Cirrus-CI
# `test_build-push` task. It is not intended to be used otherwise
# and may cause harm. Its purpose is to confirm the 'main.sh' script
# behaves in an expected way, given a local test repository as input.
set -eo pipefail
SCRIPT_DIRPATH=$(dirname $(realpath "${BASH_SOURCE[0]}"))
source $SCRIPT_DIRPATH/../lib.sh
req_env_vars CIRRUS_CI
# No need to test if image wasn't built
if TARGET_NAME=build-push skip_on_pr_label; then exit 0; fi
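The `TARGET_NAME=build-push skip_on_pr_label` form sets the variable only for that single command's environment; a sketch with plain `sh` in place of the helper:

```shell
# The prefix assignment is visible to the one command, not to the shell after it.
TARGET_NAME=build-push sh -c 'echo "${TARGET_NAME:-unset}"'   # -> build-push
sh -c 'echo "${TARGET_NAME:-unset}"'                          # -> unset
```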
# Architectures to test with (golang standard names)
TESTARCHES="amd64 arm64"
# main.sh is sensitive to this value
ARCHES=$(tr " " ","<<<"$TESTARCHES")
export ARCHES
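The `tr` call converts the space-separated test list into the comma-separated form `main.sh` expects:

```shell
# Translate every space into a comma.
TESTARCHES='amd64 arm64'
ARCHES=$(echo "$TESTARCHES" | tr ' ' ',')
echo "$ARCHES"   # -> amd64,arm64
```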
# Contrived "version" for testing purposes
FAKE_VER_X=$RANDOM
FAKE_VER_Y=$RANDOM
FAKE_VER_Z=$RANDOM
FAKE_VERSION="$FAKE_VER_X.$FAKE_VER_Y.$FAKE_VER_Z"
# Contrived source repository for testing
SRC_TMP=$(mktemp -p '' -d tmp-build-push-test-XXXX)
# Do not change, main.sh is sensitive to the 'testing' name
TEST_FQIN=example.com/testing/stable
# Stable build should result in manifest list tagged this
TEST_FQIN2=example.com/containers/testing
# Don't allow main.sh or tag_version.sh to auto-update at runtime
export BUILDPUSHAUTOUPDATED=1
trap "rm -rf $SRC_TMP" EXIT
# main.sh expects $PWD to be a git repository.
msg "
##### Constructing local test repository #####"
cd $SRC_TMP
showrun git init -b main testing
cd testing
git config --local user.name "Testy McTestface"
git config --local user.email "test@example.com"
git config --local advice.detachedHead "false"
git config --local commit.gpgsign "false"
# The following paths match the style of sub-dir in the actual
# skopeo/buildah/podman repositories. Only the 'stable' flavor
# is tested here, since it involves the most complex workflow.
mkdir -vp "contrib/testimage/stable"
cd "contrib/testimage/stable"
echo "build-push-test version v$FAKE_VERSION" | tee "FAKE_VERSION"
cat <<EOF | tee "Containerfile"
FROM registry.fedoraproject.org/fedora:latest
ARG FLAVOR
ADD /FAKE_VERSION /
RUN echo "FLAVOUR=\$FLAVOR" > /FLAVOUR
EOF
# As an additional test, build and check images when passing
# the 'stable' flavor name as a command-line arg instead
# of using the subdirectory dirname (old method).
cd $SRC_TMP/testing/contrib/testimage
cp stable/* ./
cd $SRC_TMP/testing
# The images will have the repo & commit ID set as labels
git add --all
git commit -m 'test repo initial commit'
TEST_REVISION=$(git rev-parse HEAD)
# Given the flavor-name as the first argument, verify built image
# expectations. For the 'stable' image, verify that main.sh properly
# version-tags both FQINs. For other flavors, verify the expected labels
# on the `latest` tagged FQINs.
verify_built_images() {
local _fqin _arch xy_ver x_ver img_ver img_src img_rev _fltr
local _test_tag expected_flavor _test_fqins
expected_flavor="$1"
msg "
##### Testing execution of '$expected_flavor' images for arches $TESTARCHES #####"
podman --version
req_env_vars TESTARCHES FAKE_VERSION TEST_FQIN TEST_FQIN2
declare -a _test_fqins
_test_fqins=("${TEST_FQIN%stable}$expected_flavor")
if [[ "$expected_flavor" == "stable" ]]; then
_test_fqins+=("$TEST_FQIN2")
test_tag="v$FAKE_VERSION"
xy_ver="v$FAKE_VER_X.$FAKE_VER_Y"
x_ver="v$FAKE_VER_X"
else
test_tag="latest"
xy_ver="latest"
x_ver="latest"
fi
for _fqin in "${_test_fqins[@]}"; do
for _arch in $TESTARCHES; do
msg "Testing container can execute '/bin/true'"
showrun podman run -i --arch=$_arch --rm "$_fqin:$test_tag" /bin/true
msg "Testing container FLAVOR build-arg passed correctly"
showrun podman run -i --arch=$_arch --rm "$_fqin:$test_tag" \
cat /FLAVOUR | tee /dev/stderr | fgrep -xq "FLAVOUR=$expected_flavor"
if [[ "$expected_flavor" == "stable" ]]; then
msg "Testing tag '$xy_ver'"
if ! showrun podman manifest exists $_fqin:$xy_ver; then
die "Failed to find manifest-list tagged '$xy_ver'"
fi
msg "Testing tag '$x_ver'"
if ! showrun podman manifest exists $_fqin:$x_ver; then
die "Failed to find manifest-list tagged '$x_ver'"
fi
fi
done
if [[ "$expected_flavor" == "stable" ]]; then
msg "Testing image $_fqin:$test_tag version label"
_fltr='.[].Config.Labels."org.opencontainers.image.version"'
img_ver=$(podman inspect $_fqin:$test_tag | jq -r -e "$_fltr")
showrun test "$img_ver" == "v$FAKE_VERSION"
fi
msg "Testing image $_fqin:$test_tag source label"
_fltr='.[].Config.Labels."org.opencontainers.image.source"'
img_src=$(podman inspect $_fqin:$test_tag | jq -r -e "$_fltr")
showrun test "$img_src" == "git://testing"
done
}
remove_built_images() {
buildah --version
for _fqin in $TEST_FQIN $TEST_FQIN2; do
for tag in latest v$FAKE_VERSION v$FAKE_VER_X.$FAKE_VER_Y v$FAKE_VER_X; do
# Don't care if this fails
podman manifest rm $_fqin:$tag || true
done
done
}
msg "
##### Testing build-push subdir-flavor run of '$TEST_FQIN' & '$TEST_FQIN2' #####"
cd $SRC_TMP/testing
export DRYRUN=1 # Force main.sh not to push anything
req_env_vars ARCHES DRYRUN
# main.sh is sensitive to 'testing' value.
# Also confirms main.sh is on $PATH
env A_DEBUG=1 main.sh git://testing contrib/testimage/stable
verify_built_images stable
msg "
##### Testing build-push flavour-arg run for '$TEST_FQIN' & '$TEST_FQIN2' #####"
remove_built_images
env A_DEBUG=1 main.sh git://testing contrib/testimage foobarbaz
verify_built_images foobarbaz
# This script verifies it's only/ever running inside CI. Use a fake
# main.sh to verify it auto-updates itself w/o actually performing
# a build. N/B: This test must be run last, in a throw-away environment,
# it _WILL_ modify on-disk contents!
msg "
##### Testing auto-update capability #####"
cd $SRC_TMP
#shellcheck disable=SC2154
cat >main.sh<< EOF
#!/bin/bash
source /etc/automation_environment # defines AUTOMATION_LIB_PATH
source "$AUTOMATION_LIB_PATH/common_lib.sh"
source "$AUTOMATION_LIB_PATH/autoupdate.sh"
EOF
chmod +x main.sh
# Back to where we were
cd -
# Expect the real main.sh to bark one of two error messages
# and exit non-zero.
EXP_RX1="Must.be.called.with.at.least.two.arguments"
EXP_RX2="does.not.appear.to.be.the.root.of.a.git.repo"
if output=$(env --ignore-environment \
BUILDPUSHAUTOUPDATED=0 \
AUTOMATION_LIB_PATH=$AUTOMATION_LIB_PATH \
$SRC_TMP/main.sh 2>&1); then
die "Fail. Expected main.sh to exit non-zero"
else
if [[ "$output" =~ $EXP_RX1 ]] || [[ "$output" =~ $EXP_RX2 ]]; then
echo "PASS"
else
die "Fail. Expecting match to '$EXP_RX1' or '$EXP_RX2', got:
$output"
fi
fi
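`env --ignore-environment` gives the fake main.sh a clean environment containing only the variables listed; a sketch (PATH is supplied explicitly here so `sh` can still be found, which the GNU tool does not require but makes the example robust):

```shell
# Exported variables from the caller do not survive --ignore-environment;
# only explicitly listed NAME=value pairs do.
export FOO=outer
env --ignore-environment PATH=/usr/bin:/bin BAR=inner \
    sh -c 'echo "${BAR:-unset}:${FOO:-unset}"'
# -> inner:unset
```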

View File

@ -30,6 +30,7 @@ INSTALL_PACKAGES=(\
python3-pip
qemu-user-static
skopeo
unzip
)
echo "Installing general build/test dependencies"
@ -38,14 +39,7 @@ bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
# It was observed in F33, dnf install doesn't always get you the latest/greatest
lilto $SUDO dnf update -y
# Re-install with the 'build-push' component
install_automation_tooling build-push
# Install main scripts into directory on $PATH
cd $REPO_DIRPATH/build-push
set -x
# Do not auto-update to allow testing inside a PR
$SUDO env BUILDPUSHAUTOUPDATED=1 bash ./.install.sh
# Install wait-for-copr
$SUDO pip3 install git+https://github.com/packit/wait-for-copr.git@main
# Re-install would append to this, making a mess.
$SUDO rm -f /etc/automation_environment
# Re-install the latest version with the 'build-push' component
install_automation_tooling latest build-push

View File

@ -75,9 +75,6 @@ builders:
source_image_family: 'fedora-base'
labels: *fedora_gce_labels
- <<: *aux_fed_img
name: 'fedora-podman-py'
- <<: *aux_fed_img
name: 'fedora-netavark'
@ -183,30 +180,30 @@ provisioners:
- type: 'shell'
inline:
- 'set -e'
- 'mkdir -p /tmp/automation_images'
- 'mkdir -p /var/tmp/automation_images'
- type: 'file'
source: '{{ pwd }}/'
destination: "/tmp/automation_images"
destination: "/var/tmp/automation_images"
- only: ['rawhide']
type: 'shell'
expect_disconnect: true # VM will be rebooted at end of script
inline:
- 'set -e'
- '/bin/bash /tmp/automation_images/cache_images/rawhide_setup.sh'
- '/bin/bash /var/tmp/automation_images/cache_images/rawhide_setup.sh'
- except: ['debian']
type: 'shell'
inline:
- 'set -e'
- '/bin/bash /tmp/automation_images/cache_images/fedora_setup.sh'
- '/bin/bash /var/tmp/automation_images/cache_images/fedora_setup.sh'
- only: ['debian']
type: 'shell'
inline:
- 'set -e'
- 'env DEBIAN_FRONTEND=noninteractive /bin/bash /tmp/automation_images/cache_images/debian_setup.sh'
- 'env DEBIAN_FRONTEND=noninteractive /bin/bash /var/tmp/automation_images/cache_images/debian_setup.sh'
post-processors:
# This is critical for human-interaction. Contents will be used

View File

@ -15,8 +15,8 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
source "$REPO_DIRPATH/lib.sh"
msg "Updating/Installing repos and packages for $OS_REL_VER"
lilto ooe.sh $SUDO apt-get -qq -y update
bigto ooe.sh $SUDO apt-get -qq -y upgrade
lilto ooe.sh $SUDO apt-get -q -y update
bigto ooe.sh $SUDO apt-get -q -y upgrade
INSTALL_PACKAGES=(\
apache2-utils
@ -39,13 +39,12 @@ INSTALL_PACKAGES=(\
crun
dnsmasq
e2fslibs-dev
emacs-nox
file
fuse3
fuse-overlayfs
gcc
gettext
git-daemon-run
git
gnupg2
go-md2man
golang
@ -60,7 +59,6 @@ INSTALL_PACKAGES=(\
libdevmapper-dev
libdevmapper1.02.1
libfuse-dev
libfuse2
libfuse3-dev
libglib2.0-dev
libgpgme11-dev
@ -105,6 +103,8 @@ INSTALL_PACKAGES=(\
skopeo
slirp4netns
socat
libsqlite3-0
libsqlite3-dev
systemd-container
sudo
time
@ -118,18 +118,18 @@ INSTALL_PACKAGES=(\
zstd
)
# bpftrace is only needed on the host, as containers cannot run eBPF
# programs anyway, and it is very big, so we should not bloat the container
# images unnecessarily.
if ! ((CONTAINER)); then
INSTALL_PACKAGES+=( \
bpftrace
)
fi
msg "Installing general build/testing dependencies"
bigto $SUDO apt-get -q -y install "${INSTALL_PACKAGES[@]}"
msg "Enabling contrib source & installing ZFS support (for containers/storage CI)"
ZFS_PACKAGES=(\
linux-headers-cloud-amd64
zfsutils
)
$SUDO sed -i -r 's/^(deb.*)/\1 contrib/g' /etc/apt/sources.list
lilto ooe.sh $SUDO apt-get -qq -y update
bigto $SUDO apt-get -q -y install "${ZFS_PACKAGES[@]}"
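The `sed` expression above appends `contrib` to every `deb` line in sources.list; against a sample line (mirror URL is illustrative):

```shell
# \1 replays the captured original line, with " contrib" appended.
echo 'deb http://deb.debian.org/debian bookworm main' \
    | sed -r 's/^(deb.*)/\1 contrib/g'
# -> deb http://deb.debian.org/debian bookworm main contrib
```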
# The nc installed by default is missing many required options
$SUDO update-alternatives --set nc /usr/bin/ncat
@ -162,7 +162,7 @@ echo "deb https://download.docker.com/linux/debian $docker_debian_release stable
if ((CONTAINER==0)) && [[ ${#DOWNLOAD_PACKAGES[@]} -gt 0 ]]; then
$SUDO apt-get clean # no reason to keep previous downloads around
# Needed to install .deb files + resolve dependencies
lilto $SUDO apt-get -qq -y update
lilto $SUDO apt-get -q -y update
echo "Downloading packages for optional installation at runtime."
$SUDO ln -s /var/cache/apt/archives "$PACKAGE_DOWNLOAD_DIR"
bigto $SUDO apt-get -q -y install --download-only "${DOWNLOAD_PACKAGES[@]}"

View File

@ -17,14 +17,44 @@ fi
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# Generate en_US.UTF-8 locale as this is required for a podman test (https://github.com/containers/podman/pull/19635).
$SUDO sed -i '/en_US.UTF-8/s/^#//g' /etc/locale.gen
$SUDO locale-gen
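The `sed` address/substitution pair uncomments only the matching locale line; against a sample locale.gen line:

```shell
# Only lines matching the address have their leading '#' deleted.
echo '# en_US.UTF-8 UTF-8' | sed '/en_US.UTF-8/s/^#//g'
# -> " en_US.UTF-8 UTF-8" (only the '#' is removed; the space stays)
```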
# Debian doesn't mount tmpfs on /tmp as default but we want this to speed tests up so
# they don't have to write to persistent disk.
# https://github.com/containers/podman/pull/22533
$SUDO mkdir -p /etc/systemd/system/local-fs.target.wants/
cat <<EOF | $SUDO tee /etc/systemd/system/tmp.mount
[Unit]
Description=Temporary Directory /tmp
ConditionPathIsSymbolicLink=!/tmp
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=size=75%%,mode=1777
EOF
# enable the unit by default
$SUDO ln -s ../tmp.mount /etc/systemd/system/local-fs.target.wants/tmp.mount
req_env_vars PACKER_BUILD_NAME
bash $SCRIPT_DIRPATH/debian_packaging.sh
# dnsmasq is set to bind 0.0.0.0:53, that will conflict with our dns tests.
# We don't need a local resolver.
$SUDO systemctl disable dnsmasq.service
$SUDO systemctl mask dnsmasq.service
if ! ((CONTAINER)); then
warn "Making Debian kernel enable cgroup swap accounting"
warn "Forcing CgroupsV1"
SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=0"/'
SEDCMD='s/^GRUB_CMDLINE_LINUX="(.*)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/'
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub.d/*
ooe.sh $SUDO sed -re "$SEDCMD" -i /etc/default/grub
ooe.sh $SUDO update-grub
@ -32,6 +62,10 @@ fi
nm_ignore_cni
if ! ((CONTAINER)); then
initialize_local_cache_registry
fi
finalize
echo "SUCCESS!"

View File

@ -1,98 +0,0 @@
#!/bin/bash
# This script is called from fedora_setup.sh and various Dockerfiles.
# It's not intended to be used outside of those contexts. It assumes the lib.sh
# library has already been sourced, and that all "ground-up" package-related activity
# needs to be done, including repository setup and initial update.
set -e
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# shellcheck disable=SC2154
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
msg "Updating/Installing repos and packages for $OS_REL_VER"
bigto ooe.sh $SUDO dnf update -y
INSTALL_PACKAGES=(\
bash-completion
bridge-utils
buildah
bzip2
curl
findutils
fuse3
gcc
git
git-daemon
glib2-devel
glibc-devel
hostname
httpd-tools
iproute
iptables
jq
libtool
lsof
make
nmap-ncat
openssl
openssl-devel
pkgconfig
podman
policycoreutils
protobuf
protobuf-devel
python-pip-wheel
python-setuptools-wheel
python-toml
python-wheel-wheel
python3-PyYAML
python3-coverage
python3-dateutil
python3-docker
python3-fixtures
python3-libselinux
python3-libsemanage
python3-libvirt
python3-pip
python3-psutil
python3-pylint
python3-pytest
python3-pyxdg
python3-requests
python3-requests-mock
python3-virtualenv
python3.6
python3.8
python3.9
redhat-rpm-config
rsync
sed
skopeo
socat
tar
time
tox
unzip
vim
wget
xz
zip
zstd
)
echo "Installing general build/test dependencies"
bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
# It was observed in F33, dnf install doesn't always get you the latest/greatest
lilto $SUDO dnf update -y

View File

@ -28,7 +28,7 @@ req_env_vars PACKER_BUILD_NAME
if [[ "$PACKER_BUILD_NAME" == "fedora" ]] && [[ ! "$PACKER_BUILD_NAME" =~ "prior" ]]; then
warn "Enabling updates-testing repository for $PACKER_BUILD_NAME"
lilto ooe.sh $SUDO dnf install -y 'dnf-command(config-manager)'
lilto ooe.sh $SUDO dnf config-manager --set-enabled updates-testing
lilto ooe.sh $SUDO dnf config-manager setopt updates-testing.enabled=1
else
warn "NOT enabling updates-testing repository for $PACKER_BUILD_NAME"
fi
@ -56,6 +56,7 @@ INSTALL_PACKAGES=(\
curl
device-mapper-devel
dnsmasq
docker-distribution
e2fsprogs-devel
emacs-nox
fakeroot
@ -64,10 +65,12 @@ INSTALL_PACKAGES=(\
fuse3
fuse3-devel
gcc
gh
git
git-daemon
glib2-devel
glibc-devel
glibc-langpack-en
glibc-static
gnupg
go-md2man
@ -80,6 +83,7 @@ INSTALL_PACKAGES=(\
iproute
iptables
jq
koji
krb5-workstation
libassuan
libassuan-devel
@ -99,7 +103,7 @@ INSTALL_PACKAGES=(\
libxslt-devel
lsof
make
mlocate
man-db
msitools
nfs-utils
nmap-ncat
@ -109,22 +113,31 @@ INSTALL_PACKAGES=(\
pandoc
parallel
passt
perl-Clone
perl-FindBin
pigz
pkgconfig
podman
podman-remote
pre-commit
procps-ng
protobuf
protobuf-c
protobuf-c-devel
protobuf-devel
python3-fedora-distro-aliases
python3-koji-cli-plugins
redhat-rpm-config
rpcbind
rsync
runc
sed
ShellCheck
skopeo
slirp4netns
socat
sqlite-libs
sqlite-devel
squashfs-tools
tar
time
@ -138,21 +151,13 @@ INSTALL_PACKAGES=(\
zstd
)
# Test with CNI in Fedora N-1
EXARG=""
if [[ "$PACKER_BUILD_NAME" =~ prior ]]; then
EXARG="--exclude=netavark --exclude=aardvark-dns"
fi
# Rawhide images don't need these pacakges
# Rawhide images don't need these packages
if [[ "$PACKER_BUILD_NAME" =~ fedora ]]; then
INSTALL_PACKAGES+=( \
docker-compose
python-pip-wheel
python-setuptools-wheel
python-toml
python-wheel-wheel
python2
python3-PyYAML
python3-coverage
python3-dateutil
@ -169,24 +174,38 @@ if [[ "$PACKER_BUILD_NAME" =~ fedora ]]; then
python3-requests
python3-requests-mock
)
else # podman-sequoia is only available in Rawhide
timebomb 20251101 "Also install the package in future Fedora releases, and enable Sequoia support in users of the images."
INSTALL_PACKAGES+=( \
podman-sequoia
)
fi
# Workaround: Around the time of this commit, the `criu` package
# was found to be missing a recommends-dependency on criu-libs.
# Until a fixed rpm lands in the Fedora repositories, manually
# include it here. This workaround should be removed once the
# package is corrected (likely > 3.17.1-3).
INSTALL_PACKAGES+=(criu-libs)
# When installing during a container-build, having this present
# will seriously screw up future dnf operations in very non-obvious ways.
# bpftrace is only needed on the host, as containers cannot run eBPF
# programs anyway, and it is very big, so we should not bloat the container
# images unnecessarily.
if ! ((CONTAINER)); then
INSTALL_PACKAGES+=( \
bpftrace
composefs
container-selinux
fuse-overlayfs
libguestfs-tools
selinux-policy-devel
policycoreutils
)
# Extra packages needed by podman-machine-os
INSTALL_PACKAGES+=( \
podman-machine
osbuild
osbuild-tools
osbuild-ostree
xfsprogs
e2fsprogs
)
fi
@ -195,7 +214,6 @@ fi
DOWNLOAD_PACKAGES=(\
parallel
podman-docker
podman-plugins
python3-devel
python3-pip
python3-pytest
@ -203,7 +221,7 @@ DOWNLOAD_PACKAGES=(\
)
msg "Installing general build/test dependencies"
bigto $SUDO dnf install -y $EXARG "${INSTALL_PACKAGES[@]}"
bigto $SUDO dnf install -y "${INSTALL_PACKAGES[@]}"
msg "Downloading packages for optional installation at runtime, as needed."
$SUDO mkdir -p "$PACKAGE_DOWNLOAD_DIR"
@ -217,5 +235,6 @@ $SUDO curl --fail --silent --location -O \
https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
cd -
# It was observed in F33, dnf install doesn't always get you the latest/greatest
# Occasionally following an install, there are more updates available.
# This may be due to activation of suggested/recommended dependency resolution.
lilto $SUDO dnf update -y

View File

@ -17,6 +17,12 @@ fi
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
# Make /tmp tmpfs bigger, by default we only get 50%. Bump it to 75% so the tests have more storage.
# Do not use 100% so we do not run out of memory for the process itself if tests start leaking big
# files on /tmp.
$SUDO mkdir -p /etc/systemd/system/tmp.mount.d
echo -e "[Mount]\nOptions=size=75%%,mode=1777\n" | $SUDO tee /etc/systemd/system/tmp.mount.d/override.conf
# packer and/or a --build-arg define this envar value uniformly
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
@ -24,17 +30,10 @@ req_env_vars PACKER_BUILD_NAME
# shellcheck disable=SC2154
if [[ "$PACKER_BUILD_NAME" =~ "netavark" ]]; then
bash $SCRIPT_DIRPATH/fedora-netavark_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "podman-py" ]]; then
bash $SCRIPT_DIRPATH/fedora-podman-py_packaging.sh
elif [[ "$PACKER_BUILD_NAME" =~ "build-push" ]]; then
bash $SCRIPT_DIRPATH/build-push_packaging.sh
# Registers qemu emulation for non-native execution
$SUDO systemctl enable systemd-binfmt
for arch in amd64 s390x ppc64le arm64; do
msg "Caching latest $arch fedora image..."
$SUDO podman pull --quiet --arch=$arch \
registry.fedoraproject.org/fedora:$OS_RELEASE_VER
done
else
bash $SCRIPT_DIRPATH/fedora_packaging.sh
fi
@ -48,6 +47,8 @@ if ! ((CONTAINER)); then
else
msg "Enabling cgroup management from containers"
ooe.sh $SUDO setsebool -P container_manage_cgroup true
initialize_local_cache_registry
fi
fi

345
cache_images/local-cache-registry Executable file
View File

@ -0,0 +1,345 @@
#! /bin/bash
#
# local-cache-registry - set up and manage a local registry with cached images
#
# Used in containers CI, to reduce exposure to registry flakes.
#
# We start with the docker registry image. Pull it, extract the registry
# binary and config, tweak the config, and create a systemd unit file that
# will start the registry at boot.
#
# We also populate that registry with a (hardcoded) list of container
# images used in CI tests. That way a CI VM comes up already primed,
# and CI tests do not need to do remote pulls. The image list is
# hardcoded right here in this script file, in the automation_images
# repo. See below for reasons.
#
ME=$(basename $0)
###############################################################################
# BEGIN defaults
# FQIN of registry image. From this image, we extract the registry to run.
PODMAN_REGISTRY_IMAGE=quay.io/libpod/registry:2.8.2
# Fixed path to registry setup. This is the directory used by the registry.
PODMAN_REGISTRY_WORKDIR=/var/cache/local-registry
# Fixed port on which registry listens. This is hardcoded and must be
# shared knowledge among all CI repos that use this registry.
REGISTRY_PORT=60333
# Podman binary to run
PODMAN=${PODMAN:-/usr/bin/podman}
# Temporary directories for podman, so we don't clobber any system files.
# Wipe them upon script exit.
PODMAN_TMPROOT=$(mktemp -d --tmpdir $ME.XXXXXXX)
trap 'status=$?; rm -rf $PODMAN_TMPROOT && exit $status' 0
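The `trap ... 0` above removes the temporary directory on any exit while preserving the script's original exit status; the pattern in isolation (shown in a subshell so it can be observed):

```shell
# The EXIT trap captures $? first, cleans up, then re-exits with that status.
(
    tmp=$(mktemp -d)
    trap 'status=$?; rm -rf "$tmp" && exit $status' 0
    exit 3
)
echo $?   # -> 3
```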
# Images to cache. Default prefix is "quay.io/libpod/"
#
# It seems evil to hardcode this list as part of the script itself
# instead of a separate file or resource, but there's a good reason:
# keeping code and data together in one place makes it possible for
# a podman (and some day other repo?) developer to run a single
# command, contrib/cirrus/get-local-registry-script, which will
# fetch this script and allow the dev to run it to start a local
# registry on their system.
#
# As of 2024-07-02 this list includes podman and buildah images
#
# FIXME: periodically run this to look for no-longer-needed images:
#
# for i in $(sed -ne '/IMAGELIST=/,/^[^ ]/p' <cache_images/local-cache-registry | sed -ne 's/^ *//p');do grep -q -R $i ../podman/test ../buildah/tests || echo "unused $i";done
#
declare -a IMAGELIST=(
alpine:3.10.2
alpine:latest
alpine_healthcheck:latest
alpine_nginx:latest
alpine@sha256:634a8f35b5f16dcf4aaa0822adc0b1964bb786fca12f6831de8ddc45e5986a00
alpine@sha256:f270dcd11e64b85919c3bab66886e59d677cf657528ac0e4805d3c71e458e525
alpine@sha256:fa93b01658e3a5a1686dc3ae55f170d8de487006fb53a28efcd12ab0710a2e5f
autoupdatebroken:latest
badhealthcheck:latest
busybox:1.30.1
busybox:glibc
busybox:latest
busybox:musl
cirros:latest
fedora/python-311:latest
healthcheck:config-only
k8s-pause:3.5
podman_python:latest
redis:alpine
registry:2.8.2
registry:volume_omitted
systemd-image:20240124
testartifact:20250206-single
testartifact:20250206-multi
testartifact:20250206-multi-no-title
testartifact:20250206-evil
testdigest_v2s2
testdigest_v2s2:20200210
testimage:00000000
testimage:00000004
testimage:20221018
testimage:20241011
testimage:multiimage
testimage@sha256:1385ce282f3a959d0d6baf45636efe686c1e14c3e7240eb31907436f7bc531fa
testdigest_v2s2@sha256:755f4d90b3716e2bf57060d249e2cd61c9ac089b1233465c5c2cb2d7ee550fdb
volume-plugin-test-img:20220623
podman/stable:v4.3.1
podman/stable:v4.8.0
skopeo/stable:latest
ubuntu:latest
)
# END defaults
###############################################################################
# BEGIN help messages
missing=" argument is missing; see $ME -h for details"
usage="Usage: $ME [options] [initialize | cache IMAGE...]
$ME manages a local instance of a container registry.
When called to initialize a registry, $ME will pull
this image into a local temporary directory:
$PODMAN_REGISTRY_IMAGE
...then extract the registry binary and config, tweak the config,
start the registry, and populate it with a list of images needed by tests:
\$ $ME initialize
To fetch individual images into the cache:
\$ $ME cache libpod/testimage:21120101
Override the default image and/or port with:
-i IMAGE registry image to pull (default: $PODMAN_REGISTRY_IMAGE)
-P PORT port to bind to (on 127.0.0.1) (default: $REGISTRY_PORT)
Other options:
-h display usage message
"
die () {
echo "$ME: $*" >&2
exit 1
}
# END help messages
###############################################################################
# BEGIN option processing
while getopts "i:P:hv" opt; do
case "$opt" in
i) PODMAN_REGISTRY_IMAGE=$OPTARG ;;
P) REGISTRY_PORT=$OPTARG ;;
h) echo "$usage"; exit 0;;
v) verbose=1 ;;
\?) echo "Run '$ME -h' for help" >&2; exit 1;;
esac
done
shift $((OPTIND-1))
# END option processing
###############################################################################
# BEGIN helper functions
function podman() {
${PODMAN} --root ${PODMAN_TMPROOT}/root \
--runroot ${PODMAN_TMPROOT}/runroot \
--tmpdir ${PODMAN_TMPROOT}/tmp \
"$@"
}
###############
# must_pass # Run a command quietly; abort with error on failure
###############
function must_pass() {
local log=${PODMAN_TMPROOT}/log
"$@" &> $log
if [ $? -ne 0 ]; then
echo "$ME: Command failed: $*" >&2
cat $log >&2
# If we ever get here, it's a given that the registry is not running.
exit 1
fi
}
###################
# wait_for_port # Returns once port is available on localhost
###################
function wait_for_port() {
local port=$1 # Numeric port
local host=127.0.0.1
local _timeout=5
# Wait
while [ $_timeout -gt 0 ]; do
{ exec {unused_fd}<> /dev/tcp/$host/$port; } &>/dev/null && return
sleep 1
_timeout=$(( $_timeout - 1 ))
done
die "Timed out waiting for port $port"
}
#################
# cache_image # (singular) fetch one remote image
#################
function cache_image() {
local img=$1
# Almost all our images are under libpod; no need to repeat that part
if ! expr "$img" : "^\(.*\)/" >/dev/null; then
img="libpod/$img"
fi
# Almost all our images are from quay.io, but "domain.tld" prefix overrides
registry=$(expr "$img" : "^\([^/.]\+\.[^/]\+\)/" || true)
if [[ -n "$registry" ]]; then
img=$(expr "$img" : "[^/]\+/\(.*\)")
else
registry=quay.io
fi
echo
echo "...caching: $registry / $img"
# FIXME: inspect, and only pull if missing?
for retry in 1 2 3 0;do
skopeo --registries-conf /dev/null \
copy --all --dest-tls-verify=false \
docker://$registry/$img \
docker://127.0.0.1:${REGISTRY_PORT}/$img \
&& return
sleep $((retry * 30))
done
die "Too many retries; unable to cache $registry/$img"
}
##################
# cache_images # (plural) fetch all remote images
##################
function cache_images() {
for img in "${IMAGELIST[@]}"; do
cache_image "$img"
done
}
# END helper functions
###############################################################################
# BEGIN action processing
###################
# do_initialize # Start, then cache images
###################
#
# Intended to be run only from automation_images repo, or by developer
# on local workstation. This should never be run from podman/buildah/etc
# because it defeats the entire purpose of the cache -- a dead registry
# will cause this to fail.
#
function do_initialize() {
# This action can only be run as root
if [[ "$(id -u)" != "0" ]]; then
die "this script must be run as root"
fi
# For the next few commands, die on any error
set -e
mkdir -p ${PODMAN_REGISTRY_WORKDIR}
# Copy of this script
if ! [[ $0 =~ ${PODMAN_REGISTRY_WORKDIR} ]]; then
rm -f ${PODMAN_REGISTRY_WORKDIR}/$ME
cp $0 ${PODMAN_REGISTRY_WORKDIR}/$ME
fi
# Give it three tries, to compensate for flakes
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
podman pull ${PODMAN_REGISTRY_IMAGE} &>/dev/null ||
must_pass podman pull ${PODMAN_REGISTRY_IMAGE}
# Mount the registry image...
registry_root=$(podman image mount ${PODMAN_REGISTRY_IMAGE})
# ...copy the registry binary into our own bin...
cp ${registry_root}/bin/registry /usr/bin/docker-registry
# ...and copy the config, making a few adjustments to it.
sed -e "s;/var/lib/registry;${PODMAN_REGISTRY_WORKDIR};" \
-e "s;:5000;127.0.0.1:${REGISTRY_PORT};" \
< ${registry_root}/etc/docker/registry/config.yml \
> /etc/local-registry.yml
podman image umount -a
# Create a systemd unit file. Enable it (so it starts at boot)
# and also start it --now.
cat > /etc/systemd/system/$ME.service <<EOF
[Unit]
Description=Local Cache Registry for CI tests
[Service]
ExecStart=/usr/bin/docker-registry serve /etc/local-registry.yml
Type=exec
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now $ME.service
wait_for_port ${REGISTRY_PORT}
cache_images
}
##############
# do_cache # Cache one or more images
##############
function do_cache() {
if [[ -z "$*" ]]; then
die "missing args to 'cache'"
fi
for img in "$@"; do
cache_image "$img"
done
}
# END action processing
###############################################################################
# BEGIN command-line processing
# First command-line arg must be an action
action=${1?ACTION$missing}
shift
case "$action" in
init|initialize) do_initialize ;;
cache) do_cache "$@" ;;
*) die "Unknown action '$action'; must be init | cache IMAGE" ;;
esac
# END command-line processing
###############################################################################
exit 0
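The namespace and registry defaulting performed by `cache_image` above (add a `libpod/` prefix when the name has no slash, default to quay.io unless a `domain.tld/` prefix overrides it) can be exercised standalone. A minimal sketch reusing the same `expr` patterns (the `normalize` helper name and the sample image names are illustrative; note that `\+` in a basic regular expression is a GNU `expr` extension):

```shell
#!/bin/bash
# Sketch of cache_image's name normalization (helper name is made up).
normalize() {
    local img=$1 registry
    # No slash at all: assume the libpod namespace
    if ! expr "$img" : "^\(.*\)/" >/dev/null; then
        img="libpod/$img"
    fi
    # A leading "domain.tld/" component overrides the quay.io default
    registry=$(expr "$img" : "^\([^/.]\+\.[^/]\+\)/" || true)
    if [[ -n "$registry" ]]; then
        img=$(expr "$img" : "[^/]\+/\(.*\)")
    else
        registry=quay.io
    fi
    echo "$registry/$img"
}

normalize busybox:latest                   # quay.io/libpod/busybox:latest
normalize podman/stable:v4.8.0             # quay.io/podman/stable:v4.8.0
normalize docker.io/library/alpine:3.10.2  # docker.io/library/alpine:3.10.2
```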


@ -16,18 +16,9 @@ source "$REPO_DIRPATH/lib.sh"
# for both VM and container image build workflows.
req_env_vars PACKER_BUILD_NAME
# Going from F38 -> rawhide requires some special handling WRT DNF upgrade to DNF5
if [[ "$OS_RELEASE_VER" -eq 38 ]]; then
warn "Upgrading dnf -> dnf5"
showrun $SUDO dnf update -y dnf
showrun $SUDO dnf install -y dnf5
# Even dnf5 refuses to remove the 'dnf' package.
showrun $SUDO rpm -e yum dnf
else
warn "Upgrading Fedora '$OS_RELEASE_VER' to rawhide, this might break."
# shellcheck disable=SC2154
warn "If so, this script may be found in the repo. as '$SCRIPT_DIRPATH/$SCRIPT_FILENAME'."
fi
warn "Upgrading Fedora '$OS_RELEASE_VER' to rawhide, this might break."
# shellcheck disable=SC2154
warn "If so, this script may be found in the repo. as '$SCRIPT_DIRPATH/$SCRIPT_FILENAME'."
# Show what's happening
set -x


@ -1,5 +1,10 @@
ARG BASE_NAME=registry.fedoraproject.org/fedora-minimal
ARG BASE_TAG=latest
# FIXME FIXME FIXME! 2023-11-16: revert "38" to "latest"
# ...38 is because as of this moment, latest is 39, which
# has python-3.12, which causes something to barf:
# aiohttp/_websocket.c:3744:45: error: PyLongObject {aka struct _longobject} has no member named ob_digit
# Possible cause: https://github.com/cython/cython/issues/5238
ARG BASE_TAG=38
FROM ${BASE_NAME}:${BASE_TAG} as updated_base
RUN microdnf upgrade -y && \


@ -1,17 +0,0 @@
{
"builds": [
{
"name": "fedora-podman-py",
"builder_type": "googlecompute",
"build_time": 1658176090,
"files": null,
"artifact_id": "fedora-podman-py-c5419329914142720",
"packer_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272",
"custom_data": {
"IMG_SFX": "5419329914142720",
"STAGE": "cache"
}
}
],
"last_run_uuid": "e5b1e6ab-37a5-a695-624d-47bf0060b272"
}

check-imgsfx.sh Executable file

@ -0,0 +1,36 @@
#!/bin/bash
#
# 2024-01-25 esm
# 2024-06-28 cevich
#
# This script is intended to be used by the `pre-commit` utility, or it may
# be manually copied (or symlinked) as local `.git/hooks/pre-push` file.
# It's purpose is to keep track of image-suffix values which have already
# been pushed, to avoid them being immediately rejected by CI validation.
# To use it with the `pre-commit` utility, simply add something like this
# to your `.pre-commit-config.yaml`:
#
# ---
# repos:
# - repo: https://github.com/containers/automation_images.git
# rev: <tag or commit sha>
# hooks:
# - id: check-imgsfx
set -eo pipefail
# Ensure CWD is the repo root
cd $(dirname "${BASH_SOURCE[0]}")
imgsfx=$(<IMG_SFX)
imgsfx_history=".git/hooks/imgsfx.history"
if [[ -e $imgsfx_history ]]; then
if grep -q "$imgsfx" $imgsfx_history; then
echo "FATAL: $imgsfx has already been used" >&2
echo "Please rerun 'make IMG_SFX'" >&2
exit 1
fi
fi
echo $imgsfx >>$imgsfx_history


@ -1,4 +1,4 @@
# This dockerfile defines the environment for Cirrus-CI when
# This Containerfile defines the environment for Cirrus-CI when
# running automated checks and tests. It may also be used
# for development/debugging or manually building most
# Makefile targets.
@ -8,16 +8,16 @@ FROM registry.fedoraproject.org/fedora:${FEDORA_RELEASE}
ARG PACKER_VERSION
MAINTAINER https://github.com/containers/automation_images/ci
ENV CIRRUS_WORKING_DIR=/tmp/automation_images \
ENV CIRRUS_WORKING_DIR=/var/tmp/automation_images \
PACKER_INSTALL_DIR=/usr/local/bin \
PACKER_VERSION=$PACKER_VERSION \
CONTAINER=1
# When using the dockerfile-as-ci feature of Cirrus-CI, it's unsafe
# When using the containerfile-as-ci feature of Cirrus-CI, it's unsafe
# to rely on COPY or ADD instructions. See documentation for warning.
RUN test -n "$PACKER_VERSION"
RUN dnf update -y && \
dnf mark remove $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
dnf -y mark dependency $(rpm -qa | grep -Ev '(gpg-pubkey)|(dnf)|(sudo)') && \
dnf install -y \
ShellCheck \
bash-completion \
@ -38,7 +38,7 @@ RUN dnf update -y && \
util-linux \
unzip \
&& \
dnf mark install dnf sudo $_ && \
dnf -y mark user dnf sudo $_ && \
dnf autoremove -y && \
dnf clean all


@ -35,6 +35,14 @@ if [[ -n "$AWS_INI" ]]; then
set_aws_filepath
fi
id
# FIXME: ssh-keygen seems to fail to create keys with Permission denied
# in the base_images make target, I have no idea why but all CI jobs are
# broken because of this. Let's try without selinux.
if [[ "$(getenforce)" == "Enforcing" ]]; then
setenforce 0
fi
set -x
cd "$REPO_DIRPATH"
export IMG_SFX=$IMG_SFX


@ -44,13 +44,6 @@ SRC_FQIN="$TARGET_NAME:$IMG_SFX"
make "$TARGET_NAME" IMG_SFX=$IMG_SFX
# Prevent pushing 'latest' images from PRs, only branches and tags
# shellcheck disable=SC2154
if [[ $PUSH_LATEST -eq 1 ]] && [[ -n "$CIRRUS_PR" ]]; then
echo -e "\nWarning: Refusing to push 'latest' images when testing from a PR.\n"
PUSH_LATEST=0
fi
# Don't leave credential file sticking around anywhere
trap "podman logout --all" EXIT INT CONT
set +x # protect username/password values
@ -64,9 +57,3 @@ set -x # Easier than echo'ing out status for everything
# shellcheck disable=SC2154
podman tag "$SRC_FQIN" "$DEST_FQIN"
podman push "$DEST_FQIN"
if ((PUSH_LATEST)); then
LATEST_FQIN="${DEST_FQIN%:*}:latest"
podman tag "$SRC_FQIN" "$LATEST_FQIN"
podman push "$LATEST_FQIN"
fi

ci/tag_latest.sh Executable file

@ -0,0 +1,36 @@
#!/bin/bash
set -eo pipefail
if [[ -z "$CI" ]] || [[ "$CI" != "true" ]] || [[ -z "$IMG_SFX" ]]; then
echo "This script is intended to be run by CI and nowhere else."
exit 1
fi
# This envar is set by the CI system
# shellcheck disable=SC2154
if [[ "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:DOCS.* ]]; then
echo "This script must never tag anything after a [CI:DOCS] PR merge"
exit 0
fi
# Ensure no secrets leak via debugging var expansion
set +x
# This secret envar is set by the CI system
# shellcheck disable=SC2154
echo "$REG_PASSWORD" | \
skopeo login --password-stdin --username "$REG_USERNAME" "$REGPFX"
declare -a imgnames
imgnames=( imgts imgobsolete imgprune gcsupld get_ci_vm orphanvms ccia )
# A [CI:TOOLING] build doesn't produce CI VM images
if [[ ! "$CIRRUS_CHANGE_MESSAGE" =~ .*CI:TOOLING.* ]]; then
imgnames+=( skopeo_cidev fedora_podman prior-fedora_podman )
fi
for imgname in "${imgnames[@]}"; do
echo "##### Tagging $imgname -> latest"
# IMG_SFX is defined by CI system
# shellcheck disable=SC2154
skopeo copy "docker://$REGPFX/$imgname:c${IMG_SFX}" "docker://$REGPFX/${imgname}:latest"
done


@ -13,7 +13,7 @@ REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
req_env_vars CIRRUS_PR CIRRUS_BASE_SHA CIRRUS_PR_TITLE
req_env_vars CIRRUS_PR CIRRUS_PR_TITLE CIRRUS_USER_PERMISSION CIRRUS_BASE_BRANCH
show_env_vars
@ -21,6 +21,16 @@ show_env_vars
[[ "$CIRRUS_CI" == "true" ]] || \
die "This script is only/ever intended to be run by Cirrus-CI."
# This is imperfect security-wise, but attempt to catch an accidental
# change in Cirrus-CI Repository settings. Namely the hard-to-read
# "slider" that enables non-contributors to run jobs. We don't want
# that on this repo, ever, because there are sensitive secrets in use.
# This variable is set by CI and validated non-empty above
# shellcheck disable=SC2154
if [[ "$CIRRUS_USER_PERMISSION" != "write" ]] && [[ "$CIRRUS_USER_PERMISSION" != "admin" ]]; then
die "CI Execution not supported with permission level '$CIRRUS_USER_PERMISSION'"
fi
for target in image_builder/gce.json base_images/cloud.json \
cache_images/cloud.json win_images/win-server-wsl.json; do
if ! make $target; then
@ -42,17 +52,20 @@ if [[ "$CIRRUS_PR_TITLE" =~ CI:DOCS ]]; then
exit 0
fi
# Variable is defined by Cirrus-CI at runtime
# Fix "Not a valid object name main" error from Cirrus's
# incomplete checkout.
git remote update origin
# Determine where PR branched off of $CIRRUS_BASE_BRANCH
# shellcheck disable=SC2154
if ! git diff --name-only ${CIRRUS_BASE_SHA}..HEAD | grep -q IMG_SFX; then
base_sha=$(git merge-base origin/${CIRRUS_BASE_BRANCH:-main} HEAD)
if ! git diff --name-only ${base_sha}..HEAD | grep -q IMG_SFX; then
die "Every PR that builds images must include an updated IMG_SFX file.
Simply run 'make IMG_SFX', commit the result, and re-push."
else
IMG_SFX="$(<./IMG_SFX)"
# IMG_SFX was modified vs PR's base-branch, confirm version moved forward
# shellcheck disable=SC2154
v_prev=$(git show ${CIRRUS_BASE_SHA}:IMG_SFX 2>&1 || true)
v_prev=$(git show ${base_sha}:IMG_SFX 2>&1 || true)
# Verify new IMG_SFX value always version-sorts later than previous value.
# This prevents screwups due to local timezone, bad, or unset clocks, etc.
new_img_ver=$(awk -F 't' '{print $1"."$2}'<<<"$IMG_SFX" | cut -dz -f1)
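For illustration, the `awk`/`cut` pipeline above rewrites a timestamp-style suffix into a dotted version string that plain version-sorting can compare. A standalone sketch (the sample IMG_SFX value is made up, assuming the usual `YYYYMMDDtHHMMSSz-...` shape):

```shell
# Hypothetical IMG_SFX value; real ones are produced by 'make IMG_SFX'
IMG_SFX="20240702t155405z-f40f39d13"
# Split on 't', join date and time with '.', drop the 'z' and trailing hash
new_img_ver=$(awk -F 't' '{print $1"."$2}' <<<"$IMG_SFX" | cut -dz -f1)
echo "$new_img_ver"   # 20240702.155405
```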


@ -0,0 +1,43 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- id: check-symlinks
- id: mixed-line-ending
- id: no-commit-to-branch
args: [--branch, main]
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
- id: codespell
args: [--config, .codespellrc]
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 3.0.0
hooks:
- id: forbid-binary
exclude: >
(?x)^(
get_ci_vm/good_repo_test/dot_git.tar.gz
)$
- id: script-must-have-extension
- id: shellcheck
# These come from ci/shellcheck.sh
args:
- --color=always
- --format=tty
- --shell=bash
- --external-sources
- --enable=add-default-case,avoid-nullary-conditions,check-unassigned-uppercase
- --exclude=SC2046,SC2034,SC2090,SC2064
- --wiki-link-count=0
- --severity=warning
- repo: https://github.com/containers/automation_images.git
rev: 2e5a2acfe21cc4b13511b453733b8875e592ad9c
hooks:
- id: check-imgsfx


@ -1,14 +1,13 @@
# This is a listing of GCP Project IDs which use images produced by
# this repo. It's used by the "Orphan VMs" github action to monitor
# for any leftover/lost VMs.
# This is a listing of Google Cloud Platform Project IDs for
# orphan VM monitoring and possibly other automation tasks.
# Note: CI VM images produced by this repo are all stored within
# the libpod-218412 project (in addition to some AWS EC2)
buildah
conmon-222014
containers-build-source-image
dnsname-8675309
libpod-218412
netavark-2021
oci-seccomp-bpf-hook
podman-py
skopeo
storage-240716
udica-247612


@ -5,6 +5,36 @@ This directory contains the source for building [the
This image is used by many containers-org repos' `hack/get_ci_vm.sh` script.
It is not intended to be called via any other mechanism.
In general/high-level terms, the architecture and operation are:
1. [containers/automation hosts cirrus-ci_env](https://github.com/containers/automation/tree/main/cirrus-ci_env),
a python mini-implementation of a `.cirrus.yml` parser. Its only job is to extract all required envars,
given a task name (including from a matrix element). It's highly dependent on
[certain YAML formatting requirements](README.md#downstream-repository-cirrusyml-requirements). If the target
repo. doesn't follow those standards, nasty/ugly python errors will vomit forth. Mainly this has to do with
Cirrus-CI's use of a non-standard YAML parser, allowing things like certain duplicate dictionary keys.
1. [containers/automation_images hosts get_ci_vm](https://github.com/containers/automation_images/tree/main/get_ci_vm),
a bundling of the `cirrus-ci_env` python script with an `entrypoint.sh` script inside a container image.
1. When a user runs `hack/get_ci_vm.sh` inside a target repo, the container image is entered, and `.cirrus.yml`
is parsed based on the CLI task-name. A VM is then provisioned based on specific envars (see the "Env. Vars."
entries in the [APIv1](README.md#env-vars) and [APIv2](README.md#env-vars-1) sections below).
This is the most complex part of the process.
1. The remote system will not have **any** of the otherwise automatic Cirrus-CI operations performed (like "clone")
nor any magic CI variables defined. Having a VM ready, the container entrypoint script transfers a copy of
the local repo (including any uncommitted changes).
1. The container entrypoint script then performs **_remote_** execution of the `hack/get_ci_vm.sh` script
including the magic `--setup` parameter. Though it varies by repo, typically this will establish everything
necessary to simulate a CI environment, via a call to the repo's own `setup.sh` or equivalent. Typically
the repo's setup scripts will persist any required envars into `/etc/ci_environment` or similar, though
this isn't universal.
1. Lastly, the user is dropped into a shell on the VM, inside the repo copy, with all envars defined and
ready to start running tests.
_Note_: If there are any envars found to be missing, they must be defined by updating either the repo's
normal CI setup scripts (preferred), or the `hack/get_ci_vm.sh` `--setup` section.
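The `/etc/ci_environment` convention mentioned above is just a flat file of `KEY=value` lines that setup scripts append to and that later shells source before running tests. A minimal sketch (the variable names and the mktemp stand-in are illustrative):

```shell
ci_env=$(mktemp)   # stand-in for /etc/ci_environment
# A repo setup script persists whatever the tests will need:
echo 'CI_DESIRED_DATABASE=boltdb' >>"$ci_env"   # hypothetical envar
echo 'GOSRC=/var/tmp/go/src/github.com/containers/podman' >>"$ci_env"
# A later (test-running) shell sources and exports everything:
set -a
. "$ci_env"
set +a
echo "$CI_DESIRED_DATABASE"   # boltdb
```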
# Building
Example build (from repository root):
```bash


@ -66,9 +66,9 @@ delvm() {
}
image_hints() {
_BIS=$(egrep -m 1 '_BUILT_IMAGE_SUFFIX:[[:space:]+"[[:print:]]+"' \
_BIS=$(grep -E -m 1 '_BUILT_IMAGE_SUFFIX:[[:space:]+"[[:print:]]+"' \
"$SECCOMPHOOKROOT/.cirrus.yml" | cut -d: -f 2 | tr -d '"[:blank:]')
egrep '[[:space:]]+[[:alnum:]].+_CACHE_IMAGE_NAME:[[:space:]+"[[:print:]]+"' \
grep -E '[[:space:]]+[[:alnum:]].+_CACHE_IMAGE_NAME:[[:space:]+"[[:print:]]+"' \
"$SECCOMPHOOKROOT/.cirrus.yml" | cut -d: -f 2 | tr -d '"[:blank:]' | \
sed -r -e "s/\\\$[{]_BUILT_IMAGE_SUFFIX[}]/$_BIS/" | sort -u
}
@ -141,7 +141,7 @@ cd $SECCOMPHOOKROOT
# Attempt to determine if named 'oci-seccomp-bpf-hook' gcloud configuration exists
showrun $PGCLOUD info > $TMPDIR/gcloud-info
if egrep -q "Account:.*None" $TMPDIR/gcloud-info
if grep -E -q "Account:.*None" $TMPDIR/gcloud-info
then
echo -e "\n${YEL}WARNING: Can't find gcloud configuration for 'oci-seccomp-bpf-hook', running init.${NOR}"
echo -e " ${RED}Please choose '#1: Re-initialize' and 'login' if asked.${NOR}"
@ -151,7 +151,7 @@ then
# Verify it worked (account name == someone@example.com)
$PGCLOUD info > $TMPDIR/gcloud-info-after-init
if egrep -q "Account:.*None" $TMPDIR/gcloud-info-after-init
if grep -E -q "Account:.*None" $TMPDIR/gcloud-info-after-init
then
echo -e "${RED}ERROR: Could not initialize 'oci-seccomp-bpf-hook' configuration in gcloud.${NOR}"
exit 5


@ -235,7 +235,7 @@ has_valid_aws_credentials() {
_awsoutput=$($AWSCLI configure list 2>&1 || true)
dbg "$AWSCLI configure list"
dbg "$_awsoutput"
if egrep -qx 'The config profile.+could not be found'<<<"$_awsoutput"; then
if grep -E -qx 'The config profile.+could not be found'<<<"$_awsoutput"; then
dbg "AWS config/credentials are missing"
return 1
elif [[ ! -r "$EC2_SSH_KEY" ]] || [[ ! -r "${EC2_SSH_KEY}.pub" ]]; then
@ -413,6 +413,9 @@ make_setup_tarball() {
status "Preparing setup tarball for instance."
req_env_vars DESTDIR _TMPDIR SRCDIR UPSTREAM_REPO
mkdir -p "${_TMPDIR}$DESTDIR"
# Mark the volume-mounted source repo as safe system-wide (w/in the container)
git config --global --add safe.directory "$SRCDIR"
git config --global --add safe.directory "$SRCDIR/.git"
# We have no way of knowing what state or configuration the user's
# local repository is in. Work from a local clone, so we can
# specify our own setup and prevent unexpected script breakage.


@ -2,9 +2,9 @@
# This script is intended to be executed as part of the container
# image build process. Using it under any other context is virtually
# guarantied to cause you much pain and suffering.
# guaranteed to cause you much pain and suffering.
set -eo pipefail
set -xeo pipefail
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
@ -14,6 +14,7 @@ source "$REPO_DIRPATH/lib.sh"
declare -a PKGS
PKGS=( \
aws-cli
coreutils
curl
gawk
@ -30,9 +31,7 @@ apk upgrade
apk add --no-cache "${PKGS[@]}"
rm -rf /var/cache/apk/*
pip3 install --upgrade pip
pip3 install --no-cache-dir awscli
aws --version # Confirm it actually runs
aws --version # Confirm that aws actually runs
install_automation_tooling cirrus-ci_env


@ -78,7 +78,7 @@ testf() {
echo "# $@" > /dev/stderr
fi
# Using egrep vs file safer than shell builtin test
# Using grep -E vs file safer than shell builtin test
local a_out_f
local a_exit=0
a_out_f=$(mktemp -p '' "tmp_${FUNCNAME[0]}_XXXXXXXX")
@ -109,7 +109,7 @@ testf() {
if ((TEST_DEBUG)); then
echo "Received $(wc -l $a_out_f | awk '{print $1}') output lines of $(wc -c $a_out_f | awk '{print $1}') bytes total"
fi
if egrep -q "$e_out_re" "${a_out_f}.oneline"; then
if grep -E -q "$e_out_re" "${a_out_f}.oneline"; then
_test_report "Command $1 exited as expected with expected output" "0" "$a_out_f"
else
_test_report "Expecting regex '$e_out_re' match to (whitespace-squashed) output" "1" "$a_out_f"


@ -67,7 +67,7 @@ else
fi
# Support both '.CHECKSUM' and '-CHECKSUM' at the end
filename=$(egrep -i -m 1 -- "$extension$" <<<"$by_arch" || true)
filename=$(grep -E -i -m 1 -- "$extension$" <<<"$by_arch" || true)
[[ -n "$filename" ]] || \
die "No '$extension' targets among $by_arch"


@ -4,7 +4,7 @@
# at the root of this repository. It should be built with
# the repository root as the context directory.
ARG CENTOS_STREAM_RELEASE=8
ARG CENTOS_STREAM_RELEASE=9
FROM quay.io/centos/centos:stream${CENTOS_STREAM_RELEASE}
ARG PACKER_VERSION
MAINTAINER https://github.com/containers/automation_images/image_builder


@ -45,16 +45,16 @@ provisioners:
- type: 'shell'
inline:
- 'set -e'
- 'mkdir -p /tmp/automation_images'
- 'mkdir -p /var/tmp/automation_images'
- type: 'file'
source: '{{ pwd }}/'
destination: '/tmp/automation_images/'
destination: '/var/tmp/automation_images/'
- type: 'shell'
inline:
- 'set -e'
- '/bin/bash /tmp/automation_images/image_builder/setup.sh'
- '/bin/bash /var/tmp/automation_images/image_builder/setup.sh'
post-processors:
# Must be double-nested to guarantee execution order


@ -1,16 +1,9 @@
[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el8-x86_64-stable
# Copy-pasted from https://cloud.google.com/sdk/docs/install#red-hatfedoracentos
[google-cloud-cli]
name=Google Cloud CLI
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el9-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


@ -23,6 +23,19 @@ source "$REPO_DIRPATH/lib.sh"
dnf update -y
dnf -y install epel-release
dnf install -y $(<"$INST_PKGS_FP")
# Allow erasing pre-installed curl-minimal package
dnf install -y --allowerasing $(<"$INST_PKGS_FP")
# As of 2024-04-24 installing the EPEL `awscli` package results in error:
# nothing provides python3.9dist(docutils) >= 0.10
# Grab the binary directly from amazon instead
# https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
AWSURL="https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
cd /tmp
curl --fail --location -O "${AWSURL}"
# There's little reason to see every single file extracted
unzip -q awscli*.zip
./aws/install -i /usr/local/share/aws-cli -b /usr/local/bin
rm -rf awscli*.zip ./aws
install_automation_tooling


@ -1,4 +1,3 @@
awscli
buildah
bash-completion
curl
@ -6,12 +5,13 @@ findutils
gawk
genisoimage
git
google-cloud-sdk
google-cloud-cli
jq
libvirt
libvirt-admin
libvirt-client
libvirt-daemon
libxcrypt-compat
make
openssh
openssl
@ -24,6 +24,7 @@ rng-tools
rootfiles
rsync
sed
skopeo
tar
unzip
util-linux


@ -11,13 +11,13 @@ set -eo pipefail
# shellcheck source=imgts/lib_entrypoint.sh
source /usr/local/bin/lib_entrypoint.sh
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX IMPORT_IMG_SFX
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
gcloud_init
# Set this to 1 for testing
DRY_RUN="${DRY_RUN:-0}"
OBSOLETE_LIMIT=10
OBSOLETE_LIMIT=50
THEFUTURE=$(date --date='+1 hour' +%s)
TOO_OLD_DAYS='30'
TOO_OLD_DESC="$TOO_OLD_DAYS days ago"
@ -40,8 +40,8 @@ $GCLOUD compute images list --format="$FORMAT" --filter="$FILTER" | \
count_image
reason=""
created_ymd=$(date --date=$creationTimestamp --iso-8601=date)
permanent=$(egrep --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
last_used=$(egrep --only-matching --max-count=1 'last-used=[[:digit:]]+' <<< $labels || true)
permanent=$(grep -E --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
last_used=$(grep -E --only-matching --max-count=1 'last-used=[[:digit:]]+' <<< $labels || true)
LABELSFX="labels: '$labels'"
@ -147,9 +147,9 @@ for (( i=nr_amis ; i ; i-- )); do
done
unset automation permanent reason
automation=$(egrep --only-matching --max-count=1 \
automation=$(grep -E --only-matching --max-count=1 \
--ignore-case 'automation=true' <<< $tags || true)
permanent=$(egrep --only-matching --max-count=1 \
permanent=$(grep -E --only-matching --max-count=1 \
--ignore-case 'permanent=true' <<< $tags || true)
if [[ -n "$permanent" ]]; then
@ -159,10 +159,10 @@ for (( i=nr_amis ; i ; i-- )); do
continue
fi
# Any image matching the currently in-use IMG_SFX or IMPORT_IMG_SFX
# Any image matching the currently in-use IMG_SFX
# must always be preserved. Values are defined in cirrus.yml
# shellcheck disable=SC2154
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
msg "Retaining current (latest) image $name | $tags"
continue
fi
@ -201,14 +201,15 @@ for (( i=nr_amis ; i ; i-- )); do
done
COUNT=$(<"$IMGCOUNT")
CANDIDATES=$(wc -l <$TOOBSOLETE)
msg "########################################################################"
msg "Obsoleting $OBSOLETE_LIMIT random images of $COUNT examined:"
msg "Obsoleting $OBSOLETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"
# Require a minimum number of images to exist. Also if there is some
# horrible scripting accident, this limits the blast-radius.
if [[ "$COUNT" -lt $OBSOLETE_LIMIT ]]
if [[ "$CANDIDATES" -lt $OBSOLETE_LIMIT ]]
then
die 0 "Safety-net Insufficient images ($COUNT) to process ($OBSOLETE_LIMIT required)"
die 0 "Safety-net Insufficient images ($CANDIDATES) to process ($OBSOLETE_LIMIT required)"
fi
# Don't let one bad apple ruin the whole bunch


@ -11,14 +11,14 @@ set -e
# shellcheck source=imgts/lib_entrypoint.sh
source /usr/local/bin/lib_entrypoint.sh
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX IMPORT_IMG_SFX
req_env_vars GCPJSON GCPNAME GCPPROJECT AWSINI IMG_SFX
gcloud_init
# Set this to 1 for testing
DRY_RUN="${DRY_RUN:-0}"
# For safety's sake limit nr deletions
DELETE_LIMIT=10
DELETE_LIMIT=50
ABOUTNOW=$(date --iso-8601=date) # precision is not needed for this use
# Format Ref: https://cloud.google.com/sdk/gcloud/reference/topic/formats
# Field list from `gcloud compute images list --limit=1 --format=text`
@ -39,7 +39,7 @@ $GCLOUD compute images list --show-deprecated \
do
count_image
reason=""
permanent=$(egrep --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
permanent=$(grep -E --only-matching --max-count=1 --ignore-case 'permanent=true' <<< $labels || true)
[[ -z "$permanent" ]] || \
die 1 "Refusing to delete a deprecated image labeled permanent=true. Please use gcloud utility to set image active, then research the cause of deprecation."
[[ "$dep_state" == "OBSOLETE" ]] || \
@ -48,7 +48,7 @@ $GCLOUD compute images list --show-deprecated \
# Any image matching the currently in-use IMG_SFX must always be preserved.
# Values are defined in cirrus.yml
# shellcheck disable=SC2154
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
msg " Skipping current (latest) image $name"
continue
fi
@ -91,9 +91,9 @@ for (( i=nr_amis ; i ; i-- )); do
warn 0 " EC2 AMI ID '$ami_id' is missing a 'Name' tag"
fi
# Any image matching the currently in-use IMG_SFX or IMPORT_IMG_SFX
# Any image matching the currently in-use IMG_SFX
# must always be preserved.
if [[ "$name" =~ $IMG_SFX ]] || [[ "$name" =~ $IMPORT_IMG_SFX ]]; then
if [[ "$name" =~ $IMG_SFX ]]; then
warn 0 " Retaining current (latest) image $name id $ami_id"
$AWS ec2 disable-image-deprecation --image-id "$ami_id" > /dev/null
continue
@@ -106,13 +106,14 @@ for (( i=nr_amis ; i ; i-- )); do
done
COUNT=$(<"$IMGCOUNT")
CANDIDATES=$(wc -l <$TODELETE)
msg "########################################################################"
msg "Deleting up to $DELETE_LIMIT random images of $COUNT examined:"
msg "Deleting up to $DELETE_LIMIT random image candidates ($CANDIDATES/$COUNT total):"
# Require a minimum number of images to exist
if [[ "$COUNT" -lt $DELETE_LIMIT ]]
if [[ "$CANDIDATES" -lt $DELETE_LIMIT ]]
then
die 0 "Safety-net Insufficient images ($COUNT) to process deletions ($DELETE_LIMIT required)"
die 0 "Safety-net: Insufficient images ($CANDIDATES) to process deletions ($DELETE_LIMIT required)"
fi
sort --random-sort $TODELETE | tail -$DELETE_LIMIT | \
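As a standalone illustration, the candidate-count safety-net and random-subset deletion in the hunk above can be sketched as follows. This is a minimal sketch with hypothetical stand-in data and no cloud calls:

```shell
# Refuse to act unless enough deletion candidates exist, then pick a
# random subset capped at DELETE_LIMIT (mirrors the prune script above).
DELETE_LIMIT=50
TODELETE=$(mktemp)
printf 'img-%d\n' $(seq 1 60) > "$TODELETE"   # 60 hypothetical candidates
CANDIDATES=$(wc -l < "$TODELETE")
if [ "$CANDIDATES" -lt "$DELETE_LIMIT" ]; then
    echo "Safety-net: only $CANDIDATES candidates (need $DELETE_LIMIT), doing nothing" >&2
else
    # GNU sort's --random-sort shuffles; tail caps the batch size.
    picked=$(sort --random-sort "$TODELETE" | tail -n "$DELETE_LIMIT")
    echo "$picked"
fi
rm -f "$TODELETE"
```

The safety-net intentionally compares against the *candidate* count, so a wildly over-matching filter (or an empty account) aborts rather than deleting everything.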


@@ -1,11 +1,11 @@
ARG CENTOS_STREAM_RELEASE=8
ARG CENTOS_STREAM_RELEASE=9
FROM quay.io/centos/centos:stream${CENTOS_STREAM_RELEASE}
# Only needed for installing build-time dependencies
COPY /imgts/google-cloud-sdk.repo /etc/yum.repos.d/google-cloud-sdk.repo
RUN dnf -y update && \
dnf -y install epel-release && \
dnf -y install python3 jq && \
dnf -y install python3 jq libxcrypt-compat && \
dnf -y install google-cloud-sdk && \
dnf clean all


@@ -1,19 +1,9 @@
# From https://github.com/GoogleCloudPlatform/compute-image-packages
[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el8-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
# Copy-pasted from https://cloud.google.com/sdk/docs/install#red-hatfedoracentos
# From https://cloud.google.com/sdk/docs/install#rpm
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
[google-cloud-cli]
name=Google Cloud CLI
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el9-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


@@ -5,7 +5,7 @@ set -e
RED="\e[1;31m"
YEL="\e[1;33m"
NOR="\e[0m"
SENTINEL="__unknown__" # default set in dockerfile
SENTINEL="__unknown__" # default set in Containerfile
# Disable all input prompts
# https://cloud.google.com/sdk/docs/scripting-gcloud
GCLOUD="gcloud --quiet"
@@ -55,7 +55,7 @@ gcloud_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .$(uuidgen)_XXXX.json)
TMPF=$(mktemp -p '' .XXXXXXXX)
trap "rm -f $TMPF &> /dev/null" EXIT
# Required variable must be set by caller
# shellcheck disable=SC2154
@@ -77,7 +77,7 @@ aws_init() {
then
TMPF="$1"
else
TMPF=$(mktemp -p '' .$(uuidgen)_XXXX.ini)
TMPF=$(mktemp -p '' .XXXXXXXX)
fi
# shellcheck disable=SC2154
echo "$AWSINI" > $TMPF


@@ -1,91 +0,0 @@
# Semi-manual image imports
## Overview
[Due to a bug in
packer](https://github.com/hashicorp/packer-plugin-amazon/issues/264) and
the sheer complexity of EC2 image imports, this process is impractical to
automate fully. It nearly always requires human supervision:
* There are multiple failure points, some of which are not well reported to
the user by the tools here or by AWS itself.
* The upload of the image to S3 can be unreliable, silently corrupting image
data.
* The import process is managed by a hosted AWS service, which can be slow
and occasionally unreliable.
* Failure often results in one or more leftover/incomplete resources
(s3 objects, EC2 snapshots, and AMIs)
## Requirements
* You're generally familiar with the (manual)
[EC2 snapshot import process](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html).
* You are in possession of an AWS EC2 account, with the [IAM policy
`vmimport`](https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html#vmimport-role) attached.
* Both "Access Key" and "Secret Access Key" values set in [a credentials
file](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
* Podman is installed and functional
* At least 10 GB of free space under `/tmp`, more if there are failures / multiple runs.
* *Network bandwidth sufficient for downloading and uploading many GBs of
data, potentially multiple times.*
## Process
Unless there is a problem with the current contents or age of the
imported images, this process does not need to be followed; the
normal PR-based build workflow can simply be used as usual.
This process is only needed to bring newly updated Fedora images into
AWS to build CI images from. For example, due to a new Beta or GA release.
***Note:*** Most of the steps below will happen within a container environment.
Any exceptions are noted in the individual steps below with *[HOST]*
1. *[HOST]* Edit the `Makefile`, update the Fedora release numbers
under the section
`##### Important image release and source details #####`
1. *[HOST]* Run `make IMPORT_IMG_SFX`
1. *[HOST]* Run
```bash
$ make image_builder_debug \
GAC_FILEPATH=/dev/null \
AWS_SHARED_CREDENTIALS_FILE=/path/to/.aws/credentials
```
1. Run `make import_images` (or `make --jobs=4 import_images` if you're brave).
1. The following steps should all occur successfully for each imported image.
1. Image is downloaded.
1. Image checksum is downloaded.
1. Image is verified against the checksum.
1. Image is converted to `VHDX` format.
1. The `VHDX` image is uploaded to the `packer-image-import` S3 bucket.
1. AWS `import-snapshot` process is started (uses AWS vmimport service)
1. Progress of snapshot import is monitored until completion or failure.
1. The imported snapshot is converted into an AMI
1. Essential tags are added to the AMI
1. An ASCII table of details about the new AMI is printed on success.
1. Assuming all image imports were successful, a final success message will be
printed by `make`.
## Failure responses
This list is not exhaustive, and only represents common/likely failures.
Normally there is no need to exit the build container.
* If image download fails, double-check any error output, run `make clean`
and retry.
* If checksum validation fails,
run `make clean`.
Retry `make import_images`.
* If the S3 upload fails,
confirm service availability, then
retry `make import_images`.
* If snapshot import fails with a `Disk validation failed` error,
retry `make import_images`.
* If snapshot import fails with a non-validation error,
find the snapshot in EC2 and delete it manually, then
retry `make import_images`.
* If AMI registration fails, remove any conflicting AMIs *and* snapshots.
Retry `make import_images`.
* If import was successful but AMI tagging failed, manually add
the required tags to the AMI: `automation=false` and `Name=<name>-i${IMG_SFX}`.
Where `<name>` is `fedora-aws` or `fedora-aws-arm64`.


@@ -1,45 +0,0 @@
#!/bin/bash
# This script is intended to be run by packer, usage under any other
# environment may behave badly. Its purpose is to download a VM
# image and a checksum file. Verify the image's checksum matches.
# If it does, convert the downloaded image into the format indicated
# by the first argument's `.extension`.
#
# The first argument is the file path and name for the output image,
# the second argument is the image download URL (ending in a filename).
# The third argument is the download URL for a checksum file containing
# details necessary to verify the file named in the image download URL.
set -eo pipefail
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIRPATH=$(dirname "$SCRIPT_FILEPATH")
REPO_DIRPATH=$(realpath "$SCRIPT_DIRPATH/../")
# shellcheck source=./lib.sh
source "$REPO_DIRPATH/lib.sh"
[[ "$#" -eq 3 ]] || \
die "Expected to be called with three arguments, not: $#"
# Packer needs to provide the desired filename as it's unable to parse
# a filename out of the URL or interpret output from this script.
dest_dirpath=$(dirname "$1")
dest_filename=$(basename "$1")
dest_format=$(cut -d. -f2<<<"$dest_filename")
src_url="$2"
src_filename=$(basename "$src_url")
cs_url="$3"
req_env_vars dest_dirpath dest_filename dest_format src_url src_filename cs_url
mkdir -p "$dest_dirpath"
cd "$dest_dirpath"
[[ -r "$src_filename" ]] || \
curl --fail --location -O "$src_url"
echo "Downloading & verifying checksums in $cs_url"
curl --fail --location "$cs_url" -o - | \
sha256sum --ignore-missing --check -
echo "Converting '$src_filename' to ($dest_format format) '$dest_filename'"
qemu-img convert "$src_filename" -O "$dest_format" "${dest_filename}"
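The `--ignore-missing` verification used above (the upstream checksum file lists every published image, but only the file actually present locally is checked) can be demonstrated in isolation. Filenames here are hypothetical:

```shell
# Demo of sha256sum --ignore-missing: entries for absent files are skipped
# instead of failing the whole check.
tmpd=$(mktemp -d)
cd "$tmpd"
echo "hello" > image.qcow2
sha256sum image.qcow2 > CHECKSUMS                   # entry for the file we have
printf '%064d  other-image.qcow2\n' 0 >> CHECKSUMS  # entry for a file we never downloaded
sha256sum --ignore-missing --check CHECKSUMS        # verifies only image.qcow2
```

Without `--ignore-missing`, the second entry would make the check fail with "No such file or directory" even though the downloaded image is intact.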


@@ -1,31 +0,0 @@
{
"builds": [
{
"name": "fedora-aws",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-i@@@IMPORT_IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
},
{
"name": "fedora-aws-arm64",
"builder_type": "hamsterwheel",
"build_time": 0,
"files": null,
"artifact_id": "",
"packer_run_uuid": null,
"custom_data": {
"IMG_SFX": "fedora-aws-arm64-i@@@IMPORT_IMG_SFX@@@",
"STAGE": "import",
"TASK": "@@@CIRRUS_TASK_ID@@@"
}
}
],
"last_run_uuid": "00000000-0000-0000-0000-000000000000"
}


@@ -1,18 +0,0 @@
{
"Name": "@@@NAME@@@-i@@@IMPORT_IMG_SFX@@@",
"VirtualizationType": "hvm",
"Architecture": "@@@ARCH@@@",
"EnaSupport": true,
"RootDeviceName": "/dev/sda1",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "@@@SNAPSHOT_ID@@@",
"VolumeSize": 10,
"VolumeType": "gp2"
}
}
]
}


@@ -1,84 +0,0 @@
#!/bin/bash
# This script is intended to be called by the main Makefile
# to wait for and confirm successful import and conversion
# of an uploaded image object from S3 into EC2. It expects
# the path to a file containing the import task ID as the
# first argument.
#
# If the import is successful, the snapshot ID is written
# to stdout. Otherwise, all output goes to stderr, and
# the script exits non-zero on failure or timeout. On
# failure, the file containing the import task ID will
# be removed.
set -eo pipefail
AWS="${AWS:-aws --output json --region us-east-1}"
# The import/conversion process can take a LONG time, have observed
# > 10 minutes on occasion. Normally, takes 2-5 minutes.
SLEEP_SECONDS=10
TIMEOUT_SECONDS=720
TASK_ID_FILE="$1"
tmpfile=$(mktemp -p '' tmp.$(basename ${BASH_SOURCE[0]}).XXXX)
die() { echo "ERROR: ${1:-No error message provided}" > /dev/stderr; exit 1; }
msg() { echo "${1:-No error message provided}" > /dev/stderr; }
unset snapshot_id
handle_exit() {
set +e
rm -f "$tmpfile" &> /dev/null
if [[ -n "$snapshot_id" ]]; then
msg "Success ($task_id): $snapshot_id"
echo -n "$snapshot_id" > /dev/stdout
return 0
fi
rm -f "$TASK_ID_FILE"
die "Timeout or other error reported while waiting for snapshot import"
}
trap handle_exit EXIT
[[ -n "$AWS_SHARED_CREDENTIALS_FILE" ]] || \
die "\$AWS_SHARED_CREDENTIALS_FILE must not be unset/empty."
[[ -r "$1" ]] || \
die "Can't read task id from file '$TASK_ID_FILE'"
task_id=$(<$TASK_ID_FILE)
msg "Waiting up to $TIMEOUT_SECONDS seconds for '$task_id' import. Checking progress every $SLEEP_SECONDS seconds."
for (( i=$TIMEOUT_SECONDS ; i ; i=i-$SLEEP_SECONDS )); do \
# Sleep first, to give AWS time to start meaningful work.
sleep ${SLEEP_SECONDS}s
$AWS ec2 describe-import-snapshot-tasks \
--import-task-ids $task_id > $tmpfile
if ! st_msg=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.StatusMessage?' $tmpfile) && \
[[ -n $st_msg ]] && \
[[ ! "$st_msg" =~ null ]]
then
die "Unexpected result: $st_msg"
elif grep -Eiq '(error)|(fail)' <<<"$st_msg"; then
die "$task_id: $st_msg"
fi
msg "$task_id: $st_msg (${i}s remaining)"
# Why AWS you use StatusMessage && Status? Bad names! WHY!?!?!?!
if status=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.Status?' $tmpfile) && \
[[ "$status" == "completed" ]] && \
snapshot_id=$(jq -r -e '.ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId?' $tmpfile)
then
msg "Import complete to: $snapshot_id"
break
else
unset snapshot_id
fi
done
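The countdown poll loop in this script can be sketched in isolation. The status check below is a hypothetical stand-in for the real `aws ec2 describe-import-snapshot-tasks` + `jq` pipeline:

```shell
# Poll with a hard timeout: count down from TIMEOUT_SECONDS in
# SLEEP_SECONDS steps, sleeping first to let the service start work.
SLEEP_SECONDS=1
TIMEOUT_SECONDS=5
checks=0
unset snapshot_id
for (( i=TIMEOUT_SECONDS ; i ; i=i-SLEEP_SECONDS )); do
    sleep "$SLEEP_SECONDS"
    checks=$((checks + 1))
    # Hypothetical stand-in: pretend the import completes on the 2nd check.
    status=$( [ "$checks" -ge 2 ] && echo completed || echo active )
    if [ "$status" = "completed" ]; then
        snapshot_id="snap-0123456789abcdef0"   # stand-in snapshot ID
        break
    fi
    echo "still waiting (${i}s remaining)" >&2
done
[ -n "$snapshot_id" ] && echo "Import complete to: $snapshot_id"
```

If the loop runs out, `snapshot_id` stays unset, which the real script's EXIT trap turns into a timeout error.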

lib.sh

@@ -19,9 +19,8 @@ OS_REL_VER="$OS_RELEASE_ID-$OS_RELEASE_VER"
# This location is checked by automation in other repos, please do not change.
PACKAGE_DOWNLOAD_DIR=/var/cache/download
INSTALL_AUTOMATION_VERSION="4.2.1"
PUSH_LATEST="${PUSH_LATEST:-0}"
# N/B: This is managed by renovate
INSTALL_AUTOMATION_VERSION="5.0.1"
# Mask secrets in show_env_vars() from automation library
SECRET_ENV_RE='(^PATH$)|(^BASH_FUNC)|(^_.*)|(.*PASSWORD.*)|(.*TOKEN.*)|(.*SECRET.*)|(.*ACCOUNT.*)|(.+_JSON)|(AWS.+)|(.*SSH.*)|(.*GCP.*)'
@@ -49,12 +48,20 @@ if [[ "$UID" -ne 0 ]]; then
fi
install_automation_tooling() {
local version_arg
version_arg="$INSTALL_AUTOMATION_VERSION"
if [[ "$1" == "latest" ]]; then
version_arg="latest"
shift
fi
# This script supports installing all current and previous versions
local installer_url="https://raw.githubusercontent.com/containers/automation/master/bin/install_automation.sh"
curl --silent --show-error --location \
--url "$installer_url" | \
$SUDO env INSTALL_PREFIX=/usr/share /bin/bash -s - \
"$INSTALL_AUTOMATION_VERSION" "$@"
"$version_arg" "$@"
# This defines AUTOMATION_LIB_PATH
source /usr/share/automation/environment
#shellcheck disable=SC1090
@@ -279,6 +286,16 @@ unmanaged-devices=interface-name:*podman*;interface-name:veth*
EOF
}
# Create a local registry, seed it with remote images
initialize_local_cache_registry() {
msg "Initializing local cache registry"
#shellcheck disable=SC2154
$SUDO ${SCRIPT_DIRPATH}/local-cache-registry initialize
msg "du -sh /var/cache/local-registry"
du -sh /var/cache/local-registry
}
common_finalize() {
set -x # extra detail is no-longer necessary
cd /
@@ -291,7 +308,7 @@ common_finalize() {
$SUDO rm -rf /var/lib/cloud/instanc*
$SUDO rm -rf /root/.ssh/*
$SUDO rm -rf /etc/ssh/*key*
$SUDO rm -rf /tmp/*
$SUDO rm -rf /tmp/* /var/tmp/automation_images
$SUDO rm -rf /tmp/.??*
echo -n "" | $SUDO tee /etc/machine-id
$SUDO sync
@@ -313,7 +330,10 @@ rh_finalize() {
# Packaging cache is preserved across builds of container images
$SUDO rm -f /etc/udev/rules.d/*-persistent-*.rules
$SUDO touch /.unconfigured # force firstboot to run
common_finalize
echo
echo "# PACKAGE LIST"
rpm -qa | sort
}
# Called during VM Image setup, not intended for general use.
@@ -329,7 +349,9 @@ debian_finalize() {
fi
set -x
# Packaging cache is preserved across builds of container images
common_finalize
# pipe-cat is not a NOP! It prevents using $PAGER and then hanging
echo "# PACKAGE LIST"
dpkg -l | cat
}
finalize() {
@@ -342,4 +364,6 @@ finalize() {
else
die "Unknown/Unsupported Distro '$OS_RELEASE_ID'"
fi
common_finalize
}


@@ -15,6 +15,16 @@ ARG PACKER_BUILD_NAME=
ENV AI_PATH=/usr/src/automation_images \
CONTAINER=1
ARG IMG_SFX=
ARG CIRRUS_TASK_ID=
ARG GIT_HEAD=
# Ref: https://github.com/opencontainers/image-spec/blob/main/annotations.md
LABEL org.opencontainers.image.url="https://cirrus-ci.com/task/${CIRRUS_TASK_ID}"
LABEL org.opencontainers.image.documentation="https://github.com/containers/automation_images/blob/${GIT_HEAD}/README.md#container-images-overview-step-2"
LABEL org.opencontainers.image.source="https://github.com/containers/automation_images/blob/${GIT_HEAD}/podman/Containerfile"
LABEL org.opencontainers.image.version="${IMG_SFX}"
LABEL org.opencontainers.image.revision="${GIT_HEAD}"
# Only add needed files to avoid invalidating build cache
ADD /lib.sh "$AI_PATH/"
ADD /podman/* "$AI_PATH/podman/"


@@ -12,7 +12,6 @@ RUN dnf -y update && \
dnf clean all
ENV REG_REPO="https://github.com/docker/distribution.git" \
REG_COMMIT="b5ca020cfbe998e5af3457fda087444cf5116496" \
REG_COMMIT_SCHEMA1="ec87e9b6971d831f0eff752ddb54fb64693e51cd" \
OSO_REPO="https://github.com/openshift/origin.git" \
OSO_TAG="v1.5.0-alpha.3"


@@ -9,7 +9,6 @@ set -e
declare -a req_vars
req_vars=(\
REG_REPO
REG_COMMIT
REG_COMMIT_SCHEMA1
OSO_REPO
OSO_TAG
@@ -43,12 +42,6 @@ cd "$REG_GOSRC"
(
# This is required to be set like this by the build system
export GOPATH="$PWD/Godeps/_workspace:$GOPATH"
# This comes in from the Containerfile
# shellcheck disable=SC2154
git checkout -q "$REG_COMMIT"
go build -o /usr/local/bin/registry-v2 \
github.com/docker/distribution/cmd/registry
# This comes in from the Containerfile
# shellcheck disable=SC2154
git checkout -q "$REG_COMMIT_SCHEMA1"
@@ -68,6 +61,10 @@ sed -i -e 's/\[\[ "\${go_version\[2]}" < "go1.5" ]]/false/' ./hack/common.sh
# 8 characters long. This can happen if/when systemd-resolved adds 'trust-ad'.
sed -i '/== "attempts:"/s/ 8 / 9 /' vendor/github.com/miekg/dns/clientconfig.go
# Backport https://github.com/ugorji/go/commit/8286c2dc986535d23e3fad8d3e816b9dd1e5aea6
# Go ≥ 1.22 panics with a base64 encoding using duplicated characters.
sed -i -e 's,"encoding/base64","encoding/base32", ; s,base64.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789__"),base32.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef"),' vendor/github.com/ugorji/go/codec/gen.go
make build
make all WHAT=cmd/dockerregistry
cp -a ./_output/local/bin/linux/*/* /usr/local/bin/


@@ -12,7 +12,7 @@ if [[ "$UID" -ne 0 ]]; then
export SUDO="sudo env DEBIAN_FRONTEND=noninteractive"
fi
EVIL_UNITS="cron crond atd apt-daily-upgrade apt-daily fstrim motd-news systemd-tmpfiles-clean update-notifier-download mlocate-updatedb"
EVIL_UNITS="cron crond atd apt-daily-upgrade apt-daily fstrim motd-news systemd-tmpfiles-clean update-notifier-download mlocate-updatedb plocate-updatedb"
if [[ "$1" == "--list" ]]
then


@@ -0,0 +1,4 @@
<powershell>
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
</powershell>

win_images/win-lib.ps1 Normal file

@@ -0,0 +1,50 @@
$ErrorActionPreference = "stop"
Set-ExecutionPolicy Bypass -Scope Process -Force
function Check-Exit {
param(
[parameter(ValueFromRemainingArguments = $true)]
[string[]] $codes = @(0)
)
if ($LASTEXITCODE -eq $null) {
return
}
foreach ($code in $codes) {
if ($LASTEXITCODE -eq $code) {
return
}
}
Exit $LASTEXITCODE
}
# Retry installation on failure or 5-minute timeout (for all packages)
function retryInstall {
param([Parameter(ValueFromRemainingArguments)] [string[]] $pkgs)
foreach ($pkg in $pkgs) {
for ($retries = 0; ; $retries++) {
if ($retries -gt 5) {
throw "Could not install package $pkg"
}
if ($pkg -match '(.[^\@]+)@(.+)') {
$pkg = @("--version", $Matches.2, $Matches.1)
}
# Chocolatey best practices as of 2024-04:
# https://docs.chocolatey.org/en-us/choco/commands/#scripting-integration-best-practices-style-guide
# Some of those are suboptimal, e.g., using "upgrade" to mean "install",
# hardcoding a specific API URL. We choose to reject those.
choco install $pkg -y --allow-downgrade --execution-timeout=300
if ($LASTEXITCODE -eq 0) {
break
}
Write-Host "Error installing, waiting before retry..."
Start-Sleep -Seconds 6
}
}
}
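A shell analogue of the `retryInstall` pattern above (bounded retries with a pause between attempts); `fake_install` is a hypothetical stand-in for the `choco install` call:

```shell
# Stand-in installer (hypothetical): fails twice, then succeeds.
attempts=0
fake_install() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Retry a flaky install up to 6 times, pausing between attempts,
# mirroring the retryInstall loop above.
retry_install() {
    local pkg=$1 try
    for (( try = 0; try <= 5; try++ )); do
        if fake_install "$pkg"; then
            return 0
        fi
        echo "Error installing $pkg, waiting before retry..." >&2
        sleep 1
    done
    echo "Could not install package $pkg" >&2
    return 1
}

retry_install demo-package && echo "installed after $attempts attempts"
```

Retrying per package (rather than per batch) keeps one flaky package from forcing reinstallation of everything else.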


@@ -17,24 +17,29 @@ builders:
most_recent: true
owners:
- amazon
# While this image should run on metal, we can build it on smaller/cheaper systems
# While this image should run on metal, we can build it on smaller/cheaper systems
instance_type: t3.large
force_deregister: true # Remove AMI with same name if exists
force_delete_snapshot: true # Also remove snapshots of force-removed AMI
# Note that we do not set shutdown_behavior to terminate, as a clean shutdown is required
# for windows provisioning to complete successfully.
communicator: winrm
winrm_username: Administrator # AWS provisions Administrator, unlike GCE
winrm_username: Administrator # AWS provisions Administrator, unlike GCE
winrm_insecure: true
winrm_use_ssl: true
winrm_timeout: 25m
# Script that runs on server start, needed to prep and enable winrm
user_data_file: '{{template_dir}}/bootstrap.ps1'
user_data_file: '{{template_dir}}/bootstrap.ps1'
# Required for network access, must be the 'default' group used by Cirrus-CI
security_group_id: "sg-042c75677872ef81c"
ami_name: &ami_name '{{build_name}}-c{{user `IMG_SFX`}}'
ami_description: 'Built in https://cirrus-ci.com/task/{{user `CIRRUS_TASK_ID`}}'
launch_block_device_mappings:
- device_name: '/dev/sda1'
volume_size: 200
volume_type: 'gp3'
iops: 6000
delete_on_termination: true
# These are critical and used by security-policy to enforce instance launch limits.
tags: &awstags
# EC2 expects "Name" to be capitalized
@@ -53,18 +58,22 @@ builders:
provisioners:
- type: powershell
script: '{{template_dir}}/win_packaging.ps1'
inline:
- '$ErrorActionPreference = "stop"'
- 'New-Item -Path "c:\" -Name "temp" -ItemType "directory" -Force'
- 'New-Item -Path "c:\temp" -Name "automation_images" -ItemType "directory" -Force'
- type: 'file'
source: '{{ pwd }}/'
destination: "c:\\temp\\automation_images\\"
- type: powershell
inline:
- 'c:\temp\automation_images\win_images\win_packaging.ps1'
# Several installed items require a reboot; do that now in case it would
# cause a problem with final image preparations.
- type: windows-restart
- type: powershell
inline:
# Disable WinRM as a security precaution (cirrus launches an agent from user-data, so we don't need it)
- Set-Service winrm -StartupType Disabled
# Also disable RDP (can be enabled via user-data manually)
- Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 1
- Disable-NetFirewallRule -DisplayGroup "Remote Desktop"
# Setup Autologon and reset, must be last, due to pw change
- type: powershell
script: '{{template_dir}}/auto_logon.ps1'
- 'c:\temp\automation_images\win_images\win_finalization.ps1'
post-processors:
@@ -75,4 +84,3 @@ post-processors:
IMG_SFX: '{{ user `IMG_SFX` }}'
STAGE: cache
TASK: '{{user `CIRRUS_TASK_ID`}}'


@@ -1,6 +1,13 @@
$ErrorActionPreference = "stop"
$username = "Administrator"
. $PSScriptRoot\win-lib.ps1
# Disable WinRM as a security precaution (cirrus launches an agent from user-data, so we don't need it)
Set-Service winrm -StartupType Disabled
# Also disable RDP (can be enabled via user-data manually)
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 1
Disable-NetFirewallRule -DisplayGroup "Remote Desktop"
$username = "Administrator"
# Temporary random password to allow autologon that will be replaced
# before the instance is put into service.
$syms = [char[]]([char]'a'..[char]'z' `
@@ -15,8 +22,8 @@ $encPass = ConvertTo-SecureString $password -AsPlainText -Force
Set-LocalUser -Name $username -Password $encPass
$winLogon= "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty $winLogon "AutoAdminLogon" -Value "1" -type String
Set-ItemProperty $winLogon "DefaultUsername" -Value $username -type String
Set-ItemProperty $winLogon "AutoAdminLogon" -Value "1" -type String
Set-ItemProperty $winLogon "DefaultUsername" -Value $username -type String
Set-ItemProperty $winLogon "DefaultPassword" -Value $password -type String
# Lock the screen immediately, even though it's unattended, just in case
@@ -28,6 +35,6 @@ Set-ItemProperty `
# NOTE: For now, we do not run sysprep, since initialization with reboots
# are exceptionally slow on metal nodes, which these target to run. This
# will lead to a duplicate machine id, which is not ideal, but allows
# instances to start instantly. So, instead of sysprep, trigger a reset so
# that the admin password reset, and activation rerun on boot
# instances to start quickly. So, instead of sysprep, trigger a reset so
# that the admin password reset, and activation rerun on boot.
& 'C:\Program Files\Amazon\EC2Launch\ec2launch' reset --block


@@ -1,36 +1,36 @@
function CheckExit {
param(
[parameter(ValueFromRemainingArguments = $true)]
[string[]] $codes = @(0)
)
if ($LASTEXITCODE -eq $null) {
return
}
foreach ($code in $codes) {
if ($LASTEXITCODE -eq $code) {
return
}
}
Exit $LASTEXITCODE
}
. $PSScriptRoot\win-lib.ps1
# Disables runtime process virus scanning, which is not necessary
Set-MpPreference -DisableRealtimeMonitoring 1
$ErrorActionPreference = "stop"
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
# Install Git, BZ2 archive support, Go, and the MingW (GCC for Win) compiler for CGO support
# Add pstools to work around sess 0 WSL bug
choco install -y git mingw archiver psexec; CheckExit
choco install golang --version 1.19.2 -y; CheckExit
# Install basic required tooling.
# psexec needed to work around session 0 WSL bug
retryInstall 7zip git archiver psexec golang mingw StrawberryPerl zstandard; Check-Exit
# Update service is required for dotnet
Set-Service -Name wuauserv -StartupType "Manual"; Check-Exit
# Install dotnet as that's the best way to install WiX 4+
# Choco does not support installing anything over WiX 3.14
Invoke-WebRequest -Uri https://dotnet.microsoft.com/download/dotnet/scripts/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1
.\dotnet-install.ps1 -InstallDir 'C:\Program Files\dotnet'
# Configure NuGet sources for dotnet to fetch wix (and other packages) from
& 'C:\Program Files\dotnet\dotnet.exe' nuget add source https://api.nuget.org/v3/index.json -n nuget.org
# Install wix
& 'C:\Program Files\dotnet\dotnet.exe' tool install --global wix
# Install Hyper-V
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients -All -NoRestart
# Install WSL, and capture text output which is not normally visible
$x = wsl --install; CheckExit 0 1 # wsl returns 1 on reboot required
Write-Output $x
$x = wsl --install; Check-Exit 0 1 # wsl returns 1 on reboot required
Write-Host $x
Exit 0