Compare commits


505 Commits
v1.0.1 ... main

Author SHA1 Message Date
Podman Bot cc7d9b2a26 Add certificate for mohanboddu from containers/automation_sandbox (PR #122)
Signed-off-by: Podman Bot <podman.bot@example.com>
2025-08-12 12:37:41 -04:00
Podman Bot 0af8676cb8 Add certificate for mohanboddu from containers/automation_sandbox (PR #122)
Signed-off-by: Podman Bot <podman.bot@example.com>
2025-08-12 12:35:04 -04:00
Matt Heon f55fe34cfb
Merge pull request #251 from mohanboddu/fix-html
Fixing the PR link certificate_generator.html
2025-08-06 16:12:04 -04:00
Mohan Boddu 987689cc34 Fixing the PR link certificate_generator.html
Signed-off-by: Mohan Boddu <mboddu@redhat.com>
2025-08-06 15:40:43 -04:00
Neil Smith cb12019fba
Add certificate generator for first-time contributors (#249)
Add certificate generator for first-time contributors

This adds a web-based certificate generator to celebrate first-time
contributors to containers organization projects. The generator includes:

- Interactive HTML interface for creating certificates
- Customizable certificate template with Podman branding
- Real-time preview and HTML download functionality

The certificates can be used to recognize and celebrate community
members who make their first contribution to any containers project.
2025-07-17 17:48:30 +02:00
Paul Holzinger e1231d1520
Merge pull request #248 from containers/renovate/urllib3-2.x
chore(deps): update dependency urllib3 to <2.5.1
2025-06-18 19:17:55 +02:00
renovate[bot] b0959cb192
chore(deps): update dependency urllib3 to <2.5.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-06-18 17:01:57 +00:00
Paul Holzinger 7f213bf685
Merge pull request #247 from Luap99/macos-go
Revert "mac_pw_pool: hotfix go install"
2025-06-05 21:09:23 +02:00
Paul Holzinger 79e68ef97c
Revert "mac_pw_pool: hotfix go install"
This reverts commit d805c0c822.

Podman should build on 5.5 and main again due
db65baaa21

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-06-05 18:17:47 +02:00
Paul Holzinger aba42ca8ff
Merge pull request #246 from Luap99/macos-go
mac_pw_pool: hotfix go install
2025-05-07 20:15:05 +02:00
Paul Holzinger d805c0c822
mac_pw_pool: hotfix go install
We have to pin back the go version as it contains a regression that
causes podman compile failures.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-05-07 18:47:19 +02:00
Paul Holzinger e83dcfcabf
Merge pull request #243 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.6.2
2025-04-15 17:34:35 +02:00
renovate[bot] 7f13540563
[skip-ci] Update actions/upload-artifact action to v4.6.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-15 15:32:37 +00:00
Paul Holzinger 50c43af45e
Merge pull request #237 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.6.0
2025-04-15 17:32:13 +02:00
Paul Holzinger cd259102d4
Merge pull request #240 from containers/renovate/urllib3-2.x
chore(deps): update dependency urllib3 to <2.4.1
2025-04-15 17:31:51 +02:00
renovate[bot] 051f0951f1
chore(deps): update dependency urllib3 to <2.4.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-15 14:47:01 +00:00
Paul Holzinger e8a30ae1ea
Merge pull request #242 from Luap99/comment-action
github: fix wrong action call
2025-04-15 16:43:39 +02:00
Paul Holzinger a4888b2ce9
github: fix wrong action call
Missed one place where I had to replace the arguments.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 16:31:54 +02:00
Paul Holzinger 8faa8b216c
Merge pull request #241 from Luap99/comment-action
github: use thollander/actions-comment-pull-request
2025-04-15 15:45:36 +02:00
Paul Holzinger fd6f70913e
action: debug retropective
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 15:33:19 +02:00
Paul Holzinger f3777be65b
github: use thollander/actions-comment-pull-request
jungwinter/comment doesn't seem very much maintained and makes use of
the deprecated set-output[1].

[1] https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-04-15 15:04:21 +02:00
Paul Holzinger 16f757f699
Merge pull request #239 from Luap99/go
renovate: update to go 1.23
2025-03-13 11:23:52 +01:00
Paul Holzinger 26ab1b7744
renovate: update to go 1.23
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-03-12 17:33:30 +01:00
Paul Holzinger 994ba027c2
Merge pull request #238 from Luap99/zstd
mac_pw_pool: add zstd
2025-02-18 15:23:54 +01:00
Paul Holzinger fa70d9e3af
ci: remove python3-flake8-docstrings
This package no longer exists in fedora.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-18 15:07:33 +01:00
Paul Holzinger 3e2662f02b
mac_pw_pool: add zstd
The new macos 15 base image does not contain it, and the repo_prep in
podman is failing because we need it to compress the tar.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2025-02-18 14:55:33 +01:00
renovate[bot] 0f5226e050
[skip-ci] Update actions/upload-artifact action to v4.6.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-01-10 17:47:14 +00:00
Paul Holzinger 24800f0f77
Merge pull request #236 from containers/renovate/urllib3-2.x
chore(deps): update dependency urllib3 to <2.3.1
2025-01-06 18:47:10 +01:00
Paul Holzinger 5ae1659c96
Merge pull request #235 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.5.0
2025-01-06 18:46:47 +01:00
renovate[bot] 3c034bcadc
chore(deps): update dependency urllib3 to <2.3.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-22 10:19:41 +00:00
renovate[bot] 7067540a52
[skip-ci] Update actions/upload-artifact action to v4.5.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-17 22:43:35 +00:00
Paul Holzinger e3c74c2aa4
Merge pull request #234 from Luap99/renovate
renovate: remove edsantiago as default reviewer
2024-11-26 16:02:45 +01:00
Paul Holzinger 8c5bb22af7
Merge pull request #233 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.4.3
2024-11-26 15:54:18 +01:00
Paul Holzinger 3b33514d26
Merge pull request #231 from containers/renovate/urllib3-2.x
chore(deps): update dependency urllib3 to <2.2.4
2024-11-26 15:53:56 +01:00
Paul Holzinger 973aa8c2fe
renovate: remove edsantiago as default reviewer
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-11-26 15:50:34 +01:00
Ed Santiago 4d23dd41f0
Merge pull request #232 from Luap99/image-update-reviewers
renovate: update image update PR reviewers
2024-10-11 11:54:49 -06:00
renovate[bot] b9186a2b38
[skip-ci] Update actions/upload-artifact action to v4.4.3
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-11 12:24:16 +00:00
Paul Holzinger 8b1776b799
renovate: update image update PR reviewers
Chris no longer works on our team and has no time to review them. Add
Ed and myself as reviewers for these PRs (we already reviewed them
anyway) so we get a notification for all PRs and do not miss them.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-11 11:44:18 +02:00
Paul Holzinger 8218f24c4d
Merge pull request #226 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.4.0
2024-10-11 11:38:02 +02:00
Paul Holzinger 8f39f4b1af
Merge pull request #230 from containers/renovate/ubuntu-24.x
chore(deps): update dependency ubuntu to v24
2024-10-11 11:37:35 +02:00
renovate[bot] 99d1c2662e
chore(deps): update dependency urllib3 to <2.2.4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-11 09:36:37 +00:00
Paul Holzinger 32b94cedea
Merge pull request #228 from containers/renovate/urllib3-2.x
chore(deps): update dependency urllib3 to v2
2024-10-11 11:36:21 +02:00
Paul Holzinger 5ad53bd723
Merge pull request #229 from cevich/rm_renovate_cevich
Remove renovate cevich auto-assign
2024-10-11 11:35:35 +02:00
renovate[bot] 24a62a63d3
chore(deps): update dependency ubuntu to v24
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-09-26 19:23:47 +00:00
Chris Evich ab1f7624a0
Remove renovate cevich auto-assign
Previously renovate auto-assigned all updates in this repo to cevich
who's no longer on the team.  Fix this, and update the container FQIN
comment to a non-docker-hub location (to avoid rate-limit restrictions).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-09-05 17:28:17 -04:00
renovate[bot] 35a29e5dfe
chore(deps): update dependency urllib3 to v2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-09-05 19:11:08 +00:00
Ed Santiago 657247095b
Merge pull request #227 from Luap99/go-1.22
renovate: update to go 1.22
2024-09-05 13:10:48 -06:00
Paul Holzinger cc18e81abf
fix skopeo exit code
A change[1] in skopeo made it exit with 2 if the image is not found, so
fix the test assumption here.

[1] 16b6f0ade5

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-04 18:49:58 +02:00
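The fix above amounts to updating the expected exit code in the test. A minimal sketch of such an assertion, with a hypothetical function name (not the actual test code), might look like:

```shell
# Hedged sketch: newer skopeo exits 2 rather than 1 when an image does
# not exist, so the test assertion must accept the new code.
expect_image_missing() {
    local rc=$1
    # Exit code 2 now means "image not found"
    [ "$rc" -eq 2 ]
}
```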
Paul Holzinger d2e5f7815e
remove broken timebomb test
This test doesn't work when run before 1pm UTC, only after. We could add
+24 hours, but it is not clear what the purpose of this function is, so
just remove it. We know that timebomb seems to work well enough in
practice and regressions are unlikely.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-04 15:08:44 +02:00
Paul Holzinger 48c9554a6c
renovate: update to go 1.22
We have pinned the renovate go version to the lowest version we support;
otherwise it will create PRs that update to a new go version, which we
always want to handle manually as those usually include more changes.
While this doesn't prevent renovate from creating such PRs, they always
fail since it cannot update to a newer go version, so it is clear to
reviewers what is going on.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-04 14:39:57 +02:00
Paul Holzinger 0a0bc4f395
renovate: remove CI:DOCS from linter updates
Podman no longer uses CI:DOCS as it skips based on source changes.
As such, this title adds nothing besides confusion about why it is
there.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-09-04 14:30:52 +02:00
renovate[bot] b8969128d0
[skip-ci] Update actions/upload-artifact action to v4.4.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-30 19:20:40 +00:00
Chris Evich 4739c8921c
Merge pull request #222 from cevich/deduplicate_pw_pool_docs
De-duplicate PW pool readme
2024-08-12 14:02:37 -04:00
Chris Evich 34ea41cc7f
De-duplicate PW pool readme
Several sections and individual items were duplicated or did not belong
in this file.  They've been moved to the private google-doc linked
in the "Prerequisites" section and included in the monitoring
website `index.html`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-12 13:32:45 -04:00
Ed Santiago ee5fba7664
Merge pull request #221 from cevich/mac_pw_pool_worker_docs
Add debugging section to PW pool docs
2024-08-12 06:33:09 -06:00
Chris Evich 34e2995cd7
Add debugging section to PW pool docs
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-08 14:43:46 -04:00
Chris Evich 51a2c1fbed
Merge pull request #217 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.6
2024-08-06 15:46:07 -04:00
Chris Evich 718ecdb04e
Merge pull request #220 from cevich/mac_pw_pool_fix_max_tasks
Fix possible max-tasks PW pool cascade failure
2024-08-06 14:17:45 -04:00
Chris Evich 7ae84eb74c
Fix possible max-tasks PW pool cascade failure
For integrity and safety reasons, there are multiple guardrails in place
to limit the potential damage a rogue/broken/misconfigured worker
instance may cause. One of these restrictions is a maximum limit on the
number of tasks that a worker may execute. However, if the pool is
experiencing extraordinary utilization, it's possible that a large number
of workers could encounter this limit at/near the same time. Assuming the
pool load remains high, this will then further shorten the lifetime of
the remaining online instances.

Also:

* Double the limit on allowed tasks (12 was too small based on heavy
  utilization).
* Double the allowed setup time to account for network slowdowns.
* Show both the soft and hard uptime limits for each worker.
* Issue warning if worker exceeds soft uptime limit.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-06 14:09:59 -04:00
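The soft/hard uptime reporting described in the last two bullets can be sketched as follows; the variable names and thresholds are illustrative, not the actual pool scripts:

```shell
# Hedged sketch of a soft/hard uptime check: warn past the soft limit,
# terminate past the hard limit.  Values are hypothetical.
PW_SOFT_HOURS=22
PW_HARD_HOURS=24
check_uptime() {
    local hours=$1
    if [ "$hours" -ge "$PW_HARD_HOURS" ]; then
        echo "terminate"
    elif [ "$hours" -ge "$PW_SOFT_HOURS" ]; then
        echo "warn: worker exceeds soft uptime limit"
    else
        echo "ok"
    fi
}
```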
renovate[bot] d81a56f85b
[skip-ci] Update actions/upload-artifact action to v4.3.6
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-06 15:23:33 +00:00
Chris Evich 27f6f9363f
Merge pull request #216 from cevich/mac_pw_pool_fix_shutdown_timeout
Fix instance shutdown never timing out
2024-08-06 11:23:14 -04:00
Chris Evich 1b35e0e24d
Fix instance shutdown never timing out
To prevent terminating an instance while a CI task is running, the
shutdown script checks for the existence of an agent process.
Previously a timeout for this delay was calculated and stored, but it
was never actually used.  Fix this by aborting the delay after the
timeout has expired.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 17:00:18 -04:00
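The bounded delay described above can be sketched like this; the process name, function names, and one-second polling interval are all illustrative assumptions, not the actual shutdown script:

```shell
# Hedged sketch: wait for the CI agent process to exit, but abort the
# delay once the timeout expires instead of waiting forever.
agent_running() {
    pgrep -f cirrus-agent >/dev/null 2>&1
}

wait_for_agent() {
    local timeout=$1 waited=0
    while agent_running; do
        if [ "$waited" -ge "$timeout" ]; then
            echo "timeout expired, shutting down anyway"
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    echo "agent exited"
}
```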
Chris Evich 2c1ee35362
Merge pull request #211 from cevich/mac_pw_pool_fix_force_stagger
Fix extending PW beyond PW_MAX_HOURS
2024-08-05 16:59:30 -04:00
Chris Evich 447f70e9c7
Fix extending PW beyond PW_MAX_HOURS
Previously when using the `--force` option to `SetupInstances.sh` each
instance created would have its lifetime extended by
`$CREATE_STAGGER_HOURS`. For any instance beyond the first, that will
immediately put it beyond the `$PW_MAX_HOURS` hard-limit.  Eventually
this will result in multiple instances going offline at the same time,
which is undesirable.

Fix this by staggering instance lifetimes with decreasing values instead.
Include extra checks to make sure the value remains positive and sane.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 16:47:31 -04:00
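Staggering with decreasing values, as the fix describes, can be sketched as follows; the variable names mirror the commit message but the arithmetic and values are illustrative:

```shell
# Hedged sketch: subtract the stagger per instance instead of adding it,
# so no lifetime ever exceeds the PW_MAX_HOURS hard limit, and keep the
# result positive as a sanity check.
PW_MAX_HOURS=24
CREATE_STAGGER_HOURS=2
stagger_lifetime() {
    local index=$1   # 0-based instance index
    local hours=$((PW_MAX_HOURS - index * CREATE_STAGGER_HOURS))
    # Sanity check: keep the value positive
    [ "$hours" -gt 0 ] || hours=$CREATE_STAGGER_HOURS
    echo "$hours"
}
```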
Chris Evich 1809c5b6c0
Merge pull request #212 from cevich/mac_pw_pool_confirm_ssh_agent
Fail loudly when ssh-agent not running
2024-08-05 15:40:53 -04:00
Chris Evich c552d5bba1
Fail loudly when ssh-agent not running
The agent is required to keep the public key secure since the local and
remote users have sudo access.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 15:34:16 -04:00
Chris Evich 3568a50f52
Merge pull request #213 from cevich/mac_pw_pool_fix_pub_dns
Fix 'Expecting pub_dns to be set/non-empty' error
2024-08-05 15:31:57 -04:00
Chris Evich 436dceb68f
Fix 'Expecting pub_dns to be set/non-empty' error
While processing instances, if the script encounters an instance running
past PW_MAX_HOURS, it will force-terminate it.  However, this check was
happening before the script had looked up the required 'pub_dns' value.
Fix this by relocating the check.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 15:22:42 -04:00
Chris Evich 13be11668c
Merge pull request #214 from cevich/mac_pw_pool_fix_deadlock
Fix deadlock induced MacOS PW Pool collapse
2024-08-05 15:18:20 -04:00
Chris Evich 47a5015b07
Fix deadlock induced MacOS PW Pool collapse
Every night a script runs to check and possibly update all the scripts
in the repo.  When this happens, two important activities take place:

1. The script is restarted (presuming its own code changed).
2. The container running nginx (for the usage graph) is restarted.

For unknown reasons, possibly due to a system update, a pasta
(previously slirp4netns) sub-process spawned by podman is holding open
the lock-file required by both the maintenance script and the (very
important) `Cron.sh`.  This leads to a deadlock situation where
the entire pool becomes unmanaged since `Cron.sh` can't run.

To prevent unchecked nefarious/unintended use, all workers automatically
recycle themselves after some time should they become unmanaged.
Therefore, without `Cron.sh` operating, the entire pool will eventually
collapse.

Though complex, as a (hopefully) temporary fix, ensure all non-stdio FDs
are closed (in a sub-shell) prior to restarting the container.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 15:08:26 -04:00
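The "close all non-stdio FDs in a sub-shell" workaround can be sketched generically; the FD range and the command being run are illustrative assumptions, not the actual maintenance script:

```shell
# Hedged sketch: run a command from a sub-shell with every non-stdio
# file descriptor closed, so no inherited lock-file FD can leak into
# long-lived child processes (e.g. a restarted container).
close_fds_and_run() {
    (
        for fd in $(seq 3 255); do
            # Closing an already-closed FD is harmless; ignore errors
            eval "exec $fd>&-" 2>/dev/null
        done
        "$@"
    )
}
```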
Chris Evich b0dde0f4fc
Merge pull request #210 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.5
2024-08-05 13:40:40 -04:00
Chris Evich 689cfa189c
Merge pull request #215 from cevich/fix_build-push_test
Fix build-push CI env setup failure
2024-08-05 13:15:06 -04:00
Chris Evich bb3343c0c4
Fix build-push CI env setup failure
For whatever reason, the `registries.conf` alias setup is no longer
working, and the docker rate-limiting is causing CI breakage.  Fix this
by simplifying to pulling directly from the google proxy.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-08-05 13:07:03 -04:00
renovate[bot] b1d7d1d447
[skip-ci] Update actions/upload-artifact action to v4.3.5
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-02 16:59:31 +00:00
Chris Evich 256fefe0dd
Merge pull request #208 from cevich/libkrun_on_mac_pw_pool
Mac PW Pool: Install libkrun (krunkit)
2024-07-31 16:27:27 -04:00
Chris Evich 11359412d4
Mac PW Pool: Install libkrun (krunkit)
In order to test accessibility of the host GPU inside a podman machine
container, it's necessary to install support for krun.  However, since
the list of brew recipes is ever growing, split it up into sections with
comments explaining why each is necessary and what uses it.

Also fix a minor bug WRT re-running setup with softwareupdate already
disabled.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-31 16:06:27 -04:00
Chris Evich 378249996e
Merge pull request #209 from cevich/fix_renovate_validation
Fix running renovate-config-validator
2024-07-31 15:19:28 -04:00
Chris Evich 12b7b27dda
Fix running renovate-config-validator
Newer renovate container images place the binary elsewhere, resulting in
this check encountering a file-not-found error.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-31 15:06:40 -04:00
Chris Evich 720ba14043
Merge pull request #207 from cevich/manual_testing_mac
Mac PW Pool: Add testing helper script
2024-07-16 14:20:36 -04:00
Chris Evich a69abee410
Mac PW Pool: Add testing helper script
Previously a lot of intricate and painful steps were required to set up
a Mac dedicated-host for testing.  Make this process easier with a script
that does most of the work.  Update documentation accordingly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-16 13:45:34 -04:00
Chris Evich 399120c350
Mac PW Pool: Allow variable DH name prefixes
Previously every dedicated-host and instance was named with the prefix
`MacM1`.  Support management of other sets of DHs with different
prefixes by turning this value into a variable.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-07-16 13:45:34 -04:00
Ed Santiago 4302d62c26
Merge pull request #206 from edsantiago/more-task-map
Simplify the new podman CI skips
2024-07-08 07:22:26 -06:00
Ed Santiago 8204fd5794 Simplify the new podman CI skips
They are now under only_if, not skip. And there's really no need
for individual names, just say "SKIP if not needed"

Also, add handling for 'skip CI=CI', currently used in minikube

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-08 07:11:37 -06:00
Chris Evich d0474a3847
Merge pull request #205 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.4
2024-07-05 15:23:31 -04:00
renovate[bot] 14fd648920
[skip-ci] Update actions/upload-artifact action to v4.3.4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-05 17:14:46 +00:00
Ed Santiago 420ed9a467
Merge pull request #203 from edsantiago/automation-images
cirrus-task-map: tweaks for automation_images CI
2024-07-02 11:05:32 -06:00
Ed Santiago dc21cdf863 cirrus-task-map: add skips/only-ifs for automation_images
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-02 10:55:32 -06:00
Ed Santiago b813ad7981 ImageMagick v7 deprecates "convert" command
Use "magick" instead, with a little shuffling of args

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-07-02 10:55:32 -06:00
Ed Santiago 415e21b68b
Merge pull request #202 from edsantiago/sort-by-type
cirrus-task-map: uptodateize
2024-07-01 06:19:31 -06:00
Ed Santiago 8b9ae348a0 handle the new 2024-06-18 CI skips
Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-06-27 12:57:44 -06:00
Ed Santiago 663cb85121 task-map: sort jobs by task type
Now that it's just one huge parallel blob, change our sorting
so we cluster all the int/sys/machine tests together.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-06-27 12:57:44 -06:00
Chris Evich 9c771bf862
Merge pull request #201 from cevich/doc_golang_ind_vuln_config
Unconfigure golang indirect vulnerability support
2024-06-25 11:09:02 -04:00
Chris Evich 13aaf6100f
Unconfigure golang indirect vulnerability support
Discovered by log analysis, Renovate will initially set up a vulnerable
golang indirect dep for immediate PR creation.  However, later on in
its run, PR creation will be disabled by the global indirect-golang
default setting (disabled).  Extensive review of `packageRules`
configuration shows no way to filter based on vulnerability status.
This would be the only conceivable way to override the default.

Fix this by replacing the misleading/useless config. section with a
comment block indicating that indirect golang vulnerabilities must be
handled by hand.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-25 10:55:39 -04:00
Chris Evich 46d69a3969
Merge pull request #200 from cevich/add_mac_temp_docs
Add Mac PW Pool Launch Template docs
2024-06-07 11:01:05 -04:00
Chris Evich 081b9c3be5
Bump build-push test CI VM image
CentOS-stream 8 is EOL.

Also, use the latest buildah container image and update a build-push
test to cope with some minor behavior changes.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-07 10:52:54 -04:00
Chris Evich e4e0cdbd51
Add Mac PW Pool Launch Template docs
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-07 10:52:54 -04:00
Chris Evich ae7f68a9ac
Merge pull request #199 from cevich/fewer_jobs
PW Pool: Reduce task-to-task corruption risk
2024-06-03 11:02:46 -04:00
Chris Evich 836d5a7487
PW Pool: Reduce task-to-task corruption risk
Previously instances would shutdown and auto-terminate if the
controlling VM's `SetupInstances.sh` examined the remote worker
log and found >= `PW_MAX_TASKS` logged.  However after examining the
production `Cron.log`, it was found that nowhere near this number of
tasks is actually running during `PW_MAX_HOURS`.  Cut the value in
half to lower the risk of one/more tasks corrupting processes or the
filesystem for other tasks.

Note: The eyeballed average number of tasks before timed auto-shutdown was about 7.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-06-03 09:47:39 -04:00
Chris Evich 02d3c0a99c
Merge pull request #198 from cevich/more_mac_packages
Mac PW Pool: Install packages needed for skopeo CI
2024-05-31 09:55:21 -04:00
Chris Evich f750079c85
Mac PW Pool: Install packages needed for skopeo CI
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-30 14:23:44 -04:00
Chris Evich 0eb6675f13
Merge pull request #197 from cevich/restrict_mac_sw_install
Mac PW Pool: Restrict software installation/updates
2024-05-29 12:55:46 -04:00
Chris Evich 3a39b5cafc
Mac PW Pool: Restrict software installation/updates
For whatever reason, non-admin users are permitted to install and update
software on Macs by default.  This is highly undesirable in a CI
environment, and especially so in one where the underlying resources are
shared across testing contexts.  Block this by altering system settings
to require admin access.

Further, through experimentation, it was found that rosetta (which
allows arm64 Macs to run x86_64 code) ignores the admin-required
settings.  To give
pause to any users trying to run `softwareupdate`, move it out of general
reach.  This isn't a perfect solution, but should at least discourage all
simple usage.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 11:46:50 -04:00
Chris Evich 8a0e087c4b
Update Mac PW Pool docs
Specifically, detail the manual testing steps.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-29 11:46:50 -04:00
Chris Evich c910e69c12
Merge pull request #196 from cevich/install_rosetta
Mac PW Pool: Install rosetta
2024-05-22 10:00:49 -04:00
Chris Evich 37e71d45af
Mac PW Pool: Install rosetta
Podman machine testing needs rosetta to confirm running x86_64 binaries.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-05-21 16:40:02 -04:00
Chris Evich 9a8a1a2413
Merge pull request #195 from cevich/ignore_go_toolchain_updates
Never update golang toolchain
2024-04-30 12:06:59 -04:00
Chris Evich 2e805276bb
Never update golang toolchain
Fixes: #193

Despite restrictions on `go` directive updates by Renovate, it was still
proposing updates to the `toolchain` directive.  In order to maintain
consistency across all projects, this value needs to be managed
manually.  Detect when Renovate is trying to update it and shut it down.

Ref: Upstream https://github.com/renovatebot/renovate PR 28476

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-29 16:02:40 -04:00
Chris Evich 5d234f1e4a
Merge pull request #192 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.3
2024-04-23 15:27:26 -04:00
Chris Evich badedd4968
Merge pull request #194 from cevich/fix_egrep
Minor: egrep fixes + more debugging
2024-04-23 10:36:37 -04:00
Chris Evich 2cdb0b15ee
Minor: More debugging
For some reason, it seems to still be possible for `get_manifest_tags()`
to return non-zero despite `result_json` being an empty list.  Add some
more debugging to the function to help figure out why.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-23 10:21:14 -04:00
Chris Evich f27c7ae6d9
Minor: Fix use of egrep + some shellcheck findings
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-23 09:52:42 -04:00
Chris Evich d7a884b8cf
Merge pull request #191 from cevich/warn_empty
Fix build-push failing on empty push list
2024-04-22 14:00:32 -04:00
Chris Evich 9336e20516
Resolve build-push test TODO
The mentioned bug has long-since been fixed.  This test should pass
despite there being no images present after mod-command runs.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-22 13:46:47 -04:00
Chris Evich 7feb7435c2
Fix build-push failing on empty push list
Prior to https://github.com/containers/image_build/pull/23 the
automation using `build-push.sh` always pushed its images.  This
obscured a bug that occurs when `fqin_names` is an empty string in
`get_manifest_tags()`.  In this case, the `grep` command will exit
non-zero, causing `push_images()` to:

```
die "Error retrieving set of manifest-list tags to push for '$FQIN'"
```

Fix this by adding an empty-string check and removing the unnecessary
`grep`.  Also, in `push_images()`, change `die "No FQIN(s) to be pushed."`
into a warning, since the condition should not be considered fatal.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-22 13:46:42 -04:00
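The empty-string guard described above can be sketched like this; the function and variable names follow the commit message, but the body is illustrative rather than the actual `build-push.sh` code:

```shell
# Hedged sketch: warn and return on an empty FQIN list instead of
# letting a grep over an empty string force a fatal error.
push_images() {
    local fqin_names=$1
    if [ -z "$fqin_names" ]; then
        echo "WARNING: No FQIN(s) to be pushed." >&2
        return 0
    fi
    echo "pushing: $fqin_names"
}
```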
Chris Evich 478b8d9d30
Minor: Fix build-push shellcheck findings
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-22 13:46:42 -04:00
renovate[bot] 1bd2fbdfe3
[skip-ci] Update actions/upload-artifact action to v4.3.3
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-22 17:39:50 +00:00
Chris Evich d061d8061e
Merge pull request #190 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.2
2024-04-19 12:16:33 -04:00
renovate[bot] 13f6c9fb53
[skip-ci] Update actions/upload-artifact action to v4.3.2
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-18 16:41:32 +00:00
Chris Evich af1016e668
Merge pull request #189 from cevich/golang121
Bump golang to version 1.21
2024-04-17 14:02:08 -04:00
Chris Evich 74f8447d45
Bump golang to version 1.21
Lots of module updates are arriving which require this version; unblock
all repos that depend on it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-17 13:38:25 -04:00
Chris Evich 3bf3cfd233
Merge pull request #188 from cevich/kill_inaccessable_instances
Fix unmanaged crashed/inaccessible worker
2024-04-05 11:17:56 -04:00
Chris Evich 428f06ed36
Fix unmanaged crashed/inaccessible worker
If a worker instance is inaccessible for an extended period of time,
it's a sign it may have crashed or been compromised in some way.
Previously, due to the order of status checks, this condition would not
be noticed for multiple days.  Fix this by relocating the `PW_MAX_HOURS`
check to the beginning of the worker-loop.  This will force-terminate
any timed-out instances regardless of all other status checks.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-04-05 10:59:23 -04:00
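The reordering described above can be sketched as follows; the names, values, and check bodies are illustrative, not the actual `SetupInstances.sh` loop:

```shell
# Hedged sketch: the PW_MAX_HOURS timeout check runs first in the
# worker loop, so a crashed or unreachable instance is still
# force-terminated regardless of any later status check.
PW_MAX_HOURS=24
process_worker() {
    local uptime_hours=$1 reachable=$2
    # Timeout check first: applies even to inaccessible instances
    if [ "$uptime_hours" -ge "$PW_MAX_HOURS" ]; then
        echo "force-terminate"
        return 0
    fi
    # Remaining checks require a reachable instance
    if [ "$reachable" != "yes" ]; then
        echo "unreachable, skipping"
        return 0
    fi
    echo "healthy"
}
```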
Chris Evich b9ce71232f
Merge pull request #187 from cevich/constrain_go
Add big-fat-warning re: golang 1.21+ toolchain
2024-03-15 13:08:27 -04:00
Chris Evich 36c2bc68e9
Add big-fat-warning re: golang 1.21+ toolchain
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-15 09:22:50 -04:00
Chris Evich df5c5e90ac
Update to the github hosted container image
This prevents running into docker-hub rate limits

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-15 09:22:49 -04:00
Chris Evich 11026c20a3
Renovate config reformat/cleanup
Updating to the latest config. linter reformats the entire config file.
Incorporate the new format, with some minor adjustments to comments.
No settings are actually changed here.  It's all cosmetic and
formatting.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-03-14 12:27:58 -04:00
Chris Evich 1f2ccedbfd
Merge pull request #186 from cevich/simplify_updates
Simplify pool maintenance script updates
2024-02-27 13:28:23 -05:00
Chris Evich 2c1a0c6c4c
Merge pull request #183 from cevich/docs_update
[skip-ci] Mac PW Pool script docs update
2024-02-27 13:27:36 -05:00
Chris Evich fb6ba4a224
Simplify pool maintenance script updates
Previously an unnecessarily complex mechanism was used to automatically
update the code on the Mac PW Pool maintenance VM.  Simplify this to a
short fixed time interval to improve reliability.  Also fix a minor bug
where the web container restarted attached rather than detached.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-27 13:14:02 -05:00
Chris Evich f12157050c
Merge pull request #184 from edsantiago/taskmap-shortcuts
cirrus-task-map: more shortcuts
2024-02-27 13:06:35 -05:00
Chris Evich 4353f8c5b1
Merge pull request #185 from cevich/stop_disk_indexing
Mac PW Pool: Stop indexing local disks
2024-02-21 13:25:35 -05:00
Chris Evich 86ddf63ac5
Mac PW Pool: Stop indexing local disks
There's no point to this operation on a CI machine, and it creates
non-deletable files for every user on the system.  Stop it for all
volumes, ignoring any failures.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-21 11:59:56 -05:00
Ed Santiago 948206e893 cirrus-task-map: more shortcuts
For handling recent (Feb 2024) changes to .cirrus.yml

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-02-19 08:28:13 -07:00
Chris Evich c0112c254c
Merge pull request #175 from cevich/stop_truncating_stdio
[5.0.0] Fix truncating stdio magic devices
2024-02-12 11:47:19 -05:00
Chris Evich 86660f745e
[5.0.0] Fix truncating stdio magic devices
Redirecting to `/dev/stderr` or `/dev/stdout` can have a normally
unintended side-effect when the caller wishes to send either of those
elsewhere (like an actual file).  Namely, it will truncate the file
before writing.  This is almost never the expected behavior.  Update all
redirects to magic devices to append instead.

N/B: These scripts are used far and wide.  On the off-chance some
downstream caller has previously depended on this side-effect, I'm
marking this commit as 'breaking' accordingly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-12 10:49:20 -05:00
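The truncation side-effect described above can be demonstrated in a few lines of shell (the temp-file name is arbitrary; behavior shown is Linux/bash):

```shell
#!/usr/bin/env bash
# On Linux, opening /dev/stderr with '>' truncates whatever file stderr
# currently points at; '>>' appends and leaves prior content intact.
log=$(mktemp)

{ echo "first" >&2; echo "second" > /dev/stderr; } 2>"$log"
echo "with '>':  $(wc -l <"$log") line(s)"    # prior content lost

{ echo "first" >&2; echo "second" >> /dev/stderr; } 2>"$log"
echo "with '>>': $(wc -l <"$log") line(s)"    # both lines kept

rm -f "$log"
```

This is why the commit switches every redirect to a magic device over to append mode.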
Chris Evich 679575c7d1
Ignore deprecation warnings while running tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-12 10:49:07 -05:00
Chris Evich 0e328d6db5
Merge pull request #182 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.1
2024-02-06 09:36:21 -05:00
Chris Evich 71ede1b334
Mac PW Pool script docs update
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-06 09:34:47 -05:00
renovate[bot] 1f5d6b5691
[skip-ci] Update actions/upload-artifact action to v4.3.1
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-02-05 22:34:26 +00:00
Chris Evich f425d902df
Merge pull request #181 from cevich/log_maintenance
Synchronize maintenance script changes
2024-02-01 13:53:26 -05:00
Chris Evich d4f5d65014
Synchronize maintenance script changes
Previously, the automation repo was updated by a cron job without regard
for possibly-executing scripts.  This is bad.  Fix the situation
by only updating the repo while holding a `Cron.sh` lock, taking care
to restart the graph-presenting webserver container as required.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-02-01 12:27:07 -05:00
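The serialization described can be sketched with `flock` (the lock-file path and the guarded command here are hypothetical, not the repo's actual names):

```shell
#!/usr/bin/env bash
# Take the same lock the Cron.sh maintenance job holds, so a repo update
# can never race a script that is mid-execution.
lockfile=$(mktemp)              # hypothetical; a real setup uses a fixed path

(
    flock -w 60 9 || exit 1     # wait up to 60s for the lock, then give up
    # Safe to update here, e.g.: git -C ~/automation pull --ff-only
    echo "lock held: updating repo"
) 9>"$lockfile"
```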
Chris Evich 0a0d617ee9
Merge pull request #180 from cevich/fix_podman_cmd
Minor: Update example crontab
2024-01-30 12:17:47 -05:00
Chris Evich 420d72a42e
Minor: Update example crontab
Also relocate usage-graph web container and logfile maintenance to
a dedicated script + crontab entry.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-30 12:07:37 -05:00
Chris Evich 907e840d64
Merge pull request #177 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.3.0
2024-01-24 14:50:53 -05:00
Chris Evich a19393dd92
Merge pull request #179 from cevich/fix_timebomb_test
Fix timebomb test using wrong basis
2024-01-24 13:15:26 -05:00
Chris Evich 72ed4a5532
Fix timebomb test using wrong basis
The "timebomb() function ignores TZ envar and forces UTC" test started
failing (triggering the bomb unintentionally).  Fixed by forcing the
in-line date-calculation to be based on UTC (which the test was
assuming previously).  Also updated the subsequent test similarly, for
consistency.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-24 13:04:29 -05:00
renovate[bot] 99a94ca880
[skip-ci] Update actions/upload-artifact action to v4.3.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-23 19:42:35 +00:00
Chris Evich 25651a0a31
Merge pull request #174 from cevich/timebomb
Add common timebomb function to mark workarounds
2024-01-23 12:02:59 -05:00
Chris Evich 47cf77670e
Add common timebomb function to mark workarounds
Because otherwise, as the saying goes:
    "There's nothing more permanent than temporary"

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-23 09:40:43 -05:00
Chris Evich 7ce27001a4
Merge pull request #176 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.2.0
2024-01-22 10:16:36 -05:00
renovate[bot] d4314cc954
[skip-ci] Update actions/upload-artifact action to v4.2.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-18 21:58:23 +00:00
Chris Evich 92ed5911d6
Resolve a bunch of shellcheck findings
Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-16 15:43:40 -05:00
Chris Evich 93455e8a08
Fix script failure
Error: `line 0: Cannot load input from 'Utilization.gnuplot'`

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-16 12:49:22 -05:00
Chris Evich 778e26b27c
Merge pull request #173 from cevich/webplot
Output web page with utilization graph
2024-01-16 11:56:51 -05:00
Chris Evich 3cd711bba5
Output web page with utilization graph
This makes it easy to serve a simple website with the
graph, so more than one person may observe easily.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-16 11:26:43 -05:00
Chris Evich 75c0f0bb47
Increase build-push test timeout
Network slowdowns can make package installs run slowly.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-16 11:26:43 -05:00
Chris Evich 22a0e4db8f
Merge pull request #172 from cevich/use_local_disk
Create, mount, and use local storage
2024-01-15 10:03:06 -05:00
Chris Evich 22fcddc3c2
Create, mount, and use local storage
Podman machine testing is very much storage-bound in terms of
performance.  The stock AWS setup uses networked storage for the system,
and a small local disk for `/tmp`.  However there is plenty of empty
space available on the local disk, and it's *MUCH* faster than network
storage.  Use this disk as the worker-user's home directory (where tests
run from).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-15 09:54:15 -05:00
Chris Evich dfdb3ffd29
Merge pull request #171 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4.1.0
2024-01-12 16:02:55 -05:00
renovate[bot] 2441295d69
[skip-ci] Update actions/upload-artifact action to v4.1.0
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-01-12 18:01:25 +00:00
Chris Evich d74cf63fb4
Merge pull request #168 from cevich/simplify_pool_management
Improve/overhaul pool management/monitoring scripts
2024-01-11 14:20:51 -05:00
Chris Evich b182b9ba96
Resolve worker-testing TODO
This will allow executing tasks against the workers-under-test.

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-11 14:11:35 -05:00
Chris Evich a5b7947fed
Improve/overhaul pool management/monitoring scripts
The initial implementation was rushed into production with a minimum of
required features, and a corresponding amount of slop and bugs.  Attend to
a litany of needed improvements and enhancements:

* Add tracking of both started and completed tasks.
* Update utilization CSV entry addition to include tasks-ended
  (`taskf`).
* Update instance-ssh helper to support specifying by name or ID
* Fix multiple instance-ssh helper executions clashing over VNC port
  forwards.
* Update many comments
* Fix handling of case where no dedicated hosts or instances are found.
* Relocate `CREATE_STAGGER_HOURS` to `pw_lib.sh` and lower value to 2.
  This value should not include a margin representing boot/setup time.
  Also a lower value will allow for faster automated pool recovery
  should the entire thing collapse for some reason.
* Support dividing/managing a subset of all dedicated hosts and
  instances via a required tag and value.  This allows for easier
  testing of script changes w/o affecting the in-use (production) pool.
* Add check to confirm host name always matches instance name - in case
  a human screws this up.  Many/most of these management scripts
  otherwise assume the two name-tags always match.
* Update documentation for initializing a new set of dedicated hosts and
  instances.
* Forcibly terminate instances when certain exceptionally "bad" conditions
  are detected.  i.e. those which may signal a security breach or other
  issue the scripts will never be able to cope with.
* Add support for yanking an instance out of service by changing its
  `PWPoolReady` tag.  Allow re-adding the instance when the tag is set `true` again.
* Reimplement max instance lifetime check.
* Implement a check on maximum completed tasks per instance.
* Stop outputting normal-status lines when examining instances.  Keep
  output to the bare minimum, unless there is some fault condition.
* Move the scheduled instance shutdown timer from the setup script into
  the instance maintenance script.  Add a check to confirm the sleep +
  shutdown process is running.
* Check and enforce a maximum amount of time `setup.sh` is allowed to
  run.
* Greatly simplify pool-listener service script.
* Simplify instance `setup.sh` script.
* Update utilization GNUplot command file to obtain the number
  of active workers from `dh_status`.  Extend the timespan of
  the graph.  Plot worker utilization as a percentage based on
  number of running tasks (instead of the total completed).

Signed-off-by: Chris Evich <cevich@redhat.com>
2024-01-11 14:11:35 -05:00
Chris Evich cac7b02d4f
Merge pull request #170 from containers/renovate/actions-upload-artifact-4.x
[skip-ci] Update actions/upload-artifact action to v4
2023-12-14 13:24:09 -05:00
renovate[bot] 4f066e397d
[skip-ci] Update actions/upload-artifact action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-12-14 17:09:24 +00:00
Chris Evich f7a85f3a80
Merge pull request #169 from cevich/service_pool_fix
Fix two pool service script failure-modes
2023-12-14 12:09:10 -05:00
Chris Evich 646016818c
Fix two pool service script failure-modes
Fix a typo in calculating sleep seconds.  Remove mode `e` from the script,
so any failing command (e.g. a pgrep) doesn't cause the script to exit.
Also redirect null input into the shutdown command, since it can behave
oddly otherwise.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-14 12:01:21 -05:00
Chris Evich 851d152282
Merge pull request #167 from cevich/ignore_released
Properly handle 'released' DH status
2023-12-08 10:12:24 -05:00
Chris Evich 9a08aa2aed
Properly handle 'released' DH status
This is set when somebody removes a slot.  There's currently no way for
that to ever happen except by human action.  Try not to freak an
observer out by presenting it as a failure of some sort.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-08 09:57:09 -05:00
Chris Evich 61556ac3e9
Merge pull request #166 from cevich/fix_sleep
Fix sleep typo + reduce times
2023-12-07 14:09:57 -05:00
Chris Evich e8b260f41d
Fix sleep typo + reduce times
The darwin version of sleep doesn't support any suffix, and breaks if
you use one.  Fix the script and adjust the timings so the loop runs
quicker.

This has been tested on the currently in-use pool.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-07 13:54:47 -05:00
Chris Evich 8d8e12b3dd
Merge pull request #165 from cevich/further_limit_dh_by_tag
Allow dividing DH pool based on tag name/value
2023-12-05 11:30:38 -05:00
Chris Evich a9eb5b1f12
Allow dividing DH pool based on tag name/value
With an active and in-use dedicated host pool, it's very hard to test
changes to management scripts.  Add support for filtering the list of
DH to operate on, based on a defined tag name and value.  This way,
inactive DH can be manually re-tagged (temporarily) to allow testing
script changes against them.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-05 11:00:55 -05:00
Chris Evich 20df1f7904
Merge pull request #164 from cevich/minor_fixes
A Collection of minor fixes
2023-12-01 11:16:26 -05:00
Chris Evich 111991e6eb
Fix pkill permission-denied failure
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 10:51:08 -05:00
Chris Evich 67c74ffe7c
Remove unnecessary/dangerous -u option
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 10:51:07 -05:00
Chris Evich 8b968401af
Fix a handful of shellcheck complaints
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 10:51:07 -05:00
Chris Evich e368472ce7
Enable remote VNC access to mac instances
There are some mac tools that can ONLY be used via the GUI.  Setting this
up requires some specialized manual work.  Make this a bit easier by
removing a required step (i.e. ssh forwarding).

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-12-01 09:49:11 -05:00
Chris Evich 93962e6cf1
Merge pull request #163 from cevich/add_mac_management_goodies
Add mac management goodies
2023-11-29 15:11:13 -05:00
Chris Evich 32554b55cd
Add GNUPlot command file
Simply displays an auto-refreshing graph showing alive pool workers
divided by the total number of CI tasks run.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-29 14:38:50 -05:00
Chris Evich 90da395f0a
Add example pool management cron script
Also update docs regarding its use.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-29 14:38:50 -05:00
Chris Evich 2aea32e1a4
Merge pull request #162 from cevich/log_exp_time
Better logging of worker expiration
2023-11-29 11:44:06 -05:00
Chris Evich 3e8e4726f6
Better logging of worker expiration
It's helpful for operators to be aware of the expiration-time for
workers.  Ensure this, along with any other `service_pool.sh` messages
are logged.  Extract and display the logged expiration notice,
or a warning if missing.  The constant log-grep is secondarily
useful as an indication of worker log-file manipulation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-29 11:21:36 -05:00
Chris Evich cc10ff405a
Merge pull request #160 from cevich/force_pool
Workaround lengthy startup of many instances
2023-11-21 11:19:27 -05:00
Chris Evich 77f63d7765
Workaround lengthy startup of many instances
When a pool is empty of instances, the launch-stagger mechanism can
introduce a substantial delay to achieving a full-pool of active
workers.  This will negatively impact service availability and worker
utilization - likely resulting in CI tasks queuing.

Add a simple workaround for this condition with the addition of a
`--force` option.  When used, it will force instance creation on
all available dedicated hosts.  Similarly it will also force instance
setup, though with an extended shutdown delay timer.

Update documentation regarding this operational mode and its purpose.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-20 12:17:19 -05:00
Chris Evich 71622bfde6
Merge pull request #159 from cevich/mac_pw_pool_adjustments
PW pool management script adjustments
2023-11-20 10:58:37 -05:00
Chris Evich 723fbf1039
Fix last-launch time query failure behavior
If for whatever reason there is a failure in the query or search
for last-launch times, `$latest_launched` could be set to the current
time.  This will ultimately result in no instances being launched.  Fix
this by improved detection of an empty/null launch time in
`${launchtimes[@]}`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-20 10:09:34 -05:00
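The improved detection can be sketched like this (sample data shown; in the real script `${launchtimes[@]}` comes from an AWS query):

```shell
#!/usr/bin/env bash
# Pick the most recent launch time while skipping empty/null query
# results, so a failed query can never masquerade as "launched just now".
launchtimes=("2023-11-19T10:00:00" "" "2023-11-20T09:00:00")

latest_launched=""
for lt in "${launchtimes[@]}"; do
    [ -n "$lt" ] || continue    # ignore empty/failed lookups
    # ISO-8601 timestamps sort correctly as plain strings
    if [ -z "$latest_launched" ] || [[ "$lt" > "$latest_launched" ]]; then
        latest_launched="$lt"
    fi
done
echo "latest: ${latest_launched:-none found}"
```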
Chris Evich d1a3503a7f
Minor: Adjust status message
The term "BUSY" implies the dedicated host is doing something else.
This is not the case for staggering launches.  Use a more descriptive
status indicator for this.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-17 14:50:27 -05:00
Chris Evich 3a9c2d4675
Fix truncating duplicated & redirected script output
For whatever reason, when a script that duplicates and redirects
stdout/stderr to a log-file calls one of the management scripts, the
log-file is truncated.  Updating output functions to append their output
seems to resolve this issue.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-16 16:41:21 -05:00
Chris Evich 7244323cef
Fix several DH management script bugs
Previously it was possible to fail to launch any instances due to bugs
and assumptions in the last-launch-time determination.  Fix this by
actually querying running instances, and searching for the most
recent launch time.  If there are no instances found, print a warning
operators may observe.  Also, fix missing `-t` option to several
readarray() calls.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-16 16:32:30 -05:00
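Why the missing `-t` matters: without it, `readarray` keeps the trailing newline on every element, which silently breaks later string comparisons (instance IDs below are made up):

```shell
#!/usr/bin/env bash
readarray    raw     < <(printf 'i-abc123\ni-def456\n')   # keeps newlines
readarray -t trimmed < <(printf 'i-abc123\ni-def456\n')   # strips them

if [ "${raw[0]}" = "i-abc123" ];     then echo "raw matches";     fi  # not printed
if [ "${trimmed[0]}" = "i-abc123" ]; then echo "trimmed matches"; fi  # printed
```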
Chris Evich d6ec0981eb
Alpha-sort dedicated host state file
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-16 16:32:30 -05:00
Chris Evich c5b3a9a9e1
Record status details for each worker
Record the most recent status of all workers in a dedicated file.
Intended for use by humans or other automation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-16 16:32:30 -05:00
Chris Evich 475167d677
Rename state file to better indicate content type
The file relates to dedicated hosts (DH), not persistent-workers (PW).

Also, don't exit non-zero if there is an error-status.  Rely on
consumers of state file to take appropriate action.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-16 10:54:52 -05:00
Chris Evich d41b3455df
Merge pull request #158 from cevich/mac_pw_pool
Cirrus-CI persistent worker pool management
2023-11-15 09:39:50 -05:00
Chris Evich aba52cf01f
Cirrus-CI persistent worker pool management
Implement a set of scripts to help with management of a Cirrus-CI
persistent worker pool of M1 Mac instances on AWS EC2.

* Implement script to help monitor a set of M1 Mac dedicated hosts,
  creating new instances as slots become available.

* Implement a script to help monitor M1 Mac instances, deploying
  and executing a setup script on newly created instances.

* Implement a ssh-helper script for humans, to quickly access
  instances based on their EC2 instance ID.

* Implement a setup script intended to run on M1 Macs, to help
  configure and join them to a pre-existing worker pool.

* Implement a helper script intended to run on M1 Macs, to
  support developers with a CI-like environment.

* At this time, all scripts are intended for manual/human-supervised
  use.  Future commits may improve this and/or better support use
  inside automation.

* Add very basic/expedient documentation.

N/B: The majority of this content, including the EC2-side setup has
been developed in a rush.  There are very likely major architecture,
design, and scripting bugs and shortfalls.  Some of these may be
addressed in future commits.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-11-14 13:45:44 -05:00
Chris Evich 6abea9345e
Merge pull request #154 from containers/renovate/actions-checkout-4.x
[skip-ci] Update actions/checkout action to v4
2023-10-20 14:08:40 -04:00
renovate[bot] b42bbe547b
[skip-ci] Update actions/checkout action to v4
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-19 11:15:11 +00:00
Chris Evich d277f04f02
Merge pull request #157 from cevich/minor_install_timestamp
Minor: Breadcrumb version and UTC timestamp
2023-09-26 16:58:23 -04:00
Chris Evich d4fb87ec3c
Minor: Breadcrumb version and UTC timestamp
Otherwise the timestamp is localized, which may be harder for humans
to relate/translate WRT other time-based items.  For example, Cirrus-CI
and GHA cron specifications.  Also add mention of the just-installed
version to the env. file, also to help with any needed auditing.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-09-26 11:23:44 -04:00
Chris Evich 6039ae9c96
Merge pull request #155 from containers/renovate/actions-upload-artifact-3.x
[skip-ci] Update actions/upload-artifact action to v3.1.3
2023-09-13 14:23:31 -04:00
renovate[bot] 849ff94def
[skip-ci] Update actions/upload-artifact action to v3.1.3
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-09-06 20:17:43 +00:00
Chris Evich ac050a015d
Merge pull request #153 from Luap99/podman-golangci
renovate: add CI-colon-DOCS prefix for golangci-lint updates
2023-08-21 11:10:00 -04:00
Paul Holzinger 10847d5e03
renovate: add CI:DOCS prefix for golangci-lint updates
In podman CI golangci-lint is only run in the validate step so there is
no point in running the full test suite for such updates. The validate
task is included with CI:DOCS so that should be good enough even if it
is technically not a doc change.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-08-21 16:00:21 +02:00
Chris Evich b4b74c0ca9
Merge pull request #150 from cevich/more_better_build_push_debugging
Improve build-push error handling
2023-08-10 13:50:23 -04:00
Chris Evich 2da3679e46
Improve build-push error handling
Around the time of this commit, the automated multiarch manifest-list
builds for both skopeo and podman have been failing somewhere in the
`build-push.sh` script.  The actual build appears to work fine, the
`tag-version.sh` mod-command runs fine, but the tag-search in
`get_manifest_tags()` (called by `push_images()`) fails with the error:

`jq: error (at <stdin>:29): Cannot iterate over null (null)`

Unfortunately the problem does not reproduce for me locally, nor can it
be reproduced using a dry-run build (`--nopush` bypasses the tag search.)
Improve debugging of this situation by moving the `if ((PUSH))` check and
adding an exception clause to display the would-be pushed images (and
tags).

Also:

* Simplify the `get_manifest_tags()` tag search by adjusting the jq filter
  to gracefully ignore an empty set of images and/or images without
  any list of names.  Rely on `push_images()` catching the empty-list
  and throwing an error.
* Add a comment regarding the need for the `confirm_arches` call
  after the `parallel_build` call in the main part of the script.
* Improve the debug-ability of `confirm_arches()` in the case of
  a bad/incomplete/unreadable manifest-list (see item above).
  Detect both inspect command errors and jq/pipeline errors.  In
  the case of jq/pipeline failure, show the input JSON to aid
  debugging.
* Improve variable-name consistency by removing many `_` prefixes.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-09 15:42:43 -04:00
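The null-tolerant jq pattern described looks roughly like this (the exact filter in `build-push.sh` may differ; this is an assumption):

```shell
#!/usr/bin/env bash
# '.images // []' substitutes an empty array for null, and '.names[]?'
# skips objects lacking a names list, so jq exits 0 with no output
# instead of failing with "Cannot iterate over null".
echo '{"images": null}' \
    | jq -r '(.images // [])[] | .names[]?' \
    && echo "empty input handled gracefully"
```

The empty result is then caught by the caller (`push_images()`), which throws the error instead.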
Chris Evich 9bee18f881
Merge pull request #152 from cevich/slow_down_clap_and_serde
Renovate: Slow down 'serde' and 'clap' updates
2023-08-09 13:13:29 -04:00
Chris Evich badfb3a09e
Renovate: Slow down 'serde' and 'clap' updates
Ref: https://github.com/containers/netavark/issues/772

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-08-09 13:03:32 -04:00
Chris Evich 880840c20a
Merge pull request #151 from containers/renovate/ubuntu-22.x
Update dependency ubuntu to v22
2023-08-09 12:47:55 -04:00
renovate[bot] b6959491e3
Update dependency ubuntu to v22
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-08-09 13:55:46 +00:00
Paul Holzinger 6dc87f5330
Merge pull request #149 from Luap99/renovate2
renovate: disable rollbackPrs
2023-07-12 15:18:11 +02:00
Paul Holzinger 0e134f9243
renovate: disable rollbackPrs
Not sure why, but the config change in commit 8f61a71 caused us to now
get rollback PRs for digest updates, which is wrong and very noisy.
Let's keep them disabled for now and let Chris figure it out when he is
back.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-07-12 15:05:39 +02:00
Paul Holzinger ac96839c65
Merge pull request #148 from Luap99/renovate
fix broken renovate config
2023-07-12 13:48:52 +02:00
Paul Holzinger 8f61a71bf9
fix broken renovate config
checked with:
podman run -it \
-v ./renovate/defaults.json5:/usr/src/app/renovate.json5:z \
docker.io/renovate/renovate:latest \
renovate-config-validator

Due to the nested packageRules section in golang, the auto migration is
not working correctly and caused an error for us.  This caused renovate
to propose PRs without the proper settings.
Fix the config by (hopefully) migrating correctly to the new format.
The nested packageRule is now on the same level, which should fix the
breakage.  The config validator is happy now, but I have no way of
actually testing whether this still works correctly; I guess we will
find out.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2023-07-12 11:58:14 +02:00
Chris Evich f95465c2a5
Merge pull request #146 from cevich/add_passthrough_env
Add passthrough_envars() function and test
2023-06-23 11:17:00 -04:00
Chris Evich a5fb655295
Add passthrough_envars() function and test
This function is otherwise duplicated in both buildah and podman CI,
along with its associated env. vars.  Provide it here to help limit
duplication and cover it with testing.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-22 16:11:38 -04:00
Chris Evich a2ccd7e494
Minor: Fix deprecated use of egrep
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-22 15:54:37 -04:00
Chris Evich 81fc66e54a
Attempt to fix golangci-lint management
According to multiple downstream usage logs, they "see" the `Makefile`
regex manager but fail to report any data-source details or propose a
(currently) known update.  Tweak a few things that I guess may be
affecting operations.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-22 15:39:45 -04:00
Chris Evich 172a5357a2
Merge pull request #147 from cevich/common_lib_updates
Support $SUDO setup for debian environments
2023-06-22 15:09:13 -04:00
Chris Evich adda8b1c76
Support $SUDO setup for debian environments
This is esp. used during CI VM image builds where only a user account is
available for some stages.  It's important to block `sudo apt-get` calls
from asking for user input during update/install.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-22 14:57:06 -04:00
Chris Evich 983cf6575a
Merge pull request #145 from cevich/renovate_manage_golangci_lint
Teach renovate how to manage golangci-lint versions
2023-06-21 15:30:12 -04:00
Chris Evich abcf6f4370
Teach renovate how to manage golangci-lint versions
The podman and skopeo repos. both install this tool at runtime via a
`Makefile` target.  Rather than duplicate update configurations in both
repos., do it here.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-21 15:10:21 -04:00
Chris Evich 62979df383
Merge pull request #143 from cevich/vuln_gomod_indirect_enable
Enable gomod ind. deps for vulnerabilities.
2023-06-08 18:00:49 -04:00
Chris Evich c005bb4c47
Enable gomod ind. deps for vulnerabilities.
Indirect deps are disabled by default for the `gomod` manager.
Ref:
https://docs.renovatebot.com/modules/manager/gomod/#post-update-options

Indirect deps are also broken for golang updates due to `go mod tidy`
problems. Ref:
https://github.com/renovatebot/renovate/issues/12999

However, for vulnerability related updates, perhaps we want a PR opened
anyway.  Then at least a developer is able to fixup any `go mod tidy`
related problems.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-08 12:11:46 -04:00
Chris Evich c0b7e90d1c
Merge pull request #144 from cevich/fix_missing_whitespace
Minor: Fix flake8 finding
2023-06-08 12:11:26 -04:00
Chris Evich 3816822eea
Minor: Fix flake8 finding
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-06-08 11:58:17 -04:00
Chris Evich e37b001fec
Merge pull request #135 from edsantiago/taskmap_anchor_override
task-map: handle rawhide, treadmill, "bench stuff"
2023-05-05 13:13:59 -04:00
Chris Evich 0f199e3379
Merge pull request #141 from cevich/assign_to_review
Switch assignees to reviewers + fix broken urllib3 dep.
2023-05-05 11:02:29 -04:00
Chris Evich 59a21c91f4
Fix CCIA dependencies
They're currently failing to install due to an incompatible
urllib3 nested-dependency.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-05 10:54:25 -04:00
Chris Evich 4583a89895
Minor: Switch assignees to reviewers
For CI VM Image update PRs, add me as a reviewer instead of assignee.
This is more appropriate to the action needed for these PRs.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-05 10:43:14 -04:00
Chris Evich bc78af7371
Merge pull request #142 from cevich/rm_bench_stuff
Remove disused bench-stuff script & tests
2023-05-05 10:37:14 -04:00
Chris Evich 68f51fc116
Remove disused bench-stuff script & tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-05 10:32:23 -04:00
Chris Evich 368147bae7
Merge pull request #140 from cevich/fix_ci_vm_img_update
Simplify renovate CI VM image updates
2023-05-02 15:49:34 -04:00
Chris Evich 27e2dc2bea
Simplify renovate CI VM image updates
There's no reason to capture major/minor/patch components (and they may
be unhelpful/confusing/broken in some cases).  Simplify the setup to
use "autoReplaceStringTemplate" to guide the replacement, and "loose"
versioning to tell renovate to do a simple value sort on the github-tags.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-02 15:34:49 -04:00
Chris Evich 53c909b9de
Merge pull request #139 from cevich/ci_vm_updates_anytime
Allow Renovate CI VM Updates at any time
2023-05-02 12:07:51 -04:00
Chris Evich f6ffe2b535
Allow Renovate CI VM Updates at any time
Otherwise this would be limited to the global default: once per week.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-05-02 10:53:12 -04:00
Chris Evich 6e917d6f03
Merge pull request #138 from cevich/remove_ci_vm_compat
CI VM OS updates as 'major' vs 'compatibility'
2023-05-01 11:30:02 -04:00
Chris Evich 08d932a1d4
CI VM OS updates as 'major' vs 'compatibility'
Prior to this commit, a major OS update (e.g. F37 -> F38) was flagged to
renovate as an 'incompatible' change to propose.  This was done to allow
additional manual testing and scrutiny.  However, now that CI VM update
PRs are directly assigned to me, I can directly keep a close eye on
them.  Including when the OS name strings need to be updated.  Further,
by adjusting the major/minor/patch labeling of the various image
"version" components, they will better represent the size of the changes.
This is reflected in the titles of the PRs opened by renovate.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-27 13:40:55 -04:00
Chris Evich 1e0ff5ac17
Merge pull request #136 from cevich/cevich_ci_vm_images
Assign CI VM Image update PRs
2023-04-24 14:19:49 -04:00
Chris Evich 75156208dd
Ensure CI VM Image updates PRs are assigned.
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-24 14:09:05 -04:00
Chris Evich 41795aac2e
Revert "Remove golang rollback PRs workaround"
This reverts commit 1182675918.
2023-04-24 10:30:04 -04:00
Ed Santiago e5417ea731 task-map: hardcode in a few more only-ifs
Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-04-24 06:24:41 -06:00
Ed Santiago 4e6b89ac8b cirrus-task-map: explicit YAMLs always override anchors
When processing YAML anchors, never override an already-set value.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2023-04-24 06:02:37 -06:00
Chris Evich bd25741ea3
Merge pull request #134 from cevich/remove_rollback_workaround
Remove golang rollback PRs workaround
2023-04-21 11:07:17 -04:00
Chris Evich 1182675918
Remove golang rollback PRs workaround
The original discussion about this has been closed.  At the time, I
believe I remember seeing a bugfix go through in the renovate
change-logs.  In any case, it seems [rollback PRs are not working
correctly](https://github.com/containers/podman/issues/18139#issuecomment-1517532310).
Remove the workaround and enable rollbackPRs by default for golang.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-04-21 10:58:16 -04:00
Chris Evich 394eeb9da7
Preserve python sem-ver ranges
Also, simplify the same setting for golang.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-28 11:39:55 -04:00
Chris Evich 7861f60698
Fix preserveSemverRanges overriding rangeStrategy
The `preserveSemverRanges` preset is equivalent to a global package rule
that sets rangeStrategy=replace.  This takes precedence over the
language-specific option, in particular rust's `bump` strategy.  Fix
this by removing the preset, and incorporating the package-rule under
the only other configured language (golang).

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-28 10:49:32 -04:00
Chris Evich 6c7ab6cd3b
Update dep range and lock for rust
This mirrors the behavior of Dependabot.  Ref:
https://docs.renovatebot.com/configuration-options/#rangestrategy

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-27 16:07:44 -04:00
Chris Evich c038bce8c6
Merge pull request #133 from cevich/bench_stuff_dates
Bench_stuff: Several minor fixups
2023-03-17 12:35:26 -04:00
Chris Evich 4a63655328
Bench_stuff: Several minor fixups
* Fix a verbose message referencing a prior-implementation collection
  name (type).
* Rename the timestamp field to `occasion` since IMHO that's a more
  meaningful representation of its identity.
* Increase the expiration time for new entries to 180 days since IMHO
  that better reflects their useful lifetime.
* Include the HEAD commit-ID with benchmark meta-data
* Bump the schema version to 3 due to field name changes.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-17 11:52:19 -04:00
Chris Evich e419343eb4
Merge pull request #132 from cevich/tweak_verbose
Tweak verbose messages
2023-03-09 11:59:02 -05:00
Chris Evich 57f1c46889
Tweak verbose messages
The script was overly verbose in printing raw-data and less verbose
regarding what it was actually doing.  Improve this by disabling some
messages and adjusting others.

Also fix the dry-run integration test.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-09 11:47:33 -05:00
Chris Evich ac6b0d5ed0
Merge pull request #131 from cevich/bench_stuff_fixups
Update bench_stuff schema
2023-03-08 16:58:59 -05:00
Chris Evich 646fdac890
Fix bench_stuff installer script
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-08 16:26:59 -05:00
Chris Evich 36af60a819
Update bench_stuff schema
Move metadata values from a nested structure to the top level of the
document.  This makes both indexing and querying the data points
simpler.  Also add an expiry metadata field recording a date/time
30 days in the future.
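A 30-days-out expiry timestamp like the one described can be computed with GNU `date` (a sketch only; the actual bench_stuff tool and its field names are not shown here, and GNU date semantics are assumed):

```shell
# Sketch: record an expiry timestamp 30 days in the future (GNU date).
now_secs=$(date -u +%s)
expiry_secs=$(date -u -d "+30 days" +%s)
expiry_iso=$(date -u -d "@$expiry_secs" +%Y-%m-%dT%H:%M:%SZ)
echo "expiry: $expiry_iso"
```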

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-08 16:26:59 -05:00
Chris Evich a2c7b99e2e
Fix name firebase -> firestore
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-08 15:31:14 -05:00
Chris Evich bbd4a0a1f2
Merge pull request #129 from cevich/bench_stuff
[WIP] Add tool for handling podman benchmark data
2023-03-08 13:53:25 -05:00
Chris Evich aa4ccb1e98
Add tool for handling podman benchmark data
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-08 11:24:14 -05:00
Chris Evich 63703d3191
Merge pull request #130 from cevich/fix_digest_schedule
[CI:DOCS] Attempt to fix ineffective digest schedule
2023-03-01 13:40:57 -05:00
Chris Evich 088ecd39f7
Attempt to fix ineffective digest schedule
Ref: https://github.com/containers/skopeo/issues/1926

According to the PR descriptions linked in the above issue:

    Schedule: Branch creation - At any time

Attempt to impose the intended monthly schedule by disambiguating the
digest-update package-rule under the golang-manager configuration.  I
have no idea if this will actually resolve the underlying problem, but
it does lead to a more organized configuration.
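Disambiguating the digest rule under the golang manager might look like this (a hypothetical sketch; `gomod` as the manager and the exact schedule phrase are assumptions):

```json5
// Hypothetical sketch: pin digest updates to a monthly schedule by
// scoping the rule to the go module manager, so a more general rule
// cannot shadow it.
{
  packageRules: [
    {
      matchManagers: ["gomod"],
      matchUpdateTypes: ["digest"],
      schedule: ["before 4am on the first day of the month"],
    },
  ],
}
```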

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-03-01 13:35:17 -05:00
Chris Evich fada0fa488
Merge pull request #128 from cevich/default_common
Centralize c/common version retraction
2023-02-13 09:19:17 -05:00
Chris Evich a776353038
Centralize c/common version retraction
Two versions released on accident have been retracted but renovate
version retraction isn't working right.  Centralize a package rule
to ignore the "bad" versions for all repos.
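A centralized ignore rule of this kind might be sketched as follows (the version pattern below is a placeholder, not the actually retracted versions):

```json5
// Hypothetical sketch: ignore retracted c/common versions for all
// repos, since renovate does not reliably honor go.mod retractions.
// The version regex is a placeholder.
{
  packageRules: [
    {
      matchPackageNames: ["github.com/containers/common"],
      allowedVersions: "!/^v9\\.9\\.9$/",
    },
  ],
}
```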

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-02-13 09:02:13 -05:00
Chris Evich 881ffc3ad5
Merge pull request #127 from cevich/manage_ci_vm_images
Manage CI VM image updates with renovate
2023-01-27 11:25:10 -05:00
Chris Evich 8c9402f8b3
Manage CI VM image updates with renovate
This complements the date/time-based versioning implemented
in https://github.com/containers/automation_images/pull/247

Once merged, these changes will result in renovate opening PRs
for tagged CI VM image commits in containers/automation_images.
Renovate will ignore any existing configurations using the old
`$CIRRUS_BUILD_ID` based *IMAGE_SUFFIX* values.

Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-27 10:56:16 -05:00
Chris Evich 8ff4776dfd
Merge pull request #126 from cevich/remove_workaround_16715
Remove upstream issue 16715 workaround
2023-01-10 10:20:51 -05:00
Chris Evich ddd1bae263
Remove upstream issue 16715 workaround
Signed-off-by: Chris Evich <cevich@redhat.com>
2023-01-09 15:48:09 -05:00
Chris Evich 5cf038f327
Merge pull request #125 from containers/renovate/actions-upload-artifact-3.x
[skip-ci] Update actions/upload-artifact action to v3.1.2
2023-01-09 14:27:26 -05:00
renovate[bot] c1bc95c88b
[skip-ci] Update actions/upload-artifact action to v3.1.2
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2023-01-06 16:55:52 +00:00
Chris Evich cbaa773fc3
Merge pull request #121 from cevich/add_err_function
GHA: Shadow common-lib die/warn/dbg functions
2022-11-29 12:03:29 -05:00
Chris Evich b62d664926
GHA: Shadow common-lib die/warn/dbg functions
Github-action workflow runs can consume additional output "sugar" values
and use them to annotate output in the UI.  When using the github
library, shadow the common-lib functions to add in this extra metadata.
Also, fix use of the debug calls so they are useful whether or not
ACTIONS_STEP_DEBUG is set.
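Shadowing the common-lib functions as described could look something like this (a minimal sketch; the function names match the commit subject, but the actual library implementation differs — the `::warning::`/`::error::`/`::debug::` prefixes are standard GitHub Actions workflow commands):

```shell
# Sketch: shadow common-lib die/warn/dbg so GitHub's UI renders the
# messages as annotations via workflow commands.
warn() { echo "::warning::$*"; }
die()  { echo "::error::$*"; exit 1; }
dbg()  { echo "::debug::$*"; }

warn "something looks off"
```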

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-22 13:44:05 -05:00
Chris Evich 467932a357
Merge pull request #124 from edsantiago/task_map_tweaks
task-map: many fixes from last few months
2022-11-10 10:26:23 -05:00
Ed Santiago 3e3387fc97 task-map: many fixes from last few months
- hardcoded special cases for new 'only_if' conditions

- more hardcoded colors for names in common

- display expanded task names if they have a dollar sign.
  Useful for knowing what OS (f37) is used on minikube
  and remote-aarch64-sys, because those aren't matrices

- use custom deep_merge when handling YAML aliases; needed
  because some YAML '<<'s are deep, so a shallow copy
  loses important settings

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-11-09 18:13:19 -07:00
Chris Evich 8746065b3a
Don't let renovate rebase by default.
On a busy repo, automatic-rebasing will swamp the CI system.
Turn it off here, then allow individual repos. to override/enable
it as appropriate.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-07 15:21:51 -05:00
Chris Evich d16c2bf941
Merge pull request #123 from cevich/gha_noci
Teach renovate to [skip-ci] on github-action deps
2022-11-07 15:15:03 -05:00
Chris Evich 9f208b5cd6
Teach renovate to [skip-ci] on github-action deps
Github-action updates cannot consistently be tested in a PR.
This is caused by an unfixable architecture flaw: execution
context always depends on the trigger, and we obviously can't know
that ahead of time for all workflows.  Abandon all hope and
mark github-action dep. update PRs '[skip-ci]'.
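Marking these PRs could be done with a rule like the following (a hedged sketch; the exact preset contents are not shown in this log):

```json5
// Hypothetical sketch: prefix github-actions update commits/PR titles
// with "[skip-ci]" so CI does not attempt to test them.
{
  packageRules: [
    {
      matchManagers: ["github-actions"],
      commitMessagePrefix: "[skip-ci]",
    },
  ],
}
```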

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-11-07 15:13:00 -05:00
Chris Evich 98ebefeea1
Merge pull request #122 from containers/renovate/actions-upload-artifact-3.x
Update actions/upload-artifact action to v3.1.1
2022-11-02 10:58:07 -04:00
renovate[bot] 9053f79f37
Update actions/upload-artifact action to v3.1.1
Signed-off-by: Renovate Bot <bot@renovateapp.com>
2022-10-24 03:19:59 +00:00
Chris Evich 62b9196f35
Merge pull request #120 from cevich/de-nest
Separate out packageRules from golang config object
2022-10-19 10:44:27 -04:00
Chris Evich ffb31fde7b
Separate out packageRules from golang config object
This was recommended by the upstream Renovate community:
"Nesting packageRules under language objects can have unintended
consequences".

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-19 10:30:29 -04:00
Chris Evich 5245367ad4
Merge pull request #119 from cevich/fix_set_output
Update github lib to new output standard
2022-10-18 16:13:42 -04:00
Chris Evich 4521139d0f
Update github lib to new output standard
ref:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-output-parameter

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-18 16:01:05 -04:00
Chris Evich 169064aef8
Merge pull request #118 from cevich/renovate_config_validation
Add renovate config validation check
2022-10-18 15:51:46 -04:00
Chris Evich c8fc0c9247
Add renovate config validation check
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-18 15:38:53 -04:00
Chris Evich 56579d1750
Include generally applicable go rules in preset
This avoids having to copy-paste them across all repos.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-18 15:01:42 -04:00
Chris Evich 96b9192fdc
Merge pull request #117 from cevich/more_prs
[CI:DOCS] Update default renovate config schedule
2022-10-17 15:07:47 -04:00
Chris Evich bc50f835e5
Support [CI:DOCS] magic string
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-17 15:05:51 -04:00
Chris Evich 203c9e3b0a
Add validation comment
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-17 15:05:51 -04:00
Chris Evich 75862d43aa
Update default renovate config schedule
Previously update PRs were limited to one per hour, on Mondays from
midnight to 3am.  Since there could easily be more than 3 updates
needed, increase the time window to 10 hours per Monday.

Also, add an extra `security` label to vulnerability alerts to help make
them stand out even more.
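The widened window and extra label might be expressed as (a sketch only; the timezone and exact schedule phrase are assumptions based on the commit timestamps):

```json5
// Hypothetical sketch of the widened Monday window and security label.
{
  timezone: "America/New_York",
  schedule: ["after 12am and before 10am on Monday"],
  prHourlyLimit: 1,
  vulnerabilityAlerts: {
    labels: ["dependencies", "security"],
  },
}
```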

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-17 15:05:51 -04:00
Chris Evich 6806a5d8f7
Merge pull request #116 from cevich/explicit_vuln
[CI:DOCS] Explicit timezone and vulnerability scheduling
2022-10-17 12:09:44 -04:00
Chris Evich 0fa6031d53
Explicit timezone and vulnerability scheduling
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-17 12:04:30 -04:00
Chris Evich 379b197a0c
Update preset docstring
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 14:28:08 -04:00
Chris Evich c9a8e43c5d
Make renovate use existing 'dependencies' label
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 14:24:58 -04:00
Chris Evich 739eb91b78
Merge pull request #115 from containers/renovate/configure
Redirect default JSON config to path'd JSON5 config
2022-10-12 14:20:00 -04:00
Chris Evich 6b3f5ff3c7
Redirect default JSON config to path'd JSON5 config
JSON5 is far more readable, but cannot be used for the default.
Workaround this by referencing a JSON5 preset from the default
JSON preset.
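The redirection might look like this (a hypothetical sketch; the repo path and file names below are assumptions — renovate's `github>owner/repo//path` preset syntax is used):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["github>containers/automation//renovate/defaults.json5"]
}
```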

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 14:19:08 -04:00
Chris Evich d3c8422700
Convert default renovate preset to standard JSON
According to the logs, renovate will not accept a JSON5-formatted default
preset; it must be JSON.

Fixes: #114

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:56:17 -04:00
Chris Evich 2a7f26ad53
Fix renovate preset not checking .json5 files
According to the logs, only .json files are checked.  Add an explicit
reference to the `default.json5` file.

Fixes: #114

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:52:17 -04:00
Chris Evich c4f89407ff
Fix default renovate config
After multiple attempts, it seems renovate doesn't like the default
preset file to be located anywhere other than the root of the repo.
Move it there.

Fixes: #114

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:47:01 -04:00
Chris Evich 422ce67d75
De-parameterize default renovate config preset
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:42:59 -04:00
Chris Evich 00f6c29ac2
Attempt again to fix renovate preset config.
See #114

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:38:11 -04:00
Chris Evich 97a8d96277
Attempt to fix #114
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:36:28 -04:00
Chris Evich 16faedda61
Merge pull request #113 from cevich/update_renovate
[skip-ci] Update renovate
2022-10-12 13:28:24 -04:00
Chris Evich 4fad69c4be
Setup and use a default renovate preset
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 13:25:52 -04:00
Chris Evich 5eeb0fe171
Update Renovate inline documentation
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 12:51:35 -04:00
Chris Evich 13d4024e81
Rename renovate config
JSON does not allow comments.  JSON5 does.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-12 12:45:09 -04:00
Chris Evich fd88ae5ae0
Merge pull request #112 from cevich/renovate_tweaks
[skip ci] Renovate configuration tweaks
2022-10-05 15:42:22 -04:00
Chris Evich 4d2cb35dfc
Renovate configuration tweaks
Remove grouping, open separate PRs for each update.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-10-05 15:40:28 -04:00
Chris Evich e66e6fafaa
Merge pull request #111 from containers/renovate/configure
Simplify + Further document config.
2022-09-28 13:50:12 -04:00
Chris Evich 76e6acc97c
Simplify + Further document config.
Remove some experimental settings, retain only those relevant to this
repository.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-28 12:25:06 -04:00
Chris Evich ff3aab803f
Merge pull request #110 from cevich/fix_unit_test
Rename GHA jobs to remove ambiguity
2022-09-28 11:09:51 -04:00
Chris Evich 85a6688a4e
Merge pull request #109 from containers/renovate/all
[CI:BUILD] Update all dependencies
2022-09-28 10:49:59 -04:00
Chris Evich afa597d2ab
Rename GHA jobs to remove ambiguity
There are several jobs called `unit-tests`.  Rename them with a prefix
to clarify exactly what they're testing.  Also, add a run of the
(renamed) `automation_unit-tests` to happen at branch-push time.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-28 10:41:46 -04:00
Chris Evich c87ad16664
Merge pull request #108 from containers/renovate/configure
Renovate: Limit CI to builds
2022-09-27 16:31:12 -04:00
Chris Evich 75e4d3ed4f
Renovate: Limit CI to builds
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-27 16:16:29 -04:00
renovate[bot] 9ea4519afa
[skip ci] Update all dependencies 2022-09-27 19:59:01 +00:00
Chris Evich fd707ba823
Merge pull request #106 from containers/renovate/configure
[skip ci] Renovate: Test without monthly schedule
2022-09-27 15:58:46 -04:00
Chris Evich 939fe05553
Renovate: Test without monthly schedule
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-27 15:47:26 -04:00
Chris Evich 7c98d54184
Merge pull request #103 from containers/renovate/configure
[skip ci] Configure Renovate
2022-09-27 13:11:13 -04:00
Chris Evich cd7a142baf
Configure renovate bot for this repo.
Also add a readme file with links relevant to configuration.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-27 13:00:28 -04:00
Chris Evich 6cba956155
Merge pull request #104 from cevich/fix_win_container
Add parser support for windows container tasks
2022-09-22 14:53:17 -04:00
Chris Evich d9fc524072
Add parser support for windows container tasks
Also fix attempts to expand `$PATH`, this variable is handled specially
by Cirrus-CI so it can just be ignored.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-09-22 14:42:04 -04:00
renovate[bot] 27b353ce86
Add renovate.json 2022-09-22 12:58:16 -04:00
Ed Santiago fccddf1ce0
Merge pull request #102 from edsantiago/task_map_update
cirrus-task-map: various updates
2022-07-05 11:12:08 -06:00
Ed Santiago 8f15a04151 node labels: readability: spaces, not underscores
convert underscores to spaces (e.g., foo_test -> foo test)

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-07-05 10:48:44 -06:00
Ed Santiago f9a00a0876 task-map: override colors for some tasks
...using same colormap as github-ci-highlight.user.js

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-07-05 10:01:18 -06:00
Ed Santiago 0b97dd7a6c task-map: special-case the long swagger conditional
Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-07-05 10:01:15 -06:00
Ed Santiago 7fa5258631 task-map: indicate trigger type if non-null
...e.g., ' (TRIGGER: MANUAL)'

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-07-05 09:48:40 -06:00
Chris Evich a1010972fb
Merge pull request #101 from cevich/fix_ec2_instance
Add support for ec2_instance task type
2022-06-29 13:18:58 -04:00
Chris Evich 3426cb890d
Add support for ec2_instance task type
Also update test data to confirm EC2 instance task parsing.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-29 13:03:23 -04:00
Chris Evich 26f565c564
Merge pull request #100 from cevich/update_reqs
Fix installer failures + update ccia requirements + use exec in wrapper
2022-06-27 12:34:28 -04:00
Chris Evich b23f06d916
Minor: Fix NATIVE_GOARCH command for test
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-27 12:25:52 -04:00
Chris Evich 49d322750a
Minor: Update ccia requirements + use exec in wrapper
Importantly, this adds instructions on how to update the requirements
file and confirm it's working.

Also add a flake8 check for CI, updating code to satisfy all findings.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-27 12:25:52 -04:00
Chris Evich b21c51cf1f
Fix installer exiting on component failure
When installing components other than `common`, the install script
chains to `.install.sh` files within the component subdirectories.
However, this was not done in a `set -e` environment possibly hiding
failures.  Fix this by making the component install scripts executable
and `set -eo pipefail`.
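The chaining pattern described can be sketched as follows (a minimal, self-contained illustration with a throwaway component directory; the real installer layout differs):

```shell
# Sketch: chain to a per-component installer that begins with
# `set -eo pipefail`, so its failures propagate to the caller
# instead of being silently ignored.
set -eo pipefail
tmp=$(mktemp -d)
mkdir -p "$tmp/somecomp"
cat > "$tmp/somecomp/.install.sh" <<'EOF'
#!/bin/bash
set -eo pipefail
echo "installing somecomp"
EOF
chmod +x "$tmp/somecomp/.install.sh"
out=$("$tmp/somecomp/.install.sh")   # a failure inside now aborts the caller
echo "$out"
rm -rf "$tmp"
```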

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-24 13:58:50 -04:00
Chris Evich f774ca2aa2
Merge pull request #99 from cevich/use_artifact_api
Update cirrus-ci_artifacts to use download API
2022-06-23 16:21:20 -04:00
Chris Evich 48ab491cc6
Update cirrus-ci_artifacts to use download API
In the future, task artifacts may not all come from the same cloud.
Instead of downloading directly from a GCS bucket, use the Cirrus-CI
REST API to download artifacts.  Also update the tests and
documentation.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-23 15:55:37 -04:00
Chris Evich 25056207c3
Merge pull request #98 from cevich/fix_docs
Fix installer download URL
2022-06-14 16:33:23 -04:00
Chris Evich 4ccd41a24a
Fix installer download URL
The github annotated URL was referenced instead of the "raw" download
URL.  Fix it.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-14 16:15:09 -04:00
Chris Evich 52caed19d9
Merge pull request #97 from cevich/fix_grep_error
Fix overly inclusive error check
2022-06-14 13:36:21 -04:00
Chris Evich ccedf33056
Fix overly inclusive error check
Should a GraphQL query not satisfy the schema, the server will
frequently return a JSON formatted error message.  The
cirrus-ci_retrospective library checks for this by using a naive
`grep` command.  However, this could mistakenly trigger on the `error`
term naturally appearing in a non-error/valid response.  Remove this
check.  Instead, enhance the existing `filter_json()` checks by
causing jq to exit non-zero for invalid JSON replies.  Assume callers
of cirrus-ci_retrospective will be sensitive to an error reply from
the server, and handle it accordingly.
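The jq behavior relied upon can be demonstrated directly (the actual `filter_json()` implementation is not shown here; this only illustrates that `jq -e` exits non-zero on a non-JSON reply, requires jq installed):

```shell
# Sketch: let jq itself reject a non-JSON reply instead of grepping
# the text for the word "error" (which can appear in valid responses).
command -v jq >/dev/null || { echo "jq required for this demo"; exit 1; }

if echo '{"data": {"ok": true}}' | jq -e . >/dev/null; then
    good_rc=0                        # valid JSON: jq exits zero
fi
if ! echo '<html>502 Bad Gateway</html>' | jq -e . >/dev/null 2>&1; then
    bad_rc=1                         # parse error: jq exits non-zero
fi
echo "valid reply rc=$good_rc, invalid reply detected=$bad_rc"
```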

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-06-14 13:12:33 -04:00
Chris Evich d7bf502421
Merge pull request #96 from cevich/fix_debug
Fix use of overly generic DEBUG env. var.
2022-04-20 14:04:11 -04:00
Chris Evich 7dfa5d11e4
Fix use of overly generic DEBUG env. var.
It was an unfortunate mistake to name this variable as such.  It was
observed to collide with other non-conforming usages downstream,
especially leading to some difficult-to-debug situations, such as
https://github.com/containers/podman/issues/13932  The common
automation library is used far and wide by many environments, which
unfortunately may also rely on a generic `$DEBUG`.  Fix the issue here
by renaming the variable.

Let this serve as a warning to all downstream, everywhere: ***Avoid all
use of similar generic variable names, make them context-specific!***

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-04-20 13:47:11 -04:00
Chris Evich 48039fba21
Merge pull request #95 from cevich/export_vars
Make /etc/automation_environment easier to consume
2022-03-29 12:06:51 -04:00
Chris Evich 345ede04c5
Make /etc/automation_environment easier to consume
In nearly all use-cases, users of the automation libraries/tools need to
both load and export variables in this file.  Otherwise loading
additional libraries (which depend on the variables) will fail.  Make
this easier on downstream users by exporting important variables in-line
in this file.
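The inline-export pattern can be sketched like so (paths are illustrative, not the real install locations; a temp file stands in for `/etc/automation_environment`):

```shell
# Sketch: an environment file that exports inline, so a single
# `source` of it is enough for callers to load subsequent libraries.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
export AUTOMATION_LIB_PATH="/usr/local/share/automation/lib"
EOF
source "$envfile"
echo "$AUTOMATION_LIB_PATH"
rm -f "$envfile"
```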

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-03-29 11:29:34 -04:00
Chris Evich f5c59a92ef
Merge pull request #94 from cevich/fix_python
Fix cirrus-ci_retrospective image missing python3
2022-03-04 12:13:54 -05:00
Chris Evich d71575858d
Fix cirrus-ci_retrospective image missing python3
Signed-off-by: Chris Evich <cevich@redhat.com>
2022-03-04 12:05:01 -05:00
Chris Evich 9431fef2bb
Merge pull request #93 from cevich/fix_bp_arches
Fix automatic inclusion of local arch
2022-02-22 18:29:54 -05:00
Chris Evich ba22c13a54
Fix automatic inclusion of local arch
When executing build-push.sh with the `--arches=*` option, the script
automatically includes the local architecture by default.  This may
be counter-intuitive and is contrary to the documentation for this
option.  Worse, if the local architecture is specified as an
`--arches` argument, buildah will build TWO images for it. Fix this
by excluding the default local-arch value when processing the
`--arches` command-line option.
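The fix's logic can be sketched as follows (variable names and the example `--arches` value are assumptions, not build-push.sh's actual code):

```shell
# Sketch: when --arches is given, the user's list replaces the implicit
# local-arch default, so the local arch is built only if requested and
# never twice.
local_arch=$(uname -m)
default_arches="$local_arch"          # implicit default: build for local arch
user_arches="amd64 ppc64le"           # hypothetical --arches argument

if [ -n "$user_arches" ]; then
    arches="$user_arches"             # user list replaces the default
else
    arches="$default_arches"
fi
echo "$arches"
```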

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-02-22 18:19:16 -05:00
Chris Evich 90aaa537f4
Merge pull request #90 from edsantiago/taskmap_new_podman_rules
task-map: handle two new podman conditions
2022-02-02 16:27:45 -05:00
Ed Santiago 864c5c9b5f task-map: handle two new podman conditions
CI:BUILD and the new 'release|bump' check

Signed-off-by: Ed Santiago <santiago@redhat.com>
2022-02-02 14:20:02 -07:00
Chris Evich be130dbaa1
Merge pull request #91 from cevich/fix_gql
Fix cirrus-ci_artifacts testing in CI
2022-02-02 13:59:28 -05:00
Chris Evich f926f3b540
Fix cirrus-ci_artifacts testing in CI
Recently gql released version 3.0 with significant changes to its
dependencies.  Rebuild the `requirements.txt` to account for them.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-02-02 13:51:22 -05:00
Chris Evich df7e562b2e
Merge pull request #89 from cevich/fix_exports
Fix exports
2022-01-20 12:05:27 -05:00
Chris Evich c564a9ed9b
Fix $ARCHES export + special-chars in $BUILD_ARGS
It's necessary for downstream usage that both a --modcmd and --prepcmd
script have access to the parsed value from --arches.  Prior to this
commit, the value was stored in an array.  Unfortunately, many versions
of bash do not yet support export of array variables.  This commit
converts it into a simple space-separated string and adds a test to
confirm.
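The array-to-string conversion can be demonstrated in a few lines (variable names here are illustrative; `ARCHES` matches the commit subject):

```shell
# Sketch: bash cannot export array variables to child processes, so
# flatten the parsed --arches array into a space-separated string.
arches_array=(amd64 arm64 s390x)
export ARCHES="${arches_array[*]}"    # flattened; visible to children
child_sees=$(bash -c 'echo "$ARCHES"')
echo "$child_sees"
```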

While fixing this, it was discovered that the handling of \$BUILD_ARGS
was also wrong.  Both quoting and any use of embedded special characters
(like whitespace) would not be preserved.  It was also marked as a
variable exported to --prep/--modcmd, which is not possible.  Fix
this by removing it from the export list, and tweak the processing and
use of \$BUILD_ARGS in general.  Add a test to confirm proper handling
of both quotes and special characters.

Lastly, some seemingly unrelated changes were made to simplify the
addition of the above tests.

Signed-off-by: Chris Evich <cevich@redhat.com>
2022-01-20 11:57:31 -05:00
Chris Evich 69671f9d10
Switch to using a dedicated/specific buildah image
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-12-17 16:49:27 -05:00
Chris Evich f4f1069923
Merge pull request #88 from cevich/update_build-push_docs
[CI:DOCS] Update readme's
2021-09-23 15:19:17 -04:00
Chris Evich cd13b74be4
[CI:DOCS] Update readmes
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-09-23 15:11:10 -04:00
Chris Evich 8edb596ab2
Merge pull request #87 from cevich/codespell
Mass spelling/typo fix
2021-09-13 14:41:02 -04:00
Chris Evich 979968704e
Mass spelling/typo fix
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-09-13 14:24:48 -04:00
Chris Evich 4f304babfc
Merge pull request #84 from cevich/build-push
[WIP] Multi-arch build-push helper script
2021-09-13 11:17:40 -04:00
Chris Evich de236cdc47
Script for multi-arch parallel image build + push
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-09-13 10:54:43 -04:00
Chris Evich 118c39e3e7
Add commonly used platform definitions
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-24 15:50:36 -04:00
Chris Evich efcdd1b74e
Merge pull request #86 from cevich/fix_master_ref
Fix master branch reference
2021-08-23 15:51:26 -04:00
Chris Evich 267df4f115
Fix master branch reference
The branch was renamed long ago, but this change was missed because the
only check using it was negative.  Fix it so it's easier to understand
that the task should only run in PRs.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-23 15:41:21 -04:00
Chris Evich 51c2e503a5
Merge pull request #85 from cevich/fix_artifacts_testing
Fix running tests for cirrus-ci_artifacts
2021-08-20 13:40:10 -04:00
Chris Evich dc835adf05
Fix running tests for cirrus-ci_artifacts
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-20 13:21:55 -04:00
Chris Evich 03d3f9518f
Merge pull request #83 from cevich/kill_cirrus_asr
Remove disused/unmaintained tools
2021-08-16 10:57:27 -04:00
Chris Evich a13e4b9f15
Remove unmaintained/disused ephemeral_gpg tool
This tool was handy/useful for securely containing a gpg environment.
It was originally intended to assist in automated releases of podman,
but that effort was abandoned due to implementation time and difficulty.
Since the tool is security-related and unmaintained, it's safer to
remove it to avoid giving anyone the wrong impression of its
status.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-16 10:00:07 -04:00
Chris Evich a1dff1c110
Remove unmaintained/disused cirrus-ci_asr tool
This tool was handy/useful for monitoring the status of a running build
from the command-line.  However, to my knowledge it's not actually been
used in quite a while.  Recently dependabot alerted on a security update
for the websockets python module, but fixing it broke CI.  Simply
remove the script instead.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-16 09:59:47 -04:00
Chris Evich 613728782b
Merge pull request #82 from cevich/stderr_perm_denied
Fix /dev/stderr: permission denied
2021-08-12 17:12:27 -04:00
Chris Evich d83a035f58
Fix /dev/stderr: permission denied
Under some execution contexts (i.e. `sudo`), under some flavors of bash,
these special device files may not be accessible.  Refrain from using
them during the install process.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-08-12 17:09:04 -04:00
Chris Evich d2192b5756
Merge pull request #79 from cevich/fix_links
[CI:DOCS] Fix docs links due to branch rename
2021-06-14 15:19:09 -04:00
Chris Evich b07ac29cde
Fix docs links due to branch rename
Ref: https://github.com/containers/common/issues/549

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-06-11 10:43:32 -04:00
Chris Evich 6f59dd347f
Merge pull request #77 from cevich/minor
Minor/cosmetic master->main workflow update
2021-05-14 11:45:32 -04:00
Chris Evich c786cba1f3
Minor/cosmetic master->main workflow update
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-05-14 11:40:46 -04:00
Chris Evich 6f0e750b90
Merge pull request #76 from cevich/fix_old_master_refs
Update prior installer versions def. branch
2021-05-14 11:32:03 -04:00
Chris Evich d07b34acc8
Update prior installer versions def. branch
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-05-14 11:28:52 -04:00
Chris Evich c3ae42ad03
Merge pull request #75 from cevich/master_to_main
Handle update from master -> main
2021-05-14 10:50:29 -04:00
Chris Evich ed5bf885d6
Handle update from master -> main
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-05-14 10:43:08 -04:00
Chris Evich ba57b00378
Merge pull request #70 from cevich/fix_osx_again
Fix bug caused by prior commit
2021-04-16 13:02:13 -04:00
Chris Evich b0061236aa
Fix bug caused by prior commit
Also update tests accordingly

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-04-16 12:57:47 -04:00
Chris Evich 86033d4605
Merge pull request #69 from cevich/fix_deprecated_osx_instance
Fix deprecated "osx_instance" type
2021-04-16 11:47:25 -04:00
Chris Evich 2d8f88d7d2
Fix deprecated "osx_instance" type
This was replaced in Cirrus with 'macos_instance', update code
accordingly to support both.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-04-16 11:37:58 -04:00
Chris Evich ee40700dbe
Merge pull request #68 from cevich/more_bug_fixes
Fix traceback when task or matrix has no env.
2021-03-27 10:44:39 -04:00
Chris Evich 8da19857fb
Fix traceback when task or matrix has no env.
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-03-26 14:12:14 -04:00
Chris Evich 1a55eaaac6
Merge pull request #67 from cevich/cirrus-ci_env-debugging
Cirrus-ci_env debugging and matrix-error context message
2021-03-26 13:47:42 -04:00
Chris Evich a08311041e
Show helpful message for legacy-format matrix
Initially Cirrus-CI supported using multiple 'matrix' attributes under
any other task attribute.  This requires a custom YAML parser as it
breaks compliance with the format standard.  Later/currently, standard
YAML parsing is supported by a dedicated 'matrix' attribute on the task.
This `.cirrus.yml` style is now preferred over the legacy format.

However, there are still some existing `.cirrus.yml` files in use
with the legacy format.  The `CirrusCfg` does not and will not ever
support this, but throws an unhelpful exception.  Fix this by tracking
task-parsing status, and using that to provide context in a more helpful
error message.

Also add a unit-test to verify this special-case continues to present
useful context info, going forward.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-03-26 13:31:27 -04:00
Chris Evich 3fa52f8e04
Use logging module for debugging and errors
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-03-26 11:27:13 -04:00
Chris Evich f8be3e6318
Merge pull request #66 from cevich/cirrus-ci_env-installer
Add cirrus-ci_env installer script and test
2021-03-24 13:20:07 -04:00
Chris Evich 613754c58d
Add cirrus-ci_env installer script and test
Signed-off-by: Chris Evich <cevich@redhat.com>
2021-03-24 13:14:55 -04:00
Chris Evich 6aa58cb715
Merge pull request #65 from containers/dependabot/pip/cirrus-ci_artifacts/aiohttp-3.7.4
Bump aiohttp from 3.6.2 to 3.7.4 in /cirrus-ci_artifacts
2021-03-24 12:16:08 -04:00
Chris Evich f88b97a33d
Merge pull request #64 from cevich/cirrus-ci_env
Implement cirrus-ci config task env. var. renderer
2021-03-24 11:42:44 -04:00
Chris Evich 38fd0ec35f
Implement cirrus-ci config task env. var. renderer
The current `get_ci_vm.sh` script is duplicated in each repo with
slight changes, and is therefore difficult to maintain.  This
commit adds a helper script to solve several problems blocking
development of a unified `get_ci_vm.sh`.  Given a (Cirrus-CI
validated) `.cirrus.yml`:

* What is the complete set of task names configured (including matrix
  tasks)
* What environment variables should be set for any given task.
* What kind of runtime instance should be used (i.e. VM or Container)
  for a task.
* What is the VM or container image name to use for a task

Also, add unit, integration, and system tests for this tool along with
lint and style checking.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-03-23 12:50:32 -04:00
dependabot[bot] 83c2cad5b7
Bump aiohttp from 3.6.2 to 3.7.4 in /cirrus-ci_artifacts
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.6.2 to 3.7.4.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.6.2...v3.7.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-17 14:44:07 +00:00
Chris Evich b61c105e17
Merge pull request #63 from containers/dependabot/pip/cirrus-ci_asr/aiohttp-3.7.4
Bump aiohttp from 3.6.2 to 3.7.4 in /cirrus-ci_asr
2021-03-17 10:43:45 -04:00
dependabot[bot] ce065d4896
Bump aiohttp from 3.6.2 to 3.7.4 in /cirrus-ci_asr
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.6.2 to 3.7.4.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.6.2...v3.7.4)

Signed-off-by: dependabot[bot] <support@github.com>
2021-02-26 03:07:37 +00:00
Ed Santiago 380c4e140b
Merge pull request #62 from edsantiago/dynamic-svg
Dynamic svg
2021-02-02 08:37:49 -07:00
Ed Santiago 77c830bbe4
Merge pull request #61 from edsantiago/map-show-skip
map show skip
2021-02-02 08:37:33 -07:00
Ed Santiago eb1600b47a cirrus-task-map: craft dynamic SVG
When generating SVGs, make a dynamic one that will highlight
and de-highlight SKIP and ONLY-IF tasks on cursor :hover

Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-02-01 12:58:38 -07:00
Ed Santiago 97e6f2dc4f More skip special-cases
Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-02-01 11:40:58 -07:00
Ed Santiago 39a9a0f338 add only-ifs from automation_images repo
Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-01-26 13:46:05 -07:00
Ed Santiago 04c3b08e1d task-map: show skippable tasks
Signed-off-by: Ed Santiago <santiago@redhat.com>
2021-01-26 13:46:05 -07:00
Chris Evich 21645ac2d0
Merge pull request #58 from cevich/even_more_simple_environment
Simplify global automation env. vars
2021-01-21 10:16:15 -05:00
Chris Evich 34791cc5cd
Simplify global automation env. vars
tl;dr: Helps simplify this nonsense:
https://github.com/containers/podman/pull/8046/files#diff-87b53ab90b08478b1624b8f218f6d0b6eab834f2cf81dd73581540c3a0dfb63cR9-R14

The automation libraries may be located in different places
depending on install-time options.  In order for callers to utilize the
libraries, they must first load the definition of `$AUTOMATION_LIB_PATH`.

Prior to this commit, this env. var. definition could live in one or more
locations depending upon the platform.  That arrangement added
unnecessary complexity. Fix this with a **non-backward compatible
change** such that the only file that needs to be loaded, on all
platforms, is in a well-defined location:

`/etc/automation_environment`

If needed, this filename may be changed at install time by overriding
`$INSTALL_ENV_FILEPATH`.

Signed-off-by: Chris Evich <cevich@redhat.com>
2021-01-21 10:10:20 -05:00
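The single well-known environment file described above can be sketched like this. This is a minimal illustration, not the library's actual contents; the lib path value and the demo temp file are assumptions made for the example — only `/etc/automation_environment` and `$AUTOMATION_LIB_PATH` come from the commit message.

```shell
#!/bin/bash
# Sketch: how a caller might load the automation libraries after this
# change.  The only well-known file is /etc/automation_environment,
# which defines $AUTOMATION_LIB_PATH.
set -e

# Simulate the installed environment file for this demo (the real one
# lives at /etc/automation_environment).
demo_env_file=$(mktemp)
echo 'export AUTOMATION_LIB_PATH=/usr/local/share/automation/lib' > "$demo_env_file"

# A caller sources the single well-known file, then the libraries.
source "$demo_env_file"
echo "Libraries live under: $AUTOMATION_LIB_PATH"
# source "$AUTOMATION_LIB_PATH/common_lib.sh"   # (would load the real libs)
rm -f "$demo_env_file"
```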
Chris Evich 62735fbf70
Merge pull request #60 from cevich/add_showrun
Add common showrun() function and unit-tests
2020-12-14 14:38:42 -05:00
Chris Evich 9748964ec9
Add common showrun() function and unit-tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-12-14 14:26:01 -05:00
Chris Evich a0d276053e
Merge pull request #59 from cevich/cirrus-ci_artifacts
Script to download Cirrus-CI artifacts
2020-12-08 15:07:25 -05:00
Chris Evich 1bca301ae5
Basic cirrus-ci_artifacts.py unit-tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-12-02 15:24:13 -05:00
Chris Evich 7ffc2271a5
Download task(s) artifact(s) in parallel
Trade an increase in complexity for a moderate increase in
performance when downloading many files from many tasks.

Comparison of downloading all files in all artifacts in all
tasks from a podman Cirrus-CI run (1,166 files):

Sync.:  3:22 minutes
Async.: 1:54 minutes
2020-12-02 15:24:13 -05:00
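The speedup above comes from overlapping downloads rather than fetching files one at a time. The actual tool is Python/aiohttp, but the idea can be sketched with plain shell background jobs; the URLs, file names, and the `fetch` helper here are placeholders invented for the demo.

```shell
#!/bin/bash
# Conceptual sketch of parallel artifact downloads: start every fetch
# in the background, then wait for all of them to finish.
set -e

fetch() {  # $1=url  $2=dest
    mkdir -p "$(dirname "$2")"
    # curl -sSfL "$1" -o "$2"              # a real download would go here
    sleep 0.1 && echo "fetched $1" > "$2"  # network stand-in for the demo
}

# Kick off all downloads concurrently, one per task/artifact pair.
for n in 1 2 3; do
    fetch "https://example.com/artifact$n" "task$n/artifact$n.log" &
done
wait   # block until every background fetch completes
ls task1 task2 task3
```

With real network I/O, total wall-clock time approaches the slowest single download instead of the sum of all of them.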
Chris Evich 1ffa609d39
Initial impl. - Cirrus-CI artifact downloader
Given references to a repository, GCS bucket name, and a Cirrus-CI build
ID, download artifacts from all tasks/files matching an optional regex.
Because there are likely to be overlapping artifact names and/or
filenames, the script creates directories for each task name, and each
artifact name.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-11-30 12:58:53 -05:00
Chris Evich ed872b6427
Merge pull request #57 from cevich/fix_ubuntu_release
Deduplicate Ubuntu unit tests
2020-11-02 13:06:27 -05:00
Chris Evich 513dee8e9b
Deduplicate Ubuntu unit tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-11-02 13:04:22 -05:00
Chris Evich 065d49a4d5
Merge pull request #56 from cevich/fix_ephemeral_gpg
fix ephemeral_gpg unit tests
2020-11-02 12:56:21 -05:00
Chris Evich 698fdb0263
Run unit-tests on latest ubuntu
This is necessary because Cirrus-CI only tests on Fedora, and
Ubuntu is a supported target.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-11-02 12:52:22 -05:00
Chris Evich 01c9a9d6b1
fix ephemeral_gpg unit tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-11-02 10:37:09 -05:00
Chris Evich c0eb73e1b0
Merge pull request #53 from cevich/cirrus-ci_arr
Implement helper for processing specific failed CI jobs
2020-11-02 08:58:36 -05:00
Chris Evich 571f4ac344
Merge pull request #55 from cevich/fix_ctx
Fix code-context references
2020-10-30 14:33:49 -04:00
Chris Evich ce894eed2e
Fix code-context references
Previously, for practical reasons, the unit-tests did a very poor
job of checking the code-context information shown by functions like
`die()`.  Prior to this commit, the stack-level used was too great,
resulting in unhelpful output such as:

`ERROR: blah blah blah (<stdin>:0 in ())`

Fix the stack-level and context output.  Add a helper-script
and new unit-test to confirm it does not break again in the future.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-30 14:23:42 -04:00
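The stack-level fix can be illustrated with bash's call-stack arrays. This is a sketch of the idea, not the library's actual `die()` implementation: the message format and the `oops` helper are invented for the example.

```shell
#!/bin/bash
# Sketch: report the *caller's* file/line/function, not die()'s own
# frame, by indexing one level up the bash call stack.
die() {
    local msg=$1
    # Frame 0 of FUNCNAME is die() itself; frame 1 is whoever called it.
    # BASH_LINENO[0] is the line number of that call site.
    local src=${BASH_SOURCE[1]:-stdin}
    local line=${BASH_LINENO[0]:-0}
    local func=${FUNCNAME[1]:-main}
    echo "ERROR: $msg ($src:$line in $func())" >&2
    return 1
}

oops() { die "blah blah blah"; }
oops || true   # prints the ERROR line with oops()'s context
```

Using the wrong stack index is exactly what produced the unhelpful `(<stdin>:0 in ())` output quoted in the commit message.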
Chris Evich 585c5da3c5
Implement helper for processing CI task statuses
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-29 12:02:51 -04:00
Chris Evich 95c2a7c6fa
Merge pull request #54 from cevich/update_ci_container
Update CI Container packages + export env. vars.
2020-10-29 10:48:45 -04:00
Chris Evich c557224dba
Update CI Container packages + export env. vars.
As of the release of Fedora 33, the FindBin perl module is now provided
by a dedicated package.  Previously it was part of the perl-interpreter
package.

For some unit-tests, it may become necessary in the future to utilize
Cirrus-CI env. vars. if they are set.  Ensure the top-level runner
exports them when they are available.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-29 10:46:50 -04:00
Chris Evich 19143aeb7b
Merge pull request #52 from cevich/dedicated_ci_container
Use dedicated CI container
2020-10-26 10:36:43 -04:00
Chris Evich d2f7dc9edf
Use dedicated CI container
Specifically, running tests for `cirrus-task-map` requires some perl
modules which are not needed in the `cirrus-ci_retrospective`
container (formerly used for CI).  Having a dedicated container
means the package-set can be specialized for testing needs.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-23 14:34:16 -04:00
Chris Evich fbfefa7361
Unconditionally test cirrus-task-map
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-23 12:35:34 -04:00
Chris Evich 6a9840e547
Merge pull request #51 from cevich/common_lib
Add shortcut for loading all common libraries
2020-10-23 10:08:30 -04:00
Chris Evich 11182c3fb5
Add shortcut for loading all common libraries
One internal and several external users of this library don't bother
sourcing individual `common/lib/*.sh` files.  Instead they iterate
over all of them by name, which is messy and inconvenient.  Add a
shortcut script which callers can source to load all common libraries.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-10-23 10:05:26 -04:00
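The shortcut amounts to moving the sourcing loop into one place. A minimal sketch, with a throwaway `common/lib` directory and hypothetical library files standing in for the real ones:

```shell
#!/bin/bash
# Sketch: instead of every caller iterating over common/lib/*.sh,
# a single shortcut script does the loop and callers source only it.
set -e

# Build a throwaway common/lib tree for the demo.
libdir=$(mktemp -d)/common/lib
mkdir -p "$libdir"
echo 'say_hi() { echo hi; }'   > "$libdir/defaults.sh"
echo 'say_bye() { echo bye; }' > "$libdir/utils.sh"

# The body of the shortcut script: source every library file.
for lib in "$libdir"/*.sh; do
    source "$lib"
done

say_hi
say_bye
```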
Chris Evich ef1e3f934e
Merge pull request #49 from edsantiago/sort_by_refcnt
Sort by refcnt
2020-10-01 14:25:00 -04:00
Chris Evich 3eb664aaa1
Merge pull request #50 from cevich/fix_double_escape
Avoid double-escaping env. var values
2020-09-28 16:17:49 -04:00
Chris Evich dd7246c569
Avoid double-escaping env. var values
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-09-28 15:59:34 -04:00
Ed Santiago 356af42b10 Sort nodes by "size" instead of alphabetical
...where "size" is defined as the sum of input and output
edges plus the number of matrix jobs. The effect is to
put the "big" boxes (lots of arrows, lots of jobs) down
at the bottom, and the "small" ones (ellipses) at top.
Makes for a somewhat more readable graph IMO.

(OBTW the key insight here is that 'dot' draws the nodes
top to bottom in input order.)

Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-24 14:33:20 -06:00
Ed Santiago 42f9946539 Deduplicate: do a better job of pruning dup output
I was using a list to keep track of dependencies, and what
with aliases and all, the same dependency got pushed twice
to some lists. Use a hash instead, then convert to list.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-24 13:59:11 -06:00
Chris Evich d4e3bdb7f0
Merge pull request #48 from cevich/simplify_indent
Simplify indent function
2020-09-24 13:37:01 -04:00
Chris Evich bae2d2d018
Simplify indent function
For some input line values, escape characters were being eaten.  Fix this by
not interpreting the input line in any way; simply prefix each line with
spaces.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-09-24 13:28:17 -04:00
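The "don't interpret the input" approach can be shown in a few lines. This sketches the idea, not the library's actual function: `sed` copies each line verbatim and only prepends spaces, so backslash sequences in the input survive untouched.

```shell
#!/bin/bash
# Sketch of the simplified indent: prefix every line of stdin with
# four spaces without ever expanding the content (no echo -e, no
# printf interpretation of the data).
indent() {
    sed -e 's/^/    /'
}

# Backslash sequences pass through literally:
printf '%s\n' 'keep \t and \n literal' | indent
```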
Chris Evich 24557da14e
Merge pull request #47 from edsantiago/mergekeys
UGLY: deal with YAML anchors.
2020-09-24 10:44:23 -04:00
Ed Santiago 81ca30c3b2 UGLY: deal with YAML anchors.
Stole code from: https://www.perlmonks.org/?node_id=813443
Further context: https://www.perlmonks.org/?node_id=1124136

Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-24 08:41:10 -06:00
Chris Evich eb47f1b125
Merge pull request #46 from edsantiago/green_success
Green success
2020-09-24 09:51:57 -04:00
Ed Santiago 6ed9ebaf33 deal with weird '<<' in matrix in auto_images
Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-23 14:55:31 -06:00
Ed Santiago 894619310e Terminal nodes: draw in green
...pulling colors from the tail (pop) of the color list
instead of the head. Also add more colors, because the
"performant" map is growing out of control.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-23 14:55:31 -06:00
Chris Evich 665a31817b
Merge pull request #45 from cevich/one_default_indent
Simplify indent function and tests
2020-09-23 13:06:26 -04:00
Chris Evich 74f60bf47b
Simplify indent function and tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-09-23 11:58:15 -04:00
Chris Evich 11add5b172
Merge pull request #44 from cevich/show_env_vars
Add show_env_vars function and tests
2020-09-18 14:17:15 -04:00
Chris Evich 6cda65f9a1
Add show_env_vars function and tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-09-18 14:15:31 -04:00
Chris Evich 6e5dc65b2a
Merge pull request #43 from edsantiago/cirrus-task-map
New tool: cirrus-task-map
2020-09-18 14:11:04 -04:00
Ed Santiago 889437250a New tool: cirrus-task-map
Generates a human-readable graph showing dependencies between
Cirrus tasks.

Requires: graphviz. Will use ImageMagick, if available, to
add a signature at bottom left (timestamp, tool version)

Signed-off-by: Ed Santiago <santiago@redhat.com>
2020-09-17 18:59:23 -06:00
Chris Evich 3a6e4459a8
Merge pull request #42 from cevich/fix_ubuntu_pathing_again
Fix Ubuntu environment again
2020-08-27 11:20:03 -04:00
Chris Evich 54cdf2a226
Fix Ubuntu environment again
For whatever reason, the embedded PATH variable reference does not get
expanded in /etc/environment.  Fix this by simply dumping whatever
value resolves at install-time.

Also include a small typo fix for the xrtry.sh script

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-08-27 11:01:46 -04:00
Chris Evich 2685ca6edb
Merge pull request #41 from cevich/fix_ubuntu_pathing
Use dash compatible environment
2020-08-27 10:40:38 -04:00
Chris Evich 74568cbed8
Use dash compatible environment
Ubuntu defaults to `dash` as the system-wide shell, which
means bash-isms in /etc/environment will fail.  Fix this
by reverting to simple interpretation when referencing `$PATH`,
to avoid breaking it system-wide.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-08-27 10:18:33 -04:00
Chris Evich b2546a6bba
Merge pull request #40 from cevich/fix_error_retry
Fix exit behavior & tests
2020-08-05 10:04:51 -04:00
Chris Evich e4ffb24b3c
Fix exit behavior & tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-08-04 16:40:22 -04:00
Chris Evich 00ec2253b2
Merge pull request #39 from cevich/more_common_functions
Add req_env_vars() function + tests
2020-08-04 14:04:16 -04:00
Chris Evich c479f7bb03
Add req_env_vars() function + tests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-08-04 13:48:50 -04:00
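A required-environment-variables check of the kind this commit adds can be sketched as follows. The function name matches the commit title, but the exact signature, message format, and behavior of the real library function are assumptions for this example.

```shell
#!/bin/bash
# Sketch: fail fast with a clear message if any named variable is
# empty or unset.  Uses bash indirect expansion (${!var}).
req_env_vars() {
    local var
    for var in "$@"; do
        if [ -z "${!var}" ]; then
            echo "ERROR: \$$var is required but empty/unset" >&2
            return 1
        fi
    done
}

CIRRUS_REPO_NAME=automation
req_env_vars CIRRUS_REPO_NAME                           # passes silently
req_env_vars NOT_SET_ANYWHERE || echo "caught missing var"
```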
Chris Evich f26a58c143
Merge pull request #38 from cevich/action_helper_name
Fix action-helper name
2020-07-28 14:53:52 -04:00
Chris Evich 276de9f91f
Fix action-helper name
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 14:49:29 -04:00
Chris Evich e27cd7495f
Merge pull request #35 from cevich/xrtry
Implement an exponential retry function and script
2020-07-28 14:48:58 -04:00
Chris Evich ee240049ae
Minor test output check fix
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 14:35:52 -04:00
Chris Evich 602fe22adf
Implement exponential retry function and script
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 14:30:01 -04:00
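Exponential retry means doubling the delay after each failed attempt. The sketch below illustrates the technique; the function name echoes the repo's `xrtry.sh`, but its interface and the short demo delays are invented here, not taken from the actual script.

```shell
#!/bin/bash
# Sketch: retry a command up to N times, doubling the sleep between
# attempts (0.1s, 0.2s, 0.4s, ... in this demo).
xrtry() {
    local attempts=$1; shift
    local delay=0.1 n
    for ((n = 1; n <= attempts; n++)); do
        "$@" && return 0
        echo "Attempt $n/$attempts failed; sleeping ${delay}s" >&2
        sleep "$delay"
        delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
    done
    return 1
}

# A stand-in for a flaky network call: succeeds on the third try.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
xrtry 5 flaky && echo "succeeded after $tries tries"
```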
Chris Evich a48cb086d5
Implement tests for copy/rename_function utils
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 14:30:00 -04:00
Chris Evich 8782676973
Implement indent function and unittests
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 14:30:00 -04:00
Chris Evich af956ca0c1
Merge pull request #37 from cevich/disable_broken_smoke
Disable broken smoke
2020-07-28 14:24:56 -04:00
Chris Evich 0d6938e60d
Disable inline smoke-testing.
Building container images from the PR code and testing them is too
onerous a task to encode directly into the workflow YAML.  Replace
it with a TODO for now, and rely on a future commit to implement this
step as a proper github-action or a simple script.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 11:58:32 -04:00
Chris Evich dae7114d66
Add github action common lib installer
Some users of the common library are not operating under a github
actions environment.  To support installation under a github actions
environment while retaining that characteristic, new installer behavior
is required: subdirectory installers must be able to modify the
environment file before the "installation finished" marker (version
file) is put in place.

Add a github action common lib installer which adds its installation
directory to the system-wide environment file.

Update cirrus-ci_retrospective to utilize the new github action common
lib for some operations.

Update unit-tests.

Fix a bug installing the system-wide environment file on Ubuntu vs. Fedora.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-28 11:40:34 -04:00
Chris Evich 2f8bd2e214
Relocate github action helper lib
In order to better re-use common Github Action-specific helper script
libraries, move them into a more prominent location, and add a README.md

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-07-27 13:03:38 -04:00
Chris Evich 05a85112e9
Merge pull request #32 from cevich/fix_name_typo
Fix incorrect artifact path and c_name typo
2020-06-23 10:36:32 -04:00
Chris Evich 9e47a0534b
Fix incorrect artifact path and c_name typo
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-23 10:34:34 -04:00
Chris Evich 886da6cdc8
Merge pull request #31 from cevich/remove_more_matrix
Remove leftover matrix substitutions
2020-06-23 10:23:57 -04:00
Chris Evich a0b90d6a16
Remove leftover matrix substitutions
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-23 10:16:39 -04:00
Chris Evich e0210c5693
Merge pull request #29 from cevich/no_anchors
Remove unsupported yaml-anchor/alias
2020-06-23 10:02:00 -04:00
Chris Evich bd249cde18
Remove unsupported yaml-anchor/alias
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-23 09:55:10 -04:00
Chris Evich 8054a77b03
Merge pull request #28 from cevich/fix_syntax
Correct improper workflow matrix use
2020-06-23 09:35:41 -04:00
Chris Evich 2915eb8403
Correct improper workflow matrix use
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-22 18:30:11 -04:00
Chris Evich b14d4cf9ef
Merge pull request #26 from cevich/ephemeral_gpg
Image for performing secure ephemeral gpg
2020-06-22 16:24:45 -04:00
Chris Evich 40c4608cd3
Image for performing secure ephemeral gpg+git
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-22 16:11:36 -04:00
Chris Evich d3645dcba6
Add copy/rename function utilities
Did not add unit tests for these, since they would be quite complex.
Both functions were manually tested.

Also, added use of `realpath` in a few places to prevent changing
directories from breaking debugging messages.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-22 06:51:06 -04:00
Chris Evich a2caa7d775
Add additional test debugging output
Show the exit-code and output statistics as early as possible

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-22 06:51:06 -04:00
Chris Evich cf1aae1c70
Merge pull request #25 from cevich/minor_update
Clarify purpose of defaulting to latest
2020-06-15 10:23:41 -04:00
Chris Evich 9c8f4dcabb
Clarify purpose of defaulting to latest
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-06-15 10:17:44 -04:00
Chris Evich 8fead35fd4
Merge pull request #24 from TomSweeneyRedHat/sec
Add Code of Conduct and Security policies
2020-05-15 15:35:12 -04:00
TomSweeneyRedHat f485144afc Add Code of Conduct and Security policies
As the title says.

Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2020-05-15 12:22:15 -04:00
Chris Evich ad39c8f53f
Merge pull request #23 from cevich/avoid_race
Avoid build status race
2020-04-21 13:04:39 -04:00
Chris Evich 376f46516b
Avoid build status race
It's possible that Cirrus-CI could re-execute a build/task after
previously having completed.  For example, due to a manual trigger or
re-run of a task.  This can result in the workflow triggered by the
first completion event, finding a build not in a final state.  If this
condition is detected, the workflow should not take any further action.
Presumably the in-flight (manual or re-run) build will complete again
at some future time, causing the `on: check_suite: completed` workflow
to execute again.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-21 11:18:32 -04:00
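The guard described above boils down to "do nothing unless the build is in a final state." A sketch of that check — the set of final-state names and the helper function are assumptions for illustration, not the actual cirrus-ci_retrospective schema:

```shell
#!/bin/bash
# Sketch: take no further action unless the monitored build has
# reached a terminal status (state names are illustrative).
is_final() {
    case "$1" in
        COMPLETED|FAILED|ABORTED|ERRORED) return 0 ;;
        *)                                return 1 ;;
    esac
}

build_status="EXECUTING"   # stand-in for a value parsed from the event
if is_final "$build_status"; then
    echo "Build finished ($build_status); proceeding"
else
    echo "Build not in a final state ($build_status); taking no action"
fi
```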
Chris Evich b42ea4651d
Merge pull request #20 from cevich/fix_release
Minor fix to release workflow
2020-04-07 16:38:01 -04:00
Chris Evich 0244eff5f8
Minor fix to release workflow
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-07 16:33:07 -04:00
Chris Evich 369a8f5b57
Merge pull request #19 from cevich/release_1.1.2
Update README to use the latest tag
2020-04-07 16:30:01 -04:00
Chris Evich 31c6fe5b10
Update README to use the latest tag
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-07 16:26:07 -04:00
Chris Evich 296048df66
Merge pull request #18 from cevich/blocking_task
Blocking task
2020-04-07 16:21:42 -04:00
Chris Evich 4e185d6e32
Trigger task on action success
Previously, if the cirrus-ci_retrospective action and integration test
failed, they would not block accidental merging of a PR.  This commit
adds a cirrus-ci task that does not start automatically upon push.
Instead, it will be state-validated and executed *only* after
successful execution of the integration test.

Also add a small github action helper library and unit-tests, and
verify they pass for PRs and releases.  These all had to be
manually verified in a 'sandbox' repo, as they will not
execute in a PR (until it is merged).

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-07 16:11:38 -04:00
Chris Evich 1985a6353a
TODO: BETTER COMMIT MESSAGE: DO NOT MERGE
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-02 12:35:08 -04:00
Chris Evich d654ea2475
Merge pull request #17 from cevich/add_status
Add support for getting retro task status
2020-04-02 11:26:54 -04:00
Chris Evich 90dbe1738c
Add support for getting retro task status
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-02 11:12:52 -04:00
Chris Evich 4c6f1a9c25
Merge pull request #16 from cevich/add_ids
Add retrieval of task and build id values
2020-04-01 15:46:55 -04:00
Chris Evich 8f1a889bb4
Add retrieval of task and build id values
Both of these values are handy to have, and in some cases required.  For
example, the task ID is needed for triggering a re-run via the Cirrus-CI
GraphQL API.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-01 15:31:03 -04:00
Chris Evich 0945f2b28e
Merge pull request #14 from cevich/more_docs
Update retro. docs for manual tasks
2020-04-01 14:36:14 -04:00
Chris Evich 5062a5dd0d
Merge pull request #15 from cevich/fix_cancel_typo
Fix YAML syntax typo
2020-04-01 14:32:25 -04:00
Chris Evich 4e77eb0092
Fix YAML syntax typo
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-01 14:31:44 -04:00
Chris Evich 94ca05b62a
Update retro. docs for manual tasks
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-01 14:05:49 -04:00
Chris Evich c6aad13fec
Merge pull request #13 from cevich/fix_cancel
Minor.  Empty body no work
2020-04-01 13:58:40 -04:00
Chris Evich b8a8027072
Minor. Empty body no work
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-04-01 13:47:39 -04:00
Chris Evich 6731e04e6c
Merge pull request #12 from cevich/more_docs
Minor docs update
2020-03-31 18:08:29 -04:00
Chris Evich f73d37aa8a
Minor docs update
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-03-31 16:24:20 -04:00
Chris Evich d4dffe1974
Merge pull request #11 from cevich/kill_mergify
Remove mergify from repository
2020-03-31 16:20:17 -04:00
Chris Evich f22a7a9aa2
Remove mergify from repository
While it initially seemed handy, the danger of automatically merging a
broken PR is too great in this specific project.  This is because the
cirrus-ci_retrospective action must necessarily execute after cirrus-ci
is finished.  Due to limitations of the github issue-labeling
system/permissions, it is likely mergify could merge before the
self-test run of cirrus-ci_retrospective on a PR has completed.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-03-31 16:01:56 -04:00
Chris Evich 25e0a234d3
Merge pull request #8 from cevich/docs_update
Improve documentation
2020-03-31 15:46:25 -04:00
mergify[bot] 1aabda3843
Merge branch 'master' into docs_update 2020-03-31 19:42:32 +00:00
Chris Evich e622904561
Improve documentation
Signed-off-by: Chris Evich <cevich@redhat.com>
2020-03-31 15:40:31 -04:00
Chris Evich e9093baf7c
Merge pull request #9 from cevich/fix_installer
Fix installer not finding ooe.sh
2020-03-31 15:33:35 -04:00
Chris Evich dadf0acaf8
Merge pull request #10 from cevich/fix_label
Fix removal of label
2020-03-31 15:12:29 -04:00
Chris Evich 66f4e1671d
Fix removal of label
The github-script action will fail when instructed to remove a label
which is not already present on an issue.  Place this step at the end
of the workflow, and make it conditional on cancellation or failure only.

Signed-off-by: Chris Evich <cevich@redhat.com>
2020-03-31 14:58:36 -04:00
107 changed files with 9816 additions and 535 deletions

@@ -2,28 +2,113 @@
# Ref: https://cirrus-ci.org/guide/writing-tasks/
# Default task runtime environment
container:
image: quay.io/libpod/cirrus-ci_retrospective:latest
cpu: 1
memory: 1
# Global environment variables
env:
# Name of the typical destination branch for PRs.
DEST_BRANCH: "main"
# Execute all unit-tests in the repo
cirrus-ci/test_task:
cirrus-ci/unit-test_task:
only_if: &not_docs $CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
# Default task runtime environment
container: &ci_container
dockerfile: ci/Dockerfile
cpu: 1
memory: 1
env:
CIRRUS_CLONE_DEPTH: 0
script:
- git fetch --tags |& tee /tmp/test_output.log
- $CIRRUS_WORKING_DIR/bin/run_all_tests.sh |& tee -a /tmp/output.log
- $CIRRUS_WORKING_DIR/bin/run_all_tests.sh |& tee -a $CIRRUS_WORKING_DIR/output.log
always:
test_output_artifacts:
path: '/tmp/*.log'
path: '*.log'
# Must pass for automatic-merge to occur, see .mergify.yml
cirrus-ci/success_task:
alias: cirrus-ci/success
cirrus-ci/renovate_validation_task:
only_if: *not_docs
container:
image: "ghcr.io/renovatebot/renovate:latest"
preset_validate_script:
- renovate-config-validator $CIRRUS_WORKING_DIR/renovate/defaults.json5
repo_validate_script:
- renovate-config-validator $CIRRUS_WORKING_DIR/.github/renovate.json5
# This is the same setup as used for Buildah CI
gcp_credentials: ENCRYPTED[fc95bcc9f4506a3b0d05537b53b182e104d4d3979eedbf41cf54205be6397ca0bce0831d0d47580cf578dae5776548a5]
cirrus-ci/build-push_test_task:
only_if: *not_docs
container: *ci_container
depends_on:
- cirrus-ci/test
- cirrus-ci/unit-test
gce_instance:
cpu: 2
memory: "4Gb"
disk: 200 # Gigabytes, do not set less as per gcloud warning message
# re: I/O performance
# This repo. is subsequently used in and for building custom VM images
# in containers/automation_images. Avoid circular dependencies by using
# only stock, google-managed generic image. This also avoids needing to
# update custom-image last-used timestamps.
image_project: centos-cloud
image_family: centos-stream-9
timeout_in: 30
env:
CIMG: quay.io/buildah/stable:latest
TEST_FQIN: quay.io/buildah/do_not_use
# Robot account credentials for test-push to
# $TEST_FQIN registry by build-push/test/testbuilds.sh
BUILDAH_USERNAME: ENCRYPTED[53fd8becb599dda19f335d65cb067c46da3f0907eb83281a10554def11efc89925f7ca145ba7436afc3c32d936575142]
BUILDAH_PASSWORD: ENCRYPTED[aa6352251eba46e389e4cfc6e93eee3852008ecff67b940cba9197fd8bf95de15d498a6df2e7d5edef052e97d9b93bf0]
setup_script:
- dnf install -y podman
- bash build-push/test/qemusetup.sh
- >-
podman run --detach --name=buildah
--net=host --ipc=host --pid=host
--cgroupns=host --privileged
--security-opt label=disable
--security-opt seccomp=unconfined
--device /dev/fuse:rw
-v $PWD:$PWD:Z -w $PWD
-e BUILD_PUSH_TEST_BUILDS=true
-e CIRRUS_CI -e TEST_FQIN
-e BUILDAH_USERNAME -e BUILDAH_PASSWORD
$CIMG
sh -c 'while true ;do sleep 2h ; done'
- podman exec -i buildah dnf install -y jq skopeo
test_script:
- podman exec -i buildah ./build-push/test/run_all_tests.sh
noop_script: /bin/true
# Represent primary Cirrus-CI based testing (Required for merge)
cirrus-ci/success_task:
container: *ci_container
depends_on: &everything
- cirrus-ci/unit-test
- cirrus-ci/build-push_test
- cirrus-ci/renovate_validation
clone_script: mkdir -p "$CIRRUS_WORKING_DIR"
script: >-
echo "Required for Action Workflow: https://github.com/${CIRRUS_REPO_FULL_NAME}/actions/runs/${GITHUB_CHECK_SUITE_ID}"
# Represent secondary Github Action based testing (Required for merge)
# N/B: NO other task should depend on this task. Doing so will prevent
# the cirrus-ci_retrospective github action. This is because the
# action triggers the `on: check-suite: completed` event, which cannot
# fire since the manual task has dependencies that cannot be
# satisfied.
github-actions/success_task:
container: *ci_container
# Note: ***DO NOT*** manually trigger this task under normal circumstances.
# It is triggered automatically by the cirrus-ci_retrospective
# Github Action. This action is responsible for testing the PR changes
# to the action itself.
trigger_type: manual
# Only required for PRs, never tag or branch testing
only_if: $CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*' && $CIRRUS_PR != ''
depends_on: *everything
clone_script: mkdir -p "$CIRRUS_WORKING_DIR"
script: >-
echo "Triggered by Github Action Workflow: https://github.com/${CIRRUS_REPO_FULL_NAME}/actions/runs/${GITHUB_CHECK_SUITE_ID}"

.github/renovate.json5 (vendored, new file: +45 lines)

@@ -0,0 +1,45 @@
/*
Renovate is a service similar to GitHub Dependabot, but with
(fantastically) more configuration options. So many options
in fact, if you're new I recommend glossing over this cheat-sheet
prior to the official documentation:
https://www.augmentedmind.de/2021/07/25/renovate-bot-cheat-sheet
Configuration Update/Change Procedure:
1. Make changes
2. Manually validate changes (from repo-root):
podman run -it \
-v ./.github/renovate.json5:/usr/src/app/renovate.json5:z \
ghcr.io/renovatebot/renovate:latest \
renovate-config-validator
3. Commit.
Configuration Reference:
https://docs.renovatebot.com/configuration-options/
Monitoring Dashboard:
https://app.renovatebot.com/dashboard#github/containers
Note: The Renovate bot will create/manage its business on
branches named 'renovate/*'. Otherwise, and by
default, the only copy of this file that matters
is the one on the `main` branch. No other branches
will be monitored or touched in any way.
*/
{
/*************************************************
****** Global/general configuration options *****
*************************************************/
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
// Re-use predefined sets of configuration options to DRY
"extends": [
// https://github.com/containers/automation/blob/main/renovate/defaults.json5
"github>containers/automation//renovate/defaults.json5"
],
/*************************************************
*** Repository-specific configuration options ***
*************************************************/
}

@@ -0,0 +1,39 @@
---
# Perform unit-testing of the helper scripts used by github actions workflows
on: [push, pull_request]
# Variables required by multiple jobs/steps
env:
# Authoritative Cirrus-CI task to monitor for completion info of all other cirrus-ci tasks.
MONITOR_TASK: 'MONITOR/TEST/VALUE'
# Authoritative Github Action task (in cirrus-ci) to trigger / check for completion of _this_ workflow
ACTION_TASK: 'ACTION/TEST/VALUE'
HELPER_LIB_TEST: 'github/test/run_action_tests.sh'
# Enables debugging of github actions itself
# (see https://help.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-a-debug-message)
ACTIONS_STEP_DEBUG: '${{ secrets.ACTIONS_STEP_DEBUG }}'
jobs:
helper_unit-test:
runs-on: ubuntu-latest
steps:
- name: Clone the repository code
uses: actions/checkout@v4
with:
persist-credentials: false
path: ./
- name: Execute helper library unit-tests using code from PR
run: |
./$HELPER_LIB_TEST
event-debug:
runs-on: ubuntu-latest
steps:
- name: Collect the originating event and result JSON
run: cp "${{ github.event_path }}" ./
- name: Log colorized and formatted event JSON
run: jq --indent 4 --color-output . ./event.json

@@ -13,112 +13,90 @@ on:
types:
- completed
# Variables required by multiple jobs/steps
env:
# Default 'sh' behaves slightly but significantly differently
CIRRUS_SHELL: '/bin/bash'
# Authoritative Cirrus-CI task to monitor for completion info of all other cirrus-ci tasks.
MONITOR_TASK: 'cirrus-ci/success'
# Authoritative Github Action task (in cirrus-ci) to trigger / check for completion of _this_ workflow
ACTION_TASK: 'github-actions/success'
# Relative locations to help with safe use and testing
HELPER_LIB: 'github/lib/github.sh'
HELPER_LIB_TEST: 'github/test/run_action_tests.sh'
# Enable debugging of github actions itself
# (see https://help.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-a-debug-message)
ACTIONS_STEP_DEBUG: '${{ secrets.ACTIONS_STEP_DEBUG }}'
jobs:
# Obtain task details and validate required execution conditions
cirrus-ci_retrospective:
# Do not execute for other github applications, only works with cirrus-ci
if: github.event.check_suite.app.name == 'Cirrus CI'
runs-on: ubuntu-latest
steps:
# This container image is built and pushed (after testing), when a
# new tag like 'vXX.YY.ZZ' is pushed. These tagged versions are
# intended to provide behavioral consistency when used outside
# of this repository.
- name: Execute latest upstream cirrus-ci_retrospective
id: cirrus-ci_retrospective
# Actually use the (not-normally recommended) latest version,
# since it likely represents the most recent and behaviors.
# This avoids needing to rebuild the container image for every
# run, saving time at the possible expense of stability.
# since it likely represents the behaviors most similar to
# what this action expects.
uses: docker://quay.io/libpod/cirrus-ci_retrospective:latest
env:
GITHUB_TOKEN: ${{ github.token }}
# Consume the output JSON from running the container (above).
# This could be made into a complete/sophisticated script.  However,
# in this workflow, if the build was on a PR in this repo, we will
# re-execute the PR version of the container anyway, so this can
# remain simple inline commands.
- name: Check output for a Cirrus-CI build versus Pull Request
id: retro
shell: bash
run: |
prn=$(jq --raw-output '.[] | select(.name == "cirrus-ci/success") | .build.pullRequest' ./cirrus-ci_retrospective.json)
sha=$(jq --raw-output '.[] | select(.name == "cirrus-ci/success") | .build.changeIdInRepo' ./cirrus-ci_retrospective.json)
if [[ -n "$prn" ]] && [[ "$prn" != "null" ]] && [[ $prn -gt 0 ]] && [[ -n "$sha" ]]; then
printf "\n::set-output name=was_pr::true\n"
printf "\n::set-output name=prn::%d\n" "$prn"
else
printf "\n::set-output name=was_pr::false\n"
printf "\n::set-output name=prn::null\n"
fi
printf "\n::set-output name=sha::%s\n" "$sha"
# In case there was a problem, provide details about what might have gone wrong.
- if: always()
name: Debug latest upstream cirrus-ci_retrospective output Values
run: |
echo ""
echo "Analyzed Cirrus-CI task:"
jq --indent 4 --color-output '.[] | select(.name == "cirrus-ci/success")' ./cirrus-ci_retrospective.json
echo ""
echo "Analysis Result:"
echo "Was PR: ${{ steps.retro.outputs.was_pr }}"
echo "PR Number: ${{ steps.retro.outputs.prn }}"
echo "SHA: ${{ steps.retro.outputs.sha }}"
- name: Clone latest main branch repository code
uses: actions/checkout@v4
with:
fetch-depth: 1
path: ./main
# DO NOT build-in any unnecessary permissions
persist-credentials: 'false'
- name: Load cirrus-ci_retrospective JSON and set action output variables
id: retro
env:
A_DEBUG: 1
run: |
source ./main/$HELPER_LIB
load_ccir $GITHUB_WORKSPACE
set_ccir
# Provide feedback in PR for normal workflow ($ACTION-TASK task has not run).
- if: steps.retro.outputs.do_intg == 'true'
id: create_pr_comment
name: Create a status comment in the PR
# Ref: https://github.com/marketplace/actions/comment-action
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
# N/B: At the time of this comment, it is not possible to provide
# direct links to specific job-steps (here) nor links to artifact
# files. There are open RFE's for this capability to be added.
message: >-
[Cirrus-CI Retrospective Github
Action](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
has started. Running against
[${{ steps.retro.outputs.sha }}](https://github.com/${{github.repository}}/pull/${{steps.retro.outputs.prn}}/commits/${{steps.retro.outputs.sha}})
in this pull request.
# Since we're executing from the main branch, github will silently
# block direct checkout of PR code. It is fetched in a subsequent step.
- if: steps.retro.outputs.do_intg == 'true'
name: Clone all repository code
uses: actions/checkout@v4
with:
# Get ALL available history to avoid problems during any run of
# 'git describe' from any script in the repo.
fetch-depth: 0
path: ./pull_request
# ignored for some inexplicable reason
# ref: ${{ steps.retro.outputs.sha }}
# Will be used to execute code from the PR
# DO NOT build-in any unnecessary permissions
persist-credentials: 'false'
# This workflow always runs from the main branch, which is not helpful
# for PR authors wanting to change the container or script's behavior.
# Clone down a copy of the code from the PR, so it may be utilized for
# a test-build and secondary execution of cirrus-ci_retrospective
- if: steps.retro.outputs.do_intg == 'true'
name: Fetch PR code used by Cirrus-CI during completed build
run: |
mkdir -p test_artifacts
@ -129,163 +107,146 @@ jobs:
git checkout -b 'pr${{ steps.retro.outputs.prn }}' FETCH_HEAD
git log -1 | tee ../test_artifacts/commit.txt
- if: steps.retro.outputs.do_intg == 'true'
name: Execute helper library unit-tests using code from PR
run: |
cd pull_request
./$HELPER_LIB_TEST | tee ../test_artifacts/unit_test_output.txt
# Update the status comment posted to the PR
- if: steps.retro.outputs.do_intg == 'true'
id: edit_pr_comment_build
name: Update status comment on PR
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
message: >-
Unit-testing (`${{ env.HELPER_LIB_TEST }}`) passed.
[Cirrus-CI Retrospective Github
Action](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
is smoke-testing PR changes to images.
# TODO: Implement container build + smoke-test coverage changes in PR
# The container build can take a few minutes, update status comment when it finishes.
- if: steps.retro.outputs.do_intg == 'true'
id: edit_pr_comment_exec
name: Update status comment on PR again
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
message: >-
Smoke testing passed. [Cirrus-CI Retrospective Github
Action](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
is triggering the Cirrus-CI ${{ env.ACTION_TASK }} task.
# Allow PR to be merged by triggering required action-status marker task in Cirrus CI
- if: steps.retro.outputs.do_intg == 'true'
name: Trigger Cirrus-CI ${{ env.ACTION_TASK }} task on PR
env:
# ID invented here to verify the operation performed.
UUID: ${{github.run_id}}.${{steps.retro.outputs.prn}}.${{steps.retro.outputs.sha}}
run: |
set +x
trap "history -c" EXIT
curl --fail-with-body --request POST \
--url https://api.cirrus-ci.com/graphql \
--header "Authorization: Bearer ${{ secrets.CIRRUS_API_TOKEN }}" \
--header 'content-type: application/json' \
--data '{"query":"mutation {\n trigger(input: {taskId: \"${{steps.retro.outputs.tid}}\", clientMutationId: \"${{env.UUID}}\"}) {\n clientMutationId\n task {\n name\n }\n }\n}"}' \
| tee ./test_artifacts/action_task_trigger.json
actual=$(jq --raw-output '.data.trigger.clientMutationId' ./test_artifacts/action_task_trigger.json)
echo "Verifying '$UUID' matches returned tracking value '$actual'"
test "$actual" == "$UUID"
# Workflow against a PR was successful, provide that feedback in the PR
- if: steps.retro.outputs.do_intg == 'true'
name: Update comment on workflow success
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
message: >-
Successfully triggered [${{ env.ACTION_TASK }}
task](https://cirrus-ci.com/task/${{ steps.retro.outputs.tid }}?command=main#L0)
to indicate
successful run of [cirrus-ci_retrospective integration and unit
testing](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
from this PR's
[${{ steps.retro.outputs.sha }}](https://github.com/${{github.repository}}/pull/${{steps.retro.outputs.prn}}/commits/${{steps.retro.outputs.sha}}).
# Lastly, notice if there was a failure and provide feedback to PR.
- if: failure() && steps.retro.outputs.do_intg == 'true'
name: Update comment on workflow failure
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
message: >-
Failure running [Cirrus-CI Retrospective Github
Action](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
against this PR's
[${{ steps.retro.outputs.sha }}](https://github.com/${{github.repository}}/pull/${{steps.retro.outputs.prn}}/commits/${{steps.retro.outputs.sha}})
# Workflow against a PR was canceled for some reason.
# This can happen because of --force push, manual cancel button press, or some other cause.
- if: cancelled() && steps.retro.outputs.do_intg == 'true'
name: Update comment on workflow cancellation
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: retro
message: '[Cancelled](https://github.com/${{github.repository}}/pull/${{steps.retro.outputs.prn}}/commits/${{steps.retro.outputs.sha}})'
# Abnormal workflow ($ACTION-TASK task already ran / not paused on a PR).
- if: steps.retro.outputs.is_pr == 'true' && steps.retro.outputs.do_intg != 'true'
id: create_error_pr_comment
name: Create an error status comment in the PR
# Ref: https://github.com/marketplace/actions/comment-action
uses: thollander/actions-comment-pull-request@v3
with:
pr-number: '${{ steps.retro.outputs.prn }}'
comment-tag: error
message: >-
***ERROR***: [cirrus-ci_retrospective
action](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
found `${{ env.ACTION_TASK }}` task with unexpected `${{ steps.retro.outputs.tst }}`
status. This task should never be triggered manually (or multiple times) under normal
circumstances.
# Negative case followup, fail the build with an error status
- if: steps.retro.outputs.is_pr == 'true' && steps.retro.outputs.do_intg != 'true'
run: |
printf "::error::Found ${ACTION_TASK} with unexpected ${{ steps.retro.outputs.tst }} status"
exit 1
# Provide an archive of files for debugging/analysis.
- if: always() && steps.retro.outputs.do_intg == 'true'
name: Archive event, build, and debugging output
uses: actions/upload-artifact@v4.6.2
with:
name: pr_${{ steps.retro.outputs.prn }}_debug.zip
path: ./test_artifacts
debug:
if: github.event.check_suite.app.name == 'Cirrus CI'
runs-on: ubuntu-latest
steps:
# Do this in parallel for simplicity since it's just for debugging
# purposes. Assume it will execute the same/similar to the regular job
# above.
- if: always()
name: Execute latest upstream cirrus-ci_retrospective
id: cirrus-ci_retrospective
uses: docker://quay.io/libpod/cirrus-ci_retrospective:latest
env:
GITHUB_TOKEN: ${{ github.token }}
- if: always()
name: Collect the originating event and result JSON
run: cp "${{ github.event_path }}" ./
- if: always()
name: Log colorized and formatted event JSON
run: jq --indent 4 --color-output . ./event.json
- if: always()
name: Log colorized and formatted cirrus-ci_retrospective JSON
run: jq --indent 4 --color-output . ./cirrus-ci_retrospective.json
- if: always()
uses: actions/upload-artifact@v4.6.2
name: Archive triggering event JSON and latest cirrus-ci_retrospective output
with:
# There is no way to avoid this being zipped :(
name: debug_cirrus-ci_retrospective.zip
path: ./
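Outside of a workflow run, the jq filtering performed by the steps above can be exercised locally against sample data. The JSON below is invented for illustration only; it merely mimics the shape the workflow queries:

```shell
#!/bin/bash
# Locally mimic the workflow's jq queries against sample (invented) data
# shaped like cirrus-ci_retrospective.json.
set -e
cat > /tmp/ccir.json <<'EOF'
[{"name": "cirrus-ci/success",
  "build": {"pullRequest": 42, "changeIdInRepo": "abc123"}}]
EOF
# Same select/field expressions the workflow steps use
prn=$(jq --raw-output '.[] | select(.name == "cirrus-ci/success") | .build.pullRequest' /tmp/ccir.json)
sha=$(jq --raw-output '.[] | select(.name == "cirrus-ci/success") | .build.changeIdInRepo' /tmp/ccir.json)
echo "PR=$prn SHA=$sha"
```

This is handy for sanity-checking a jq expression before committing a workflow change.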


@ -5,30 +5,47 @@ on:
# ref: https://help.github.com/en/actions/reference/events-that-trigger-workflows#example-using-multiple-events-with-activity-types-or-configuration
tags:
- 'v*'
env:
# Authoritative Cirrus-CI task to monitor for completion info of all other cirrus-ci tasks.
MONITOR_TASK: 'MONITOR/TEST/VALUE'
# Authoritative Github Action task (in cirrus-ci) to trigger / check for completion of _this_ workflow
ACTION_TASK: 'ACTION/TEST/VALUE'
HELPER_LIB_TEST: 'github/test/run_action_tests.sh'
jobs:
smoke:
runs-on: ubuntu-latest
steps:
- name: Confirm privileged registry access
env:
DOCKER_CONFIG_JSON: ${{secrets.DOCKER_CONFIG_JSON}}
run: |
set -e
set +x
trap "history -c" EXIT
if [[ -z "$DOCKER_CONFIG_JSON" ]]; then
echo "::error::Empty/unset \$DOCKER_CONFIG_JSON for quay.io/libpod write access"
exit 1
fi
unit-tests: # N/B: Duplicates `ubuntu_unit_tests.yml` - templating not supported
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
# Testing installer requires a full repo. history
fetch-depth: 0
persist-credentials: false
path: ./
- name: Install dependencies
run: |
sudo apt-get -qq update
sudo apt-get -qq -y install libtest-differences-perl libyaml-libyaml-perl
- name: Execute helper library unit-tests using code from PR
run: |
$GITHUB_WORKSPACE/$HELPER_LIB_TEST
- name: Fetch all repository tags
run: git fetch --tags --force
@ -37,8 +54,9 @@ jobs:
release:
needs:
- unit-tests
- smoke
# Don't blindly trust the 'v*' push event filter.
if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '.')
runs-on: ubuntu-latest
@ -48,18 +66,18 @@ jobs:
# context data.
- id: get_tag
name: Retrieve the tag name
run: printf "TAG_NAME=%s\n" $(basename "$GITHUB_REF") >> $GITHUB_OUTPUT
- id: create_release # Pre-req for upload-release-asset below
name: Create a new Github Release item for tag
uses: actions/create-release@v1.1.4
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ steps.get_tag.outputs.TAG_NAME }}
release_name: ${{ steps.get_tag.outputs.TAG_NAME }}
- uses: actions/checkout@v4
with:
fetch-depth: 0
path: ./
@ -76,7 +94,7 @@ jobs:
container_image:
needs:
- unit-tests
- smoke
runs-on: ubuntu-latest
env:
@ -84,7 +102,7 @@ jobs:
REPO_USER: libpod
REPO_NAME: cirrus-ci_retrospective
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
path: ./
@ -110,7 +128,7 @@ jobs:
- name: Retrieve the tag name
id: get_tag
run: printf "TAG_NAME=%s\n" $(basename "$GITHUB_REF" | tee /dev/stderr) >> $GITHUB_OUTPUT
- name: Tag and push cirrus-ci_retrospective container image to registry
run: |
@ -127,7 +145,7 @@ jobs:
run: jq --indent 4 --color-output . ${{ github.event_path }}
- if: always()
uses: actions/upload-artifact@v4.6.2
name: Archive triggering event JSON
with:
name: event.json.zip

.github/workflows/ubuntu_unit_tests.yml (new file)

@ -0,0 +1,24 @@
---
on: [push, pull_request]
jobs:
automation_unit-tests:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
path: ./
- name: Install dependencies
run: |
sudo apt-get -qq update
sudo apt-get -qq -y install libtest-differences-perl libyaml-libyaml-perl
- name: Fetch all repository tags
run: git fetch --tags --force
- name: Execute all unit-tests
run: $GITHUB_WORKSPACE/bin/run_all_tests.sh

.gitignore (new file)

@ -0,0 +1 @@
__pycache__


@ -1,62 +0,0 @@
---
# Format Ref: https://doc.mergify.io/configuration.html
pull_request_rules:
- name: 'automatic labeling of PRs that require manual merging'
# Github blocks apps from merging if they modify any github action or workflow file.
# Mergify blocks merging PRs if they modify _this_ file.
# For consistency sake, also block auto-merge for PRs that modify the cirrus config.
conditions:
- 'files~=(^.github/workflow)|(^.cirrus.y.+l)|(^.mergify.y.+l)'
- '-label=Manual Merge'
actions:
label:
add:
- 'Manual Merge'
- name: 'automatic labeling of work-in-progress PRs by title'
# All conditions must match for action to occur
conditions:
- 'title~=.*WIP.*'
- '-label=WIP'
actions: &add_wip
label:
add:
- 'WIP'
- name: 'automatic labeling of work-in-progress PRs by body'
conditions:
- 'body~=.*WIP.*'
- '-label=WIP'
actions: *add_wip
- name: 'automatic work-in-progress label removal'
conditions:
- 'label=WIP'
- '-title~=.*WIP.*'
- '-body~=.*WIP.*'
actions:
label:
remove:
- 'WIP'
# N/B: This will _NEVER_ fire if this file is also modified by the same PR
- name: 'automatic merge when Cirrus-CI Successful'
conditions:
- 'label=Cirrus-CI Retrospective Self-tested' # Managed by cirrus-ci_retrospective workflow
- '-label=WIP' # Will NOT match a PR labeled during this run
- '-title~=.*WIP' # Don't merge with WIP label still applied
- '-body~=.*WIP' # "
- '-label=Manual Merge' # Automation config. file modified
- '-files~=(^.github/workflow)|(^.cirrus.y.+l)|(^.mergify.y.+l)'
- '-title~=.*CI.*SKIP' # Cirrus-CI feature
- '-title~=.*SKIP.*CI' # "
- 'status-success=cirrus-ci/success' # defined in .cirrus.yml
actions:
label:
remove:
- 'WIP'
merge:
strict: true
method: 'rebase'

CODE-OF-CONDUCT.md (new file)

@ -0,0 +1,3 @@
## The Automation Scripts for Containers Project Community Code of Conduct
The Automation Scripts for Containers Project follows the [Containers Community Code of Conduct](https://github.com/containers/common/blob/main/CODE-OF-CONDUCT.md).

README.md

@ -1,29 +1,139 @@
# Automation scripts, libraries for re-use in other repositories
## Dependencies
The install script and `common` subdirectory components require the following
system packages (or their equivalents):
* bash
* coreutils
* git
* install
## Installation
During build of an environment (VM, container image, etc), execute *any version*
of [the install
script](https://github.com/containers/automation/releases/download/latest/install_automation.sh),
preferably as root. The script ***must*** be passed the version number of [the project
release to install](https://github.com/containers/automation/releases). Alternatively
it may be passed `latest` to install the HEAD of the main branch.
For example, to install the `v1.1.3` release, run:
```bash
~# url='https://raw.githubusercontent.com/containers/automation/main/bin/install_automation.sh'
~# curl -sL "$url" | bash -s 1.1.3
```
To install `latest`, run:
```bash
~# url='https://raw.githubusercontent.com/containers/automation/main/bin/install_automation.sh'
~# curl -sL "$url" | bash -s latest
```
### Alt. Installation
If you're leery of piping to bash and/or a local clone of the repository is
already available, the installer can be invoked with the *magic version* '0.0.0'.
Note this will limit the install to the local clone (as-is). The installer script
will still reach out to github.com to retrieve version information. For example:
```bash
~# cd /path/to/clone
/path/to/clone# ./bin/install_automation.sh 0.0.0
```
### Component installation
The installer may also be passed the names of one or more components to
install system-wide. Available components are simply any subdirectories in the repo
which contain a `.install.sh` file. For example, to install the latest `build-push` system-wide run:
```bash
~# url='https://raw.githubusercontent.com/containers/automation/main/bin/install_automation.sh'
~# curl -sL "$url" | bash -s latest build-push
```
## Usage
The basic install consists of copying the contents of the `common` (subdirectory) and
the installer script into a central location on the system. Because this location
can vary by platform, a global shell variable `$AUTOMATION_LIB_PATH` is established
by a central configuration at install-time. It is highly recommended that all
callers explicitly load and export the contents of the file
`/etc/automation_environment` before making use of the common library or any
components. For example:
```bash
#!/bin/bash
set -a
if [[ -r "/etc/automation_environment" ]]; then
source /etc/automation_environment
fi
set +a
if [[ -n "$AUTOMATION_LIB_PATH" ]]; then
source $AUTOMATION_LIB_PATH/common_lib.sh
else
(
echo "WARNING: It doesn't appear containers/automation common was installed."
) >> /dev/stderr
fi
...do stuff...
```
## Subdirectories
### `.github/workflows`
Directory containing workflows for Github Actions.
### `bin`
This directory contains scripts intended for execution under multiple environments,
pertaining to operations on this whole repository. For example, executing all
unit tests, installing components, etc.
### `build-push`
Handy automation tool to help with parallel building and pushing container images,
including support for multi-arch (via QEMU emulation). See the
[README.md file in the subdirectory](build-push/README.md) for more information.
### `cirrus-ci_artifacts`
Handy python script that may be used to download artifacts from any build,
based on knowing its ID. Downloads will be stored properly nested, by task
name and artifact so there are no name clashes.
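The nesting scheme described can be pictured with a short, self-contained sketch; the task and artifact names below are made up for illustration and are not the script's actual output:

```shell
#!/bin/bash
# Sketch of a nested download layout: one directory per task, one per
# artifact, so identically-named files never clash (names are invented).
set -e
outdir=$(mktemp -d)
for task in unit_test integration; do
    for artifact in logs coverage; do
        mkdir -p "$outdir/$task/$artifact"
        echo "data for $task/$artifact" > "$outdir/$task/$artifact/output.txt"
    done
done
# Four distinct output.txt files, disambiguated by their parent directories
find "$outdir" -name output.txt | wc -l
```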
### `cirrus-ci_env`
Python script used to minimally parse `.cirrus.yml` tasks as written/formatted
in other containers projects. This is not intended to be used directly, but
called by other scripts to help extract env. var. values from matrix tasks.
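As a rough illustration of the kind of extraction involved (cirrus-ci_env itself is a python script doing real YAML parsing; this sed one-liner is only a simplified stand-in):

```shell
#!/bin/bash
# Simplified stand-in only: pull one env. var. value out of a minimal,
# made-up .cirrus.yml task definition.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
test_task:
    env:
        DISTRO: fedora
        RELEASE: "41"
EOF
# Strip leading whitespace and the key name, print the value
distro=$(sed -n 's/^ *DISTRO: *//p' "$cfg")
echo "$distro"
```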
### `cirrus-ci_retrospective`
See the [README.md file in the subdirectory](cirrus-ci_retrospective/README.md) for more information.
### `cirrus-task-map`
Handy script that parses a `.cirrus.yml` and outputs a flow-diagram to illustrate
task dependencies. Useful for visualizing complex configurations, like that of
`containers/podman`.
### `common`
This directory contains general-purpose scripts, libraries, and their unit-tests.
They're intended to be used individually or as a whole from within automation of
other repositories.
### `github`
Contains some helper scripts/libraries for using `cirrus-ci_retrospective` from
within a github-actions workflow. Not intended to be used otherwise.

SECURITY.md (new file)

@ -0,0 +1,3 @@
## Security and Disclosure Information Policy for the Automation Scripts for Containers Project
The Automation Scripts for Containers Project follows the [Security and Disclosure Information Policy](https://github.com/containers/common/blob/main/SECURITY.md) for the Containers Projects.


@ -9,20 +9,22 @@ set +x
# the following dependencies are already installed:
#
# bash
# coreutils
# curl
# git
# install
AUTOMATION_REPO_URL=${AUTOMATION_REPO_URL:-https://github.com/containers/automation.git}
AUTOMATION_REPO_BRANCH=${AUTOMATION_REPO_BRANCH:-main}
# This must be hard-coded for executing via pipe to bash
SCRIPT_FILENAME=install_automation.sh
# When non-empty, contains the installation source-files
INSTALLATION_SOURCE="${INSTALLATION_SOURCE:-}"
# The source version requested for installing
AUTOMATION_VERSION="$1"
shift || true # ignore if no more args
# Set non-zero to enable
A_DEBUG=${A_DEBUG:-0}
# Save some output eyestrain (if script can be found)
OOE=$(realpath $(dirname "${BASH_SOURCE[0]}")/../common/bin/ooe.sh 2>/dev/null || echo "")
# Sentinel value representing whatever version is present in the local repository
@ -30,82 +32,116 @@ MAGIC_LOCAL_VERSION='0.0.0'
# Needed for unit-testing
DEFAULT_INSTALL_PREFIX=/usr/local/share
INSTALL_PREFIX="${INSTALL_PREFIX:-$DEFAULT_INSTALL_PREFIX}"
INSTALL_PREFIX="${INSTALL_PREFIX%%/}" # Make debugging path problems easier
# When installing as root, allow sourcing env. vars. from this file
INSTALL_ENV_FILEPATH="${INSTALL_ENV_FILEPATH:-/etc/automation_environment}"
# Used internally here and in unit-testing, do not change without a really, really good reason.
_ARGS="$*"
_MAGIC_JUJU=${_MAGIC_JUJU:-XXXXX}
_DEFAULT_MAGIC_JUJU=d41d844b68a14ee7b9e6a6bb88385b4d
msg() { echo -e "${1:-No Message given}"; }
dbg() { if ((A_DEBUG)); then msg "\n# $1"; fi }
# Represents specific installer behavior, should that ever need to change
d41d844b68a14ee7b9e6a6bb88385b4d() {
TEMPDIR=$(realpath "$(dirname $0)/../")
trap "rm -rf $TEMPDIR" EXIT
dbg "Will clean up \$TEMPDIR upon script exit"
# On 5/14/2021 the default branch was renamed to 'main'.
# Since prior versions of the installer reference the old
# default branch, the version-specific installer could fail.
# Work around this with some inline editing of the downloaded
# script, before re-exec()ing it.
fix_branch_ref() {
local filepath="$1"
if [[ ! -w "$filepath" ]]; then
msg "Error updating default branch name in installer script at '$filepath'"
exit 19
fi
sed -i -r -e \
's/^(AUTOMATION_REPO_BRANCH.+)master/\1main/' \
"$filepath"
}
# System-wide access to special environment, not used during installer testing.
install_environment() {
msg "##### Installing automation environment file."
local inst_perm_arg=""
if [[ $UID -eq 0 ]]; then
inst_perm_arg="-o root -g root"
fi
install -v $inst_perm_arg -D -t "$INSTALL_PREFIX/automation/" "$INSTALLATION_SOURCE/environment"
if [[ $UID -eq 0 ]]; then
# Since INSTALL_PREFIX can vary, this path must be static / hard-coded
# so callers always know where to find it, when installed globally (as root)
msg "##### Installing automation env. vars. into $INSTALL_ENV_FILEPATH"
cat "$INSTALLATION_SOURCE/environment" >> "$INSTALL_ENV_FILEPATH"
fi
}
install_automation() {
local actual_inst_path="$INSTALL_PREFIX/automation"
msg "\n##### Installing the 'common' component into '$actual_inst_path'"
if [[ ! -x "$INSTALLATION_SOURCE/bin/$SCRIPT_FILENAME" ]]; then
msg "Bug: install_automation() called with invalid \$INSTALLATION_SOURCE '$INSTALLATION_SOURCE'"
exit 17
fi
# Assume temporary source dir is valid, clean it up on exit
trap "rm -rf $INSTALLATION_SOURCE" EXIT
if [[ "$actual_inst_path" == "/automation" ]]; then
msg "Bug: install_automation() refusing install into the root of a filesystem"
exit 18
fi
if [[ "$AUTOMATION_VERSION" == "$MAGIC_LOCAL_VERSION" ]] || [[ "$AUTOMATION_VERSION" == "latest" ]]; then
msg "BUG: Actual installer requires actual version number, not '$AUTOMATION_VERSION'"
exit 16
fi
# Name Hack: if/when installed globally, should work for both Fedora and Debian-based
spp="etc/profile.d/zz_automation.sh"
local sys_profile_path="${actual_inst_path}/$spp"
local inst_perm_arg="-o root -g root"
local am_root=0
if [[ $UID -eq 0 ]]; then
dbg "Will try to install and configure system-wide"
am_root=1
sys_profile_path="/$spp"
else
msg "Warning: Not installing as root, this is not recommended other than for testing purposes"
inst_perm_arg=""
fi
# Allow re-installing different versions, clean out old version if found
if [[ -d "$actual_inst_path" ]] && [[ -r "$actual_inst_path/AUTOMATION_VERSION" ]]; then
local installed_version
installed_version=$(<"$actual_inst_path/AUTOMATION_VERSION")
msg "Warning: Removing existing installed version '$installed_version'"
rm -rvf "$actual_inst_path"
if ((am_root)); then
msg "Warning: Removing any existing, system-wide environment configuration"
rm -vf "/$spp"
fi
elif [[ -d "$actual_inst_path" ]]; then
msg "Error: Unable to deal with unknown contents of '$actual_inst_path',"
msg " the file AUTOMATION_VERSION not found, manual removal required."
exit 12
fi
msg "Installing common scripts/libraries version '$AUTOMATION_VERSION' into '$actual_inst_path'"
cd "$INSTALLATION_SOURCE/common"
install -v $inst_perm_arg -D -t "$actual_inst_path/bin" $INSTALLATION_SOURCE/common/bin/*
install -v $inst_perm_arg -D -t "$actual_inst_path/lib" $INSTALLATION_SOURCE/common/lib/*
install -v $inst_perm_arg -D -t "$actual_inst_path/bin" $INSTALLATION_SOURCE/bin/$SCRIPT_FILENAME
cd "$TEMPDIR/common"
install -v $inst_perm_arg -D -t "$actual_inst_path/bin" ./bin/*
install -v $inst_perm_arg -D -t "$actual_inst_path/lib" ./lib/*
cd "$actual_inst_path"
dbg "Configuring example environment in $actual_inst_path/environment"
cat <<EOF>"./environment"
# Added on $(date --iso-8601=minutes) by $actual_inst_path/bin/$SCRIPT_FILENAME"
dbg "Configuring environment file $INSTALLATION_SOURCE/environment"
cat <<EOF>"$INSTALLATION_SOURCE/environment"
# Added on $(date --utc --iso-8601=minutes) by $actual_inst_path/bin/$SCRIPT_FILENAME
# for version '$AUTOMATION_VERSION'. Any manual modifications will be lost upon upgrade or reinstall.
export AUTOMATION_LIB_PATH="$actual_inst_path/lib"
export PATH="\${PATH:+\$PATH:}$actual_inst_path/bin"
EOF
if ((am_root)); then
msg "Installing example environment files system-wide"
install -v $inst_perm_arg --no-target-directory "$INSTALLATION_SOURCE/environment" "/$spp"
fi
echo -n "Installation complete for " > /dev/stderr
echo "$AUTOMATION_VERSION" | tee "./AUTOMATION_VERSION" > /dev/stderr
}
exec_installer() {
# Actual version string may differ from $AUTOMATION_VERSION argument
local version_arg
# Prior versions spelled it '$TEMPDIR'
INSTALLATION_SOURCE="${INSTALLATION_SOURCE:-$TEMPDIR}"
if [[ -z "$INSTALLATION_SOURCE" ]] || \
[[ ! -d "$INSTALLATION_SOURCE" ]]; then
msg "Error: exec_installer() expected \$INSTALLATION_SOURCE to exist"
exit 13
fi
@@ -113,54 +149,61 @@ exec_installer() {
# Special-case, use existing source repository
if [[ "$AUTOMATION_VERSION" == "$MAGIC_LOCAL_VERSION" ]]; then
cd $(realpath "$(dirname ${BASH_SOURCE[0]})/../")
dbg "Will try to use installer from local repository $PWD"
# Make sure it really is a git repository
if [[ ! -r "./.git/config" ]]; then
msg "Error: Must execute $SCRIPT_FILENAME from repository clone when specifying version 0.0.0."
exit 6
fi
# Allow installer to clean-up as with updated source
dbg "Copying repository into '$INSTALLATION_SOURCE'"
cp --archive ./* ./.??* "$INSTALLATION_SOURCE/."
else # Retrieve the requested version (tag) of the source code
version_arg="v$AUTOMATION_VERSION"
if [[ "$AUTOMATION_VERSION" == "latest" ]]; then
version_arg=$AUTOMATION_REPO_BRANCH
fi
msg "Attempting to clone branch/tag '$version_arg'"
dbg "Cloning from $AUTOMATION_REPO_URL into $INSTALLATION_SOURCE"
git clone --quiet --branch "$version_arg" \
--config advice.detachedHead=false \
"$AUTOMATION_REPO_URL" "$INSTALLATION_SOURCE"
fi
dbg "Now working from '$INSTALLATION_SOURCE'"
cd "$INSTALLATION_SOURCE"
if [[ "$(git rev-parse --is-shallow-repository)" == "true" ]]; then
msg "Retrieving complete remote details to unshallow temp. copy of local clone"
$OOE git fetch --unshallow --tags --force
elif ! git describe HEAD &> /dev/null; then
msg "Retrieving complete remote version information for temp. copy of local clone"
$OOE git fetch --tags --force
else
msg "Using local version information in temp. copy of local clone"
fi
msg "Attempting to retrieve actual version based on all configured remotes"
version_arg=$(git describe HEAD)
# Full path is required so script can find and install itself
DOWNLOADED_INSTALLER="$INSTALLATION_SOURCE/bin/$SCRIPT_FILENAME"
if [[ -x "$DOWNLOADED_INSTALLER" ]]; then
fix_branch_ref "$DOWNLOADED_INSTALLER"
msg "Executing installer version '$version_arg'\n"
dbg "Using \$INSTALL_PREFIX '$INSTALL_PREFIX'; installer '$DOWNLOADED_INSTALLER'"
# Execution likely trouble-free, cancel removal on exit
trap EXIT
# _MAGIC_JUJU set to signal actual installation work should commence
set -x
exec env \
A_DEBUG="$A_DEBUG" \
INSTALLATION_SOURCE="$INSTALLATION_SOURCE" \
INSTALL_PREFIX="$INSTALL_PREFIX" \
AUTOMATION_REPO_URL="$AUTOMATION_REPO_URL" \
AUTOMATION_REPO_BRANCH="$AUTOMATION_REPO_BRANCH" \
_MAGIC_JUJU="$_DEFAULT_MAGIC_JUJU" \
/bin/bash "$DOWNLOADED_INSTALLER" "$version_arg" $_ARGS
else
msg "Error: '$DOWNLOADED_INSTALLER' does not exist or is not executable"
exit 8
fi
@@ -169,53 +212,73 @@ exec_installer() {
check_args() {
local arg_rx="^($AUTOMATION_REPO_BRANCH)|^(latest)|^(v?[0-9]+\.[0-9]+\.[0-9]+(-.+)?)"
dbg "Debugging enabled; Command-line was '$0${AUTOMATION_VERSION:+ $AUTOMATION_VERSION}${_ARGS:+ $_ARGS}'"
dbg "Argument validation regular-expression '$arg_rx'"
if [[ -z "$AUTOMATION_VERSION" ]]; then
msg "Error: Must specify the version number to install, as the first argument."
msg " Use version '$MAGIC_LOCAL_VERSION' to install from local source."
msg " Use version 'latest' to install from current upstream"
exit 2
elif ! echo "$AUTOMATION_VERSION" | grep -E -q "$arg_rx"; then
msg "Error: '$AUTOMATION_VERSION' does not appear to be a valid version number"
exit 4
elif [[ -z "$_ARGS" ]] && [[ "$_MAGIC_JUJU" == "XXXXX" ]]; then
msg "Warning: Installing 'common' component only. Additional component(s) may be"
msg " specified as arguments. Valid components depend on the version."
fi
}
##### MAIN #####
check_args
if [[ "$_MAGIC_JUJU" == "XXXXX" ]]; then
dbg "Operating in source prep. mode"
INSTALLATION_SOURCE=$(mktemp -p '' -d "tmp_${SCRIPT_FILENAME}_XXXXXXXX")
dbg "Using temporary directory '$INSTALLATION_SOURCE'"
# version may be invalid or clone could fail or some other error
trap "rm -rf $INSTALLATION_SOURCE" EXIT
exec_installer # Try to obtain version from source then run it
elif [[ "$_MAGIC_JUJU" == "$_DEFAULT_MAGIC_JUJU" ]]; then
dbg "Operating in actual install mode for '$AUTOMATION_VERSION'"
dbg "from \$INSTALLATION_SOURCE '$INSTALLATION_SOURCE'"
install_automation
# Validate the common library can load
source "$INSTALL_PREFIX/automation/lib/anchors.sh"
# Allow subcomponent installers to modify environment file before it's installed
msg "##### Installation complete for 'common' component"
# Additional arguments specify subdirectories to check and chain to their installer script
for arg in $_ARGS; do
msg "\n##### Installing the '$arg' component"
CHAIN_TO="$INSTALLATION_SOURCE/$arg/.install.sh"
if [[ -r "$CHAIN_TO" ]]; then
# Cannot assume common was installed system-wide
# AUTOMATION_LIB_PATH defined by anchors.sh
# shellcheck disable=SC2154
env AUTOMATION_LIB_PATH=$AUTOMATION_LIB_PATH \
AUTOMATION_VERSION=$AUTOMATION_VERSION \
INSTALLATION_SOURCE=$INSTALLATION_SOURCE \
A_DEBUG=$A_DEBUG \
MAGIC_JUJU=$_MAGIC_JUJU \
$CHAIN_TO
msg "##### Installation complete for '$arg' subcomponent"
else
msg "Warning: Cannot find installer for $CHAIN_TO"
fi
done
install_environment
# Signify finalization of installation process
(
echo -n "##### Finalizing successful installation of version "
echo -n "$AUTOMATION_VERSION" | tee "$AUTOMATION_LIB_PATH/../AUTOMATION_VERSION"
echo " of 'common'${_ARGS:+, and subcomponents: $_ARGS}"
)
else # Something has gone horribly wrong
msg "Error: The installer script is incompatible with version $AUTOMATION_VERSION"
msg "Please obtain and use a newer version of $SCRIPT_FILENAME which supports ID $_MAGIC_JUJU"
exit 10
fi


@@ -4,16 +4,26 @@
set -e
if [[ "$CIRRUS_CI" == "true" ]]; then
echo "Running under Cirrus-CI: Exporting all \$CIRRUS_* variables"
# Allow tests access to details presented by Cirrus-CI
for env_var in $(awk 'BEGIN{for(v in ENVIRON) print v}' | grep -E "^CIRRUS_")
do
echo " $env_var=${!env_var}"
export $env_var="${!env_var}"
done
fi
this_script_filepath="$(realpath $0)"
runner_script_filename="$(basename $0)"
for test_subdir in $(find "$(realpath $(dirname $0)/../)" -type d -name test | sort -r); do
test_runner_filepath="$test_subdir/$runner_script_filename"
if [[ -x "$test_runner_filepath" ]] && [[ "$test_runner_filepath" != "$this_script_filepath" ]]; then
echo -e "\nExecuting $test_runner_filepath..." >> /dev/stderr
$test_runner_filepath
else
echo -e "\nWARNING: Skipping $test_runner_filepath" >> /dev/stderr
fi
done

build-push/.install.sh Executable file

@@ -0,0 +1,29 @@
#!/bin/bash
# Installs 'build-push' script system-wide. NOT intended to be used directly
# by humans, should only be used indirectly by running
# ../bin/install_automation.sh <ver> build-push
set -eo pipefail
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"
INSTALL_PREFIX=$(realpath $AUTOMATION_LIB_PATH/..)
# Assume the directory this script is in, represents what is being installed
INSTALL_NAME=$(basename $(dirname ${BASH_SOURCE[0]}))
AUTOMATION_VERSION=$(automation_version)
[[ -n "$AUTOMATION_VERSION" ]] || \
die "Could not determine version of common automation libs, was 'install_automation.sh' successful?"
echo "Installing $INSTALL_NAME version $(automation_version) into $INSTALL_PREFIX"
unset INST_PERM_ARG
if [[ $UID -eq 0 ]]; then
INST_PERM_ARG="-o root -g root"
fi
cd $(dirname $(realpath "${BASH_SOURCE[0]}"))
install -v $INST_PERM_ARG -D -t "$INSTALL_PREFIX/bin" ./bin/*
echo "Successfully installed $INSTALL_NAME"

build-push/README.md Normal file

@@ -0,0 +1,114 @@
# Build-push script
This is a wrapper around buildah build, coupled with pre and post
build commands and automatic registry server push. Its goal is to
provide an abstraction layer for additional build automation. Though
it may be useful on its own, this is not its primary purpose.
## Requirements
* Executables for `jq` and `buildah` (1.23 or later) are available.
* Automation common-library is installed & env. var set.
* Installed system-wide as per
[the top-level documentation](https://github.com/containers/automation#installation)
* -or-
* Run directly from repository clone by first doing
`export AUTOMATION_LIB_PATH=/path/to/clone/common/lib`
* Optionally, the kernel may be configured to use emulation (such as QEMU)
for non-native binary execution (where available and supported). See
[the section below for more information](README.md#qemu-user-static-emulation).
## QEMU-user-static Emulation
On platforms/distros that support it (like Fedora 34 and later), this is a handy
way to enable non-native binary execution. It can therefore be
used to build container images for other, non-native architectures.
Though setup may vary by distro/version, on Fedora 34 all that's needed
is to install the `qemu-user-static` package. It will take care
of automatically registering the emulation executables with the
kernel.
Otherwise, you may find these [handy/dandy scripts and
container images useful](https://github.com/multiarch/qemu-user-static#multiarchqemu-user-static-images) for environments without native support (like
CentOS and RHEL). However, be aware I cannot attest to the safety
or quality of those binaries/images, so use them at your own risk.
Something like this (as **root**):
```bash
~# install qemu user static binaries somehow
~# qemu_setup_fqin="docker.io/multiarch/qemu-user-static:latest"
~# vol_awk='{print "-v "$1":"$1""}'
~# bin_vols=$(find /usr/bin -name 'qemu-*-static' | awk -e "$vol_awk" | tr '\n' ' ')
~# podman run --rm --privileged $bin_vols $qemu_setup_fqin --reset -p yes
```
Note: You may need to alter `$vol_awk` or the `podman` command line
depending on what your platform supports.
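Whichever setup method is used, the result can be checked directly through the kernel's `binfmt_misc` registration interface. A minimal sketch, assuming the standard `/proc/sys/fs/binfmt_misc` location (it may be absent or empty on hosts without emulation configured):

```shell
#!/bin/bash
# List any qemu emulators currently registered with the kernel via binfmt_misc.
# The directory may not exist (or may be empty) on hosts without emulation.
regs=$(ls /proc/sys/fs/binfmt_misc/ 2>/dev/null | grep '^qemu-' || true)
if [[ -n "$regs" ]]; then
    echo "$regs"
else
    echo "No qemu binfmt registrations found"
fi
```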
## Use in build automation
This script may be useful as a uniform interface for building and pushing
for multiple architectures, all in one go. A simple example would be:
```bash
$ export SOME_USERNAME=foo # normally hidden/secured in the CI system
$ export SOME_PASSWORD=bar # along with this password value.
$ build-push.sh --arches=arm64,ppc64le,s390x quay.io/some/thing ./path/to/contextdir
```
In this case, the image `quay.io/some/thing:latest` would be built for the
listed architectures, then pushed to the remote registry server.
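As an aside, the script derives the registry server, namespace, and image name by splitting the FQIN on `/`, which is why exactly three components (and no tag) are required. A minimal sketch of that split, mirroring the `awk` calls in `parse_args()` and using the example FQIN above:

```shell
#!/bin/bash
# Split an FQIN into its three components, as build-push.sh's parse_args() does.
FQIN="quay.io/some/thing"
REGSERVER=$(awk -F '/' '{print $1}' <<<"$FQIN")
NAMESPACE=$(awk -F '/' '{print $2}' <<<"$FQIN")
IMGNAME=$(awk -F '/' '{print $3}' <<<"$FQIN")
echo "registry=$REGSERVER namespace=$NAMESPACE image=$IMGNAME"
# -> registry=quay.io namespace=some image=thing
```

The namespace component also determines the credential variable names, which is why the example above exports `SOME_USERNAME` and `SOME_PASSWORD`.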
### Use in automation with additional preparation
When building for multiple architectures using emulation, it's vastly
more efficient to execute as few non-native RUN instructions as possible.
This is supported by the `--prepcmd` option, which specifies a shell
command-string to execute prior to building the image. The command-string
will have access to a set of exported env. vars. for use and/or
substitution (see the `--help` output for details).
For example, this command string could be used to seed the build cache
by pulling down previously built image of the same name:
```bash
$ build-push.sh ... quay.io/test/ing --prepcmd='$RUNTIME pull $FQIN:latest'
```
In this example, the command `buildah pull quay.io/test/ing:latest` will
be executed prior to the build.
### Use in automation with modified images
Sometimes additional steps need to be performed after the build, to modify,
inspect or additionally tag the built image before it's pushed. This could
include (for example) running tests on the image, or modifying its metadata
in some way. All these and more are supported by the `--modcmd` option.
Simply feed it a command string to be run after a successful build. The
command-string script will have access to a set of exported env. vars.
for use and/or substitution (see the `--help` output for details).
After executing a `--modcmd`, `build-push.sh` will take care to identify
all images related to the original FQIN (minus the tag). Should
additional tags be present, they will also be pushed (absent the
`--nopush` flag). If any/all images are missing, they will be silently
ignored.
For example you could use this to only push version-tagged images, and
never `latest`:
```bash
$ build-push.sh ... --modcmd='$RUNTIME tag $FQIN:latest $FQIN:9.8.7 && \
$RUNTIME manifest rm $FQIN:latest'
```
Note: If your `--modcmd` command or script removes **ALL** tags, and
`--nopush` was **not** specified, an error message will be printed
followed by a non-zero exit. This is intended to help automation
catch an assumed missed-expectation.
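Both `--prepcmd` and `--modcmd` strings are executed with the listed variables (such as `$RUNTIME` and `$FQIN`) exported, so references left un-expanded by single quotes on the command line are resolved at run time. A stripped-down sketch of that mechanism, with hypothetical stand-in values:

```shell
#!/bin/bash
# Hypothetical stand-ins for values build-push.sh would export:
export RUNTIME="buildah"
export FQIN="quay.io/test/ing"
# Single quotes keep the references un-expanded until bash -c runs the string:
MODCMD='echo "$RUNTIME tag $FQIN:latest $FQIN:9.8.7"'
bash -c "$MODCMD"
# -> buildah tag quay.io/test/ing:latest quay.io/test/ing:9.8.7
```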

build-push/bin/build-push.sh Executable file

@@ -0,0 +1,481 @@
#!/bin/bash
# This is a wrapper around buildah build, coupled with pre and post
# build commands and automatic registry server push. Its goal is to
# provide an abstraction layer for additional build automation. Though
# it may be useful on its own, this is not its primary purpose.
#
# See the README.md file for more details
set -eo pipefail
# This is a convenience for callers that don't separately source this first
# in their automation setup.
if [[ -z "$AUTOMATION_LIB_PATH" ]] && [[ -r /etc/automation_environment ]]; then
set -a
source /etc/automation_environment
set +a
fi
if [[ ! -r "$AUTOMATION_LIB_PATH/common_lib.sh" ]]; then
(
echo "ERROR: Expecting \$AUTOMATION_LIB_PATH to contain the installation"
echo " directory path for the common automation tooling."
echo " Please refer to the README.md for installation instructions."
) >> /dev/stderr
exit 2 # Verified by tests
fi
source $AUTOMATION_LIB_PATH/common_lib.sh
SCRIPT_FILEPATH=$(realpath "${BASH_SOURCE[0]}")
# Useful for non-standard installations & testing
RUNTIME="${RUNTIME:-$(type -P buildah||echo /bin/true)}" # see check_dependencies()
# List of variable names to export for --prepcmd and --modcmd
# N/B: Bash cannot export arrays
_CMD_ENV="SCRIPT_FILEPATH RUNTIME PLATFORMOS FQIN CONTEXT
PUSH ARCHES REGSERVER NAMESPACE IMGNAME PREPCMD MODCMD"
# Simple error-message strings
E_FQIN="Must specify a valid 3-component FQIN w/o a tag, not:"
E_CONTEXT="Given context path is not an existing directory:"
E_ONEARCH="Must specify --arches=<value> with '=', and <value> being a comma-separated list, not:"
_E_PREPMOD_SFX="with '=', and <value> being a (quoted) string, not:"
E_USERPASS="When --nopush not specified, must export non-empty value for variable:"
E_USAGE="
Usage: $(basename ${BASH_SOURCE[0]}) [options] <FQIN> <Context> [extra...]
With the required arguments (See also, 'Required Environment Variables'):
<FQIN> is the fully-qualified image name to build and push. It must
contain only three components: Registry FQDN:PORT, Namespace, and
Image Name. The image tag must NOT be specified, see --modcmd=<value>
option below.
<Context> is the full build-context DIRECTORY path containing the
target Dockerfile or Containerfile. This must be a local path to
an existing directory.
Zero or more [options] and [extra...] optional arguments:
--help if specified, will display this usage/help message.
--arches=<value> specifies a comma-separated list of architectures
to build. When unspecified, the local system's architecture will
be used. Architecture names must be the canonical values used/supported
by golang and available/included in the base-image's manifest list.
Note: The '=' is required.
--prepcmd=<value> specifies a bash string to execute just prior to
building. Any embedded quoting will be preserved. Any output produced
will be displayed, but ignored. See the 'Environment for...' section
below for details on what env. vars. are made available for use
by/substituted in <value>.
--modcmd=<value> specifies a bash string to execute after a successful
build but prior to pushing any image(s). Any embedded quoting will be
preserved. Output from the script will be displayed, but ignored.
Any tags which should/shouldn't be pushed must be handled by this
command/script (including complete removal or replacement). See the
'Environment for...' section below for details on what env. vars.
are made available for use by/substituted in <value>. If no
FQIN tags remain, an error will be printed and the script will exit
non-zero.
--nopush will bypass pushing the built/tagged image(s).
[extra...] specifies optional, additional arguments to pass when building
images. For example, this may be used to pass in [actual] build-args, or
volume-mounts.
Environment for --prepcmd and --modcmd
The shell environment for executing these strings will contain the
following environment variables and their values at runtime:
$_CMD_ENV
Additionally, unless --nopush was specified, the host will be logged
into the registry server.
Required Environment Variables
Unless --nopush is used, \$<NAMESPACE>_USERNAME and
\$<NAMESPACE>_PASSWORD must contain the necessary registry
credentials. The value for <NAMESPACE> is always capitalized.
The account is assumed to have 'write' access to push the built
image.
Optional Environment Variables:
\$RUNTIME specifies the complete path to an alternate executable
to use for building. Defaults to the location of 'buildah'.
\$PARALLEL_JOBS specifies the number of builds to execute in parallel.
When unspecified, it defaults to the number of processors (threads) on
the system.
"
# Show an error message, followed by usage text to stderr
die_help() {
local err="${1:-No error message specified}"
msg "Please use --help for usage information."
die "$err"
}
init() {
# /bin/true is used by unit-tests
if [[ "$RUNTIME" =~ true ]] || [[ ! $(type -P "$RUNTIME") ]]; then
die_help "Unable to find \$RUNTIME ($RUNTIME) on path: $PATH"
fi
if [[ -n "$PARALLEL_JOBS" ]] && [[ ! "$PARALLEL_JOBS" =~ ^[0-9]+$ ]]; then
PARALLEL_JOBS=""
fi
# Can't use $(uname -m) because (for example) "x86_64" != "amd64" in registries
# This will be verified, see check_dependencies().
NATIVE_GOARCH="${NATIVE_GOARCH:-$($RUNTIME info --format='{{.host.arch}}')}"
PARALLEL_JOBS="${PARALLEL_JOBS:-$($RUNTIME info --format='{{.host.cpus}}')}"
dbg "Found native go-arch: $NATIVE_GOARCH"
dbg "Found local CPU count: $PARALLEL_JOBS"
if [[ -z "$NATIVE_GOARCH" ]]; then
die_help "Unable to determine the local system architecture, is \$RUNTIME correct: '$RUNTIME'"
elif ! type -P jq &>/dev/null; then
die_help "Unable to find 'jq' executable on path: $PATH"
fi
# Not likely overridden, but keep the possibility open
PLATFORMOS="${PLATFORMOS:-linux}"
# Env. vars set by parse_args()
FQIN="" # required (fully-qualified-image-name)
CONTEXT="" # required (directory path)
PUSH=1 # optional (1 means push, 0 means do not)
ARCHES="$NATIVE_GOARCH" # optional (Native architecture default)
PREPCMD="" # optional (--prepcmd)
MODCMD="" # optional (--modcmd)
declare -a BUILD_ARGS
BUILD_ARGS=() # optional
REGSERVER="" # parsed out of $FQIN
NAMESPACE="" # parsed out of $FQIN
IMGNAME="" # parsed out of $FQIN
LOGGEDIN=0 # indicates successful $REGSERVER/$NAMESPACE login
unset NAMESPACE_USERNAME # lookup based on $NAMESPACE when $PUSH=1
unset NAMESPACE_PASSWORD # lookup based on $NAMESPACE when $PUSH=1
}
cleanup() {
set +e
if ((LOGGEDIN)) && ! $RUNTIME logout "$REGSERVER/$NAMESPACE"; then
warn "Logout of registry '$REGSERVER/$NAMESPACE' failed."
fi
}
parse_args() {
local -a args
local arg
local archarg
local nsu_var
local nsp_var
dbg "in parse_args()"
if [[ $# -lt 2 ]]; then
die_help "Must specify non-empty values for required arguments."
fi
args=("$@") # Special-case quoting: Will NOT separate quoted arguments
for arg in "${args[@]}"; do
dbg "Processing parameter '$arg'"
case "$arg" in
--arches=*)
archarg=$(tr ',' ' '<<<"${arg:9}")
if [[ -z "$archarg" ]]; then die_help "$E_ONEARCH '$arg'"; fi
ARCHES="$archarg"
;;
--arches)
# Argument format not supported (to simplify parsing logic)
die_help "$E_ONEARCH '$arg'"
;;
--prepcmd=*)
# Bash argument processing automatically strips any outside quotes
PREPCMD="${arg:10}"
;;
--prepcmd)
die_help "Must specify --prepcmd=<value> $_E_PREPMOD_SFX '$arg'"
;;
--modcmd=*)
MODCMD="${arg:9}"
;;
--modcmd)
die_help "Must specify --modcmd=<value> $_E_PREPMOD_SFX '$arg'"
;;
--nopush)
dbg "Nopush flag detected, will NOT push built images."
PUSH=0
;;
*)
if [[ -z "$FQIN" ]]; then
dbg "Grabbing FQIN parameter: '$arg'."
FQIN="$arg"
REGSERVER=$(awk -F '/' '{print $1}' <<<"$FQIN")
NAMESPACE=$(awk -F '/' '{print $2}' <<<"$FQIN")
IMGNAME=$(awk -F '/' '{print $3}' <<<"$FQIN")
elif [[ -z "$CONTEXT" ]]; then
dbg "Grabbing Context parameter: '$arg'."
CONTEXT=$(realpath -e -P $arg || die_help "$E_CONTEXT '$arg'")
else
# Hack: Allow array addition to handle any embedded special characters
# shellcheck disable=SC2207
BUILD_ARGS+=($(printf "%q" "$arg"))
fi
;;
esac
done
if ((PUSH)) && [[ -n "$NAMESPACE" ]]; then
set +x # Don't expose any secrets if somehow we got into -x mode
nsu_var="$(tr '[:lower:]' '[:upper:]'<<<${NAMESPACE})_USERNAME"
nsp_var="$(tr '[:lower:]' '[:upper:]'<<<${NAMESPACE})_PASSWORD"
dbg "Confirming non-empty \$$nsu_var and \$$nsp_var"
# These will be unset after logging into the registry
NAMESPACE_USERNAME="${!nsu_var}"
NAMESPACE_PASSWORD="${!nsp_var}"
# Leak as little as possible into any child processes
unset "$nsu_var" "$nsp_var"
fi
# validate parsed argument contents
if [[ -z "$FQIN" ]]; then
die_help "$E_FQIN '<empty>'"
elif [[ -z "$REGSERVER" ]] || [[ -z "$NAMESPACE" ]] || [[ -z "$IMGNAME" ]]; then
die_help "$E_FQIN '$FQIN'"
elif [[ -z "$CONTEXT" ]]; then
die_help "$E_CONTEXT ''"
fi
test $(tr -d -c '/' <<<"$FQIN" | wc -c) = '2' || \
die_help "$E_FQIN '$FQIN'"
test -r "$CONTEXT/Containerfile" || \
test -r "$CONTEXT/Dockerfile" || \
die_help "Given context path does not contain a Containerfile or Dockerfile: '$CONTEXT'"
if ((PUSH)); then
test -n "$NAMESPACE_USERNAME" || \
die_help "$E_USERPASS '\$$nsu_var'"
test -n "$NAMESPACE_PASSWORD" || \
die_help "$E_USERPASS '\$$nsp_var'"
fi
dbg "Processed:
RUNTIME='$RUNTIME'
FQIN='$FQIN'
CONTEXT='$CONTEXT'
PUSH='$PUSH'
ARCHES='$ARCHES'
MODCMD='$MODCMD'
BUILD_ARGS=$(echo -n "${BUILD_ARGS[@]}")
REGSERVER='$REGSERVER'
NAMESPACE='$NAMESPACE'
IMGNAME='$IMGNAME'
namespace username chars: '${#NAMESPACE_USERNAME}'
namespace password chars: '${#NAMESPACE_PASSWORD}'
"
}
# Build may have a LOT of output, use a standard stage-marker
# to ease reading and debugging from the wall-o-text
stage_notice() {
local msg
# N/B: It would be nice/helpful to resolve any env. vars. in '$@'
# for display. Unfortunately this is hard to do safely
# with (e.g.) eval echo "$@" :(
msg="$*"
(
echo "############################################################"
echo "$msg"
echo "############################################################"
) >> /dev/stderr
}
BUILTIID="" # populated with the image-id on successful build
parallel_build() {
local arch
local platforms=""
local output
local _fqin
local -a _args
_fqin="$1"
dbg "in parallel_build($_fqin)"
req_env_vars FQIN ARCHES CONTEXT REGSERVER NAMESPACE IMGNAME
req_env_vars PARALLEL_JOBS PLATFORMOS RUNTIME _fqin
for arch in $ARCHES; do
platforms="${platforms:+$platforms,}$PLATFORMOS/$arch"
done
# Need to build up the command from parts b/c array conversion is handled
# in strange and non-obvious ways when it comes to embedded whitespace.
_args=(--layers --force-rm --jobs="$PARALLEL_JOBS" --platform="$platforms"
--manifest="$_fqin" "$CONTEXT")
# Keep user-specified BUILD_ARGS near the beginning so errors are easy to spot
# Provide a copy of the output in case something goes wrong in a complex build
stage_notice "Executing build command: '$RUNTIME build ${BUILD_ARGS[*]} ${_args[*]}'"
"$RUNTIME" build "${BUILD_ARGS[@]}" "${_args[@]}"
}
confirm_arches() {
local inspjson
local filter=".manifests[].platform.architecture"
local arch
local maniarches
dbg "in confirm_arches()"
req_env_vars FQIN ARCHES RUNTIME
if ! inspjson=$($RUNTIME manifest inspect "containers-storage:$FQIN:latest"); then
die "Error reading manifest list metadata for 'containers-storage:$FQIN:latest'"
fi
# Convert into space-delimited string for grep error message (below)
# TODO: Use an array instead, could be simpler? Would need testing.
if ! maniarches=$(jq -r "$filter" <<<"$inspjson" | \
grep -v 'null' | \
tr -s '[:space:]' ' ' | \
sed -z '$ s/[\n ]$//'); then
die "Error processing manifest list metadata:
$inspjson"
fi
dbg "Found manifest arches: $maniarches"
for arch in $ARCHES; do
grep -q "$arch" <<<"$maniarches" || \
die "Failed to locate the $arch arch. in the $FQIN:latest manifest-list: $maniarches"
done
}
registry_login() {
dbg "in registry_login()"
req_env_vars PUSH LOGGEDIN
if ((PUSH)) && ! ((LOGGEDIN)); then
req_env_vars NAMESPACE_USERNAME NAMESPACE_PASSWORD REGSERVER NAMESPACE
dbg " Logging in"
echo "$NAMESPACE_PASSWORD" | \
$RUNTIME login --username "$NAMESPACE_USERNAME" --password-stdin \
"$REGSERVER/$NAMESPACE"
LOGGEDIN=1
elif ((PUSH)); then
dbg " Already logged in"
fi
# No reason to keep these around any longer
unset NAMESPACE_USERNAME NAMESPACE_PASSWORD
}
run_prepmod_cmd() {
local kind="$1"
shift
dbg "Exporting variables '$_CMD_ENV'"
# The indirect export is intentional here
# shellcheck disable=SC2163
export $_CMD_ENV
stage_notice "Executing $kind-command: " "$@"
bash -c "$@"
dbg "$kind command successful"
}
# Outputs sorted list of FQIN w/ tags to stdout, silent otherwise
get_manifest_tags() {
local result_json
local fqin_names
dbg "in get_manifest_tags()"
# At the time of this comment, there is no reliable way to
# lookup all tags based solely on inspecting a manifest.
# However, since we know $FQIN (remember, value has no tag) we can
# use it to search all related names in container storage. Unfortunately
# because images can have multiple tags, the `reference` filter
# can return names we don't care about. Work around this with a
# grep of $FQIN in the results.
if ! result_json=$($RUNTIME images --json --filter=reference=$FQIN); then
die "Error listing manifest-list images that reference '$FQIN'"
fi
dbg "Image listing json: $result_json"
if [[ -n "$result_json" ]]; then # N/B: value could be '[]'
# Rely on the caller to handle an empty list, ignore items missing a name key.
if ! fqin_names=$(jq -r '.[]? | .names[]?'<<<"$result_json"); then
die "Error obtaining image names from '$FQIN' manifest-list search result:
$result_json"
fi
dbg "Sorting fqin_names"
# Don't emit an empty newline when the list is empty
[[ -z "$fqin_names" ]] || \
sort <<<"$fqin_names"
fi
dbg "get_manifest_tags() returning successfully"
}
push_images() {
local fqin_list
local fqin
dbg "in push_images()"
# It's possible that --modcmd=* removed all images, make sure
# this is known to the caller.
if ! fqin_list=$(get_manifest_tags); then
die "Retrieving set of manifest-list tags to push for '$FQIN'"
fi
if [[ -z "$fqin_list" ]]; then
warn "No FQIN(s) to be pushed."
fi
if ((PUSH)); then
dbg "Will try to push FQINs: '$fqin_list'"
registry_login
for fqin in $fqin_list; do
# Note: --all means push manifest AND images it references
msg "Pushing $fqin"
$RUNTIME manifest push --all $fqin docker://$fqin
done
else
# Even if --nopush was specified, be helpful to humans with a lookup of all the
# relevant tags for $FQIN that would have been pushed and display them.
warn "Option --nopush specified, not pushing: '$fqin_list'"
fi
}
##### MAIN() #####
# Handle requested help first before anything else
if grep -q -- '--help' <<<"$@"; then
echo "$E_USAGE" >> /dev/stdout # allow grep'ing
exit 0
fi
init
parse_args "$@"
if [[ -n "$PREPCMD" ]]; then
registry_login
run_prepmod_cmd prep "$PREPCMD"
fi
parallel_build "$FQIN:latest"
# If a parallel build or the manifest-list assembly fails, buildah
# may still exit successfully. Catch this condition by verifying
# all expected arches are present in the manifest list.
confirm_arches
if [[ -n "$MODCMD" ]]; then
registry_login
run_prepmod_cmd mod "$MODCMD"
fi
# Handles --nopush internally
push_images

build-push/test/fake_buildah.sh Executable file

@@ -0,0 +1,43 @@
#!/bin/bash
set -e
# Need to keep track of values from 'build' to 'manifest' calls
DATF='/tmp/fake_buildah.json'
if [[ "$1" == "build" ]]; then
echo '{"manifests":[' > $DATF
for arg; do
if [[ "$arg" =~ --platform= ]]; then
for platarch in $(cut -d '=' -f 2 <<<"$arg" | tr ',' ' '); do
arch=$(cut -d '/' -f 2 <<<"$platarch")
[[ -n "$arch" ]] || continue
echo "FAKEBUILDAH ($arch)" > /dev/stderr
echo -n ' {"platform":{"architecture":"' >> $DATF
echo -n "$arch" >> $DATF
echo '"}},' >> $DATF
done
fi
done
# dummy-value to avoid dealing with JSON oddity: last item must not
# end with a comma
echo ' {}' >> $DATF
echo ']}' >> $DATF
# Tests expect to see this
echo "FAKEBUILDAH $@"
elif [[ "$1" == "manifest" ]]; then
# validate json while outputting it
jq . $DATF
elif [[ "$1" == "info" ]]; then
case "$@" in
*arch*) echo "amd64" ;;
*cpus*) echo "2" ;;
*) exit 1 ;;
esac
elif [[ "$1" == "images" ]]; then
echo '[{"names":["localhost/foo/bar:latest"]}]'
else
echo "ERROR: Unexpected arg '$1' to fake_buildah.sh" >> /dev/stderr
exit 9
fi


@@ -0,0 +1,24 @@
# This script is intended for use by tests, DO NOT EXECUTE.
set -eo pipefail
# shellcheck disable=SC2154
if [[ "$CIRRUS_CI" == "true" ]]; then
# Cirrus-CI is setup (see .cirrus.yml) to run tests on CentOS
# for simplicity, but it has no native qemu-user-static. For
# the benefit of CI testing, cheat and use whatever random
# emulators are included in the container image.
# N/B: THIS IS NOT SAFE FOR PRODUCTION USE!!!!!
podman run --rm --privileged \
mirror.gcr.io/multiarch/qemu-user-static:latest \
--reset -p yes
elif [[ -x "/usr/bin/qemu-aarch64-static" ]]; then
# TODO: Better way to determine if kernel already setup?
echo "Warning: Assuming qemu-user-static is already setup"
else
echo "Error: System does not appear to have qemu-user-static setup"
exit 1
fi

View File

@ -0,0 +1 @@
../../common/test/run_all_tests.sh

View File

@ -0,0 +1,4 @@
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN /bin/true
ENTRYPOINT /bin/false
# WARNING: testbuilds.sh depends on the number of build steps

View File

@ -0,0 +1,103 @@
#!/bin/bash
TEST_SOURCE_DIRPATH=$(realpath $(dirname "${BASH_SOURCE[0]}"))
# Load standardized test harness
source $TEST_SOURCE_DIRPATH/testlib.sh || exit 1
SUBJ_FILEPATH="$TEST_DIR/$SUBJ_FILENAME"
TEST_CONTEXT="$TEST_SOURCE_DIRPATH/test_context"
EMPTY_CONTEXT=$(mktemp -d -p '' .tmp_$(basename ${BASH_SOURCE[0]})_XXXX)
export NATIVE_GOARCH=$(buildah info --format='{{.host.arch}}')
test_cmd "Verify error when automation library not found" \
2 'ERROR: Expecting \$AUTOMATION_LIB_PATH' \
bash -c "AUTOMATION_LIB_PATH='' RUNTIME=/bin/true $SUBJ_FILEPATH 2>&1"
export AUTOMATION_LIB_PATH="$TEST_SOURCE_DIRPATH/../../common/lib"
test_cmd "Verify error when buildah can't be found" \
1 "ERROR: Unable to find.+/usr/local/bin" \
bash -c "RUNTIME=/bin/true $SUBJ_FILEPATH 2>&1"
# These tests don't actually need to build/run anything
export RUNTIME="$TEST_SOURCE_DIRPATH/fake_buildah.sh"
test_cmd "Verify error when executed w/o any arguments" \
1 "ERROR: Must.+required arguments." \
bash -c "$SUBJ_FILEPATH 2>&1"
test_cmd "Verify error when specifying partial required arguments" \
1 "ERROR: Must.+required arguments." \
bash -c "$SUBJ_FILEPATH foo 2>&1"
test_cmd "Verify error when executed with a bad Containerfile directory" \
1 "ERROR:.+directory: 'bar'" \
bash -c "$SUBJ_FILEPATH foo bar 2>&1"
test_cmd "Verify error when specifying an invalid FQIN" \
1 "ERROR:.+FQIN.+foo" \
bash -c "$SUBJ_FILEPATH foo $EMPTY_CONTEXT 2>&1"
test_cmd "Verify error when specifying a slightly invalid FQIN" \
1 "ERROR:.+FQIN.+foo/bar" \
bash -c "$SUBJ_FILEPATH foo/bar $EMPTY_CONTEXT 2>&1"
test_cmd "Verify error when executed with a bad context subdirectory" \
1 "ERROR:.+Containerfile or Dockerfile: '$EMPTY_CONTEXT'" \
bash -c "$SUBJ_FILEPATH foo/bar/baz $EMPTY_CONTEXT 2>&1"
# no-longer needed
rm -rf "$EMPTY_CONTEXT"
unset EMPTY_CONTEXT
test_cmd "Verify --help output to stdout can be grepped" \
0 "Optional Environment Variables:" \
bash -c "$SUBJ_FILEPATH --help | grep 'Optional Environment Variables:'"
test_cmd "Confirm required username env. var. unset error" \
1 "ERROR.+BAR_USERNAME" \
bash -c "$SUBJ_FILEPATH foo/bar/baz $TEST_CONTEXT 2>&1"
test_cmd "Confirm required password env. var. unset error" \
1 "ERROR.+BAR_PASSWORD" \
bash -c "BAR_USERNAME=snafu $SUBJ_FILEPATH foo/bar/baz $TEST_CONTEXT 2>&1"
for arg in 'prepcmd' 'modcmd'; do
test_cmd "Verify error when --$arg specified without an '='" \
1 "ERROR:.+with '='" \
bash -c "BAR_USERNAME=snafu BAR_PASSWORD=ufans $SUBJ_FILEPATH foo/bar/baz $TEST_CONTEXT --$arg notgoingtowork 2>&1"
done
test_cmd "Verify numeric \$PARALLEL_JOBS is handled properly" \
0 "FAKEBUILDAH.+--jobs=42 " \
bash -c "PARALLEL_JOBS=42 $SUBJ_FILEPATH localhost/foo/bar --nopush $TEST_CONTEXT 2>&1"
test_cmd "Verify non-numeric \$PARALLEL_JOBS is handled properly" \
0 "FAKEBUILDAH.+--jobs=[0-9]+ " \
bash -c "PARALLEL_JOBS=badvalue $SUBJ_FILEPATH localhost/foo/bar --nopush $TEST_CONTEXT 2>&1"
PREPCMD='echo "#####${ARCHES}#####"'
test_cmd "Verify \$ARCHES value is available to prep-command" \
0 "#####correct horse battery staple#####.+FAKEBUILDAH.+test_context" \
bash -c "$SUBJ_FILEPATH --arches=correct,horse,battery,staple localhost/foo/bar --nopush --prepcmd='$PREPCMD' $TEST_CONTEXT 2>&1"
rx="FAKEBUILDAH build \\$'--test-build-arg=one \\\"two\\\" three\\\nfour' --anotherone=foo\\\ bar"
test_cmd "Verify special characters preserved in build-args" \
0 "$rx" \
bash -c "PARALLEL_JOBS=badvalue $SUBJ_FILEPATH localhost/foo/bar $TEST_CONTEXT --test-build-arg=\"one \\\"two\\\" three
four\" --nopush --anotherone=\"foo bar\" 2>&1"
# A specialized non-container environment is required to run these
if [[ -n "$BUILD_PUSH_TEST_BUILDS" ]]; then
export RUNTIME=$(type -P buildah)
export PARALLEL_JOBS=$($RUNTIME info --format='{{.host.cpus}}')
source $(dirname "${BASH_SOURCE[0]}")/testbuilds.sh
else
echo "WARNING: Set \$BUILD_PUSH_TEST_BUILDS non-empty to fully test build_push."
echo ""
fi
# Must always happen last
exit_with_status

View File

@ -0,0 +1,146 @@
# This script is intended to be sourced from testbin-build-push.sh.
# Any/all other usage is virtually guaranteed to fail and/or cause
# harm to the system.
for varname in RUNTIME SUBJ_FILEPATH TEST_CONTEXT TEST_SOURCE_DIRPATH TEST_FQIN BUILDAH_USERNAME BUILDAH_PASSWORD; do
value=${!varname}
if [[ -z "$value" ]]; then
echo "ERROR: Required \$$varname variable is unset/empty."
exit 1
fi
done
unset value
# RUNTIME is defined by caller
# shellcheck disable=SC2154
$RUNTIME --version
test_cmd "Confirm $(basename $RUNTIME) is available" \
0 "buildah version .+" \
$RUNTIME --version
skopeo --version
test_cmd "Confirm skopeo is available" \
0 "skopeo version .+" \
skopeo --version
PREPCMD='echo "SpecialErrorMessage:$REGSERVER" >> /dev/stderr && exit 42'
# SUBJ_FILEPATH and TEST_CONTEXT are defined by caller
# shellcheck disable=SC2154
test_cmd "Confirm error output and exit(42) from --prepcmd" \
42 "SpecialErrorMessage:localhost" \
bash -c "$SUBJ_FILEPATH --nopush localhost/foo/bar $TEST_CONTEXT --prepcmd='$PREPCMD' 2>&1"
# N/B: The following are stateful - each depends on preceding test success
# and assume empty container-storage (podman system reset).
test_cmd "Confirm building native-arch test image w/ --nopush" \
0 "STEP 3/3: ENTRYPOINT /bin/false.+COMMIT" \
bash -c "A_DEBUG=1 $SUBJ_FILEPATH localhost/foo/bar $TEST_CONTEXT --nopush 2>&1"
native_arch=$($RUNTIME info --format='{{.host.arch}}')
test_cmd "Confirm native_arch was set to non-empty string" \
0 "" \
test -n "$native_arch"
test_cmd "Confirm built image manifest contains the native arch '$native_arch'" \
0 "$native_arch" \
bash -c "$RUNTIME manifest inspect localhost/foo/bar:latest | jq -r '.manifests[0].platform.architecture'"
test_cmd "Confirm rebuilding with same command uses cache" \
0 "STEP 3/3.+Using cache" \
bash -c "A_DEBUG=1 $SUBJ_FILEPATH localhost/foo/bar $TEST_CONTEXT --nopush 2>&1"
test_cmd "Confirm manifest-list can be removed by name" \
0 "untagged: localhost/foo/bar:latest" \
$RUNTIME manifest rm containers-storage:localhost/foo/bar:latest
test_cmd "Verify expected partial failure when passing bogus architectures" \
125 "no image found in image index for architecture" \
bash -c "A_DEBUG=1 $SUBJ_FILEPATH --arches=correct,horse,battery,staple localhost/foo/bar --nopush $TEST_CONTEXT 2>&1"
MODCMD='$RUNTIME tag $FQIN:latest $FQIN:9.8.7-testing'
test_cmd "Verify --modcmd is able to tag the manifest" \
0 "Executing mod-command" \
bash -c "A_DEBUG=1 $SUBJ_FILEPATH localhost/foo/bar $TEST_CONTEXT --nopush --modcmd='$MODCMD' 2>&1"
test_cmd "Verify the tagged manifest is also present" \
0 "[a-zA-Z0-9]+" \
bash -c "$RUNTIME images --quiet localhost/foo/bar:9.8.7-testing"
test_cmd "Confirm tagged image manifest contains native arch '$native_arch'" \
0 "$native_arch" \
bash -c "$RUNTIME manifest inspect localhost/foo/bar:9.8.7-testing | jq -r '.manifests[0].platform.architecture'"
TEST_TEMP=$(mktemp -d -p '' .tmp_$(basename ${BASH_SOURCE[0]})_XXXX)
test_cmd "Confirm digest can be obtained from 'latest' manifest list" \
0 ".+" \
bash -c "$RUNTIME manifest inspect localhost/foo/bar:latest | jq -r '.manifests[0].digest' | tee $TEST_TEMP/latest_digest"
test_cmd "Confirm digest can be obtained from '9.8.7-testing' manifest list" \
0 ".+" \
bash -c "$RUNTIME manifest inspect localhost/foo/bar:9.8.7-testing | jq -r '.manifests[0].digest' | tee $TEST_TEMP/tagged_digest"
test_cmd "Verify tagged manifest image digest matches the same in latest" \
0 "" \
test "$(<$TEST_TEMP/tagged_digest)" == "$(<$TEST_TEMP/latest_digest)"
MODCMD='
set -x;
$RUNTIME images && \
$RUNTIME manifest rm $FQIN:latest && \
$RUNTIME manifest rm $FQIN:9.8.7-testing && \
echo "AllGone";
'
test_cmd "Verify --modcmd can execute command string that removes all tags" \
0 "AllGone.*No FQIN.+to be pushed" \
bash -c "A_DEBUG=1 $SUBJ_FILEPATH --modcmd='$MODCMD' localhost/foo/bar --nopush $TEST_CONTEXT 2>&1"
test_cmd "Verify previous --modcmd removed the 'latest' tagged image" \
125 "image not known" \
$RUNTIME images --quiet containers-storage:localhost/foo/bar:latest
test_cmd "Verify previous --modcmd removed the '9.8.7-testing' tagged image" \
125 "image not known" \
$RUNTIME images --quiet containers-storage:localhost/foo/bar:9.8.7-testing
FAKE_VERSION=$RANDOM
MODCMD="set -ex;
\$RUNTIME tag \$FQIN:latest \$FQIN:$FAKE_VERSION;
\$RUNTIME manifest rm \$FQIN:latest;"
# TEST_FQIN and TEST_SOURCE_DIRPATH defined by caller
# shellcheck disable=SC2154
test_cmd "Verify e2e workflow w/ additional build-args" \
0 "Pushing $TEST_FQIN:$FAKE_VERSION" \
bash -c "env A_DEBUG=1 $SUBJ_FILEPATH \
--prepcmd='touch $TEST_SOURCE_DIRPATH/test_context/Containerfile' \
--modcmd='$MODCMD' \
--arches=amd64,s390x,arm64,ppc64le \
$TEST_FQIN \
$TEST_CONTEXT \
--device=/dev/fuse --label testing=true \
2>&1"
test_cmd "Verify latest tagged image was not pushed" \
2 'reading manifest latest in quay\.io/buildah/do_not_use: manifest unknown' \
skopeo inspect docker://$TEST_FQIN:latest
test_cmd "Verify architectures can be obtained from manifest list" \
0 "" \
bash -c "$RUNTIME manifest inspect $TEST_FQIN:$FAKE_VERSION | \
jq -r '.manifests[].platform.architecture' > $TEST_TEMP/maniarches"
for arch in amd64 s390x arm64 ppc64le; do
test_cmd "Verify $arch architecture present in $TEST_FQIN:$FAKE_VERSION" \
0 "" \
grep -Fqx "$arch" $TEST_TEMP/maniarches
done
test_cmd "Verify pushed image can be removed" \
0 "" \
skopeo delete docker://$TEST_FQIN:$FAKE_VERSION
# Cleanup
rm -rf "$TEST_TEMP"
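
The digest and architecture checks above extract fields from `buildah manifest inspect` output with jq. A minimal Python sketch of the same extraction, using a hypothetical manifest-list document (the digests and shape are illustrative, not real output):

```python
import json

# Hypothetical manifest list, shaped like `buildah manifest inspect` output.
manifest_list = json.loads('''
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:aaaa", "platform": {"architecture": "amd64"}},
    {"digest": "sha256:bbbb", "platform": {"architecture": "arm64"}}
  ]
}
''')

# Equivalent of: jq -r '.manifests[0].digest'
first_digest = manifest_list["manifests"][0]["digest"]

# Equivalent of: jq -r '.manifests[].platform.architecture'
arches = [m["platform"]["architecture"] for m in manifest_list["manifests"]]

print(first_digest)  # sha256:aaaa
print(arches)        # ['amd64', 'arm64']
```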

1
build-push/test/testlib.sh Symbolic link
View File

@ -0,0 +1 @@
../../common/test/testlib.sh

View File

@ -0,0 +1,27 @@
# Podman First-Time Contributor Certificate Generator
This directory contains a simple web-based certificate generator to celebrate first-time contributors to the Podman project.
## Files
- **`certificate_generator.html`** - Interactive web interface for creating certificates
- **`certificate_template.html`** - The certificate template used for generation
- **`first_pr.png`** - Podman logo/branding image used in certificates
## Usage
1. Open `certificate_generator.html` in a web browser
2. Fill in the contributor's details:
- Name
- Pull Request number
- Date (defaults to current date)
3. Preview the certificate in real-time
4. Click "Download Certificate" to save as HTML
## Purpose
These certificates are designed to recognize and celebrate community members who make their first contribution to the Podman project. The certificates feature Podman branding and can be customized for each contributor.
## Contributing
Feel free to improve the design, add features, or suggest enhancements to make the certificate generator even better for recognizing our amazing contributors!

View File

@ -0,0 +1,277 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Podman Certificate Generator</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;600&display=swap');
@import url('https://fonts.googleapis.com/css2?family=Merriweather:wght@400;700;900&display=swap');
body {
font-family: 'Inter', sans-serif;
background-color: #f0f2f5;
margin: 0;
padding: 2rem;
}
.container {
display: grid;
grid-template-columns: 380px 1fr;
gap: 2rem;
max-width: 1600px;
margin: auto;
}
.form-panel {
background-color: white;
padding: 2rem;
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0,0,0,0.1);
height: fit-content;
position: sticky;
top: 2rem;
}
.form-panel h2 {
margin-top: 0;
color: #333;
font-family: 'Merriweather', serif;
}
.form-group {
margin-bottom: 1.5rem;
}
.form-group label {
display: block;
margin-bottom: 0.5rem;
font-weight: 600;
color: #555;
}
.form-group input {
width: 100%;
padding: 0.75rem;
border: 1px solid #ccc;
border-radius: 4px;
box-sizing: border-box;
font-size: 1rem;
}
.action-buttons {
display: flex;
gap: 1rem;
margin-top: 1.5rem;
}
.action-buttons button {
flex-grow: 1;
padding: 0.75rem;
border: none;
border-radius: 4px;
font-size: 1rem;
font-weight: 600;
cursor: pointer;
transition: background-color 0.3s;
}
#downloadBtn {
background-color: #28a745;
color: white;
}
#downloadBtn:hover {
background-color: #218838;
}
.preview-panel {
display: flex;
justify-content: center;
align-items: flex-start;
}
/* Certificate Styles (copied from template and scaled) */
.certificate {
width: 800px;
height: 1100px;
background: #fdfaf0;
border: 2px solid #333;
position: relative;
box-shadow: 0 10px 30px rgba(0,0,0,0.2);
padding: 50px;
box-sizing: border-box;
display: flex;
flex-direction: column;
align-items: center;
font-family: 'Merriweather', serif;
transform: scale(0.8);
transform-origin: top center;
}
.party-popper { position: absolute; font-size: 40px; }
.top-left { top: 40px; left: 40px; }
.top-right { top: 40px; right: 40px; }
.main-title { font-size: 48px; font-weight: 900; color: #333; text-align: center; margin-top: 60px; line-height: 1.2; text-transform: uppercase; }
.subtitle { font-size: 24px; font-weight: 400; color: #333; text-align: center; margin-top: 30px; text-transform: uppercase; letter-spacing: 2px; }
.contributor-name { font-size: 56px; font-weight: 700; color: #333; text-align: center; margin: 15px 0 50px; }
.mascot-image { width: 450px; height: 450px; background-image: url('first_pr.png'); background-size: contain; background-repeat: no-repeat; background-position: center; margin-top: 20px; -webkit-print-color-adjust: exact; print-color-adjust: exact; }
.description { font-size: 22px; color: #333; line-height: 1.6; text-align: center; margin-top: 40px; }
.description strong { font-weight: 700; }
.footer { width: 100%; margin-top: auto; padding-top: 30px; border-top: 1px solid #ccc; display: flex; justify-content: space-between; align-items: flex-end; font-size: 16px; color: #333; }
.pr-info { text-align: left; }
.signature { text-align: right; font-style: italic; }
@media print {
body {
background: #fff;
margin: 0;
padding: 0;
}
.form-panel, .action-buttons {
display: none;
}
.container {
display: block;
margin: 0;
padding: 0;
}
.preview-panel {
padding: 0;
margin: 0;
}
.certificate {
transform: scale(1);
box-shadow: none;
width: 100%;
height: 100vh;
page-break-inside: avoid;
}
}
</style>
</head>
<body>
<div class="container">
<div class="form-panel">
<h2>Certificate Generator</h2>
<div class="form-group">
<label for="contributorName">Contributor Name</label>
<input type="text" id="contributorName" value="Mike McGrath">
</div>
<div class="form-group">
<label for="prNumber">PR Number</label>
<input type="text" id="prNumber" value="26393">
</div>
<div class="form-group">
<label for="mergeDate">Date</label>
<input type="text" id="mergeDate" value="June 13, 2025">
</div>
<div class="action-buttons">
<button id="downloadBtn">Download HTML</button>
</div>
</div>
<div class="preview-panel">
<div id="certificatePreview">
<!-- Certificate HTML will be injected here by script -->
</div>
</div>
</div>
<script>
const nameInput = document.getElementById('contributorName');
const prNumberInput = document.getElementById('prNumber');
const dateInput = document.getElementById('mergeDate');
const preview = document.getElementById('certificatePreview');
function generateCertificateHTML(name, prNumber, date) {
const prLink = `https://github.com/containers/podman/pull/${prNumber}`;
// This is the full, self-contained HTML for the certificate
return `
<div class="certificate">
<div class="party-popper top-left">🎉</div>
<div class="party-popper top-right">🎉</div>
<div class="main-title">Certificate of<br>Contribution</div>
<div class="subtitle">Awarded To</div>
<div class="contributor-name">${name}</div>
<div class="mascot-image"></div>
<div class="description">
For successfully submitting and merging their <strong>First Pull Request</strong> to the <strong>Podman project</strong>.<br>
Your contribution helps make open source better—one PR at a time!
</div>
<div class="footer">
<div class="pr-info">
<div>🔧 Merged PR: <a href="${prLink}" target="_blank">${prLink}</a></div>
<div style="margin-top: 5px;">${date}</div>
</div>
<div class="signature">
Keep hacking, keep contributing!<br>
The Podman Community
</div>
</div>
</div>
`;
}
function updatePreview() {
const name = nameInput.value || '[CONTRIBUTOR_NAME]';
const prNumber = prNumberInput.value || '[PR_NUMBER]';
const date = dateInput.value || '[DATE]';
preview.innerHTML = generateCertificateHTML(name, prNumber, date);
}
document.getElementById('downloadBtn').addEventListener('click', () => {
const name = nameInput.value || 'contributor';
const prNumber = prNumberInput.value || '00000';
const date = dateInput.value || 'Date';
const certificateHTML = generateCertificateHTML(name, prNumber, date);
const fullPageHTML = `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Certificate for ${name}</title>
<style>
/* All the CSS from the generator page */
@import url('https://fonts.googleapis.com/css2?family=Merriweather:wght@400;700;900&display=swap');
body { margin: 20px; font-family: 'Merriweather', serif; background: #e0e0e0; }
.certificate {
transform: scale(1);
box-shadow: none;
margin: auto;
}
/* Paste all certificate-related styles here */
.certificate { width: 800px; height: 1100px; background: #fdfaf0; border: 2px solid #333; position: relative; padding: 50px; box-sizing: border-box; display: flex; flex-direction: column; align-items: center; }
.party-popper { position: absolute; font-size: 40px; }
.top-left { top: 40px; left: 40px; }
.top-right { top: 40px; right: 40px; }
.main-title { font-size: 48px; font-weight: 900; color: #333; text-align: center; margin-top: 60px; line-height: 1.2; text-transform: uppercase; }
.subtitle { font-size: 24px; font-weight: 400; color: #333; text-align: center; margin-top: 30px; text-transform: uppercase; letter-spacing: 2px; }
.contributor-name { font-size: 56px; font-weight: 700; color: #333; text-align: center; margin: 15px 0 50px; }
.mascot-image { width: 450px; height: 450px; background-image: url('first_pr.png'); background-size: contain; background-repeat: no-repeat; background-position: center; margin-top: 20px; -webkit-print-color-adjust: exact; print-color-adjust: exact; }
.description { font-size: 22px; color: #333; line-height: 1.6; text-align: center; margin-top: 40px; }
.description strong { font-weight: 700; }
.footer { width: 100%; margin-top: auto; padding-top: 30px; border-top: 1px solid #ccc; display: flex; justify-content: space-between; align-items: flex-end; font-size: 16px; color: #333; }
.pr-info { text-align: left; }
.signature { text-align: right; font-style: italic; }
@media print {
@page { size: A4 portrait; margin: 0; }
body, html { width: 100%; height: 100%; margin: 0; padding: 0; }
.certificate { width: 100%; height: 100%; box-shadow: none; transform: scale(1); }
}
</style>
</head>
<body>${certificateHTML}</body>
</html>
`;
const blob = new Blob([fullPageHTML], { type: 'text/html' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = `podman-contribution-certificate-${name.toLowerCase().replace(/\s+/g, '-')}.html`;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
});
// Add event listeners to update preview on input change
[nameInput, prNumberInput, dateInput].forEach(input => {
input.addEventListener('input', updatePreview);
});
// Initial preview generation
updatePreview();
</script>
</body>
</html>

View File

@ -0,0 +1,175 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Podman Certificate of Contribution</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Merriweather:wght@400;700;900&display=swap');
body {
margin: 0;
padding: 20px;
font-family: 'Merriweather', serif;
background: #e0e0e0;
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
.certificate {
width: 800px;
height: 1100px;
background: #fdfaf0;
border: 2px solid #333;
position: relative;
box-shadow: 0 10px 30px rgba(0, 0, 0, 0.2);
padding: 50px;
box-sizing: border-box;
display: flex;
flex-direction: column;
align-items: center;
}
.party-popper {
position: absolute;
font-size: 40px;
}
.top-left {
top: 40px;
left: 40px;
}
.top-right {
top: 40px;
right: 40px;
}
.main-title {
font-size: 48px;
font-weight: 900;
color: #333;
text-align: center;
margin-top: 60px;
line-height: 1.2;
text-transform: uppercase;
}
.subtitle {
font-size: 24px;
font-weight: 400;
color: #333;
text-align: center;
margin-top: 30px;
text-transform: uppercase;
letter-spacing: 2px;
}
.contributor-name {
font-size: 56px;
font-weight: 700;
color: #333;
text-align: center;
margin: 15px 0 50px;
}
.mascot-image {
width: 450px;
height: 450px;
background-image: url('first_pr.png');
background-size: contain;
background-repeat: no-repeat;
background-position: center;
margin-top: 20px;
-webkit-print-color-adjust: exact;
print-color-adjust: exact;
}
.description {
font-size: 22px;
color: #333;
line-height: 1.6;
text-align: center;
margin-top: 40px;
}
.description strong {
font-weight: 700;
}
.footer {
width: 100%;
margin-top: auto;
padding-top: 30px;
border-top: 1px solid #ccc;
display: flex;
justify-content: space-between;
align-items: flex-end;
font-size: 16px;
color: #333;
}
.pr-info {
text-align: left;
}
.signature {
text-align: right;
font-style: italic;
}
@media print {
@page {
size: A4 portrait;
margin: 0;
}
body, html {
width: 100%;
height: 100%;
margin: 0;
padding: 0;
background: #fdfaf0;
}
.certificate {
width: 100%;
height: 100vh;
box-shadow: none;
transform: scale(1);
border-radius: 0;
page-break-inside: avoid;
}
}
</style>
</head>
<body>
<div class="certificate">
<div class="party-popper top-left">🎉</div>
<div class="party-popper top-right">🎉</div>
<div class="main-title">Certificate of<br>Contribution</div>
<div class="subtitle">Awarded To</div>
<div class="contributor-name">[CONTRIBUTOR_NAME]</div>
<div class="mascot-image"></div>
<div class="description">
For successfully submitting and merging their <strong>First Pull Request</strong> to the <strong>Podman project</strong>.<br>
Your contribution helps make open source better—one PR at a time!
</div>
<div class="footer">
<div class="pr-info">
<div>🔧 Merged PR: [PR_LINK]</div>
<div style="margin-top: 5px;">[DATE]</div>
</div>
<div class="signature">
Keep hacking, keep contributing!<br>
The Podman Community
</div>
</div>
</div>
</body>
</html>

Binary file not shown.

After

Width:  |  Height:  |  Size: 578 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 138 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 138 KiB

14
ci/Dockerfile Normal file
View File

@ -0,0 +1,14 @@
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN microdnf update -y && \
microdnf install -y \
findutils jq git curl python3-pyyaml \
perl-YAML perl-interpreter perl-open perl-Data-TreeDumper \
perl-Test perl-Test-Simple perl-Test-Differences \
perl-YAML-LibYAML perl-FindBin \
python3 python3-virtualenv python3-pip gcc python3-devel \
python3-flake8 python3-pep8-naming python3-flake8-import-order python3-flake8-polyfill python3-mccabe python3-pep8-naming && \
microdnf clean all && \
rm -rf /var/cache/dnf
# Required by perl
ENV LC_ALL="C" \
LANG="en_US.UTF-8"

1
cirrus-ci_artifacts/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
./testvenv/

43
cirrus-ci_artifacts/.install.sh Executable file
View File

@ -0,0 +1,43 @@
#!/bin/bash
# Installs cirrus-ci_artifacts and a python virtual environment
# to execute with. NOT intended to be used directly
# by humans, should only be used indirectly by running
# ../bin/install_automation.sh <ver> cirrus-ci_artifacts
set -eo pipefail
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"
INSTALL_PREFIX=$(realpath $AUTOMATION_LIB_PATH/../)
# Assume the directory this script is in represents what is being installed
INSTALL_NAME=$(basename $(dirname ${BASH_SOURCE[0]}))
AUTOMATION_VERSION=$(automation_version)
[[ -n "$AUTOMATION_VERSION" ]] || \
die "Could not determine version of common automation libs, was 'install_automation.sh' successful?"
[[ -n "$(type -P virtualenv)" ]] || \
die "$INSTALL_NAME requires python3-virtualenv"
echo "Installing $INSTALL_NAME version $(automation_version) into $INSTALL_PREFIX"
unset INST_PERM_ARG
if [[ $UID -eq 0 ]]; then
INST_PERM_ARG="-o root -g root"
fi
cd $(dirname $(realpath "${BASH_SOURCE[0]}"))
virtualenv --clear --download \
$AUTOMATION_LIB_PATH/ccia.venv
(
source $AUTOMATION_LIB_PATH/ccia.venv/bin/activate
pip3 install --requirement ./requirements.txt
deactivate
)
install -v $INST_PERM_ARG -m '0644' -D -t "$INSTALL_PREFIX/lib/ccia.venv/bin" \
./cirrus-ci_artifacts.py
install -v $INST_PERM_ARG -D -t "$INSTALL_PREFIX/bin" ./cirrus-ci_artifacts
# Needed for installer testing
echo "Successfully installed $INSTALL_NAME"

View File

@ -0,0 +1,33 @@
# Description
This is a small script which examines a Cirrus-CI build and downloads
available artifacts in parallel, into a subdirectory tree corresponding
to the Cirrus-CI build ID, followed by the task name, artifact name,
and file path. Optionally, a regex may be provided to download only
specific artifacts matching the subdirectory path.
The script may be executed from a currently running Cirrus-CI build
(utilizing `$CIRRUS_BUILD_ID`), but only previously uploaded artifacts
will be downloaded, and the task must have a `depends_on` statement
to synchronize with tasks providing expected artifacts.
# Installation
Install the python3 module requirements using pip3:
(Note: These go into `$HOME/.local/lib/python<version>`)
```
$ pip3 install --user --requirement ./requirements.txt
```
# Usage
Create and change to the directory where the artifact tree should be
created. Call the script, passing in the following arguments:
1. Optional, `--verbose` prints out artifacts as they are
downloaded or skipped.
2. The Cirrus-CI build ID (required) to retrieve (the build doesn't
   need to have finished running).
3. Optional, a filter regex e.g. `'runner_stats/.*fedora.*'` to
only download artifacts matching `<task>/<artifact>/<file-path>`
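
For illustration, the README's example filter applied to some hypothetical `<task>/<artifact>/<file-path>` strings (a sketch of the matching behavior, not the script itself):

```python
import re

# Hypothetical artifact subdirectory paths as the script would construct them.
paths = [
    "runner_stats/stats/fedora-36.log",
    "runner_stats/stats/ubuntu-2204.log",
    "logs/journal/fedora-36.txt",
]

# The README's example filter: only runner_stats artifacts from fedora tasks.
path_rx = re.compile(r'runner_stats/.*fedora.*')
matches = [p for p in paths if path_rx.search(p)]
print(matches)  # ['runner_stats/stats/fedora-36.log']
```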

View File

@ -0,0 +1,24 @@
#!/bin/bash
# This script wraps cirrus-ci_artifacts.py inside a python
# virtual environment set up at install time. It should not
# be executed prior to installation.
set -e
# This is a convenience for callers that don't separately source this first
# in their automation setup.
if [[ -z "$AUTOMATION_LIB_PATH" ]] && [[ -r /etc/automation_environment ]]; then
source /etc/automation_environment
fi
if [[ -z "$AUTOMATION_LIB_PATH" ]]; then
(
echo "ERROR: Expecting \$AUTOMATION_LIB_PATH to be defined with the"
echo " installation directory of automation tooling."
) >> /dev/stderr
exit 1
fi
source $AUTOMATION_LIB_PATH/ccia.venv/bin/activate
exec python3 $AUTOMATION_LIB_PATH/ccia.venv/bin/cirrus-ci_artifacts.py "$@"

View File

@ -0,0 +1,161 @@
#!/usr/bin/env python3
"""
Download all artifacts from a Cirrus-CI Build into a subdirectory tree.
Subdirectory naming format: <build ID>/<task-name>/<artifact-name>/<file-path>
Input arguments (in order):
Build ID - string, the build containing tasks w/ artifacts to download
e.g. "5790771712360448"
Path RX - Optional, regular expression to match against subdirectory
tree naming format.
"""
import asyncio
import re
import sys
from argparse import ArgumentParser
from os import makedirs
from os.path import split
from urllib.parse import quote, unquote
# Ref: https://docs.aiohttp.org/en/stable/http_request_lifecycle.html
from aiohttp import ClientSession
# Ref: https://gql.readthedocs.io/en/latest/index.html
# pip3 install --user --requirement ./requirements.txt
# (and/or in a python virtual environment)
from gql import Client as GQLClient
from gql import gql
from gql.transport.requests import RequestsHTTPTransport
# GraphQL API URL for Cirrus-CI
CCI_GQL_URL = "https://api.cirrus-ci.com/graphql"
# Artifact download base-URL for Cirrus-CI.
# Download URL will be formed by appending:
# "/<CIRRUS_BUILD_ID>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>/<PATH>"
CCI_ART_URL = "https://api.cirrus-ci.com/v1/artifact/build"
# Set True when --verbose is first argument
VERBOSE = False
def get_tasks(gqlclient, buildId): # noqa N803
"""Given a build ID, return a list of task objects."""
# Ref: https://cirrus-ci.org/api/
query = gql('''
query tasksByBuildId($buildId: ID!) {
build(id: $buildId) {
tasks {
name,
id,
buildId,
artifacts {
name,
files {
path
}
}
}
}
}
''')
query_vars = {"buildId": buildId}
tasks = gqlclient.execute(query, variable_values=query_vars)
if "build" in tasks and tasks["build"]:
b = tasks["build"]
if "tasks" in b and len(b["tasks"]):
return b["tasks"]
raise RuntimeError(f"No tasks found for build with ID {buildId}")
raise RuntimeError(f"No Cirrus-CI build found with ID {buildId}")
def task_art_url_sfxs(task):
"""Given a task dict, return a list of CCI_ART_URL suffixes for all artifacts."""
result = []
bid = task["buildId"]
tname = quote(task["name"]) # Make safe for URLs
for art in task["artifacts"]:
aname = quote(art["name"])
for _file in art["files"]:
fpath = quote(_file["path"])
result.append(f"{bid}/{tname}/{aname}/{fpath}")
return result
async def download_artifact(session, dest_path, dl_url):
"""Asynchronously download the contents of dl_url as a byte-stream."""
# Last path component assumed to be the filename
makedirs(split(dest_path)[0], exist_ok=True) # os.path.split
async with session.get(dl_url) as response:
with open(dest_path, "wb") as dest_file:
dest_file.write(await response.read())
async def download_artifacts(task, path_rx=None):
"""Given a task dict, download all artifacts or matches to path_rx."""
downloaded = []
skipped = []
async with ClientSession() as session:
for art_url_sfx in task_art_url_sfxs(task):
dest_path = unquote(art_url_sfx) # Strip off URL encoding
dl_url = f"{CCI_ART_URL}/{dest_path}"
if path_rx is None or bool(path_rx.search(dest_path)):
if VERBOSE:
print(f" Downloading '{dest_path}'")
sys.stdout.flush()
await download_artifact(session, dest_path, dl_url)
downloaded.append(dest_path)
else:
if VERBOSE:
print(f" Skipping '{dest_path}'")
skipped.append(dest_path)
return {"downloaded": downloaded, "skipped": skipped}
def get_args(argv):
"""Return parsed argument namespace object."""
parser = ArgumentParser(prog="cirrus-ci_artifacts",
description=('Download Cirrus-CI artifacts by Build ID'
' number, into a subdirectory of the form'
' <Build ID>/<Task Name>/<Artifact Name>'
'/<File Path>'))
parser.add_argument('-v', '--verbose',
dest='verbose', action='store_true', default=False,
help='Show "Downloaded" | "Skipped" + relative artifact file-path.')
parser.add_argument('buildId', nargs=1, metavar='<Build ID>', type=int,
help="A Cirrus-CI Build ID number.")
parser.add_argument('path_rx', nargs='?', default=None, metavar='[Reg. Exp.]',
help="Reg. exp. to include only <task>/<artifact>/<file-path> matches.")
return parser.parse_args(args=argv[1:])
async def download(tasks, path_rx=None):
"""Return results from all async operations."""
# Python docs say to retain a reference to all tasks so they aren't
# "garbage-collected" while still active.
results = []
for task in tasks:
if len(task["artifacts"]):
results.append(asyncio.create_task(download_artifacts(task, path_rx)))
await asyncio.gather(*results)
return results
def main(buildId, path_rx=None): # noqa: N803,D103
if path_rx is not None:
path_rx = re.compile(path_rx)
transport = RequestsHTTPTransport(url=CCI_GQL_URL, verify=True, retries=3)
with GQLClient(transport=transport, fetch_schema_from_transport=True) as gqlclient:
tasks = get_tasks(gqlclient, buildId)
transport.close()
async_results = asyncio.run(download(tasks, path_rx))
return [r.result() for r in async_results]
if __name__ == "__main__":
args = get_args(sys.argv)
VERBOSE = args.verbose
main(args.buildId[0], args.path_rx)


@@ -0,0 +1,19 @@
# Producing this list was done using the following process:
# 1. Create a temporary `req.txt` file containing only the basic
# non-distribution provided packages, e.g. `aiohttp[speedups]`,
# `PyYAML`, `gql[requests]`, `requests` (see cirrus-ci_artifacts.py,
# actual requirements may have changed)
# 2. From a Fedora:latest container, install python3 & python3-virtualenv
# 3. Setup & activate a temporary virtual environment
# 4. Execute `pip3 install --requirement req.txt`
# 5. Run pip3 freeze
# 6. Edit `requirements.txt`, add the `~=` specifier to each line along
# with the correct two-component version number (from freeze output)
# 7. In a fresh container, confirm the automation installer
# functions with the cirrus-ci_artifacts component (see main README
# for installer instructions)
PyYAML~=6.0
aiohttp[speedups]~=3.8
gql[requests]~=3.3
requests>=2,<3
urllib3<2.5.1
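Step 6 above (adding the `~=` specifier with a two-component version) can be sketched as a small helper over `pip3 freeze` output; `pin` is a hypothetical name for illustration, not part of this repo:

```python
def pin(freeze_line):
    """Convert one `pip3 freeze` line (e.g. 'PyYAML==6.0.1') into a
    requirements spec pinned to the two-component version ('PyYAML~=6.0')."""
    name, _, version = freeze_line.partition("==")
    two_part = ".".join(version.split(".")[:2])
    return f"{name}~={two_part}"

print(pin("aiohttp==3.8.4"))  # -> aiohttp~=3.8
```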


@@ -0,0 +1 @@
../cirrus-ci_artifacts.py


@@ -0,0 +1,29 @@
#!/bin/bash
set -e
TESTDIR=$(dirname ${BASH_SOURCE[0]})
if [[ "$GITHUB_ACTIONS" == "true" ]]; then
echo "Lint/Style checking not supported under github actions: Skipping"
exit 0
fi
if [[ -x $(type -P flake8-3) ]]; then
cd "$TESTDIR"
set -a
virtualenv testvenv
source testvenv/bin/activate
testvenv/bin/python -m pip install --upgrade pip
pip3 install --requirement ../requirements.txt
set +a
./test_cirrus-ci_artifacts.py -v
cd ..
flake8-3 --max-line-length=100 ./cirrus-ci_artifacts.py
flake8-3 --max-line-length=100 --extend-ignore=D101,D102,D103,D105 test/test_cirrus-ci_artifacts.py
else
echo "Can't find flake8-3 binary; is this script executing inside the CI container?"
exit 1
fi


@@ -0,0 +1,194 @@
#!/usr/bin/env python3
"""Verify contents of .cirrus.yml meet specific expectations."""
import asyncio
import os
import re
import unittest
from contextlib import redirect_stderr, redirect_stdout
from io import StringIO
from tempfile import TemporaryDirectory
from unittest.mock import MagicMock, mock_open, patch
import ccia
import yaml
def fake_makedirs(*args, **dargs):
return None
# Needed for testing asyncio functions and calls
# ref: https://agariinc.medium.com/strategies-for-testing-async-code-in-python-c52163f2deab
class AsyncMock(MagicMock):
async def __call__(self, *args, **dargs):
return super().__call__(*args, **dargs)
class AsyncContextManager(MagicMock):
async def __aenter__(self, *args, **dargs):
return self.__enter__(*args, **dargs)
async def __aexit__(self, *args, **dargs):
return self.__exit__(*args, **dargs)
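The two helpers above let a MagicMock be used in `await` expressions and `async with` blocks. A minimal, self-contained illustration of the awaitable variant (the mocked call still records its arguments like any MagicMock):

```python
import asyncio
from unittest.mock import MagicMock

class AsyncMock(MagicMock):
    """MagicMock whose calls can be awaited (as defined above)."""
    async def __call__(self, *args, **dargs):
        return super().__call__(*args, **dargs)

mock = AsyncMock(return_value=42)
result = asyncio.run(mock("artifact.log"))
mock.assert_called_once_with("artifact.log")
print(result)  # -> 42
```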
class TestBase(unittest.TestCase):
FAKE_CCI = "sql://fake.url.invalid/graphql"
FAKE_API = "smb://fake.url.invalid/artifact"
def setUp(self):
ccia.VERBOSE = True
patch('ccia.CCI_GQL_URL', new=self.FAKE_CCI).start()
patch('ccia.CCI_ART_URL', new=self.FAKE_API).start()
self.addCleanup(patch.stopall)
class TestUtils(TestBase):
# YAML is easier on human eyeballs
# Ref: https://github.com/cirruslabs/cirrus-ci-web/blob/master/schema.graphql
# type Artifacts and ArtifactFileInfo
TEST_TASK_YAML = """
- &test_task
name: task_1
id: 1
buildId: 0987654321
artifacts:
- name: test_art-0
type: test_type-0
format: art_format-0
files:
- path: path/test/art/0
size: 0
- name: test_art-1
type: test_type-1
format: art_format-1
files:
- path: path/test/art/1
size: 1
- path: path/test/art/2
size: 2
- name: test_art-2
type: test_type-2
format: art_format-2
files:
- path: path/test/art/3
size: 3
- path: path/test/art/4
size: 4
- path: path/test/art/5
size: 5
- path: path/test/art/6
size: 6
- <<: *test_task
name: task_2
id: 2
"""
TEST_TASKS = yaml.safe_load(TEST_TASK_YAML)
TEST_URL_RX = re.compile(r"987654321/task_.+/test_art-.+/path/test/art/.+")
def test_task_art_url_sfxs(self):
for test_task in self.TEST_TASKS:
actual = ccia.task_art_url_sfxs(test_task)
with self.subTest(test_task=test_task):
for url in actual:
with self.subTest(url=url):
self.assertRegex(url, self.TEST_URL_RX)
# N/B: The ClientSession mock causes a (probably) harmless warning:
# ResourceWarning: unclosed transport <_SelectorSocketTransport fd=7>
# I have no idea how to fix or hide this, leaving it as-is.
def test_download_artifacts_all(self):
for test_task in self.TEST_TASKS:
with self.subTest(test_task=test_task), \
patch('ccia.download_artifact', new_callable=AsyncMock), \
patch('ccia.ClientSession', new_callable=AsyncContextManager), \
patch('ccia.makedirs', new=fake_makedirs), \
patch('ccia.open', new=mock_open()):
# N/B: This makes debugging VERY difficult, comment out for pdb use
fake_stdout = StringIO()
fake_stderr = StringIO()
with redirect_stderr(fake_stderr), redirect_stdout(fake_stdout):
asyncio.run(ccia.download_artifacts(test_task))
self.assertEqual(fake_stderr.getvalue(), '')
for line in fake_stdout.getvalue().splitlines():
with self.subTest(line=line):
self.assertRegex(line.strip(), self.TEST_URL_RX)
class TestMain(unittest.TestCase):
def setUp(self):
ccia.VERBOSE = True
try:
self.bid = os.environ["CIRRUS_BUILD_ID"]
except KeyError:
self.skipTest("Requires running under Cirrus-CI")
self.tmp = TemporaryDirectory(prefix="test_ccia_tmp")
self.cwd = os.getcwd()
os.chdir(self.tmp.name)
def tearDown(self):
os.chdir(self.cwd)
self.tmp.cleanup()
def main_result_has(self, results, stdout_filepath, action="downloaded"):
for result in results:
for action_filepath in result[action]:
if action_filepath == stdout_filepath:
exists = os.path.isfile(os.path.join(self.tmp.name, action_filepath))
if "downloaded" in action:
self.assertTrue(exists,
msg=f"Downloaded not found: '{action_filepath}'")
return
# action==skipped
self.assertFalse(exists,
msg=f"Skipped file found: '{action_filepath}'")
return
self.fail(f"Expecting to find {action_filepath} entry in main()'s {action} results")
def test_cirrus_ci_download_all(self):
expect_rx = re.compile(f".+'{self.bid}/[^/]+/[^/]+/.+'")
# N/B: This makes debugging VERY difficult, comment out for pdb use
fake_stdout = StringIO()
fake_stderr = StringIO()
with redirect_stderr(fake_stderr), redirect_stdout(fake_stdout):
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
results = ccia.main(self.bid)
self.assertEqual(fake_stderr.getvalue(), '')
for line in fake_stdout.getvalue().splitlines():
with self.subTest(line=line):
s_line = line.lower().strip()
filepath = line.split(sep="'", maxsplit=3)[1]
self.assertRegex(s_line, expect_rx)
if s_line.startswith("download"):
self.main_result_has(results, filepath)
elif s_line.startswith("skip"):
self.main_result_has(results, filepath, "skipped")
else:
self.fail(f"Unexpected stdout line: '{s_line}'")
def test_cirrus_ci_download_none(self):
# N/B: This makes debugging VERY difficult, comment out for pdb use
fake_stdout = StringIO()
fake_stderr = StringIO()
with redirect_stderr(fake_stderr), redirect_stdout(fake_stdout):
results = ccia.main(self.bid, r"this-will-match-nothing")
for line in fake_stdout.getvalue().splitlines():
with self.subTest(line=line):
s_line = line.lower().strip()
filepath = line.split(sep="'", maxsplit=3)[1]
self.assertRegex(s_line, r"skipping")
self.main_result_has(results, filepath, "skipped")
if __name__ == "__main__":
unittest.main()

cirrus-ci_env/.install.sh Executable file

@@ -0,0 +1,30 @@
#!/bin/bash
# Installs cirrus-ci_env system-wide. NOT intended to be used directly
# by humans, should only be used indirectly by running
# ../bin/install_automation.sh <ver> cirrus-ci_env
set -eo pipefail
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"
INSTALL_PREFIX=$(realpath $AUTOMATION_LIB_PATH/../)
# Assume the directory this script is in, represents what is being installed
INSTALL_NAME=$(basename $(dirname ${BASH_SOURCE[0]}))
AUTOMATION_VERSION=$(automation_version)
[[ -n "$AUTOMATION_VERSION" ]] || \
die "Could not determine version of common automation libs, was 'install_automation.sh' successful?"
echo "Installing $INSTALL_NAME version $AUTOMATION_VERSION into $INSTALL_PREFIX"
unset INST_PERM_ARG
if [[ $UID -eq 0 ]]; then
INST_PERM_ARG="-o root -g root"
fi
cd $(dirname $(realpath "${BASH_SOURCE[0]}"))
install -v cirrus-ci_env.py -D "$INSTALL_PREFIX/bin/"
# Needed for installer testing
echo "Successfully installed $INSTALL_NAME"

cirrus-ci_env/cirrus-ci_env.py Executable file

@@ -0,0 +1,325 @@
#!/usr/bin/env python3
"""Utility to provide canonical listing of Cirrus-CI tasks and env. vars."""
import argparse
import logging
import re
import sys
from traceback import extract_stack
from typing import Any, Mapping
import yaml
def dbg(msg: str) -> None:
"""Shorthand for calling logging.debug()."""
caller = extract_stack(limit=2)[0]
logging.debug(msg, extra=dict(loc=f'(line {caller.lineno})'))
def err(msg: str) -> None:
"""Print an error message to stderr and exit non-zero."""
caller = extract_stack(limit=2)[0]
logging.error(msg, extra=dict(loc=f'(line {caller.lineno})'))
sys.exit(1)
class DefFmt(dict):
"""
Defaulting-dict helper class for render_env()'s str.format_map().
See: https://docs.python.org/3.7/library/stdtypes.html#str.format_map
"""
dollar_env_var = re.compile(r"\$(\w+)")
dollarcurly_env_var = re.compile(r"\$\{(\w+)\}")
def __missing__(self, key: str) -> str:
"""Not-found items converted back to shell env var format."""
return "${{{0}}}".format(key)
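In other words, `str.format_map()` fills in the keys it finds, while `__missing__` renders anything unresolved back into shell `${VAR}` syntax. A quick stand-alone illustration:

```python
class DefFmt(dict):
    """As above: unknown keys render back to shell ${VAR} form."""
    def __missing__(self, key):
        return "${{{0}}}".format(key)

fmt = DefFmt(HOME="/root")
print("{HOME}/bin:{EXTRA_PATH}".format_map(fmt))  # -> /root/bin:${EXTRA_PATH}
```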
class CirrusCfg:
"""Represent a fully realized list of .cirrus.yml tasks."""
# Dictionary of global, configuration-wide environment variable values.
global_env = None
# String values representing instance type and image name/path/uri
global_type = None
global_image = None
# Tracks task-parsing status, internal-only, do not use.
_working = None
def __init__(self, config: Mapping[str, Any]) -> None:
"""Create a new instance, given a parsed .cirrus.yml config object."""
if not isinstance(config, dict):
whatsit = config.__class__
raise TypeError(f"Expected 'config' argument to be a dictionary, not a {whatsit}")
CirrusCfg._working = "global"
# This makes a copy, doesn't touch the original
self.global_env = self.render_env(config.get("env", dict()))
dbg(f"Rendered globals: {self.global_env}")
self.global_type, self.global_image = self.get_type_image(config)
dbg(f"Using global type '{self.global_type}' and image '{self.global_image}'")
self.tasks = self.render_tasks(config)
dbg(f"Processed {len(self.tasks)} tasks")
self.names = list(self.tasks.keys())
self.names.sort()
self.names = tuple(self.names) # help notice attempts to modify
def render_env(self, env: Mapping[str, str]) -> Mapping[str, str]:
"""
Repeatedly call format_env() to render out-of-order env key values.
This is a compromise versus recursion: substitution values may be
referenced while processing, and dictionary keys have no defined
order, so multiple chances are simply provided for each substitution
to occur. On failure, a shell-compatible variable reference is simply
left in place.
"""
# There's no simple way to detect when substitutions are
# complete, so we mirror Cirrus-CI's behavior which
# loops 10 times (according to their support) through
# the substitution routine.
out = self.format_env(env, self.global_env)
for _ in range(9):
out = self.format_env(out, self.global_env)
return out
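The fixed-iteration approach resolves chains of references up to ten deep. A simplified, self-contained model of that behavior (not the class's actual `format_env`, which also handles `ENCRYPTED` and `PATH` specially):

```python
import re

def substitute_once(env):
    """One pass of $VAR / ${VAR} substitution across all values."""
    return {k: re.sub(r"\$\{?(\w+)\}?",
                      lambda m: env.get(m.group(1), m.group(0)), v)
            for k, v in env.items()}

env = {"A": "$B/x", "B": "$C/y", "C": "base"}
for _ in range(10):  # mirror Cirrus-CI's ten substitution passes
    env = substitute_once(env)
print(env["A"])  # -> base/y/x
```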
@staticmethod
def format_env(env, global_env: Mapping[str, str]) -> Mapping[str, str]:
"""Replace shell-style references in env values, from global_env then env."""
# This method is also used to initialize self.global_env
if global_env is None:
global_env = dict()
rep = r"{\1}" # Shell env var to python format string conversion regex
def_fmt = DefFmt(**global_env) # Assumes global_env already rendered
for k, v in env.items():
if "ENCRYPTED" in str(v):
continue
elif k == "PATH":
# Handled specially by Cirrus, preserve value as-is.
def_fmt[k] = str(v)
continue
_ = def_fmt.dollarcurly_env_var.sub(rep, str(v))
def_fmt[k] = def_fmt.dollar_env_var.sub(rep, _)
out = dict()
for k, v in def_fmt.items():
if k in env: # Don't unnecessarily duplicate globals
if k == "PATH":
out[k] = str(v)
continue
try:
out[k] = str(v).format_map(def_fmt)
except ValueError as xcpt:
if k == 'matrix':
err(f"Unsupported '{k}' key encountered in"
f" 'env' attribute of '{CirrusCfg._working}' task")
raise xcpt
return out
def render_tasks(self, tasks: Mapping[str, Any]) -> Mapping[str, Any]:
"""Return new tasks dict with envs rendered and matrices unrolled."""
result = dict()
for k, v in tasks.items():
if not k.endswith("_task"):
continue
# Cirrus-CI uses this defaulting priority order
alias = v.get("alias", k.replace("_task", ""))
name = v.get("name", alias)
if "matrix" in v:
dbg(f"Processing matrix '{alias}'")
CirrusCfg._working = alias
# Assume Cirrus-CI accepted this config; don't check for name clashes
result.update(self.unroll_matrix(name, alias, v))
CirrusCfg._working = 'global'
else:
dbg(f"Processing task '{name}'")
CirrusCfg._working = name
task = dict(alias=alias)
task["env"] = self.render_env(v.get("env", dict()))
task_name = self.render_value(name, task["env"])
_ = self.get_type_image(v, self.global_type, self.global_image)
self.init_task_type_image(task, *_)
result[task_name] = task
CirrusCfg._working = 'global'
return result
def unroll_matrix(self, name_default: str, alias_default: str,
task: Mapping[str, Any]) -> Mapping[str, Any]:
"""Produce copies of task with attributes replaced from matrix list."""
result = dict()
for item in task["matrix"]:
if "name" not in task and "name" not in item:
# Cirrus-CI goes a step further, attempting to generate a
# unique name based on alias + matrix attributes. This is
# a very complex process that would be insane to attempt to
# duplicate. Instead, simply require a defined 'name'
# attribute in every case, throwing an error if not found.
raise ValueError(f"Expecting 'name' attribute in"
f" '{alias_default}_task'"
f" or matrix definition: {item}"
f" for task definition: {task}")
# default values for the rendered task - not mutable, needs a copy.
matrix_task = dict(alias=alias_default, env=task.get("env", dict()).copy())
matrix_name = item.get("name", name_default)
CirrusCfg._working = matrix_name
# matrix item env. overwrites task env.
matrix_task["env"].update(item.get("env", dict()))
matrix_task["env"] = self.render_env(matrix_task["env"])
matrix_name = self.render_value(matrix_name, matrix_task["env"])
dbg(f" Unrolling matrix for '{matrix_name}'")
CirrusCfg._working = matrix_name
# Matrix item overrides task dict, overrides global defaults.
_ = self.get_type_image(item, self.global_type, self.global_image)
matrix_type, matrix_image = self.get_type_image(task, *_)
self.init_task_type_image(matrix_task, matrix_type, matrix_image)
result[matrix_name] = matrix_task
return result
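The merge order above (a matrix item's `env` overwrites the task-level `env`) can be seen in a stripped-down sketch, using made-up task data:

```python
task = {"env": {"DISTRO_NV": "fedora-33", "TEST_FLAVOR": "build"},
        "matrix": [{"name": "fedora", "env": {}},
                   {"name": "ubuntu", "env": {"DISTRO_NV": "ubuntu-2010"}}]}
unrolled = {}
for item in task["matrix"]:
    env = dict(task["env"])          # task-level defaults
    env.update(item.get("env", {}))  # matrix item wins on conflict
    unrolled[item["name"]] = env
print(unrolled["ubuntu"]["DISTRO_NV"])  # -> ubuntu-2010
```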
def render_value(self, value: str, env: Mapping[str, str]) -> str:
"""Given a string value and task env dict, safely render references."""
tmp_env = env.copy() # don't mess up the original
tmp_env["__value__"] = value
return self.format_env(tmp_env, self.global_env)["__value__"]
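The temporary `__value__` key is a trick to reuse the env-rendering machinery on an arbitrary string. A simplified stand-alone version of the same idea (single substitution pass, unlike the class's `format_env`):

```python
import re

def render_value(value, env):
    """Render $VAR / ${VAR} references in `value` from `env`
    by treating the string as one more environment entry."""
    tmp = dict(env, __value__=value)
    return re.sub(r"\$\{?(\w+)\}?",
                  lambda m: tmp.get(m.group(1), m.group(0)),
                  tmp["__value__"])

print(render_value("Build for $DISTRO_NV", {"DISTRO_NV": "fedora-33"}))
# -> Build for fedora-33
```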
def get_type_image(self, item: dict,
default_type: str = None,
default_image: str = None) -> tuple:
"""Given Cirrus-CI config or task dict., return instance type and image."""
# Order is significant, VMs always override containers
if "gce_instance" in item:
return "gcevm", item["gce_instance"].get("image_name", default_image)
if "ec2_instance" in item:
return "ec2vm", item["ec2_instance"].get("image", default_image)
elif "osx_instance" in item or "macos_instance" in item:
_ = item.get("osx_instance", item.get("macos_instance"))
return "osx", _.get("image", default_image)
elif "image" in item.get("windows_container", ""):
return "wincntnr", item["windows_container"].get("image", default_image)
elif "image" in item.get("container", ""):
return "container", item["container"].get("image", default_image)
elif "dockerfile" in item.get("container", ""):
return "dockerfile", item["container"].get("dockerfile", default_image)
else:
inst_type = "unsupported"
if self.global_type is not None:
inst_type = default_type
inst_image = "unknown"
if self.global_image is not None:
inst_image = default_image
return inst_type, inst_image
def init_task_type_image(self, task: Mapping[str, Any],
task_type: str, task_image: str) -> None:
"""Render any envs. and assert non-none values for task."""
if task_type is None or task_image is None:
raise ValueError(f"Invalid instance type "
f"({task_type}) or image ({task_image}) "
f"for task ({task})")
task["inst_type"] = task_type
inst_image = self.render_value(task_image, task["env"])
task["inst_image"] = inst_image
dbg(f" Using type '{task_type}' and image '{inst_image}'")
class CLI:
"""Represent command-line-interface runtime state and behaviors."""
# An argparse parser instance
parser = None
# When valid, namespace instance from parser
args = None
# When loaded successfully, instance of CirrusCFG
ccfg = None
def __init__(self) -> None:
"""Initialize runtime context based on command-line options and parameters."""
self.parser = self.args_parser()
self.args = self.parser.parse_args()
# loc will be added at dbg() call time.
logging.basicConfig(format='{levelname}: {message} {loc}', style='{')
logger = logging.getLogger()
if self.args.debug:
logger.setLevel(logging.DEBUG)
dbg("Debugging enabled")
else:
logger.setLevel(logging.ERROR)
self.ccfg = CirrusCfg(yaml.safe_load(self.args.filepath))
if not len(self.ccfg.names):
self.parser.print_help()
err(f"No Cirrus-CI tasks found in '{self.args.filepath.name}'")
def __call__(self) -> None:
"""Execute requested command-line actions."""
if self.args.list:
dbg("Will be listing task names")
for task_name in self.ccfg.names:
sys.stdout.write(f"{task_name}\n")
elif bool(self.args.inst):
dbg("Will be showing task inst. type and image")
task = self.ccfg.tasks[self.valid_name()]
inst_type = task['inst_type']
inst_image = task['inst_image']
sys.stdout.write(f"{inst_type} {inst_image}\n")
elif bool(self.args.envs):
dbg("Will be listing task env. vars.")
task = self.ccfg.tasks[self.valid_name()]
env = self.ccfg.global_env.copy()
env.update(task['env'])
keys = list(env.keys())
keys.sort()
for key in keys:
if key.startswith("_"):
continue # Assume private to Cirrus-CI
value = env[key]
sys.stdout.write(f'{key}="{value}"\n')
def args_parser(self) -> argparse.ArgumentParser:
"""Parse command-line options and arguments."""
epilog = "Note: One of --list, --envs, or --inst MUST be specified"
parser = argparse.ArgumentParser(description=__doc__,
epilog=epilog)
parser.add_argument('filepath', type=argparse.FileType("rt"),
help="File path to .cirrus.yml",
metavar='<filepath>')
parser.add_argument('--debug', action='store_true',
help="Enable output of debugging messages")
mgroup = parser.add_mutually_exclusive_group(required=True)
mgroup.add_argument('--list', action='store_true',
help="List canonical task names")
mgroup.add_argument('--envs', action='store',
help="List env. vars. for task <name>",
metavar="<name>")
mgroup.add_argument('--inst', action='store',
help="List instance type and image for task <name>",
metavar="<name>")
return parser
def valid_name(self) -> str:
"""Print helpful error message when task name is invalid, or return it."""
if self.args.envs is not None:
task_name = self.args.envs
else:
task_name = self.args.inst
file_name = self.args.filepath.name
if task_name not in self.ccfg.names:
self.parser.print_help()
err(f"Unknown task name '{task_name}' from '{file_name}'")
return task_name
if __name__ == "__main__":
cli = CLI()
cli()


@@ -0,0 +1,792 @@
---
# Main collection of env. vars to set for all tasks and scripts.
env:
####
#### Global variables used for all tasks
####
# Name of the ultimate destination branch for this CI run, PR or post-merge.
DEST_BRANCH: "master"
# Overrides default location (/tmp/cirrus) for repo clone
GOPATH: &gopath "/var/tmp/go"
GOBIN: "${GOPATH}/bin"
GOCACHE: "${GOPATH}/cache"
GOSRC: &gosrc "/var/tmp/go/src/github.com/containers/podman"
CIRRUS_WORKING_DIR: *gosrc
# The default is 'sh' if unspecified
CIRRUS_SHELL: "/bin/bash"
# Save a little typing (path relative to $CIRRUS_WORKING_DIR)
SCRIPT_BASE: "./contrib/cirrus"
# Runner statistics log file path/name
STATS_LOGFILE_SFX: 'runner_stats.log'
STATS_LOGFILE: '$GOSRC/${CIRRUS_TASK_NAME}-${STATS_LOGFILE_SFX}'
####
#### Cache-image names to test with (double-quotes around names are critical)
####
FEDORA_NAME: "fedora-33"
PRIOR_FEDORA_NAME: "fedora-32"
UBUNTU_NAME: "ubuntu-2010"
PRIOR_UBUNTU_NAME: "ubuntu-2004"
# Google-cloud VM Images
IMAGE_SUFFIX: "c6524344056676352"
FEDORA_AMI_ID: "ami-04f37091c3ec43890"
FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
PRIOR_FEDORA_CACHE_IMAGE_NAME: "prior-fedora-${IMAGE_SUFFIX}"
UBUNTU_CACHE_IMAGE_NAME: "ubuntu-${IMAGE_SUFFIX}"
PRIOR_UBUNTU_CACHE_IMAGE_NAME: "prior-ubuntu-${IMAGE_SUFFIX}"
# Container FQIN's
FEDORA_CONTAINER_FQIN: "quay.io/libpod/fedora_podman:${IMAGE_SUFFIX}"
PRIOR_FEDORA_CONTAINER_FQIN: "quay.io/libpod/prior-fedora_podman:${IMAGE_SUFFIX}"
UBUNTU_CONTAINER_FQIN: "quay.io/libpod/ubuntu_podman:${IMAGE_SUFFIX}"
PRIOR_UBUNTU_CONTAINER_FQIN: "quay.io/libpod/prior-ubuntu_podman:${IMAGE_SUFFIX}"
####
#### Control variables that determine what to run and how to run it.
#### N/B: ALL of these are required to be set for every single task.
####
TEST_FLAVOR: # int, sys, ext_svc, validate, automation, etc.
TEST_ENVIRON: host # 'host' or 'container'
PODBIN_NAME: podman # 'podman' or 'remote'
PRIV_NAME: root # 'root' or 'rootless'
DISTRO_NV: # any {PRIOR_,}{FEDORA,UBUNTU}_NAME value
VM_IMAGE_NAME: # One of the "Google-cloud VM Images" (above)
CTR_FQIN: # One of the "Container FQIN's" (above)
# Default timeout for each task
timeout_in: 60m
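The `&name`/`*name` markers used throughout this file are YAML anchors and aliases; any YAML loader (e.g. PyYAML, already used by the scripts above) resolves them at parse time. A minimal sketch with a fragment modeled on the env block above:

```python
import yaml

doc = """
env:
  GOPATH: &gopath "/var/tmp/go"
  GOSRC: &gosrc "/var/tmp/go/src/github.com/containers/podman"
working_dir: *gosrc
cache_folder: *gopath
"""
data = yaml.safe_load(doc)
print(data["working_dir"])  # -> /var/tmp/go/src/github.com/containers/podman
```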
gcp_credentials: ENCRYPTED[a28959877b2c9c36f151781b0a05407218cda646c7d047fc556e42f55e097e897ab63ee78369dae141dcf0b46a9d0cdd]
aws_credentials: ENCRYPTED[4ca070bffe28eb9b27d63c568b52970dd46f119c3a83b8e443241e895dbf1737580b4d84eed27a311a2b74287ef9f79f]
# Default/small container image to execute tasks with
container: &smallcontainer
image: ${CTR_FQIN}
# Resources are limited across ALL currently executing tasks
# ref: https://cirrus-ci.org/guide/linux/#linux-containers
cpu: 2
memory: 2
# Attempt to prevent flakes by confirming all required external/3rd-party
# services are available and functional.
ext_svc_check_task:
alias: 'ext_svc_check' # int. ref. name - required for depends_on reference
name: "Ext. services" # Displayed Title - has no other significance
skip: &tags "$CIRRUS_TAG != ''" # Don't run on tags
env:
TEST_FLAVOR: ext_svc
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
# NOTE: The default way Cirrus-CI clones is *NOT* compatible with
# environment expectations in contrib/cirrus/lib.sh. Specifically
# the 'origin' remote must be defined, and all remote branches/tags
# must be available for reference from CI scripts.
clone_script: &full_clone |
cd /
rm -rf $CIRRUS_WORKING_DIR
mkdir -p $CIRRUS_WORKING_DIR
git clone --recursive --branch=$DEST_BRANCH https://x-access-token:${CIRRUS_REPO_CLONE_TOKEN}@github.com/${CIRRUS_REPO_FULL_NAME}.git $CIRRUS_WORKING_DIR
cd $CIRRUS_WORKING_DIR
git remote update origin
if [[ -n "$CIRRUS_PR" ]]; then # running for a PR
git fetch origin pull/$CIRRUS_PR/head:pull/$CIRRUS_PR
git checkout pull/$CIRRUS_PR
else
git reset --hard $CIRRUS_CHANGE_IN_REPO
fi
make install.tools
setup_script: &setup '$GOSRC/$SCRIPT_BASE/setup_environment.sh'
main_script: &main '/usr/bin/time --verbose --output="$STATS_LOGFILE" $GOSRC/$SCRIPT_BASE/runner.sh'
always: &runner_stats
runner_stats_artifacts:
path: ./*-${STATS_LOGFILE_SFX}
type: text/plain
# Execute some quick checks to confirm this YAML file and all
# automation-related shell scripts are sane.
automation_task:
alias: 'automation'
name: "Check Automation"
skip: &branches_and_tags "$CIRRUS_PR == '' || $CIRRUS_TAG != ''" # Don't run on branches/tags
container: *smallcontainer
env:
TEST_FLAVOR: automation
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
TEST_ENVIRON: container
clone_script: *full_clone
setup_script: *setup
main_script: *main
always: *runner_stats
# N/B: This task is critical. It builds all binaries and release archives
# for the project, using all primary OS platforms and versions. Assuming
# the builds are successful, a cache is stored of the entire `$GOPATH`
# contents. For all subsequent tasks, the _BUILD_CACHE_HANDLE value
# is used as a key to reuse this cache, saving both time and money.
# The only exceptions are tasks which run only inside a container; they
# will not have access to the cache and therefore must rely on cloning
# the repository.
build_task:
alias: 'build'
name: 'Build for $DISTRO_NV'
gce_instance: &standardvm
image_project: libpod-218412
zone: "us-central1-a"
cpu: 2
memory: "4Gb"
# Required to be 200gig, do not modify - has i/o performance impact
# according to gcloud CLI tool warning messages.
disk: 200
image_name: "${VM_IMAGE_NAME}" # from stdenvars
matrix: &platform_axis
# Ref: https://cirrus-ci.org/guide/writing-tasks/#matrix-modification
- env: &stdenvars
DISTRO_NV: ${FEDORA_NAME}
# Not used here, is used in other tasks
VM_IMAGE_NAME: ${FEDORA_CACHE_IMAGE_NAME}
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
# ID for re-use of build output
_BUILD_CACHE_HANDLE: ${FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
# - env:
# DISTRO_NV: ${PRIOR_FEDORA_NAME}
# VM_IMAGE_NAME: ${PRIOR_FEDORA_CACHE_IMAGE_NAME}
# CTR_FQIN: ${PRIOR_FEDORA_CONTAINER_FQIN}
# _BUILD_CACHE_HANDLE: ${PRIOR_FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
- env:
DISTRO_NV: ${UBUNTU_NAME}
VM_IMAGE_NAME: ${UBUNTU_CACHE_IMAGE_NAME}
CTR_FQIN: ${UBUNTU_CONTAINER_FQIN}
_BUILD_CACHE_HANDLE: ${UBUNTU_NAME}-build-${CIRRUS_BUILD_ID}
- env:
DISTRO_NV: ${PRIOR_UBUNTU_NAME}
VM_IMAGE_NAME: ${PRIOR_UBUNTU_CACHE_IMAGE_NAME}
CTR_FQIN: ${PRIOR_UBUNTU_CONTAINER_FQIN}
_BUILD_CACHE_HANDLE: ${PRIOR_UBUNTU_NAME}-build-${CIRRUS_BUILD_ID}
env:
TEST_FLAVOR: build
# Ref: https://cirrus-ci.org/guide/writing-tasks/#cache-instruction
gopath_cache: &gopath_cache
folder: *gopath # Required hard-coded path, no variables.
fingerprint_script: echo "$_BUILD_CACHE_HANDLE"
# Cheat: Clone here when cache is empty, guaranteeing consistency.
populate_script: *full_clone
# A normal clone would invalidate useful cache
clone_script: &noop mkdir -p $CIRRUS_WORKING_DIR
setup_script: *setup
main_script: *main
always: &binary_artifacts
<<: *runner_stats
gosrc_artifacts:
path: ./* # Grab everything in top-level $GOSRC
type: application/octet-stream
binary_artifacts:
path: ./bin/*
type: application/octet-stream
# Confirm the result of building on at least one platform appears sane.
# This confirms the binaries can be executed, checks --help vs docs, and
# other essential post-build validation checks.
validate_task:
name: "Validate $DISTRO_NV Build"
alias: validate
# This task is primarily intended to catch human errors early, in a
# PR. Skip it for branch-push, branch-create, and tag-push to improve
# automation reliability/speed in those contexts. Any errors missed due
# to nonsequential PR merging practices will be caught by a future PR's
# build or test task failures.
skip: *branches_and_tags
depends_on:
- ext_svc_check
- automation
- build
# golangci-lint is a very, very hungry beast.
gce_instance: &bigvm
<<: *standardvm
cpu: 8
memory: "16Gb"
env:
<<: *stdenvars
TEST_FLAVOR: validate
gopath_cache: &ro_gopath_cache
<<: *gopath_cache
reupload_on_changes: false
clone_script: *noop
setup_script: *setup
main_script: *main
always: *runner_stats
# Exercise the "libpod" API with a small set of common
# operations to ensure they are functional.
bindings_task:
name: "Test Bindings"
alias: bindings
only_if: &not_docs $CIRRUS_CHANGE_TITLE !=~ '.*CI:DOCS.*'
skip: *branches_and_tags
depends_on:
- build
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: bindings
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *runner_stats
# Build the "libpod" API documentation `swagger.yaml` and
# publish it to google-cloud-storage (GCS).
swagger_task:
name: "Test Swagger"
alias: swagger
depends_on:
- build
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: swagger
# TODO: Due to podman 3.0 activity (including new images), avoid
# disturbing the status-quo just to incorporate this one new
# container image. Uncomment line below when CI activities normalize.
#CTR_FQIN: 'quay.io/libpod/gcsupld:${IMAGE_SUFFIX}'
CTR_FQIN: 'quay.io/libpod/gcsupld:c4813063494828032'
GCPJSON: ENCRYPTED[asdf1234]
GCPNAME: ENCRYPTED[asdf1234]
GCPPROJECT: 'libpod-218412'
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *binary_artifacts
# Check that all included go modules from other sources match
# what is expected in `vendor/modules.txt` vs `go.mod`. Also
# make sure that the generated bindings in pkg/bindings/...
# are in sync with the code.
consistency_task:
name: "Test Code Consistency"
alias: consistency
skip: *tags
depends_on:
- build
env:
<<: *stdenvars
TEST_FLAVOR: consistency
TEST_ENVIRON: container
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
clone_script: *full_clone # build-cache not available to container tasks
setup_script: *setup
main_script: *main
always: *runner_stats
# There are several other important variations of podman which
# must always build successfully. Most of them are handled in
# this task, though a few need dedicated tasks which follow.
alt_build_task:
name: "$ALT_NAME"
alias: alt_build
only_if: *not_docs
depends_on:
- build
env:
<<: *stdenvars
TEST_FLAVOR: "altbuild"
gce_instance: *standardvm
matrix:
- env:
ALT_NAME: 'Build Each Commit'
- env:
ALT_NAME: 'Windows Cross'
- env:
ALT_NAME: 'Build Without CGO'
- env:
ALT_NAME: 'Test build RPM'
- env:
ALT_NAME: 'Alt Arch. Cross'
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *binary_artifacts
# Confirm building a statically-linked binary is successful
static_alt_build_task:
name: "Static Build"
alias: static_alt_build
only_if: *not_docs
depends_on:
- build
# Community-maintained task, may fail on occasion. If so, uncomment
# the next line and file an issue with details about the failure.
# allow_failures: $CI == $CI
gce_instance: *bigvm
env:
<<: *stdenvars
TEST_FLAVOR: "altbuild"
# gce_instance variation prevents this being included in alt_build_task
ALT_NAME: 'Static build'
# Do not use 'latest', fixed-version tag for runtime stability.
CTR_FQIN: "docker.io/nixos/nix:2.3.6"
# Authentication token for pushing the build cache to cachix.
# This is critical, it helps to avoid a very lengthy process of
# statically building every dependency needed to build podman.
# Assuming the pinned nix dependencies in nix/nixpkgs.json have not
# changed, this cache will ensure that only the static podman binary is
# built.
CACHIX_AUTH_TOKEN: ENCRYPTED[asdf1234]
setup_script: *setup
main_script: *main
always: *binary_artifacts
# Confirm building the remote client, natively on a Mac OS-X VM.
osx_alt_build_task: &blahblah
name: "OSX Cross"
alias: osx_alt_build
depends_on:
- build
env:
<<: *stdenvars
# OSX platform variation prevents this being included in alt_build_task
TEST_FLAVOR: "altbuild"
ALT_NAME: 'OSX Cross'
osx_instance:
image: 'catalina-base'
script:
- brew install go
- brew install go-md2man
- make podman-remote-darwin
- make install-podman-remote-darwin-docs
always: *binary_artifacts
macos_alt_build_task:
<<: *blahblah
name: "MacOS Cross"
alias: macos_alt_build
macos_instance:
image: 'catalina-base'
# This task is a stub: In the future it will be used to verify
# podman is compatible with the docker python-module.
docker-py_test_task:
name: Docker-py Compat.
alias: docker-py_test
skip: *tags
only_if: *not_docs
depends_on:
- build
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: docker-py
TEST_ENVIRON: container
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *runner_stats
# Does exactly what it says, execute the podman unit-tests on all primary
# platforms and release versions.
unit_test_task:
name: "Unit tests on $DISTRO_NV"
alias: unit_test
skip: *tags
only_if: *not_docs
depends_on:
- validate
matrix: *platform_axis
gce_instance: *standardvm
env:
TEST_FLAVOR: unit
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *runner_stats
apiv2_test_task:
name: "APIv2 test on $DISTRO_NV"
alias: apiv2_test
skip: *tags
depends_on:
- validate
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: apiv2
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: &logs_artifacts
<<: *runner_stats
# Required for `contrib/cirrus/logformatter` to work properly
html_artifacts:
path: ./*.html
type: text/html
package_versions_script: '$SCRIPT_BASE/logcollector.sh packages'
df_script: '$SCRIPT_BASE/logcollector.sh df'
audit_log_script: '$SCRIPT_BASE/logcollector.sh audit'
journal_script: '$SCRIPT_BASE/logcollector.sh journal'
podman_system_info_script: '$SCRIPT_BASE/logcollector.sh podman'
time_script: '$SCRIPT_BASE/logcollector.sh time'
compose_test_task:
name: "compose test on $DISTRO_NV"
alias: compose_test
only_if: *not_docs
skip: *tags
depends_on:
- validate
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: compose
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *logs_artifacts
# Execute the podman integration tests on all primary platforms and release
# versions, as root, without involving the podman-remote client.
local_integration_test_task: &local_integration_test_task
# Integration-test task name convention:
# <int.|sys.> <podman|remote> <Distro NV> <root|rootless>
name: &std_name_fmt "$TEST_FLAVOR $PODBIN_NAME $DISTRO_NV $PRIV_NAME $TEST_ENVIRON"
alias: local_integration_test
only_if: *not_docs
skip: *branches_and_tags
depends_on:
- unit_test
matrix: *platform_axis
gce_instance: *standardvm
timeout_in: 90m
env:
TEST_FLAVOR: int
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: &int_logs_artifacts
<<: *logs_artifacts
ginkgo_node_logs_artifacts:
path: ./test/e2e/ginkgo-node-*.log
type: text/plain
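The `&std_name_fmt` anchor above encodes the naming convention as an environment-variable template. A small sketch of how that template expands into the names seen in the rendered task listing (the unset variables here fall back to the global defaults `podman`, `root`, and `host`):

```python
from string import Template

# The &std_name_fmt template from the Cirrus config.
STD_NAME_FMT = "$TEST_FLAVOR $PODBIN_NAME $DISTRO_NV $PRIV_NAME $TEST_ENVIRON"

# Per-task env merged over the global defaults (values taken from the
# rendered listing later in this page).
env = {
    "TEST_FLAVOR": "int",
    "PODBIN_NAME": "podman",   # global default
    "DISTRO_NV": "fedora-33",
    "PRIV_NAME": "root",       # global default
    "TEST_ENVIRON": "host",    # global default
}

name = Template(STD_NAME_FMT).substitute(env)
print(name)  # -> int podman fedora-33 root host
```

This is why every integration and system task can share one `name:` line: the matrix and per-task `env` overrides produce the distinct names.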
# Nearly identical to `local_integration_test` except all operations
# are performed through the podman-remote client vs a podman "server"
# running on the same host.
remote_integration_test_task:
<<: *local_integration_test_task
alias: remote_integration_test
env:
TEST_FLAVOR: int
PODBIN_NAME: remote
# Run the complete set of integration tests from inside a container.
# This verifies all/most operations function with "podman-in-podman".
container_integration_test_task:
name: *std_name_fmt
alias: container_integration_test
only_if: *not_docs
skip: *branches_and_tags
depends_on:
- unit_test
matrix: &fedora_vm_axis
- env:
DISTRO_NV: ${FEDORA_NAME}
_BUILD_CACHE_HANDLE: ${FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
VM_IMAGE_NAME: ${FEDORA_CACHE_IMAGE_NAME}
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
# - env:
# DISTRO_NV: ${PRIOR_FEDORA_NAME}
# _BUILD_CACHE_HANDLE: ${PRIOR_FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
# VM_IMAGE_NAME: ${PRIOR_FEDORA_CACHE_IMAGE_NAME}
# CTR_FQIN: ${PRIOR_FEDORA_CONTAINER_FQIN}
gce_instance: *standardvm
timeout_in: 90m
env:
TEST_FLAVOR: int
TEST_ENVIRON: container
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *int_logs_artifacts
# Execute most integration tests as a regular (non-root) user.
rootless_integration_test_task:
name: *std_name_fmt
alias: rootless_integration_test
only_if: *not_docs
skip: *branches_and_tags
depends_on:
- unit_test
matrix: *fedora_vm_axis
gce_instance: *standardvm
timeout_in: 90m
env:
TEST_FLAVOR: int
PRIV_NAME: rootless
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *int_logs_artifacts
podman_machine_task:
name: *std_name_fmt
alias: podman_machine
# FIXME: Added for speedy-testing
only_if: $CIRRUS_CHANGE_TITLE =~ '.*CI:BUILD.*'
depends_on:
- build
- local_integration_test
- remote_integration_test
- container_integration_test
- rootless_integration_test
ec2_instance:
image: "${VM_IMAGE_NAME}"
type: m5zn.metal # Bare-metal instance is required
region: us-east-1
env:
TEST_FLAVOR: "machine"
PRIV_NAME: "rootless" # intended use-case
DISTRO_NV: "${FEDORA_NAME}"
VM_IMAGE_NAME: "${FEDORA_AMI_ID}"
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *int_logs_artifacts
# Always run after the integration tests. Some wall-clock parallelism is
# lost, but running system tests last makes their failures easier to debug,
# since integration results are already available. Otherwise the following
# tasks run across the same matrix as the integration tests (above).
local_system_test_task: &local_system_test_task
name: *std_name_fmt
alias: local_system_test
skip: *tags
only_if: *not_docs
depends_on:
- local_integration_test
matrix: *platform_axis
gce_instance: *standardvm
env:
TEST_FLAVOR: sys
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *logs_artifacts
remote_system_test_task:
<<: *local_system_test_task
alias: remote_system_test
depends_on:
- remote_integration_test
env:
TEST_FLAVOR: sys
PODBIN_NAME: remote
rootless_system_test_task:
name: *std_name_fmt
alias: rootless_system_test
skip: *tags
only_if: *not_docs
depends_on:
- rootless_integration_test
matrix: *fedora_vm_axis
gce_instance: *standardvm
env:
TEST_FLAVOR: sys
PRIV_NAME: rootless
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *logs_artifacts
# FIXME: we may want to consider running this from nightly cron instead of CI.
# The tests are actually pretty quick (less than a minute) but they do rely
# on pulling images from quay.io, which means we're subject to network flakes.
#
# FIXME: how does this env matrix work, anyway? Does it spin up multiple VMs?
# We might just want to encode the version matrix in runner.sh instead
upgrade_test_task:
name: "Upgrade test: from $PODMAN_UPGRADE_FROM"
alias: upgrade_test
skip: *tags
only_if: *not_docs
depends_on:
- local_system_test
matrix:
- env:
PODMAN_UPGRADE_FROM: v1.9.0
- env:
PODMAN_UPGRADE_FROM: v2.0.6
- env:
PODMAN_UPGRADE_FROM: v2.1.1
gce_instance: *standardvm
env:
TEST_FLAVOR: upgrade_test
DISTRO_NV: ${FEDORA_NAME}
VM_IMAGE_NAME: ${FEDORA_CACHE_IMAGE_NAME}
# ID for re-use of build output
_BUILD_CACHE_HANDLE: ${FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
clone_script: *noop
gopath_cache: *ro_gopath_cache
setup_script: *setup
main_script: *main
always: *logs_artifacts
# This task is critical. It updates the "last-used by" timestamp stored
# in metadata for all VM images. This mechanism functions in tandem with
# an out-of-band pruning operation to remove disused VM images.
meta_task:
name: "VM img. keepalive"
alias: meta
container:
cpu: 2
memory: 2
image: quay.io/libpod/imgts:$IMAGE_SUFFIX
env:
# Space-separated list of images used by this repository state
IMGNAMES: >-
${FEDORA_CACHE_IMAGE_NAME}
${PRIOR_FEDORA_CACHE_IMAGE_NAME}
${UBUNTU_CACHE_IMAGE_NAME}
${PRIOR_UBUNTU_CACHE_IMAGE_NAME}
BUILDID: "${CIRRUS_BUILD_ID}"
REPOREF: "${CIRRUS_REPO_NAME}"
GCPJSON: ENCRYPTED[asdf1234]
GCPNAME: ENCRYPTED[asdf1234]
GCPPROJECT: libpod-218412
clone_script: *noop
script: /usr/local/bin/entrypoint.sh
# Status aggregator for all tests. This task simply ensures a defined set
# of tasks all passed, so overall success can be confirmed from the status
# of this single task.
success_task:
name: "Total Success"
alias: success
# N/B: ALL tasks must be listed here, minus their '_task' suffix.
depends_on:
- ext_svc_check
- automation
- build
- validate
- bindings
- swagger
- consistency
- alt_build
- static_alt_build
- osx_alt_build
- docker-py_test
- unit_test
- apiv2_test
- compose_test
- local_integration_test
- remote_integration_test
- rootless_integration_test
- container_integration_test
- podman_machine
- local_system_test
- remote_system_test
- rootless_system_test
- upgrade_test
- meta
env:
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
TEST_ENVIRON: container
clone_script: *noop
script: /bin/true
win_installer_task:
name: "Verify Win Installer Build"
alias: win_installer
# Don't run for multiarch container image cirrus-cron job.
only_if: $CIRRUS_CRON != 'multiarch'
depends_on:
- alt_build
windows_container:
image: "cirrusci/windowsservercore:2019"
env:
PATH: "${PATH};C:\\ProgramData\\chocolatey\\bin"
CIRRUS_SHELL: powershell
# Fake version; we are only testing the installer functions, so the version doesn't matter
WIN_INST_VER: 9.9.9
install_script: '.\contrib\cirrus\win-installer-install.ps1'
main_script: '.\contrib\cirrus\win-installer-main.ps1'
# When a new tag is pushed, confirm that the code and commits
# meet criteria for an official release.
release_task:
name: "Verify Release"
alias: release
only_if: *tags
depends_on:
- success
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: release
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *binary_artifacts
# When preparing to release a new version, this task may be manually
# activated at the PR stage to verify the build is proper for a potential
# podman release.
#
# Note: As of this writing, this cannot use a YAML alias on 'release_task';
# aliases are incompatible with 'trigger_type: manual'
release_test_task:
name: "Optional Release Test"
alias: release_test
only_if: $CIRRUS_PR != ''
trigger_type: manual
depends_on:
- success
gce_instance: *standardvm
env:
<<: *stdenvars
TEST_FLAVOR: release
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
setup_script: *setup
main_script: *main
always: *binary_artifacts


@@ -0,0 +1,46 @@
APIv2 test on fedora-33
Alt Arch. Cross
Build Each Commit
Build Without CGO
Build for fedora-33
Build for ubuntu-2004
Build for ubuntu-2010
Check Automation
Docker-py Compat.
Ext. services
OSX Cross
Optional Release Test
Static Build
Test Bindings
Test Code Consistency
Test Swagger
Test build RPM
Total Success
Unit tests on fedora-33
Unit tests on ubuntu-2004
Unit tests on ubuntu-2010
Upgrade test: from v1.9.0
Upgrade test: from v2.0.6
Upgrade test: from v2.1.1
VM img. keepalive
Validate fedora-33 Build
Verify Release
Verify Win Installer Build
Windows Cross
compose test on fedora-33
int podman fedora-33 root container
int podman fedora-33 root host
int podman fedora-33 rootless host
int podman ubuntu-2004 root host
int podman ubuntu-2010 root host
int remote fedora-33 root host
int remote ubuntu-2004 root host
int remote ubuntu-2010 root host
machine podman fedora-33 rootless host
sys podman fedora-33 root host
sys podman fedora-33 rootless host
sys podman ubuntu-2004 root host
sys podman ubuntu-2010 root host
sys remote fedora-33 root host
sys remote ubuntu-2004 root host
sys remote ubuntu-2010 root host


@@ -0,0 +1,421 @@
---
global_env:
CIRRUS_SHELL: /bin/bash
CIRRUS_WORKING_DIR: /var/tmp/go/src/github.com/containers/podman
CTR_FQIN: None
DEST_BRANCH: master
DISTRO_NV: None
FEDORA_CACHE_IMAGE_NAME: fedora-c6524344056676352
FEDORA_CONTAINER_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
FEDORA_NAME: fedora-33
GOBIN: /var/tmp/go/bin
GOCACHE: /var/tmp/go/cache
GOPATH: /var/tmp/go
GOSRC: /var/tmp/go/src/github.com/containers/podman
IMAGE_SUFFIX: c6524344056676352
PODBIN_NAME: podman
PRIOR_FEDORA_CACHE_IMAGE_NAME: prior-fedora-c6524344056676352
PRIOR_FEDORA_CONTAINER_FQIN: quay.io/libpod/prior-fedora_podman:c6524344056676352
PRIOR_FEDORA_NAME: fedora-32
PRIOR_UBUNTU_CACHE_IMAGE_NAME: prior-ubuntu-c6524344056676352
PRIOR_UBUNTU_CONTAINER_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
PRIOR_UBUNTU_NAME: ubuntu-2004
PRIV_NAME: root
SCRIPT_BASE: ./contrib/cirrus
STATS_LOGFILE: /var/tmp/go/src/github.com/containers/podman/${CIRRUS_TASK_NAME}-runner_stats.log
STATS_LOGFILE_SFX: runner_stats.log
TEST_ENVIRON: host
TEST_FLAVOR: None
UBUNTU_CACHE_IMAGE_NAME: ubuntu-c6524344056676352
UBUNTU_CONTAINER_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
UBUNTU_NAME: ubuntu-2010
VM_IMAGE_NAME: None
tasks:
APIv2 test on fedora-33:
alias: apiv2_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: apiv2
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Alt Arch. Cross:
alias: alt_build
env:
ALT_NAME: Alt Arch. Cross
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Build Each Commit:
alias: alt_build
env:
ALT_NAME: Build Each Commit
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Build Without CGO:
alias: alt_build
env:
ALT_NAME: Build Without CGO
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Build for fedora-33:
alias: build
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: build
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Build for ubuntu-2004:
alias: build
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
TEST_FLAVOR: build
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
Build for ubuntu-2010:
alias: build
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
TEST_FLAVOR: build
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}
Check Automation:
alias: automation
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
TEST_ENVIRON: container
TEST_FLAVOR: automation
Docker-py Compat.:
alias: docker-py_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_ENVIRON: container
TEST_FLAVOR: docker-py
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Ext. services:
alias: ext_svc_check
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
TEST_FLAVOR: ext_svc
OSX Cross:
alias: osx_alt_build
env:
ALT_NAME: OSX Cross
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
MacOS Cross:
alias: macos_alt_build
env:
ALT_NAME: MacOS Cross
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Optional Release Test:
alias: release_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: release
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Static Build:
alias: static_alt_build
env:
ALT_NAME: Static build
CTR_FQIN: docker.io/nixos/nix:2.3.6
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Test Bindings:
alias: bindings
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: bindings
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Test Code Consistency:
alias: consistency
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_ENVIRON: container
TEST_FLAVOR: consistency
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Test Swagger:
alias: swagger
env:
CTR_FQIN: quay.io/libpod/gcsupld:c4813063494828032
DISTRO_NV: fedora-33
GCPPROJECT: libpod-218412
TEST_FLAVOR: swagger
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Test build RPM:
alias: alt_build
env:
ALT_NAME: Test build RPM
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Total Success:
alias: success
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
TEST_ENVIRON: container
Unit tests on fedora-33:
alias: unit_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: unit
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Unit tests on ubuntu-2004:
alias: unit_test
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
TEST_FLAVOR: unit
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
Unit tests on ubuntu-2010:
alias: unit_test
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
TEST_FLAVOR: unit
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}
'Upgrade test: from v1.9.0':
alias: upgrade_test
env:
DISTRO_NV: fedora-33
PODMAN_UPGRADE_FROM: v1.9.0
TEST_FLAVOR: upgrade_test
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
'Upgrade test: from v2.0.6':
alias: upgrade_test
env:
DISTRO_NV: fedora-33
PODMAN_UPGRADE_FROM: v2.0.6
TEST_FLAVOR: upgrade_test
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
'Upgrade test: from v2.1.1':
alias: upgrade_test
env:
DISTRO_NV: fedora-33
PODMAN_UPGRADE_FROM: v2.1.1
TEST_FLAVOR: upgrade_test
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
VM img. keepalive:
alias: meta
env:
BUILDID: ${CIRRUS_BUILD_ID}
GCPPROJECT: libpod-218412
IMGNAMES: fedora-c6524344056676352 prior-fedora-c6524344056676352 ubuntu-c6524344056676352
prior-ubuntu-c6524344056676352
REPOREF: ${CIRRUS_REPO_NAME}
Validate fedora-33 Build:
alias: validate
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: validate
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Verify Release:
alias: release
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: release
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
Verify Win Installer Build:
alias: win_installer
env:
PATH: "${PATH};C:\\ProgramData\\chocolatey\\bin"
CIRRUS_SHELL: powershell
WIN_INST_VER: 9.9.9
Windows Cross:
alias: alt_build
env:
ALT_NAME: Windows Cross
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: altbuild
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
compose test on fedora-33:
alias: compose_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: compose
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
int podman fedora-33 root container:
alias: container_integration_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_ENVIRON: container
TEST_FLAVOR: int
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
int podman fedora-33 root host:
alias: local_integration_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: int
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
int podman fedora-33 rootless host:
alias: rootless_integration_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
PRIV_NAME: rootless
TEST_FLAVOR: int
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
int podman ubuntu-2004 root host:
alias: local_integration_test
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
TEST_FLAVOR: int
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
int podman ubuntu-2010 root host:
alias: local_integration_test
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
TEST_FLAVOR: int
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}
int remote fedora-33 root host:
alias: remote_integration_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
PODBIN_NAME: remote
TEST_FLAVOR: int
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
int remote ubuntu-2004 root host:
alias: remote_integration_test
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
PODBIN_NAME: remote
TEST_FLAVOR: int
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
int remote ubuntu-2010 root host:
alias: remote_integration_test
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
PODBIN_NAME: remote
TEST_FLAVOR: int
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}
machine podman fedora-33 rootless host:
alias: podman_machine
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: machine
PRIV_NAME: rootless
VM_IMAGE_NAME: ami-04f37091c3ec43890
sys podman fedora-33 root host:
alias: local_system_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
TEST_FLAVOR: sys
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
sys podman fedora-33 rootless host:
alias: rootless_system_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
PRIV_NAME: rootless
TEST_FLAVOR: sys
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
sys podman ubuntu-2004 root host:
alias: local_system_test
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
TEST_FLAVOR: sys
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
sys podman ubuntu-2010 root host:
alias: local_system_test
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
TEST_FLAVOR: sys
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}
sys remote fedora-33 root host:
alias: remote_system_test
env:
CTR_FQIN: quay.io/libpod/fedora_podman:c6524344056676352
DISTRO_NV: fedora-33
PODBIN_NAME: remote
TEST_FLAVOR: sys
VM_IMAGE_NAME: fedora-c6524344056676352
_BUILD_CACHE_HANDLE: fedora-33-build-${CIRRUS_BUILD_ID}
sys remote ubuntu-2004 root host:
alias: remote_system_test
env:
CTR_FQIN: quay.io/libpod/prior-ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2004
PODBIN_NAME: remote
TEST_FLAVOR: sys
VM_IMAGE_NAME: prior-ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2004-build-${CIRRUS_BUILD_ID}
sys remote ubuntu-2010 root host:
alias: remote_system_test
env:
CTR_FQIN: quay.io/libpod/ubuntu_podman:c6524344056676352
DISTRO_NV: ubuntu-2010
PODBIN_NAME: remote
TEST_FLAVOR: sys
VM_IMAGE_NAME: ubuntu-c6524344056676352
_BUILD_CACHE_HANDLE: ubuntu-2010-build-${CIRRUS_BUILD_ID}


@@ -0,0 +1,139 @@
APIv2 test on fedora-33:
- gcevm
- fedora-c6524344056676352
Alt Arch. Cross:
- gcevm
- fedora-c6524344056676352
Build Each Commit:
- gcevm
- fedora-c6524344056676352
Build Without CGO:
- gcevm
- fedora-c6524344056676352
Build for fedora-33:
- gcevm
- fedora-c6524344056676352
Build for ubuntu-2004:
- gcevm
- prior-ubuntu-c6524344056676352
Build for ubuntu-2010:
- gcevm
- ubuntu-c6524344056676352
Check Automation:
- container
- quay.io/libpod/fedora_podman:c6524344056676352
Docker-py Compat.:
- gcevm
- fedora-c6524344056676352
Ext. services:
- container
- quay.io/libpod/fedora_podman:c6524344056676352
OSX Cross: &blahblah
- osx
- catalina-base
MacOS Cross: *blahblah
Optional Release Test:
- gcevm
- fedora-c6524344056676352
Static Build:
- gcevm
- fedora-c6524344056676352
Test Bindings:
- gcevm
- fedora-c6524344056676352
Test Code Consistency:
- container
- quay.io/libpod/fedora_podman:c6524344056676352
Test Swagger:
- gcevm
- fedora-c6524344056676352
Test build RPM:
- gcevm
- fedora-c6524344056676352
Total Success:
- container
- quay.io/libpod/fedora_podman:c6524344056676352
Unit tests on fedora-33:
- gcevm
- fedora-c6524344056676352
Unit tests on ubuntu-2004:
- gcevm
- prior-ubuntu-c6524344056676352
Unit tests on ubuntu-2010:
- gcevm
- ubuntu-c6524344056676352
'Upgrade test: from v1.9.0':
- gcevm
- fedora-c6524344056676352
'Upgrade test: from v2.0.6':
- gcevm
- fedora-c6524344056676352
'Upgrade test: from v2.1.1':
- gcevm
- fedora-c6524344056676352
VM img. keepalive:
- container
- quay.io/libpod/imgts:c6524344056676352
Validate fedora-33 Build:
- gcevm
- fedora-c6524344056676352
Verify Release:
- gcevm
- fedora-c6524344056676352
Verify Win Installer Build:
- wincntnr
- cirrusci/windowsservercore:2019
Windows Cross:
- gcevm
- fedora-c6524344056676352
compose test on fedora-33:
- gcevm
- fedora-c6524344056676352
int podman fedora-33 root container:
- gcevm
- fedora-c6524344056676352
int podman fedora-33 root host:
- gcevm
- fedora-c6524344056676352
int podman fedora-33 rootless host:
- gcevm
- fedora-c6524344056676352
int podman ubuntu-2004 root host:
- gcevm
- prior-ubuntu-c6524344056676352
int podman ubuntu-2010 root host:
- gcevm
- ubuntu-c6524344056676352
int remote fedora-33 root host:
- gcevm
- fedora-c6524344056676352
int remote ubuntu-2004 root host:
- gcevm
- prior-ubuntu-c6524344056676352
int remote ubuntu-2010 root host:
- gcevm
- ubuntu-c6524344056676352
machine podman fedora-33 rootless host:
- ec2vm
- ami-04f37091c3ec43890
sys podman fedora-33 root host:
- gcevm
- fedora-c6524344056676352
sys podman fedora-33 rootless host:
- gcevm
- fedora-c6524344056676352
sys podman ubuntu-2004 root host:
- gcevm
- prior-ubuntu-c6524344056676352
sys podman ubuntu-2010 root host:
- gcevm
- ubuntu-c6524344056676352
sys remote fedora-33 root host:
- gcevm
- fedora-c6524344056676352
sys remote ubuntu-2004 root host:
- gcevm
- prior-ubuntu-c6524344056676352
sys remote ubuntu-2010 root host:
- gcevm
- ubuntu-c6524344056676352


@@ -0,0 +1,20 @@
#!/bin/bash
set -e
cd "$(dirname "${BASH_SOURCE[0]}")"
./test_cirrus-ci_env.py
./testbin-cirrus-ci_env.sh
./testbin-cirrus-ci_env-installer.sh
if [[ "$GITHUB_ACTIONS" == "true" ]]; then
echo "Lint/Style checking not supported under github actions: Skipping"
exit 0
elif [[ -x $(type -P flake8-3) ]]; then
cd ..
flake8-3 --max-line-length=100 .
flake8-3 --max-line-length=100 --extend-ignore=D101,D102 test
else
echo "Can't find flake8-3 binary; is the script executing inside the CI container?"
exit 1
fi


@@ -0,0 +1,298 @@
#!/usr/bin/env python3
"""Verify cirrus-ci_env.py functions as expected."""
import contextlib
import importlib.util
import os
import sys
import unittest
import unittest.mock as mock
from io import StringIO
import yaml
# Assumes directory structure of this file relative to repo.
TEST_DIRPATH = os.path.dirname(os.path.realpath(__file__))
SCRIPT_FILENAME = os.path.basename(__file__).replace('test_', '')
SCRIPT_DIRPATH = os.path.realpath(os.path.join(TEST_DIRPATH, '..', SCRIPT_FILENAME))
class TestBase(unittest.TestCase):
"""Base test class fixture."""
def setUp(self):
"""Initialize before every test."""
super().setUp()
spec = importlib.util.spec_from_file_location("cci_env", SCRIPT_DIRPATH)
self.cci_env = importlib.util.module_from_spec(spec)
spec.loader.exec_module(self.cci_env)
def tearDown(self):
"""Finalize after every test."""
del self.cci_env
try:
del sys.modules["cci_env"]
except KeyError:
pass
class TestEnvRender(TestBase):
"""Confirming Cirrus-CI in-line env. var. rendering behaviors."""
def setUp(self):
"""Initialize before every test."""
super().setUp()
self.fake_cirrus = mock.Mock(spec=self.cci_env.CirrusCfg)
attrs = {"format_env.side_effect": self.cci_env.CirrusCfg.format_env,
"render_env.side_effect": self.cci_env.CirrusCfg.render_env,
"render_value.side_effect": self.cci_env.CirrusCfg.render_value,
"get_type_image.return_value": (None, None),
"init_task_type_image.return_value": None}
self.fake_cirrus.configure_mock(**attrs)
self.render_env = self.fake_cirrus.render_env
self.render_value = self.fake_cirrus.render_value
def test_empty(self):
"""Verify an empty env dict is unmodified."""
self.fake_cirrus.global_env = None
result = self.render_env(self.fake_cirrus, {})
self.assertDictEqual(result, {})
def test_simple_string(self):
"""Verify a simple string value is unmodified."""
self.fake_cirrus.global_env = None
result = self.render_env(self.fake_cirrus, dict(foo="bar"))
self.assertDictEqual(result, dict(foo="bar"))
def test_simple_sub(self):
"""Verify that a simple string substitution is performed."""
self.fake_cirrus.global_env = None
result = self.render_env(self.fake_cirrus, dict(foo="$bar", bar="foo"))
self.assertDictEqual(result, dict(foo="foo", bar="foo"))
def test_simple_multi(self):
"""Verify that multiple string substitutions are performed."""
self.fake_cirrus.global_env = None
result = self.render_env(self.fake_cirrus,
dict(foo="$bar", bar="$baz", baz="foobarbaz"))
self.assertDictEqual(result,
dict(foo="foobarbaz", bar="foobarbaz", baz="foobarbaz"))
def test_simple_undefined(self):
"""Verify an undefined substitution falls back to dollar-curly env var."""
self.fake_cirrus.global_env = None
result = self.render_env(self.fake_cirrus, dict(foo="$baz", bar="${jar}"))
self.assertDictEqual(result, dict(foo="${baz}", bar="${jar}"))
def test_simple_global(self):
"""Verify global keys not duplicated into env."""
self.fake_cirrus.global_env = dict(bar="baz")
result = self.render_env(self.fake_cirrus, dict(foo="bar"))
self.assertDictEqual(result, dict(foo="bar"))
def test_simple_globalsub(self):
"""Verify global keys render substitutions."""
self.fake_cirrus.global_env = dict(bar="baz")
result = self.render_env(self.fake_cirrus, dict(foo="${bar}"))
self.assertDictEqual(result, dict(foo="baz"))
def test_readonly_params(self):
"""Verify global keys not modified while rendering substitutions."""
original_global_env = dict(
foo="foo", bar="bar", baz="baz", test="$item")
self.fake_cirrus.global_env = dict(**original_global_env) # A copy
original_env = dict(item="${foo}$bar${baz}")
env = dict(**original_env) # A copy
result = self.render_env(self.fake_cirrus, env)
self.assertDictEqual(self.fake_cirrus.global_env, original_global_env)
self.assertDictEqual(env, original_env)
self.assertDictEqual(result, dict(item="foobarbaz"))
def test_render_value(self):
"""Verify render_value() renders correctly without modifying its env parameter."""
self.fake_cirrus.global_env = dict(foo="foo", bar="bar", baz="baz")
original_env = dict(item="snafu")
env = dict(**original_env) # A copy
test_value = "$foo${bar}$baz $item"
expected_value = "foobarbaz snafu"
actual_value = self.render_value(self.fake_cirrus, test_value, env)
self.assertDictEqual(env, original_env)
self.assertEqual(actual_value, expected_value)
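The `TestEnvRender` cases above pin down the substitution behavior without showing the implementation. A hedged sketch (not the actual `cci_env` code) of a renderer that satisfies those cases: `$var` and `${var}` resolve from the task env first, then the global env; undefined names are normalized to the `${var}` spelling; inputs are never mutated; and chains like `foo="$bar", bar="$baz"` are followed to a fixed point:

```python
import re

# Matches $name or ${name}; the ${...} alternative is tried when the
# bare-word form fails at the "{" character.
VAR_RE = re.compile(r"\$(\w+)|\$\{(\w+)\}")

def render_value(value, env, global_env=None):
    """Render one string; task env shadows global env; inputs untouched."""
    scope = dict(global_env or {})
    scope.update(env)

    def sub(match):
        name = match.group(1) or match.group(2)
        if name in scope:
            return scope[name]
        return "${%s}" % name  # undefined: fall back to curly form

    prev = None
    while prev != value:       # iterate to a fixed point for chained refs
        prev, value = value, VAR_RE.sub(sub, value)
    return value

def render_env(env, global_env=None):
    """Render every value of env; global keys are consulted, not copied in."""
    return {k: render_value(v, env, global_env) for k, v in env.items()}
```

For example, `render_env({"foo": "$bar", "bar": "$baz", "baz": "foobarbaz"})` renders every key to `"foobarbaz"`, matching `test_simple_multi`.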
class TestRenderTasks(TestBase):
"""Fixture for exercising Cirrus-CI task-level env. and matrix rendering behaviors."""
def setUp(self):
"""Initialize before every test."""
super().setUp()
self.CCfg = self.cci_env.CirrusCfg
self.global_env = dict(foo="foo", bar="bar", baz="baz")
self.patchers = (
mock.patch.object(self.CCfg, 'get_type_image',
mock.Mock(return_value=(None, None))),
mock.patch.object(self.CCfg, 'init_task_type_image',
mock.Mock(return_value=None)))
for patcher in self.patchers:
patcher.start()
def tearDown(self):
"""Finalize after every test."""
for patcher in self.patchers:
patcher.stop()
super().tearDown()
def test_empty_in_empty_out(self):
"""Verify initializing with empty tasks and globals results in empty output."""
result = self.CCfg(dict(env=dict())).tasks
self.assertDictEqual(result, dict())
def test_simple_render(self):
"""Verify rendering of task local and global env. vars."""
env = dict(item="${foo}$bar${baz}", test="$undefined")
task = dict(something="ignored", env=env)
config = dict(env=self.global_env, test_task=task)
expected = {
"test": {
"alias": "test",
"env": {
"item": "foobarbaz",
"test": "${undefined}"
}
}
}
result = self.CCfg(config).tasks
self.assertDictEqual(result, expected)
def test_noenv_render(self):
"""Verify rendering of task w/o local env. vars."""
task = dict(something="ignored")
config = dict(env=self.global_env, test_task=task)
expected = {
"test": {
"alias": "test",
"env": {}
}
}
result = self.CCfg(config).tasks
self.assertDictEqual(result, expected)
def test_simple_matrix(self):
"""Verify unrolling of a simple matrix containing two tasks."""
matrix1 = dict(name="test_matrix1", env=dict(item="${foo}bar"))
matrix2 = dict(name="test_matrix2", env=dict(item="foo$baz"))
task = dict(env=dict(something="untouched"), matrix=[matrix1, matrix2])
config = dict(env=self.global_env, test_task=task)
expected = {
"test_matrix1": {
"alias": "test",
"env": {
"item": "foobar",
"something": "untouched"
}
},
"test_matrix2": {
"alias": "test",
"env": {
"item": "foobaz",
"something": "untouched"
}
}
}
result = self.CCfg(config).tasks
self.assertNotIn('test_task', result)
for task_name in ('test_matrix1', 'test_matrix2'):
self.assertIn(task_name, result)
self.assertDictEqual(expected[task_name], result[task_name])
self.assertDictEqual(result, expected)
def test_noenv_matrix(self):
"""Verify unrolling of single matrix w/o env. vars."""
matrix = dict(name="test_matrix")
task = dict(env=dict(something="untouched"), matrix=[matrix])
config = dict(env=self.global_env, test_task=task)
expected = {
"test_matrix": {
"alias": "test",
"env": {
"something": "untouched"
}
}
}
result = self.CCfg(config).tasks
self.assertDictEqual(result, expected)
def test_rendered_name_matrix(self):
"""Verify env. values may be used in matrix names with spaces."""
test_foobar = dict(env=dict(item="$foo$bar", unique="item"))
bar_test = dict(name="$bar test", env=dict(item="${bar}${foo}", NAME="snafu"))
task = dict(name="test $item",
env=dict(something="untouched"),
matrix=[bar_test, test_foobar])
config = dict(env=self.global_env, blah_task=task)
expected = {
"test foobar": {
"alias": "blah",
"env": {
"item": "foobar",
"something": "untouched",
"unique": "item"
}
},
"bar test": {
"alias": "blah",
"env": {
"NAME": "snafu",
"item": "barfoo",
"something": "untouched"
}
}
}
result = self.CCfg(config).tasks
self.assertDictEqual(result, expected)
def test_bad_env_matrix(self):
"""Verify old-style 'matrix' key of 'env' attr. throws helpful error."""
env = dict(foo="bar", matrix=dict(will="error"))
task = dict(env=env)
config = dict(env=self.global_env, test_task=task)
err = StringIO()
with contextlib.suppress(SystemExit), mock.patch.object(self.cci_env,
'err', err.write):
self.assertRaises(ValueError, self.CCfg, config)
self.assertRegex(err.getvalue(), ".+'matrix'.+'env'.+'test'.+")
class TestCirrusCfg(TestBase):
"""Fixture to verify loading/parsing from an actual YAML file."""
def setUp(self):
"""Initialize before every test."""
super().setUp()
self.CirrusCfg = self.cci_env.CirrusCfg
with open(os.path.join(TEST_DIRPATH, "actual_cirrus.yml")) as actual:
self.actual_cirrus = yaml.safe_load(actual)
def test_complex_cirrus_cfg(self):
"""Verify that CirrusCfg can be initialized from a complex .cirrus.yml."""
with open(os.path.join(TEST_DIRPATH, "expected_cirrus.yml")) as expected:
expected_cirrus = yaml.safe_load(expected)
actual_cfg = self.CirrusCfg(self.actual_cirrus)
self.assertSetEqual(set(actual_cfg.tasks.keys()),
set(expected_cirrus["tasks"].keys()))
def test_complex_type_image(self):
"""Verify that CirrusCfg initializes with expected image types and values."""
with open(os.path.join(TEST_DIRPATH, "expected_ti.yml")) as expected:
expected_ti = yaml.safe_load(expected)
actual_cfg = self.CirrusCfg(self.actual_cirrus)
self.assertEqual(len(actual_cfg.tasks), len(expected_ti))
actual_ti = {k: [v["inst_type"], v["inst_image"]]
for (k, v) in actual_cfg.tasks.items()}
self.maxDiff = None # show the full diff
self.assertDictEqual(actual_ti, expected_ti)
if __name__ == "__main__":
unittest.main()


@ -0,0 +1,21 @@
#!/bin/bash
# Load standardized test harness
SCRIPT_DIRPATH=$(dirname "${BASH_SOURCE[0]}")
source $SCRIPT_DIRPATH/testlib.sh || exit 1
# Must go through the top-level install script that chains to ../.install.sh
TEST_DIR=$(realpath "$SCRIPT_DIRPATH/../")
INSTALL_SCRIPT=$(realpath "$TEST_DIR/../bin/install_automation.sh")
TEMPDIR=$(mktemp -p "" -d "tmpdir_cirrus-ci_env_XXXXX")
test_cmd "Verify cirrus-ci_env can be installed under $TEMPDIR" \
0 'Installation complete for.+cirrus-ci_env' \
env INSTALL_PREFIX=$TEMPDIR $INSTALL_SCRIPT 0.0.0 cirrus-ci_env
test_cmd "Verify executing cirrus-ci_env.py gives 'usage' error message" \
2 'cirrus-ci_env.py: error: the following arguments are required:' \
$TEMPDIR/automation/bin/cirrus-ci_env.py
trap "rm -rf $TEMPDIR" EXIT
exit_with_status


@ -0,0 +1,50 @@
#!/bin/bash
# Load standardized test harness
SCRIPT_DIRPATH=$(dirname "${BASH_SOURCE[0]}")
source ${SCRIPT_DIRPATH}/testlib.sh || exit 1
TEST_DIR=$(realpath "$SCRIPT_DIRPATH/../")
SUBJ_FILEPATH="$TEST_DIR/${SUBJ_FILENAME%.sh}.py"
test_cmd "Verify no options results in help and an error-exit" \
2 "cirrus-ci_env.py: error: the following arguments are required:" \
$SUBJ_FILEPATH
test_cmd "Verify missing/invalid filename results in help and an error-exit" \
2 "No such file or directory" \
$SUBJ_FILEPATH /path/to/not/existing/file.yml
test_cmd "Verify missing mode-option results in help message and an error-exit" \
2 "error: one of the arguments --list --envs --inst is required" \
$SUBJ_FILEPATH $SCRIPT_DIRPATH/actual_cirrus.yml
test_cmd "Verify valid-YAML w/o tasks results in help message and an error-exit" \
1 "ERROR: No Cirrus-CI tasks found in" \
$SUBJ_FILEPATH --list $SCRIPT_DIRPATH/expected_cirrus.yml
CIRRUS=$SCRIPT_DIRPATH/actual_cirrus.yml
test_cmd "Verify invalid task name results in help message and an error-exit" \
1 "ERROR: Unknown task name 'foobarbaz' from" \
$SUBJ_FILEPATH --env foobarbaz $CIRRUS
TASK_NAMES=$(<"$SCRIPT_DIRPATH/actual_task_names.txt")
echo "$TASK_NAMES" | while read LINE; do
test_cmd "Verify task '$LINE' appears in task-listing output" \
0 "$LINE" \
$SUBJ_FILEPATH --list $CIRRUS
done
test_cmd "Verify inherited instance image with env. var. reference is rendered" \
0 "container quay.io/libpod/fedora_podman:c6524344056676352" \
$SUBJ_FILEPATH --inst 'Ext. services' $CIRRUS
test_cmd "Verify DISTRO_NV env. var renders correctly from test task" \
0 'DISTRO_NV="fedora-33"' \
$SUBJ_FILEPATH --env 'int podman fedora-33 root container' $CIRRUS
test_cmd "Verify VM_IMAGE_NAME env. var renders correctly from test task" \
0 'VM_IMAGE_NAME="fedora-c6524344056676352"' \
$SUBJ_FILEPATH --env 'int podman fedora-33 root container' $CIRRUS
exit_with_status


@ -0,0 +1 @@
../../common/test/testlib.sh

3
cirrus-ci_retrospective/.install.sh Normal file → Executable file

@ -1,8 +1,11 @@
#!/bin/bash
# Installs cirrus-ci_retrospective system-wide. NOT intended to be used directly
# by humans, should only be used indirectly by running
# ../bin/install_automation.sh <ver> cirrus-ci_retrospective
set -eo pipefail
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"


@ -1,27 +1,28 @@
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN microdnf update -y && \
microdnf install -y findutils jq git curl && \
microdnf install -y findutils jq git curl python3 && \
microdnf clean all && \
rm -rf /var/cache/dnf
# Assume build is for development/manual testing purposes by default (automation should override with fixed version)
ARG INSTALL_AUTOMATION_VERSION=latest
ARG INSTALL_AUTOMATION_URI=https://raw.githubusercontent.com/containers/automation/master/bin/install_automation.sh
ARG INSTALL_AUTOMATION_URI=https://github.com/containers/automation/releases/latest/download/install_automation.sh
ADD / /usr/src/automation
RUN if [[ "$INSTALL_AUTOMATION_VERSION" == "0.0.0" ]]; then \
env INSTALL_PREFIX=/usr/share \
/usr/src/automation/bin/install_automation.sh 0.0.0 cirrus-ci_retrospective; \
/usr/src/automation/bin/install_automation.sh 0.0.0 github cirrus-ci_retrospective; \
else \
curl --silent --show-error --location \
--url "$INSTALL_AUTOMATION_URI" | env INSTALL_PREFIX=/usr/share \
/bin/bash -s - "$INSTALL_AUTOMATION_VERSION" cirrus-ci_retrospective; \
/bin/bash -s - "$INSTALL_AUTOMATION_VERSION" github cirrus-ci_retrospective; \
fi
# Required environment variables
ENV AUTOMATION_LIB_PATH="" \
GITHUB_ACTIONS="false" \
ACTIONS_STEP_DEBUG="false" \
GITHUB_EVENT_NAME="" \
GITHUB_EVENT_PATH="" \
GITHUB_TOKEN=""
# Optional (recommended) environment variables
ENV OUTPUT_JSON_FILE=""
WORKDIR /root
ENTRYPOINT ["/bin/bash", "-c", "source /etc/profile && exec /usr/share/automation/bin/cirrus-ci_retrospective.sh"]
ENTRYPOINT ["/bin/bash", "-c", "source /etc/automation_environment && exec /usr/share/automation/bin/cirrus-ci_retrospective.sh"]


@ -13,7 +13,7 @@ to tests passing on a tagged commit.
# Example Github Action Workflow
On the master (default) branch of a repository (previously setup and running
On the 'main' (default) branch of a repository (previously setup and running
tasks in Cirrus-CI), add the following file:
`.github/workflows/cirrus-ci_retrospective.yml`
@ -36,6 +36,15 @@ jobs:
...act on contents of ./cirrus-ci_retrospective.json...
```
## Dependencies:
In addition to the basic `common` requirements (see [top-level README.md](../README.md))
the following system packages (or their equivalents) are needed:
* curl
* jq
* sed
## Usage Notes:
* The `check_suite` event of type `completed` is the only trigger currently supported
@ -57,7 +66,7 @@ jobs:
## Warning
Due to security concerns, Github Actions only supports execution vs check_suite events
from workflows already committed on the master branch. This makes it difficult to
from workflows already committed on the 'main' branch. This makes it difficult to
test implementations, since they will not execute until merged.
However, the output JSON does provide all the necessary details to re-create, then possibly
@ -67,47 +76,71 @@ perform test-executions for PRs. See the workflow file for comments on related
# Output Decoding
The output JSON is a list of all Cirrus-CI tasks which completed after being triggered by
The output JSON is an `array` of all Cirrus-CI tasks which completed after being triggered by
one of the supported mechanisms (i.e. PR push, branch push, or tag push). At the time
this was written, CRON-based runs in Cirrus-CI do not trigger a `check_suite` in Github.
Otherwise, based on various values in the output JSON, it is possible to objectively
determine the execution context for the build. For example (condensed):
determine the execution context for the build.
*Note*: The object nesting is backwards from what you may expect. The top-level object
represents an individual `task`, but contains its `build` object to make parsing
with `jq` easier. In reality, the data model represents a single `build`
containing multiple `tasks`.
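
Since each top-level object already embeds its `build`, walking the output needs no
join step. A minimal Python sketch under that assumption (the field names follow the
condensed examples in this section; the values themselves are made up):

```python
import json

# Condensed, illustrative output: each top-level object is a task which
# embeds a copy of its parent build -- the reverse of the real data model.
output = json.loads("""
[
  {"name": "cirrus-ci/test_success",
   "build": {"branch": "pull/34", "pullRequest": 34}},
  {"name": "smoke",
   "build": {"branch": "pull/34", "pullRequest": 34}}
]
""")

# Because every task carries its build, the context is one lookup away.
for task in output:
    ctx = task["build"]
    print(f'{task["name"]}: branch={ctx["branch"]} pr={ctx["pullRequest"]}')
```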
## After pushing to pull request number 34
```json
{
"data": {
"task": {
...cut...
"build": {
"changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e",
"branch": "pull/34",
"pullRequest": 34,
...cut...
}
...cut...
}
"id": "1234567890",
...cut...
"build": {
"id": "0987654321",
"changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e",
"branch": "pull/34",
"pullRequest": 34,
...cut...
}
...cut...
}
```
## After merging pull request 34 into master branch (merge commit added)
## Pull request 34's `trigger_type: manual` task (not yet triggered)
```json
{
"data": {
"task": {
...cut...
"build": {
"changeIdInRepo": "232bae5d8ffb6082393e7543e4e53f978152f98a",
"branch": "master",
"pullRequest": null,
...cut...
}
...cut...
}
"id": "something",
...cut...
"status": "PAUSED",
"automaticReRun": false,
"build": {
"id": "otherthing",
"changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e",
"branch": "pull/34",
"pullRequest": 34
}
...cut...
}
```
*Important note about manual tasks:* Manually triggering an independent task
***will not*** result in a new `check_suite`. Therefore, the cirrus-ci_retrospective
action will not execute again, irrespective of pass, fail or any other manual task status.
Also, if any task in Cirrus-CI is dependent on a manual task, the build itself will not
conclude until the manual task is triggered and completes (pass, fail, or other).
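
A hedged sketch of spotting such untriggered manual tasks in the output array
(only `status` and `automaticReRun` come from the condensed example above; the
task names are invented):

```python
import json

# Illustrative output containing one untriggered manual task (status PAUSED).
tasks = json.loads("""
[
  {"name": "int podman", "status": "COMPLETED",
   "build": {"branch": "pull/34", "pullRequest": 34}},
  {"name": "release", "status": "PAUSED", "automaticReRun": false,
   "build": {"branch": "pull/34", "pullRequest": 34}}
]
""")

# Tasks still waiting on a manual trigger; while any exist, the build
# (and anything depending on them) has not concluded.
paused = [t["name"] for t in tasks if t.get("status") == "PAUSED"]
print(paused)  # ['release']
```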
## After merging pull request 34 into main branch (merge commit added)
```json
{
...cut...
"build": {
"id": "foobarbaz",
"changeIdInRepo": "232bae5d8ffb6082393e7543e4e53f978152f98a",
"branch": "main",
"pullRequest": null,
...cut...
}
...cut...
}
```
@ -115,20 +148,16 @@ determine the execution context for the build. For example (condensed):
```json
{
"data": {
"task": {
...cut...
"build": {
...cut...
"changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e",
"branch": "v2.2.0",
"pullRequest": null,
...cut...
}
}
...cut...
}
"id": "1234567890",
...cut...
"build": {
...cut...
"changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e",
"branch": "v2.2.0",
"pullRequest": null,
...cut...
}
...cut...
}
```
@ -140,6 +169,6 @@ Given a "conclusion" task name in Cirrus-CI (e.g. `cirrus-ci/test_success`):
`'.[] | select(.name == "cirrus-ci/test_success") | .build.pullRequest'`
* Obtain the HEAD commit ID used by Cirrus-CI for the build (always available)
'.[] | select(.name == "cirrus-ci/test_success") | .build.changeIdInRepo'
`'.[] | select(.name == "cirrus-ci/test_success") | .build.changeIdInRepo'`
* ...todo: add more
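
Should a workflow step prefer Python over `jq`, the same filters translate
directly (assuming the array-of-tasks layout described earlier; the build data
here is illustrative):

```python
import json

# Same shape the jq examples assume: a list of task objects embedding "build".
output = json.loads("""
[
  {"name": "cirrus-ci/test_success",
   "build": {"pullRequest": 34,
             "changeIdInRepo": "679085b3f2b40797fedb60d02066b3cbc592ae4e"}}
]
""")

def conclusion(name):
    # Equivalent of jq's '.[] | select(.name == ...)': first match or None.
    return next((t for t in output if t["name"] == name), None)

task = conclusion("cirrus-ci/test_success")
print(task["build"]["pullRequest"])     # 34
print(task["build"]["changeIdInRepo"])  # HEAD commit used by Cirrus-CI
```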


@ -113,10 +113,11 @@ do
"$CCI_URL" \
"{
task(id: $task_id) {
id
name
status
automaticReRun
build {changeIdInRepo branch pullRequest status repository {
build {id changeIdInRepo branch pullRequest status repository {
owner name cloneUrl masterBranch
}
}
@ -128,10 +129,6 @@ do
done
dbg "# Combining all task data into JSON list as action output and into $OUTPUT_JSON_FILE"
# Github Actions handles this prefix specially: Ensure stdout JSON is all on one line.
# N/B: It is not presently possible to actually _use_ this output value as JSON in
# a github actions workflow.
printf "::set-output name=json::'%s'" \
set_out_var json \
$(jq --indent 4 --slurp '.' $TMPDIR/.*$INTERMEDIATE_OUTPUT_EXT | \
tee "$OUTPUT_JSON_FILE" | \
jq --compact-output '.')
tee "$OUTPUT_JSON_FILE" | jq --compact-output '.')


@ -0,0 +1,10 @@
# This library simply sources the necessary common libraries.
# Not intended for direct execution
AUTOMATION_LIB_PATH="${AUTOMATION_LIB_PATH:-$(dirname $(realpath ${BASH_SOURCE[0]}))/../../common/lib}"
GITHUB_ACTION_LIB="${GITHUB_ACTION_LIB:-$AUTOMATION_LIB_PATH/github_common.sh}"
# Allow in-place use w/o installing, e.g. for testing
[[ -r "$GITHUB_ACTION_LIB" ]] || \
GITHUB_ACTION_LIB="$AUTOMATION_LIB_PATH/../../github/lib/github_common.sh"
# Also loads common lib
source "$GITHUB_ACTION_LIB"


@ -2,7 +2,7 @@
# Library of constants and functions for the cirrus-ci_retrospective script
# Not intended to be executed directly.
source $(dirname "${BASH_SOURCE[0]}")/common.sh
source $(dirname "${BASH_SOURCE[0]}")/ccir_common.sh
# GH GraphQL General Reference: https://developer.github.com/v4/object/
# GH CheckSuite Object Reference: https://developer.github.com/v4/object/checksuite
@ -64,7 +64,7 @@ curl_post() {
die "Expecting non-empty data argument"
[[ -n "$token" ]] || \
dbg "### Warning: \$GITHUB_TOKEN is empty, performing unauthenticated query" > /dev/stderr
dbg "### Warning: \$GITHUB_TOKEN is empty, performing unauthenticated query" >> /dev/stderr
# Don't expose secrets on any command-line
local headers_tmpf
local headers_tmpf=$(tmpfile headers)
@ -74,14 +74,14 @@ content-type: application/json
${token:+authorization: Bearer $token}
EOF
# Avoid needing to pass large strings on te command-line
# Avoid needing to pass large strings on the command-line
local data_tmpf=$(tmpfile data)
echo "$data" > "$data_tmpf"
local curl_cmd="$CURL --silent --request POST --url $url --header @$headers_tmpf --data @$data_tmpf"
dbg "### Executing '$curl_cmd'"
local ret="0"
$curl_cmd > /dev/stdout || ret=$?
$curl_cmd >> /dev/stdout || ret=$?
# Don't leave secrets lying around in files
rm -f "$headers_tmpf" "$data_tmpf" &> /dev/null
@ -99,9 +99,9 @@ filter_json() {
dbg "### Validating JSON in '$json_file'"
# Confirm input json is valid and make filter problems easier to debug (below)
local tmp_json_file=$(tmpfile json)
if ! jq . < "$json_file" > "$tmp_json_file"; then
if ! jq -e . < "$json_file" > "$tmp_json_file"; then
rm -f "$tmp_json_file"
# JQ has alrady shown an error message
# JQ has already shown an error message
die "Error from jq relating to JSON: $(cat $json_file)"
else
dbg "### JSON found to be valid"
@ -111,7 +111,7 @@ filter_json() {
dbg "### Applying filter '$filter'"
if ! jq --indent 4 "$filter" < "$json_file" > "$tmp_json_file"; then
# JQ has alrady shown an error message
# JQ has already shown an error message
rm -f "$tmp_json_file"
die "Error from jq relating to JSON: $(cat $json_file)"
fi
@ -147,11 +147,6 @@ url_query_filter_test() {
[[ "$ret" -eq "0" ]] || \
die "Curl command exited with non-zero code: $ret"
if grep -q "error" "$curl_outputf"; then
# Barely passable attempt to catch GraphQL query errors
die "Found the word 'error' in curl output: $(cat $curl_outputf)"
fi
# Validates both JSON and filter, updates $curl_outputf
filter_json "$filter" "$curl_outputf"
if [[ -n "$test_args" ]]; then


@ -1,12 +0,0 @@
# This library simply sources the necessary common libraries.
# Not intended for direct execution
AUTOMATION_LIB_PATH="${AUTOMATION_LIB_PATH:-$(dirname ${BASH_SOURCE[0]})/../../common/lib}"
# Magic prefixes that receive special treatment by Github Actions
# Ref: https://help.github.com/en/actions/reference/workflow-commands-for-github-actions
DEBUG_MSG_PREFIX="${DEBUG_MSG_PREFIX:-::debug::}"
WARNING_MSG_PREFIX="${WARNING_MSG_PREFIX:-::warning::}"
ERROR_MSG_PREFIX="${ERROR_MSG_PREFIX:-::error::}"
source "$AUTOMATION_LIB_PATH/defaults.sh"
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"


@ -6,14 +6,14 @@ source $(dirname "${BASH_SOURCE[0]}")/testlib.sh || exit 1
# Must go through the top-level install script that chains to ../.install.sh
INSTALL_SCRIPT=$(realpath "$TEST_DIR/../../bin/install_automation.sh")
TEMPDIR=$(mktemp -p "" -d "tmpdir_cirrus-ci_retrospective_XXXXX")
trap "rm -rf $TEMPDIR" EXIT
test_cmd "Verify cirrus-ci_retrospective can be installed under $TEMPDIR" \
0 'Installation complete for.+installed cirrus-ci_retrospective' \
env INSTALL_PREFIX=$TEMPDIR $INSTALL_SCRIPT 0.0.0 cirrus-ci_retrospective
env INSTALL_PREFIX=$TEMPDIR $INSTALL_SCRIPT 0.0.0 github cirrus-ci_retrospective
test_cmd "Verify executing cirrus-ci_retrospective.sh gives 'Expecting' error message" \
2 '::error::.+Expecting' \
2 '::error.+Expecting' \
env AUTOMATION_LIB_PATH=$TEMPDIR/automation/lib $TEMPDIR/automation/bin/cirrus-ci_retrospective.sh
trap "rm -rf $TEMPDIR" EXIT
exit_with_status


@ -45,7 +45,7 @@ for required_var in ${req_env_vars[@]}; do
export $required_var="$invalid_value"
test_cmd \
"Verify execution w/ \$$required_var='$invalid_value' (instead of '$valid_value') fails with helpful error message." \
2 "::error::.+\\\$$required_var.+'$invalid_value'" \
2 "::error.+\\\$$required_var.+'$invalid_value'" \
$SUBJ_FILEPATH
export $required_var="$valid_value"
done
@ -61,21 +61,21 @@ EOF
export GITHUB_EVENT_PATH=$MOCK_EVENT_JSON_FILEPATH
test_cmd "Verify expected error when fed empty mock event JSON file" \
1 "::error::.+check_suite.+key" \
1 "::error.+check_suite.+key" \
$SUBJ_FILEPATH
cat << EOF > "$MOCK_EVENT_JSON_FILEPATH"
{"check_suite":{}}
EOF
test_cmd "Verify expected error when fed invalid check_suite value in mock event JSON file" \
1 "::error::.+check_suite.+type.+null" \
1 "::error.+check_suite.+type.+null" \
$SUBJ_FILEPATH
cat << EOF > "$MOCK_EVENT_JSON_FILEPATH"
{"check_suite": {}, "action": "foobar"}
EOF
test_cmd "Verify error and message containing incorrect value from mock event JSON file" \
1 "::error::.+check_suite.+foobar" \
1 "::error.+check_suite.+foobar" \
$SUBJ_FILEPATH
cat << EOF > "$MOCK_EVENT_JSON_FILEPATH"
@ -89,7 +89,7 @@ cat << EOF > "$MOCK_EVENT_JSON_FILEPATH"
{"check_suite": {"app":{"id":null}}, "action": "completed"}
EOF
test_cmd "Verify expected error when 'app' id is wrong type in mock event JSON file" \
1 "::error::.+integer.+null" \
1 "::error.+integer.+null" \
$SUBJ_FILEPATH
# Must always happen last


@ -12,16 +12,6 @@ if [[ -d "$_TMPDIR" ]]; then
trap "rm -rf $_TMPDIR" EXIT # The REAL directory to remove
fi
copy_function() {
test -n "$(declare -f "$1")" || return
eval "${_/$1/$2}"
}
rename_function() {
copy_function "$@" || return
unset -f "$1"
}
# There are many paths to die(), some specific paths need to be tested
SPECIAL_DEATH_CODE=101
rename_function die _die
@ -119,7 +109,7 @@ test_cmd \
'^4 $' \
cat "$TEST_JSON_FILE"
# Makes checking temp-files writen by curl_post() easier
# Makes checking temp-files written by curl_post() easier
TMPDIR=$(mktemp -d -p "$_TMPDIR" "tmpdir_curl_XXXXX")
# Set up a mock for argument checking
_CURL="$CURL"

1068
cirrus-task-map/cirrus-task-map Executable file

File diff suppressed because it is too large


@ -0,0 +1,550 @@
#!/usr/bin/perl
use v5.14;
use Test::More;
use Test::Differences;
use FindBin;
# Read tests
my @tests;
my $context = '';
while (my $line = <DATA>) {
if ($line =~ /^<{10,}\s+(.*)$/) {
$context = 'yml';
push @tests, { name => $1, yml => "---\n", expect => '' };
}
elsif ($line =~ /^>{10,}$/) {
$context = 'expect';
}
elsif ($line =~ /\S/) {
$tests[-1]{$context} .= $line;
}
}
plan tests => 1 + @tests;
require_ok "$FindBin::Bin/../cirrus-task-map";
for my $t (@tests) {
my $tasklist = TaskList->new($t->{yml});
my $gv = $tasklist->graphviz( 'a' .. 'z' );
# Strip off the common stuff from start/end
my @gv = grep { /^\s+\"/ } split "\n", $gv;
my @expect = split "\n", $t->{expect};
eq_or_diff \@gv, \@expect, $t->{name};
}
__END__
<<<<<<<<<<<<<<<<<< simple setup: one task, no deps
just_one_task:
name: "One Task"
>>>>>>>>>>>>>>>>>>
"just_one" [shape=ellipse style=bold color=z fontcolor=z]
<<<<<<<<<<<<<<<<<< two tasks, b depends on a
a_task:
alias: "a_alias"
b_task:
alias: "b_alias"
depends_on:
- "a"
>>>>>>>>>>>>>>>>>>
"a" [shape=ellipse style=bold color=a fontcolor=a]
"a" -> "b" [color=a]
"b" [shape=ellipse style=bold color=z fontcolor=z]
<<<<<<<<<<<<<<<<<< four tasks, two in the middle, with aliases
real_name_of_initial_task:
alias: "initial"
middle_1_task:
depends_on:
- "initial"
middle_2_task:
depends_on:
- "initial"
end_task:
depends_on:
- "initial"
- "middle_1"
- "middle_2"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
"real_name_of_initial" [shape=ellipse style=bold color=a fontcolor=a]
"real_name_of_initial" -> "end" [color=a]
"end" [shape=ellipse style=bold color=z fontcolor=z]
"real_name_of_initial" -> "middle_1" [color=a]
"middle_1" [shape=ellipse style=bold color=b fontcolor=b]
"middle_1" -> "end" [color=b]
"real_name_of_initial" -> "middle_2" [color=a]
"middle_2" [shape=ellipse style=bold color=c fontcolor=c]
"middle_2" -> "end" [color=c]
<<<<<<<<<<<<<<<<<< env interpolation 1
env:
NAME: "top-level name"
a_task:
name: "$NAME"
matrix:
- env:
NAME: "name1"
- env:
NAME: "name2"
env:
NAME: "this should never be interpolated"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
"a" [shape=record style=bold color=z fontcolor=z label="a\l|- name1\l- name2\l"]
<<<<<<<<<<<<<<<<<< real-world test: cevich "performant" branch
# Main collection of env. vars to set for all tasks and scripts.
env:
####
#### Global variables used for all tasks
####
# Name of the ultimate destination branch for this CI run, PR or post-merge.
DEST_BRANCH: "master"
# Overrides default location (/tmp/cirrus) for repo clone
GOPATH: &gopath "/var/tmp/go"
GOBIN: "${GOPATH}/bin"
GOCACHE: "${GOPATH}/cache"
GOSRC: &gosrc "/var/tmp/go/src/github.com/containers/podman"
CIRRUS_WORKING_DIR: *gosrc
# The default is 'sh' if unspecified
CIRRUS_SHELL: "/bin/bash"
# Save a little typing (path relative to $CIRRUS_WORKING_DIR)
SCRIPT_BASE: "./contrib/cirrus"
####
#### Cache-image names to test with (double-quotes around names are critical)
####
FEDORA_NAME: "fedora-32"
PRIOR_FEDORA_NAME: "fedora-31"
UBUNTU_NAME: "ubuntu-20"
PRIOR_UBUNTU_NAME: "ubuntu-19"
# Google-cloud VM Images
IMAGE_SUFFIX: "c5363056714711040"
FEDORA_CACHE_IMAGE_NAME: "fedora-${IMAGE_SUFFIX}"
PRIOR_FEDORA_CACHE_IMAGE_NAME: "prior-fedora-${IMAGE_SUFFIX}"
UBUNTU_CACHE_IMAGE_NAME: "ubuntu-${IMAGE_SUFFIX}"
PRIOR_UBUNTU_CACHE_IMAGE_NAME: "prior-ubuntu-${IMAGE_SUFFIX}"
# Container FQIN's
FEDORA_CONTAINER_FQIN: "quay.io/libpod/fedora_podman:${IMAGE_SUFFIX}"
PRIOR-FEDORA_CONTAINER_FQIN: "quay.io/libpod/prior-fedora_podman:${IMAGE_SUFFIX}"
UBUNTU_CONTAINER_FQIN: "quay.io/libpod/ubuntu_podman:${IMAGE_SUFFIX}"
PRIOR-UBUNTU_CONTAINER_FQIN: "quay.io/libpod/prior-ubuntu_podman:${IMAGE_SUFFIX}"
####
#### Control variables that determine what to run and how to run it.
#### (Defaults to running inside Fedora community-cluster container)
TEST_FLAVOR: # int, sys, ext_svc, smoke, automation, etc.
TEST_ENVIRON: host # host or container
PODBIN_NAME: podman # podman or remote
PRIV_NAME: root # root or rootless
DISTRO_NV: $FEDORA_NAME # any {PRIOR_,}{FEDORA,UBUNTU}_NAME value
# Default timeout for each task
timeout_in: 60m
gcp_credentials: ENCRYPTED[a28959877b2c9c36f151781b0a05407218cda646c7d047fc556e42f55e097e897ab63ee78369dae141dcf0b46a9d0cdd]
# Attempt to prevent flakes by confirming all required external/3rd-party
# services are available and functional.
ext_svc_check_task:
alias: 'ext_svc_check' # int. ref. name - required for depends_on reference
name: "Ext. services" # Displayed Title - has no other significance
env:
TEST_FLAVOR: ext_svc
script: &setup_and_run
- 'cd $GOSRC/$SCRIPT_BASE || exit 1'
- './setup_environment.sh'
- './runner.sh'
# Default/small container image to execute tasks with
container: &smallcontainer
image: ${CTR_FQIN}
# Resources are limited across ALL currently executing tasks
# ref: https://cirrus-ci.org/guide/linux/#linux-containers
cpu: 2
memory: 2
env:
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
automation_task:
alias: 'automation'
name: "Check Automation"
container: *smallcontainer
env:
TEST_FLAVOR: automation
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
script: *setup_and_run
smoke_task:
# This task used to be called 'gating', however that name is being
# used downstream for release testing. Renamed this to avoid confusion.
alias: 'smoke'
name: "Smoke Test"
container: &bigcontainer
image: ${CTR_FQIN}
# Leave some resources for smallcontainer
cpu: 6
memory: 22
env:
TEST_FLAVOR: 'smoke'
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
clone_script: &full_clone |
cd /
rm -rf $CIRRUS_WORKING_DIR
mkdir -p $CIRRUS_WORKING_DIR
git clone --recursive --branch=$DEST_BRANCH https://x-access-token:${CIRRUS_REPO_CLONE_TOKEN}@github.com/${CIRRUS_REPO_FULL_NAME}.git $CIRRUS_WORKING_DIR
cd $CIRRUS_WORKING_DIR
git remote update origin
if [[ -n "$CIRRUS_PR" ]]; then # running for a PR
git fetch origin pull/$CIRRUS_PR/head:pull/$CIRRUS_PR
git checkout pull/$CIRRUS_PR
else
git reset --hard $CIRRUS_CHANGE_IN_REPO
fi
cd $CIRRUS_WORKING_DIR
make install.tools
script: *setup_and_run
build_task:
alias: 'build'
name: 'Build for $DISTRO_NV'
depends_on:
- ext_svc_check
- smoke
- automation
container: *smallcontainer
matrix: &platform_axis
# Ref: https://cirrus-ci.org/guide/writing-tasks/#matrix-modification
- env: &stdenvars
DISTRO_NV: ${FEDORA_NAME}
# Not used here, is used in other tasks
VM_IMAGE_NAME: ${FEDORA_CACHE_IMAGE_NAME}
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
# ID for re-use of build output
_BUILD_CACHE_HANDLE: ${FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
- env:
DISTRO_NV: ${PRIOR_FEDORA_NAME}
VM_IMAGE_NAME: ${PRIOR_FEDORA_CACHE_IMAGE_NAME}
CTR_FQIN: ${PRIOR-FEDORA_CONTAINER_FQIN}
_BUILD_CACHE_HANDLE: ${PRIOR_FEDORA_NAME}-build-${CIRRUS_BUILD_ID}
- env:
DISTRO_NV: ${UBUNTU_NAME}
VM_IMAGE_NAME: ${UBUNTU_CACHE_IMAGE_NAME}
CTR_FQIN: ${UBUNTU_CONTAINER_FQIN}
_BUILD_CACHE_HANDLE: ${UBUNTU_NAME}-build-${CIRRUS_BUILD_ID}
- env:
DISTRO_NV: ${PRIOR_UBUNTU_NAME}
VM_IMAGE_NAME: ${PRIOR_UBUNTU_CACHE_IMAGE_NAME}
CTR_FQIN: ${PRIOR-UBUNTU_CONTAINER_FQIN}
_BUILD_CACHE_HANDLE: ${PRIOR_UBUNTU_NAME}-build-${CIRRUS_BUILD_ID}
env:
TEST_FLAVOR: build
# Seed $GOCACHE from any previous instances of this task
gopath_cache: &gopath_cache # 'gopath_cache' is the displayed name
# Ref: https://cirrus-ci.org/guide/writing-tasks/#cache-instruction
folder: *gopath # Required hard-coded path, no variables.
fingerprint_script: echo "$_BUILD_CACHE_HANDLE"
# Cheat: Clone here when cache is empty, guaranteeing consistency.
populate_script: *full_clone
# A normal clone would invalidate useful cache
clone_script: &noop mkdir -p $CIRRUS_WORKING_DIR
script: *setup_and_run
always:
artifacts: &all_gosrc
path: ./* # Grab everything in top-level $GOSRC
type: application/octet-stream
validate_task:
name: "Validate $DISTRO_NV Build"
alias: validate
depends_on:
- build
container: *bigcontainer
env:
<<: *stdenvars
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
TEST_FLAVOR: validate
gopath_cache: &ro_gopath_cache
<<: *gopath_cache
reupload_on_changes: false
clone_script: *noop
script: *setup_and_run
always:
artifacts: *all_gosrc
bindings_task:
name: "Test Bindings"
alias: bindings
depends_on:
- build
gce_instance: &standardvm
image_project: libpod-218412
zone: "us-central1-a"
cpu: 2
memory: "4Gb"
# Required to be 200gig, do not modify - has i/o performance impact
# according to gcloud CLI tool warning messages.
disk: 200
image_name: "${VM_IMAGE_NAME}" # from stdenvars
env:
<<: *stdenvars
TEST_FLAVOR: bindings
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
script: *setup_and_run
always:
artifacts: *all_gosrc
swagger_task:
name: "Test Swagger"
alias: swagger
depends_on:
- build
container: *smallcontainer
env:
<<: *stdenvars
TEST_FLAVOR: swagger
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
script: *setup_and_run
always:
artifacts: *all_gosrc
endpoint_task:
name: "Test Endpoint"
alias: endpoint
depends_on:
- build
container: *smallcontainer
env:
<<: *stdenvars
TEST_FLAVOR: endpoint
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
gopath_cache: *ro_gopath_cache
clone_script: *noop # Comes from cache
script: *setup_and_run
always:
artifacts: *all_gosrc
vendor_task:
name: "Test Vendoring"
alias: vendor
depends_on:
- build
container: *smallcontainer
env:
<<: *stdenvars
TEST_FLAVOR: vendor
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
gopath_cache: *ro_gopath_cache
clone_script: *full_clone
script: *setup_and_run
always:
artifacts: *all_gosrc
# Confirm alternate/cross builds succeed
alt_build_task:
name: "$ALT_NAME"
alias: alt_build
depends_on:
- build
env:
<<: *stdenvars
TEST_FLAVOR: "altbuild"
matrix:
- env:
ALT_NAME: 'Build Each Commit'
gce_instance: *standardvm
- env:
ALT_NAME: 'Windows Cross'
gce_instance: *standardvm
- env:
ALT_NAME: 'Build Without CGO'
gce_instance: *standardvm
- env:
ALT_NAME: 'Build varlink API'
gce_instance: *standardvm
- env:
ALT_NAME: 'Static build'
timeout_in: 120m
gce_instance:
<<: *standardvm
cpu: 4
memory: "8Gb"
nix_cache:
folder: '/var/cache/nix'
populate_script: >-
mkdir -p /var/cache/nix &&
podman run -i -v /var/cache/nix:/mnt/nix:Z \
nixos/nix cp -rfT /nix /mnt/nix
fingerprint_script: cat nix/nixpkgs.json
- env:
ALT_NAME: 'Test build RPM'
gce_instance: *standardvm
script: *setup_and_run
always:
artifacts: *all_gosrc
# N/B: This is running on a Mac OS-X VM
osx_cross_task:
name: "OSX Cross"
alias: osx_cross
depends_on:
- build
env:
<<: *stdenvars
# Some future release-processing will benefit from standardized details
TEST_FLAVOR: "altbuild" # Platform variation prevents alt_build_task inclusion
ALT_NAME: 'OSX Cross'
osx_instance:
image: 'catalina-base'
script:
- brew install go
- brew install go-md2man
- make podman-remote-darwin
- make install-podman-remote-darwin-docs
always:
artifacts: *all_gosrc
task:
name: Docker-py Compat.
alias: docker-py_test
depends_on:
- build
container: *smallcontainer
env:
<<: *stdenvars
TEST_FLAVOR: docker-py
gopath_cache: *ro_gopath_cache
clone_script: *full_clone
script: *setup_and_run
always:
artifacts: *all_gosrc
unit_test_task:
name: "Unit tests on $DISTRO_NV"
alias: unit_test
depends_on:
- build
matrix: *platform_axis
gce_instance: *standardvm
env:
TEST_FLAVOR: unit
clone_script: *noop # Comes from cache
gopath_cache: *ro_gopath_cache
script: *setup_and_run
always:
artifacts: *all_gosrc
# Status aggregator for pass/fail from dependents
success_task:
name: "Total Success"
alias: success
# N/B: ALL tasks must be listed here, minus their '_task' suffix.
depends_on:
- ext_svc_check
- automation
- smoke
- build
- validate
- bindings
- endpoint
- swagger
- vendor
- alt_build
- osx_cross
- docker-py_test
- unit_test
# - integration_test
# - userns_integration_test
# - container_integration_test
# - system_test
# - userns_system_test
# - meta
container: *smallcontainer
env:
CTR_FQIN: ${FEDORA_CONTAINER_FQIN}
clone_script: *noop
script: /bin/true
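The `success_task` above is Cirrus-CI's standard status-aggregation pattern: one cheap no-op task depends on every other task, so branch protection only needs a single required check. A minimal sketch of the same pattern (task names, scripts, and the image here are illustrative, not from this repo):

```yaml
# Hypothetical minimal .cirrus.yml demonstrating the aggregator pattern
lint_task:
  script: make lint

test_task:
  script: make test

# Passes only when every dependency passed; /bin/true does no real work
success_task:
  depends_on:
    - lint
    - test
  container:
    image: alpine:latest
  script: /bin/true
```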
"automation" [shape=ellipse style=bold color=a fontcolor=a]
"automation" -> "build" [color=a]
"build" [shape=record style=bold color="#0000f0" fillcolor="#f0f0f0" style=filled fontcolor="#0000f0" label="build\l|- Build for fedora-32\l- Build for fedora-31\l- Build for ubuntu-20\l- Build for ubuntu-19\l"]
"build" -> "alt_build" [color="#0000f0"]
"alt_build" [shape=record style=bold color="#0000f0" fillcolor="#f0f0f0" style=filled fontcolor="#0000f0" label="alt build\l|- Build Each Commit\l- Windows Cross\l- Build Without CGO\l- Build varlink API\l- Static build\l- Test build RPM\l"]
"alt_build" -> "success" [color="#0000f0"]
"success" [shape=ellipse style=bold color="#000000" fillcolor="#00f000" style=filled fontcolor="#000000"]
"build" -> "bindings" [color="#0000f0"]
"bindings" [shape=ellipse style=bold color=b fontcolor=b]
"bindings" -> "success" [color=b]
"build" -> "docker-py_test" [color="#0000f0"]
"docker-py_test" [shape=ellipse style=bold color=c fontcolor=c]
"docker-py_test" -> "success" [color=c]
"build" -> "endpoint" [color="#0000f0"]
"endpoint" [shape=ellipse style=bold color=d fontcolor=d]
"endpoint" -> "success" [color=d]
"build" -> "osx_cross" [color="#0000f0"]
"osx_cross" [shape=ellipse style=bold color=e fontcolor=e]
"osx_cross" -> "success" [color=e]
"build" -> "success" [color="#0000f0"]
"build" -> "swagger" [color="#0000f0"]
"swagger" [shape=ellipse style=bold color=f fontcolor=f]
"swagger" -> "success" [color=f]
"build" -> "unit_test" [color="#0000f0"]
"unit_test" [shape=record style=bold color="#000000" fillcolor="#f09090" style=filled fontcolor="#000000" label="unit test\l|- Unit tests on fedora-32\l- Unit tests on fedora-31\l- Unit tests on ubuntu-20\l- Unit tests on ubuntu-19\l"]
"unit_test" -> "success" [color="#f09090"]
"build" -> "validate" [color="#0000f0"]
"validate" [shape=record style=bold color="#00c000" fillcolor="#f0f0f0" style=filled fontcolor="#00c000" label="validate\l|= Validate fedora-32 Build\l"]
"validate" -> "success" [color="#00c000"]
"build" -> "vendor" [color="#0000f0"]
"vendor" [shape=ellipse style=bold color=g fontcolor=g]
"vendor" -> "success" [color=g]
"automation" -> "success" [color=a]
"ext_svc_check" [shape=ellipse style=bold color=h fontcolor=h]
"ext_svc_check" -> "build" [color=h]
"ext_svc_check" -> "success" [color=h]
"smoke" [shape=ellipse style=bold color=i fontcolor=i]
"smoke" -> "build" [color=i]
"smoke" -> "success" [color=i]


@ -0,0 +1,10 @@
#!/bin/bash
set -e
testdir=$(dirname $0)
for i in $testdir/*.t;do
echo -e "\nExecuting $i..." >&2
$i
done


@ -1 +0,0 @@
../../bin/install_automation.sh


@ -10,7 +10,7 @@ set -eo pipefail
SCRIPT_BASEDIR="$(basename $0)"
badusage() {
echo "Incorrect usage: $SCRIPT_BASEDIR) <command> [options]" > /dev/stderr
echo "Incorrect usage: $SCRIPT_BASEDIR) <command> [options]" >> /dev/stderr
echo "ERROR: $1"
exit 121
}

common/bin/xrtry.sh Executable file

@ -0,0 +1,68 @@
#!/bin/bash
set -eo pipefail
# This script is intended to wrap commands which occasionally fail due
# to external factors like networking hiccups, service failover, load-balancing,
# etc. It is not designed to handle operational failures gracefully, such as
# bad (wrapped) command-line arguments, running out of local disk-space,
# authZ/authN, etc.
# Assume script was installed or is running in dir struct. matching repo layout.
AUTOMATION_LIB_PATH="${AUTOMATION_LIB_PATH:-$(dirname ${BASH_SOURCE[0]})/../lib}"
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"
usage(){
local errmsg="$1" # optional
dbg "Showing usage with errmsg='$errmsg'"
msg "
Usage: $SCRIPT_FILENAME [[attempts] [[sleep] [exit...]]] <--> <command> [arg...]
Arguments:
attempts Total number of times to attempt <command>. Default is 3.
sleep Milliseconds to sleep between retry attempts, doubling
duration each failure except the last. Must also specify
[attempts]. Default is 1 second
exit... One or more exit code values to consider as failure.
Must also specify [attempts] and [sleep]. Default is any
non-zero exit. N/B: Multiple values must be quoted!
-- Required separator between any / no options, and command
command Path to command to execute, cannot use a shell builtin.
arg... Options and/or arguments to pass to command.
"
[[ -z "$errmsg" ]] || \
die "$errmsg" # exits non-zero
}
attempts=3
sleep_ms=1000
declare -a exit_codes
declare -a args=("$@")
n=0
for arg in attempts sleep_ms exit_codes; do
if [[ "${args[n]}" == "--" ]]; then
break
fi
declare "$arg=${args[n]}"
shift
n=$((n+1))
done
shift # discard the '--' separator
((attempts>0)) || \
usage "The number of retry attempts must be greater than 0, not '$attempts'"
((sleep_ms>10)) || \
usage "The number of milliseconds must be greater than 10, not '$sleep_ms'"
for exit_code in "${exit_codes[@]}"; do
if ((exit_code<0)) || ((exit_code>254)); then
usage "Every exit code must be between 0-254, not '$exit_code'"
fi
done
(($#)) || \
usage "Must specify a command to execute"
err_retry "$attempts" "$sleep_ms" "${exit_codes[@]}" "$@"


@ -9,15 +9,26 @@ SCRIPT_PATH=$(realpath "$(dirname $0)") # Source script's directory
SCRIPT_FILENAME=$(basename $0) # Source script's file
MKTEMP_FORMAT=".tmp_${SCRIPT_FILENAME}_XXXXXXXX" # Helps reference source
_avcache="$AUTOMATION_VERSION" # cache, DO NOT USE (except for unit-tests)
automation_version() {
local git_cmd="git describe HEAD"
cd "$AUTOMATION_ROOT"
if [[ -r "AUTOMATION_VERSION" ]]; then
cat "AUTOMATION_VERSION"
elif [[ -n "$(type -P git)" ]] && $git_cmd &> /dev/null; then
$git_cmd
local gitbin="$(type -P git)"
if [[ -z "$_avcache" ]]; then
if [[ -r "$AUTOMATION_ROOT/AUTOMATION_VERSION" ]]; then
_avcache=$(<"$AUTOMATION_ROOT/AUTOMATION_VERSION")
# The various installers and some unit-tests rely on git in this way
elif [[ -x "$gitbin" ]] && [[ -d "$AUTOMATION_ROOT/../.git" ]]; then
local gitoutput
# Avoid dealing with $CWD during error conditions - do it in a sub-shell
if gitoutput=$(cd "$AUTOMATION_ROOT"; $gitbin describe HEAD; exit $?); then
_avcache=$gitoutput
fi
fi
fi
if [[ -n "$_avcache" ]]; then
echo "$_avcache"
else
echo "Error determining version number" > /dev/stderr
echo "Error determining version number" >> /dev/stderr
exit 1
fi
}

common/lib/common_lib.sh Normal file

@ -0,0 +1,13 @@
#!/bin/bash
# This file is intended to be sourced as a short-cut to loading
# all common libraries one-by-one.
AUTOMATION_LIB_PATH="${AUTOMATION_LIB_PATH:-$(dirname ${BASH_SOURCE[0]})}"
# Filename list must be hard-coded
# When installed, other files may be present in lib directory
COMMON_LIBS="anchors.sh defaults.sh platform.sh utils.sh console_output.sh"
for filename in $COMMON_LIBS; do
source $(dirname "${BASH_SOURCE[0]}")/$filename
done


@ -3,33 +3,44 @@
# A Library of contextual console output-related operations.
# Intended for use by other scripts, not to be executed directly.
source $(dirname "${BASH_SOURCE[0]}")/defaults.sh
# shellcheck source=common/lib/defaults.sh
source $(dirname $(realpath "${BASH_SOURCE[0]}"))/defaults.sh
# helper, not intended for use outside this file
_rel_path() {
local abs_path=$(realpath "$1")
local rel_path=$(realpath --relative-to=$PWD $abs_path)
local abs_path_len=${#abs_path}
local rel_path_len=${#rel_path}
if ((abs_path_len <= rel_path_len)); then
echo "$abs_path"
if [[ -z "$1" ]]; then
echo "<stdin>"
else
echo "$rel_path"
local abs_path rel_path abs_path_len rel_path_len
abs_path=$(realpath "$1")
rel_path=$(realpath --relative-to=. $abs_path)
abs_path_len=${#abs_path}
rel_path_len=${#rel_path}
if ((abs_path_len <= rel_path_len)); then
echo "$abs_path"
else
echo "$rel_path"
fi
fi
}
# helper, not intended for use outside this file
_ctx() {
local shortest_source_path grandparent_func
# Caller's caller details
local shortest_source_path=$(_rel_path "${BASH_SOURCE[3]}")
shortest_source_path=$(_rel_path "${BASH_SOURCE[3]}")
grandparent_func="${FUNCNAME[2]}"
[[ -n "$grandparent_func" ]] || \
grandparent_func="main"
echo "$shortest_source_path:${BASH_LINENO[2]} in ${FUNCNAME[3]}()"
}
# helper, not intended for use outside this file.
_fmt_ctx() {
local stars="************************************************"
local prefix="${1:-no prefix given}"
local message="${2:-no message given}"
local stars prefix message
stars="************************************************"
prefix="${1:-no prefix given}"
message="${2:-no message given}"
echo "$stars"
echo "$prefix ($(_ctx))"
echo "$stars"
@ -37,22 +48,77 @@ _fmt_ctx() {
# Print a highly-visible message to stderr. Usage: warn <msg>
warn() {
_fmt_ctx "$WARNING_MSG_PREFIX ${1:-no warning message given}" > /dev/stderr
_fmt_ctx "$WARNING_MSG_PREFIX ${1:-no warning message given}" >> /dev/stderr
}
# Same as warn() but exit non-zero or with given exit code
# usage: die <msg> [exit-code]
die() {
_fmt_ctx "$ERROR_MSG_PREFIX ${1:-no error message given}" > /dev/stderr
exit ${2:-1}
_fmt_ctx "$ERROR_MSG_PREFIX ${1:-no error message given}" >> /dev/stderr
local exit_code=${2:-1}
((exit_code==0)) || \
exit $exit_code
}
dbg() {
if ((DEBUG)); then
local shortest_source_path=$(_rel_path "${BASH_SOURCE[1]}")
local shortest_source_path
if ((A_DEBUG)); then
shortest_source_path=$(_rel_path "${BASH_SOURCE[1]}")
(
echo
echo "$DEBUG_MSG_PREFIX ${1:-No debugging message given} ($shortest_source_path:${BASH_LINENO[0]} in ${FUNCNAME[1]}())"
) > /dev/stderr
) >> /dev/stderr
fi
}
msg() {
echo "${1:-No message specified}" &>> /dev/stderr
}
# Mimic set +x for a single command, along with calling location and line.
showrun() {
local -a context
# Tried using readarray, it broke tests for some reason, too lazy to investigate.
# shellcheck disable=SC2207
context=($(caller 0))
echo "+ $* # ${context[2]}:${context[0]} in ${context[1]}()" >> /dev/stderr
"$@"
}
# Expects stdin, indents every input line right by 4 spaces
indent(){
cat - |& while IFS='' read -r LINE; do
awk '{print " "$0}' <<<"$LINE"
done
}
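The indent() filter above pipes stdin through a per-line loop into awk; a simpler single-awk equivalent (a sketch, not the library's exact code) produces the same 4-space shift for ordinary text:

```shell
# Indent every stdin line by 4 spaces with one awk invocation
indent() {
    awk '{ print "    " $0 }'
}

printf 'first\nsecond\n' | indent
```

Chaining the filter indents a further 4 spaces each time, matching the library behavior exercised by its unit-tests.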
req_env_vars(){
dbg "Confirming non-empty vars for $*"
local var_name
local var_value
local msgpfx
for var_name in "$@"; do
var_value=$(tr -d '[:space:]' <<<"${!var_name}")
msgpfx="Environment variable '$var_name'"
((${#var_value}>0)) || \
die "$msgpfx is required by $(_rel_path "${BASH_SOURCE[1]}"):${FUNCNAME[1]}() but empty or entirely white-space."
done
}
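req_env_vars() above treats whitespace-only values the same as unset ones. A standalone sketch of that check (simplified: it returns an error instead of calling the library's die() with caller context):

```shell
# Fail when any named variable is unset, empty, or only whitespace
req_env_vars() {
    local name value
    for name in "$@"; do
        value=$(tr -d '[:space:]' <<<"${!name}")
        if ((${#value} == 0)); then
            echo "Environment variable '$name' is required but empty" >&2
            return 1
        fi
    done
}

export GOOD="value" BLANK="   "
req_env_vars GOOD && echo "GOOD passes"
req_env_vars GOOD BLANK || echo "BLANK rejected"
```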
show_env_vars() {
local filter_rx
local env_var_names
filter_rx='(^PATH$)|(^BASH_FUNC)|(^_.*)'
msg "Selection of current env. vars:"
if [[ -n "${SECRET_ENV_RE}" ]]; then
filter_rx="${filter_rx}|$SECRET_ENV_RE"
else
warn "The \$SECRET_ENV_RE var. unset/empty: Not filtering sensitive names!"
fi
for env_var_name in $(awk 'BEGIN{for(v in ENVIRON) print v}' | grep -Eiv "$filter_rx" | sort); do
line="${env_var_name}=${!env_var_name}"
msg " $line"
done
}


@ -7,9 +7,10 @@ CI="${CI:-false}" # true: _unlikely_ human-presence at the controls.
[[ $CI == "false" ]] || CI='true' # Err on the side of automation
# Default to NOT running in debug-mode unless set non-zero
DEBUG=${DEBUG:-0}
# Conditionals like ((DEBUG)) easier than checking "true"/"False"
( test "$DEBUG" -eq 0 || test "$DEBUG" -ne 0 ) &>/dev/null || DEBUG=1 # assume true when non-integer
A_DEBUG=${A_DEBUG:-0}
# Conditionals like ((A_DEBUG)) easier than checking "true"/"False"
( test "$A_DEBUG" -eq 0 || test "$A_DEBUG" -ne 0 ) &>/dev/null || \
A_DEBUG=1 # assume true when non-integer
# String prefixes to use when printing messages to the console
DEBUG_MSG_PREFIX="${DEBUG_MSG_PREFIX:-DEBUG:}"
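The paired `test` calls above are a compact integer-validity probe: both comparisons fail (errors suppressed) only when $A_DEBUG is not an integer, in which case it is coerced to 1. Exercised standalone:

```shell
# Non-integer A_DEBUG values are coerced to 1; integers pass through untouched
A_DEBUG="yes"
( test "$A_DEBUG" -eq 0 || test "$A_DEBUG" -ne 0 ) &>/dev/null || A_DEBUG=1
echo "A_DEBUG=$A_DEBUG"

A_DEBUG=0
( test "$A_DEBUG" -eq 0 || test "$A_DEBUG" -ne 0 ) &>/dev/null || A_DEBUG=1
echo "A_DEBUG=$A_DEBUG"
```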

common/lib/platform.sh Normal file

@ -0,0 +1,95 @@
# Library of os/platform related definitions and functions
# Not intended to be executed directly
OS_RELEASE_VER="${OS_RELEASE_VER:-$(source /etc/os-release; echo $VERSION_ID | tr -d '.')}"
OS_RELEASE_ID="${OS_RELEASE_ID:-$(source /etc/os-release; echo $ID)}"
OS_REL_VER="${OS_REL_VER:-$OS_RELEASE_ID-$OS_RELEASE_VER}"
# Ensure no user-input prompts in an automation context
export DEBIAN_FRONTEND="${DEBIAN_FRONTEND:-noninteractive}"
# _TEST_UID only needed for unit-testing
# shellcheck disable=SC2154
if ((UID)) || ((_TEST_UID)); then
SUDO="${SUDO:-sudo}"
if [[ "$OS_RELEASE_ID" =~ (ubuntu)|(debian) ]]; then
if [[ ! "$SUDO" =~ noninteractive ]]; then
SUDO="$SUDO env DEBIAN_FRONTEND=$DEBIAN_FRONTEND"
fi
fi
fi
# Regex defining all CI-related env. vars. necessary for all possible
# testing operations on all platforms and versions. This is necessary
# to avoid needlessly passing through global/system values across
# contexts, such as host->container or root->rootless user
#
# List of envariables which must be EXACT matches
PASSTHROUGH_ENV_EXACT="${PASSTHROUGH_ENV_EXACT:-DEST_BRANCH|IMAGE_SUFFIX|DISTRO_NV|SCRIPT_BASE}"
# List of envariable patterns which must match AT THE BEGINNING of the name.
PASSTHROUGH_ENV_ATSTART="${PASSTHROUGH_ENV_ATSTART:-CI|TEST}"
# List of envariable patterns which can match ANYWHERE in the name
PASSTHROUGH_ENV_ANYWHERE="${PASSTHROUGH_ENV_ANYWHERE:-_NAME|_FQIN}"
# List of expressions to exclude env. vars for security reasons
SECRET_ENV_RE="${SECRET_ENV_RE:-(^PATH$)|(^BASH_FUNC)|(^_.*)|(.*PASSWORD.*)|(.*TOKEN.*)|(.*SECRET.*)}"
# Return a list of environment variables that should be passed through
# to lower levels (tests in containers, or via ssh to rootless).
# We return the variable names only, not their values. It is up to our
# caller to reference values.
passthrough_envars() {
local passthrough_env_re="(^($PASSTHROUGH_ENV_EXACT)\$)|(^($PASSTHROUGH_ENV_ATSTART))|($PASSTHROUGH_ENV_ANYWHERE)"
local envar
for envar in SECRET_ENV_RE PASSTHROUGH_ENV_EXACT PASSTHROUGH_ENV_ATSTART PASSTHROUGH_ENV_ANYWHERE passthrough_env_re; do
if [[ -z "${!envar}" ]]; then
echo "Error: Required env. var. \$$envar is unset or empty in call to passthrough_envars()" >> /dev/stderr
exit 1
fi
done
echo "Warning: Will pass env. vars. matching the following regex:
$passthrough_env_re" >> /dev/stderr
compgen -A variable | grep -Ev "$SECRET_ENV_RE" | grep -E "$passthrough_env_re"
}
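The combined regex that passthrough_envars() assembles can be exercised in isolation. This sketch uses shortened pattern lists (the values below are illustrative, not the full defaults above):

```shell
# Build the combined passthrough regex and filter the environment with it
PASSTHROUGH_ENV_EXACT="DEST_BRANCH|IMAGE_SUFFIX"
PASSTHROUGH_ENV_ATSTART="CI|TEST"
PASSTHROUGH_ENV_ANYWHERE="_NAME|_FQIN"
SECRET_ENV_RE='(.*PASSWORD.*)|(.*TOKEN.*)|(.*SECRET.*)'
passthrough_env_re="(^($PASSTHROUGH_ENV_EXACT)\$)|(^($PASSTHROUGH_ENV_ATSTART))|($PASSTHROUGH_ENV_ANYWHERE)"

export CI=true CIRRUS_TASK_NAME=demo MY_TOKEN=hunter2 UNRELATED=1
# MY_TOKEN is dropped by the secret filter; UNRELATED matches no pattern
compgen -A variable | grep -Ev "$SECRET_ENV_RE" | grep -E "$passthrough_env_re" | sort
```

Only variable names are printed; callers dereference the values themselves, exactly as the library documents.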
# On more occasions than we'd like, it's necessary to put temporary
# platform-specific workarounds in place. To help ensure they'll
# actually be temporary, it's useful to place a time limit on them.
# This function accepts two arguments:
# - A (required) future date of the form YYYYMMDD (UTC based).
# - An (optional) message string to display upon expiry of the timebomb.
timebomb() {
local expire="$1"
if ! expr "$expire" : '[0-9]\{8\}$' > /dev/null; then
echo "timebomb: '$expire' must be UTC-based and of the form YYYYMMDD"
exit 1
fi
if [[ $(date -u +%Y%m%d) -lt $(date -u -d "$expire" +%Y%m%d) ]]; then
return
fi
declare -a frame
read -a frame < <(caller)
cat << EOF >> /dev/stderr
***********************************************************
* TIME BOMB EXPIRED!
*
* >> ${frame[1]}:${frame[0]}: ${2:-No reason given, tsk tsk}
*
* Temporary workaround expired on ${expire:0:4}-${expire:4:2}-${expire:6:2}.
*
* Please review the above source file and either remove the
* workaround or, if absolutely necessary, extend it.
*
* Please also check for other timebombs while you're at it.
***********************************************************
EOF
exit 1
}
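timebomb()'s core is a pure string-format check plus a numeric date comparison in UTC. A sketch that returns instead of exiting (check_timebomb is my illustrative name, not a library function):

```shell
# Compare a YYYYMMDD expiry against today's UTC date; expired == failure
check_timebomb() {
    local expire="$1"
    if ! expr "$expire" : '[0-9]\{8\}$' >/dev/null; then
        echo "timebomb: '$expire' must be UTC-based and of the form YYYYMMDD" >&2
        return 2
    fi
    if [[ $(date -u +%Y%m%d) -lt $expire ]]; then
        return 0  # still in the future, workaround may remain
    fi
    echo "TIME BOMB EXPIRED: ${2:-No reason given}" >&2
    return 1
}

check_timebomb 99991231 "far-future workaround" && echo "still armed"
check_timebomb 20200101 "long-expired workaround" || echo "expired"
```

Because YYYYMMDD sorts numerically the same way it sorts chronologically, a plain integer comparison suffices.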

common/lib/utils.sh Normal file

@ -0,0 +1,129 @@
# Library of utility functions for manipulating/controlling bash-internals
# Not intended to be executed directly
source $(dirname $(realpath "${BASH_SOURCE[0]}"))/console_output.sh
copy_function() {
local src="$1"
local dst="$2"
[[ -n "$src" ]] || \
die "Expecting source function name to be passed as the first argument"
[[ -n "$dst" ]] || \
die "Expecting destination function name to be passed as the second argument"
src_def=$(declare -f "$src") || [[ -n "$src_def" ]] || \
die "Unable to find source function named ${src}()"
dbg "Copying function ${src}() to ${dst}()"
# First match of $src replaced by $dst
eval "${src_def/$src/$dst}"
}
rename_function() {
local from="$1"
local to="$2"
[[ -n "$from" ]] || \
die "Expecting current function name to be passed as the first argument"
[[ -n "$to" ]] || \
die "Expecting desired function name to be passed as the second argument"
dbg "Copying function ${from}() to ${to}() before unlinking ${from}()"
copy_function "$from" "$to"
dbg "Undefining function $from"
unset -f "$from"
}
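copy_function() works because `declare -f name` prints a reusable definition whose first token is the function name; substituting that token and eval'ing the result creates the copy. A condensed sketch of the same mechanism:

```shell
# Duplicate a function under a new name, then drop the original
copy_function() {
    local src="$1" dst="$2" src_def
    src_def=$(declare -f "$src") || return 1
    eval "${src_def/$src/$dst}"  # first occurrence is the definition header
}
rename_function() {
    copy_function "$1" "$2" && unset -f "$1"
}

greet() { echo "hello from ${FUNCNAME[0]}"; }
rename_function greet salute
salute                          # reports its new name via FUNCNAME
declare -F greet >/dev/null || echo "greet is gone"
```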
# Return 0 if the first argument matches any subsequent argument exactly
# otherwise return 1.
contains() {
local needle="$1"
local hay # one piece of the stack at a time
shift
#dbg "Looking for '$1' in '$@'"
for hay; do [[ "$hay" == "$needle" ]] && return 0; done
return 1
}
not_contains(){
if contains "$@"; then
return 1
else
return 0
fi
}
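contains() relies on a bare `for hay` iterating the remaining positional parameters after the needle is shifted off. A quick standalone demonstration of the same logic:

```shell
# Exact-match membership test over the argument list
contains() {
    local needle="$1" hay
    shift
    for hay; do [[ "$hay" == "$needle" ]] && return 0; done
    return 1
}
not_contains() { ! contains "$@"; }

contains b a b c && echo "b is present"
not_contains z a b c && echo "z is absent"
contains "a b" "a b" && echo "whole-argument match, no word-splitting"
```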
# Retry a command on a particular exit code, up to a max number of attempts,
# with exponential backoff.
#
# Usage: err_retry <attempts> <sleep ms> <exit_code> <command> <args>
# Where:
# attempts: The number of attempts to make.
# sleep ms: Number of milliseconds to sleep (doubles every attempt)
# exit_code: Space separated list of exit codes to retry. If empty
# then any non-zero code will be considered for retry.
#
# When the number of attempts is exhausted, exit code 126 is returned.
#
# N/B: Make sure the exit_code argument is properly quoted!
#
# Based on work by 'Ayla Ounce <reacocard@gmail.com>' available at:
# https://gist.github.com/reacocard/28611bfaa2395072119464521d48729a
err_retry() {
local rc=0
local attempt=0
local attempts="$1"
local sleep_ms="$2"
local -a exit_codes
((attempts>1)) || \
die "It's nonsense to retry a command less than twice, not '$attempts'"
((sleep_ms>0)) || \
die "Refusing idiotic sleep interval of $sleep_ms"
local zzzs
zzzs=$(awk -e '{printf "%f", $1 / 1000}'<<<"$sleep_ms")
local nzexit=0 #false
local dbgspec
if [[ -z "$3" ]]; then
nzexit=1; # true
dbgspec="non-zero"
else
exit_codes=($3) # intentional word-splitting of the quoted list
dbgspec="[${exit_codes[*]}]"
fi
shift 3
dbg "Will retry $attempts times, sleeping up to $zzzs*2^$attempts or exit code(s) $dbgspec."
local print_once
print_once=$(echo -n " + "; printf '%q ' "${@}")
for attempt in $(seq 1 $attempts); do
# Make each attempt easy to distinguish
if ((nzexit)); then
msg "Attempt $attempt of $attempts (retry on non-zero exit):"
else
msg "Attempt $attempt of $attempts (retry on exit ${exit_codes[*]}):"
fi
if [[ -n "$print_once" ]]; then
msg "$print_once"
print_once=""
fi
"$@" && rc=$? || rc=$? # work with set -e or +e
msg "exit($rc)" |& indent 1 # Make easy to debug
if ((nzexit)) && ((rc==0)); then
dbg "Success! $rc==0" |& indent 1
return 0
elif ((nzexit==0)) && not_contains $rc "${exit_codes[@]}"; then
dbg "Success! ($rc not in [${exit_codes[*]}])" |& indent 1
return $rc
elif ((attempt<attempts)) # No sleep on last failure
then
msg "Failure! Sleeping $zzzs seconds" |& indent 1
sleep "$zzzs"
fi
zzzs=$(awk -e '{printf "%f", $1 + $1}'<<<"$zzzs")
done
msg "Retry attempts exhausted"
if ((nzexit)); then
return $rc
else
return 126
fi
}
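A call shaped like the err_retry() contract above can be simulated with a condensed retry loop. This sketch inlines the doubling backoff; retry() and flaky() are illustrative names, not library functions:

```shell
# Retry up to N times, doubling the sleep after each failure
retry() {
    local attempts="$1" sleep_s="$2" rc attempt
    shift 2
    for ((attempt = 1; attempt <= attempts; attempt++)); do
        "$@" && return 0 || rc=$?
        ((attempt < attempts)) && sleep "$sleep_s"
        sleep_s=$((sleep_s * 2))  # exponential backoff
    done
    return 126  # attempts exhausted, mirroring err_retry's documented exit
}

tries=0
flaky() { tries=$((tries + 1)); ((tries >= 3)); }  # fails twice, then succeeds
retry 5 0 flaky && echo "succeeded after $tries tries"
```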


@ -0,0 +1,26 @@
#!/bin/bash
# This helper script is intended for testing several functions
# which output calling context. It is intended to only be used
# by the console-output unit-tests. They are sensitive to
# both the line-positions and line-content of all the following.
SCRIPT_DIRPATH=$(dirname "${BASH_SOURCE[0]}")
AUTOMATION_LIB_PATH=$(realpath "$SCRIPT_DIRPATH/../lib")
source "$AUTOMATION_LIB_PATH/common_lib.sh"
set +e
test_function() {
A_DEBUG=1 dbg "Test dbg message"
warn "Test warning message"
msg "Test msg message"
die "Test die message" 0
}
A_DEBUG=1 dbg "Test dbg message"
warn "Test warning message"
msg "Test msg message"
die "Test die message" 0
test_function


@ -6,6 +6,6 @@ set -e
cd $(dirname $0)
for testscript in test???-*.sh; do
echo -e "\nExecuting $testscript..." > /dev/stderr
echo -e "\nExecuting $testscript..." >> /dev/stderr
./$testscript
done


@ -6,7 +6,7 @@
TEST_DIR=$(realpath "$(dirname ${BASH_SOURCE[0]})/../../bin")
source $(dirname ${BASH_SOURCE[0]})/testlib.sh || exit 1
INSTALLER_FILEPATH="$TEST_DIR/$SUBJ_FILENAME"
TEST_INSTALL_ROOT=$(mktemp -p '' -d "tmp_$(basename $0)_XXXXXXXX")
TEST_INSTALL_ROOT=$(mktemp -p '' -d "testing_$(basename $0)_XXXXXXXX")
trap "rm -rf $TEST_INSTALL_ROOT" EXIT
# Receives special treatment in the installer script
@ -23,10 +23,20 @@ test_cmd \
$INSTALLER_FILEPATH "not a version number"
test_cmd \
"The inetaller exits non-zero with a helpful message about an non-existant version" \
"The installer exits non-zero with a helpful message about an non-existent version" \
128 "fatal.+v99.99.99.*not found" \
$INSTALLER_FILEPATH 99.99.99
test_cmd \
"The installer successfully installs the oldest tag" \
0 "installer version 'v1.0.0'.+exec.+AUTOMATION_REPO_BRANCH=main.+Installation complete" \
$INSTALLER_FILEPATH 1.0.0
test_cmd \
"The oldest installed installer's default branch was modified" \
0 "" \
grep -Eqm1 '^AUTOMATION_REPO_BRANCH=.+main' "$INSTALL_PREFIX/automation/bin/$SUBJ_FILENAME"
test_cmd \
"The installer detects incompatible future installer source version by an internal mechanism" \
10 "Error.+incompatible.+99.99.99" \
@ -37,6 +47,13 @@ test_cmd \
0 "Installation complete" \
$INSTALLER_FILEPATH 0.0.0
for required_file in environment AUTOMATION_VERSION; do
test_cmd \
"The installer created the file $required_file in $INSTALL_PREFIX/automation" \
0 "" \
test -r "$INSTALL_PREFIX/automation/$required_file"
done
test_cmd \
"The installer correctly removes/reinstalls \$TEST_INSTALL_ROOT" \
0 "Warning: Removing existing installed version" \
@ -47,6 +64,11 @@ test_cmd \
0 "$(git describe HEAD)" \
cat "$INSTALL_PREFIX/automation/AUTOMATION_VERSION"
test_cmd \
"The installer script doesn't redirect to 'stderr' anywhere." \
1 "" \
grep -q '> /dev/stderr' $INSTALLER_FILEPATH
load_example_environment() {
local _args="$@"
# Don't disturb testing
@ -54,7 +76,7 @@ load_example_environment() {
source "$INSTALL_PREFIX/automation/environment" || return 99
echo "AUTOMATION_LIB_PATH ==> ${AUTOMATION_LIB_PATH:-UNDEFINED}"
echo "PATH ==> ${PATH:-EMPTY}"
[[ -z "$_args" ]] || $_args
[[ -z "$_args" ]] || A_DEBUG=1 $_args
)
}
@ -74,8 +96,11 @@ test_cmd \
test_cmd \
"The installed installer, can update itself to the latest upstream version" \
0 "Installation complete for v[0-9]+\.[0-9]+\.[0-9]+" \
0 "Finalizing successful installation of version v" \
execute_in_example_environment $SUBJ_FILENAME latest
# Ensure cleanup
rm -rf $TEST_INSTALL_ROOT
# Must be last call
exit_with_status


@ -26,7 +26,7 @@ for path_var in AUTOMATION_LIB_PATH AUTOMATION_ROOT SCRIPT_PATH; do
test_cmd "\$$path_var is defined and non-empty: ${!path_var}" \
0 "" \
test -n "${!path_var}"
test_cmd "\$$path_var referrs to existing directory" \
test_cmd "\$$path_var refers to existing directory" \
0 "" \
test -d "${!path_var}"
done
@ -39,23 +39,33 @@ test_cmd "There is no AUTOMATION_VERSION file in \$AUTOMATION_ROOT before testin
1 "" \
test -r "$AUTOMATION_ROOT/AUTOMATION_VERSION"
TEMPDIR=$(mktemp -p '' -d tmp_${SCRIPT_FILENAME}_XXXXXXXX)
TEMPDIR=$(mktemp -p '' -d testing_${SCRIPT_FILENAME}_XXXXXXXX)
trap "rm -rf $TEMPDIR" EXIT
cat << EOF > "$TEMPDIR/git"
#!/bin/bash
echo "Standard Error is ignored" > /dev/stderr
echo "99.99.99" > /dev/stdout
#!/bin/bash -e
echo "99.99.99"
EOF
chmod +x "$TEMPDIR/git"
test_cmd "Mock git returns expected output" \
0 "99.99.99" \
$TEMPDIR/git
actual_path=$PATH
export PATH=$TEMPDIR:$PATH
test_cmd "Without AUTOMATION_VERSION file, automation_version() uses git" \
export PATH=$TEMPDIR:$PATH:$TEMPDIR
_avcache="" # ugly, but necessary to not pollute other test results
test_cmd "Without AUTOMATION_VERSION file, automation_version() uses mock git" \
0 "99.99.99" \
automation_version
echo "exit 123" >> "$TEMPDIR/git"
echo -e "#!/bin/bash\nexit 99" > "$TEMPDIR/git"
test_cmd "Modified mock git exits with expected error code" \
99 "" \
$TEMPDIR/git
_avcache=""
test_cmd "Without AUTOMATION_VERSION file, a git error causes automation_version() to error" \
1 "Error determining version number" \
automation_version
@ -64,11 +74,15 @@ ln -sf /usr/bin/* $TEMPDIR/
ln -sf /bin/* $TEMPDIR/
rm -f "$TEMPDIR/git"
export PATH=$TEMPDIR
test_cmd "Without git or AUTOMATION_VERSION file automation_version() errorsr"\
_avcache=""
test_cmd "Without git or AUTOMATION_VERSION file automation_version() errors"\
1 "Error determining version number" \
automation_version
unset PATH
export PATH=$actual_path
# ensure cleanup
rm -rf $TEMPDIR
# Must be last call
exit_with_status


@ -1,6 +1,7 @@
#!/bin/bash
source $(dirname ${BASH_SOURCE[0]})/testlib.sh || exit 1
SCRIPT_DIRPATH=$(dirname ${BASH_SOURCE[0]})
source $SCRIPT_DIRPATH/testlib.sh || exit 1
source "$TEST_DIR"/"$SUBJ_FILENAME" || exit 2
test_message_text="This is the test text for a console_output library unit-test"
@ -29,7 +30,7 @@ basic_tests() {
$_fname "$test_message_text"
test_cmd "The message text includes the file, line number and testing function reference" \
$_exp_exit "testlib.sh:[[:digit:]]+ in test_cmd()" \
$_exp_exit '\.sh:[[:digit:]]+ in .+\(\)' \
$_fname "$test_message_text"
}
@ -43,19 +44,124 @@ for fname in warn die; do
basic_tests $fname $exp_exit $exp_word
done
DEBUG=0
# Function requires stdin, must execute in subshell by test_cmd
export -f indent
# test_cmd whitespace-squashes output but this function's purpose is producing whitespace
TEST_STRING="The quick brown fox jumped to the right by N-spaces"
EXPECTED_SUM="334676ca13161af1fd95249239bb415b3d30eee7f78b39c59f9af5437989b724"
test_cmd "The indent function correctly indents 4x number of spaces indicated" \
0 "$EXPECTED_SUM" \
bash -c "echo '$TEST_STRING' | indent | sha256sum"
EXPECTED_SUM="764865c67f4088dd19981733d88287e1e196e71bef317092dcb6cb9ff101a319"
test_cmd "The indent function indents its own output" \
0 "$EXPECTED_SUM" \
bash -c "echo '$TEST_STRING' | indent | indent | sha256sum"
A_DEBUG=0
test_cmd \
"The dbg function has no output when \$DEBUG is zero and no message is given" \
"The dbg function has no output when \$A_DEBUG is zero and no message is given" \
0 "" \
dbg
test_cmd \
"The dbg function has no output when \$DEBUG is zero and a test message is given" \
"The dbg function has no output when \$A_DEBUG is zero and a test message is given" \
0 "" \
dbg "$test_message_text"
DEBUG=1
A_DEBUG=1
basic_tests dbg 0 DEBUG
A_DEBUG=0
test_cmd \
"All primary output functions include the expected context information" \
0 "
DEBUG: Test dbg message (console_output_test_helper.sh:21 in main())
\*+
WARNING: Test warning message (console_output_test_helper.sh:22 in main())
\*+
Test msg message
\*+
ERROR: Test die message (console_output_test_helper.sh:24 in main())
\*+
DEBUG: Test dbg message (console_output_test_helper.sh:15 in test_function())
\*+
WARNING: Test warning message (console_output_test_helper.sh:16 in test_function())
\*+
Test msg message
\*+
ERROR: Test die message (console_output_test_helper.sh:18 in test_function())
\*+
" \
bash "$SCRIPT_DIRPATH/console_output_test_helper.sh"
export VAR1=foo VAR2=bar VAR3=baz
test_cmd \
"The req_env_vars function has no output for all non-empty vars" \
0 "" \
req_env_vars VAR1 VAR2 VAR3
unset VAR2
test_cmd \
"The req_env_vars function catches an empty VAR2 value" \
1 "Environment variable 'VAR2' is required" \
req_env_vars VAR1 VAR2 VAR3
VAR1="
"
test_cmd \
"The req_env_vars function catches a whitespace-full VAR1 value" \
1 "Environment variable 'VAR1' is required" \
req_env_vars VAR1 VAR2 VAR3
unset VAR1 VAR2 VAR3
test_cmd \
"The req_env_vars function shows the source file/function of caller and error" \
1 "testlib.sh:test_cmd()" \
req_env_vars VAR1 VAR2 VAR3
unset SECRET_ENV_RE
test_cmd \
"The show_env_vars function issues warning when \$SECRET_ENV_RE is unset/empty" \
0 "SECRET_ENV_RE var. unset/empty" \
show_env_vars
export UPPERCASE="@@@MAGIC@@@"
export super_secret="@@@MAGIC@@@"
export nOrMaL_vAr="@@@MAGIC@@@"
for var_name in UPPERCASE super_secret nOrMaL_vAr; do
test_cmd \
"Without secret filtering, expected $var_name value is shown" \
0 "${var_name}=${!var_name}" \
show_env_vars
done
export SECRET_ENV_RE='(.+SECRET.*)|(uppercase)|(mal_var)'
TMPFILE=$(mktemp -p '' ".$(basename ${BASH_SOURCE[0]})_tmp_XXXX")
#trap "rm -f $TMPFILE" EXIT # FIXME
( show_env_vars 2>&1 ) >> "$TMPFILE"
test_cmd \
"With case-insensitive secret filtering, no magic values shown in output" \
1 ""\
grep -q 'UPPERCASE=@@@MAGIC@@@' "$TMPFILE"
unset env_vars SECRET_ENV_RE UPPERCASE super_secret nOrMaL_vAr
test_cmd \
"The showrun function executes /bin/true as expected" \
0 "\+ /bin/true # \./testlib.sh:97 in test_cmd"\
showrun /bin/true
test_cmd \
"The showrun function executes /bin/false as expected" \
1 "\+ /bin/false # \./testlib.sh:97 in test_cmd"\
showrun /bin/false
test_cmd \
"The showrun function can call itself" \
0 "\+ /bin/true # .*console_output.sh:[0-9]+ in showrun" \
showrun showrun /bin/true
# script is set +e
exit_with_status


@ -20,24 +20,24 @@ test_ci() {
CI="$prev_CI"
}
# DEBUG must default to 0 or non-zero
# A_DEBUG must default to 0 or non-zero
# usage: <expected non-zero> [initial_value]
test_debug() {
local exp_non_zero=$1
local init_value="$2"
[[ -z "$init_value" ]] || \
DEBUG=$init_value
local desc_pfx="The \$DEBUG env. var initialized '$init_value', after loading library is"
A_DEBUG=$init_value
local desc_pfx="The \$A_DEBUG env. var initialized '$init_value', after loading library is"
source "$TEST_DIR"/"$SUBJ_FILENAME"
if ((exp_non_zero)); then
test_cmd "$desc_pfx non-zero" \
0 "" \
test "$DEBUG" -ne 0
test "$A_DEBUG" -ne 0
else
test_cmd "$desc_pfx zero" \
0 "" \
test "$DEBUG" -eq 0
test "$A_DEBUG" -eq 0
fi
}

common/test/testlib-platform.sh Executable file

@ -0,0 +1,100 @@
#!/bin/bash
# Unit-tests for library script in the current directory
# Also verifies test script is derived from library filename
# shellcheck source-path=./
source $(dirname ${BASH_SOURCE[0]})/testlib.sh || exit 1
# Must be statically defined, 'source-path' directive can't work here.
# shellcheck source=../lib/platform.sh disable=SC2154
source "$TEST_DIR/$SUBJ_FILENAME" || exit 2
# For whatever reason, SCRIPT_PATH cannot be resolved.
# shellcheck disable=SC2154
test_cmd "Library $SUBJ_FILENAME is not executable" \
0 "" \
test ! -x "$SCRIPT_PATH/$SUBJ_FILENAME"
for var in OS_RELEASE_VER OS_RELEASE_ID OS_REL_VER; do
test_cmd "The variable \$$var is defined and non-empty" \
0 "" \
test -n "${!var}"
done
for var in OS_RELEASE_VER OS_REL_VER; do
NODOT=$(tr -d '.' <<<"${!var}")
test_cmd "The '.' character does not appear in \$$var" \
0 "" \
test "$NODOT" == "${!var}"
done
for OS_RELEASE_ID in 'debian' 'ubuntu'; do
(
export _TEST_UID=$RANDOM # Normally $UID is read-only
# Must be statically defined, 'source-path' directive can't work here.
# shellcheck source=../lib/platform.sh disable=SC2154
source "$TEST_DIR/$SUBJ_FILENAME" || exit 2
# The point of this test is to confirm it's defined
# shellcheck disable=SC2154
test_cmd "The '\$SUDO' env. var. is non-empty when \$_TEST_UID is non-zero" \
0 "" \
test -n "$SUDO"
test_cmd "The '\$SUDO' env. var. contains 'noninteractive' when '\$_TEST_UID' is non-zero" \
0 "noninteractive" \
echo "$SUDO"
)
done
test_cmd "The passthrough_envars() func. has output by default." \
0 ".+" \
passthrough_envars
(
# Confirm defaults may be overridden
PASSTHROUGH_ENV_EXACT="FOOBARBAZ"
PASSTHROUGH_ENV_ATSTART="FOO"
PASSTHROUGH_ENV_ANYWHERE="BAR"
export FOOBARBAZ="testing"
test_cmd "The passthrough_envars() func. w/ overridden expr. only prints name of test variable." \
0 "FOOBARBAZ" \
passthrough_envars
)
# Test from a mostly empty environment to limit possibility of expr mismatch flakes
declare -a printed_envs
readarray -t printed_envs <<<$(env --ignore-environment PATH="$PATH" FOOBARBAZ="testing" \
SECRET_ENV_RE="(^PATH$)|(^BASH_FUNC)|(^_.*)|(FOOBARBAZ)|(SECRET_ENV_RE)" \
CI="true" AUTOMATION_LIB_PATH="/path/to/some/place" \
bash -c "source $TEST_DIR/$SUBJ_FILENAME && passthrough_envars")
test_cmd "The passthrough_envars() func. w/ overridden \$SECRET_ENV_RE hides test variable." \
1 "0" \
expr match "${printed_envs[*]}" '.*FOOBARBAZ.*'
test_cmd "The passthrough_envars() func. w/ overridden \$SECRET_ENV_RE returns CI variable." \
0 "[1-9]+[0-9]*" \
expr match "${printed_envs[*]}" '.*CI.*'
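The scrubbed-environment trick above is worth calling out: `env --ignore-environment` starts the child with an empty environment, so only explicitly passed variables can possibly match the passthrough expressions. A standalone sketch of the technique (variable names here are illustrative):

```shell
#!/bin/bash
# env --ignore-environment (-i) starts the child process with a scrubbed
# environment; only the variables passed on the env command line (plus a
# few that bash itself sets, like PWD and SHLVL) are visible to it.
child_env=$(env --ignore-environment PATH="$PATH" FOO=bar bash -c 'env')
grep '^FOO=' <<<"$child_env"
```

Inherited variables such as `HOME` never reach the child, which is exactly why the test above can assert on its regex matches without flaking on the caller's environment.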
test_cmd "timebomb() function requires at least one argument" \
1 "must be UTC-based and of the form YYYYMMDD" \
timebomb
TZ=UTC12 \
test_cmd "timebomb() function ignores TZ and compares < UTC-forced current date" \
1 "TIME BOMB EXPIRED" \
timebomb $(TZ=UTC date +%Y%m%d)
test_cmd "timebomb() alerts user when no description given" \
1 "No reason given" \
timebomb 00010101
EXPECTED_REASON="test${RANDOM}test"
test_cmd "timebomb() gives reason when one was provided" \
1 "$EXPECTED_REASON" \
timebomb 00010101 "$EXPECTED_REASON"
# Must be last call
exit_with_status
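The timebomb tests pin down its contract: a UTC `YYYYMMDD` deadline, a default "No reason given" message, and failure once the deadline passes. A hypothetical re-implementation of that contract (this is a sketch inferred from the tests, not the library's actual code):

```shell
#!/bin/bash
# Sketch of the timebomb contract inferred from the tests above; the
# real implementation lives in the common automation library, not here.
timebomb_sketch() {
    local deadline=$1
    local reason="${2:-No reason given}"
    [[ $deadline =~ ^[0-9]{8}$ ]] || {
        echo "deadline must be UTC-based and of the form YYYYMMDD" >&2
        return 1
    }
    local today
    today=$(TZ=UTC date +%Y%m%d)  # compare as fixed-width strings
    if [[ ! "$today" < "$deadline" ]]; then
        echo "TIME BOMB EXPIRED: $reason" >&2
        return 1
    fi
}
timebomb_sketch 00010101 || echo "expired as expected"
```

Comparing the dates as strings works because both sides are always exactly eight digits; arithmetic comparison would misread leading-zero dates like `00010101` as octal.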

common/test/testlib-utils.sh Executable file

@@ -0,0 +1,75 @@
#!/bin/bash
source $(dirname ${BASH_SOURCE[0]})/testlib.sh || exit 1
source "$TEST_DIR"/"$SUBJ_FILENAME" || exit 2
test_function_one(){
echo "This is test function one"
}
test_function_two(){
echo "This is test function two"
}
test_cmd "The copy_function produces no output, while copying test_function_two" \
0 "" \
copy_function test_function_two test_function_three
# test_cmd executes the command-under-test inside a sub-shell
copy_function test_function_two test_function_three
test_cmd "The copy of test_function_two has identical behavior as two." \
0 "This is test function two" \
test_function_three
test_cmd "The rename_function produces no output, while renaming test_function_one" \
0 "" \
rename_function test_function_one test_function_three
# test_cmd executes the command-under-test inside a sub-shell
rename_function test_function_one test_function_three
test_cmd "The rename_function removed the source function" \
127 "command not found" \
test_function_one
test_cmd "The behavior of test_function_three matches renamed test_function_one" \
0 "This is test function one" \
test_function_three
test_cmd "The contains function operates as expected for the normal case" \
0 "" \
contains 3 1 2 3 4 5
test_cmd "The contains function operates as expected for the negative case" \
1 "" \
contains 42 1 2 3 4 5
test_cmd "The contains function operates as expected despite whitespace" \
0 "" \
contains 'foo bar' "foobar" "foo" "foo bar" "bar"
test_cmd "The contains function operates as expected despite whitespace, negative case" \
1 "" \
contains 'foo bar' "foobar" "foo" "baz" "bar"
test_cmd "The err_retry function retries three times for true + exit(0)" \
126 "Attempt 3 of 3" \
err_retry 3 10 0 true
test_cmd "The err_retry function retries three times for false, exit(1)" \
126 "Attempt 3 of 3" \
err_retry 3 10 1 false
test_cmd "The err_retry function catches an exit 42 in [1, 2, 3, 42, 99, 100, 101]" \
42 "exit.+42" \
err_retry 3 10 "1 2 3 42 99 100 101" exit 42
test_cmd "The err_retry function retries 2 times for exit 42 in [1, 2, 3, 99, 100, 101]" \
42 "exit.+42" \
err_retry 2 10 "1 2 3 99 100 101" exit 42
test_cmd "The err_retry function retries 1 time for false, non-zero exit" \
1 "Attempt 2 of 2" \
err_retry 2 10 "" false
# script is set +e
exit_with_status


@@ -6,7 +6,7 @@
# Set non-zero to enable
TEST_DEBUG=${TEST_DEBUG:-0}
-# Test subject filename and directory name are derrived from test-script filename
+# Test subject filename and directory name are derived from test-script filename
SUBJ_FILENAME=$(basename $0)
if [[ "$SUBJ_FILENAME" =~ "testlib-" ]]; then
SUBJ_FILENAME="${SUBJ_FILENAME#testlib-}"
@@ -22,6 +22,21 @@ fi
# Always run all tests, and keep track of failures.
FAILURE_COUNT=0
# Duplicated from common/lib/utils.sh to not create any circular dependencies
copy_function() {
local src="$1"
local dst="$2"
test -n "$(declare -f "$1")" || return
eval "${_/$1/$2}"
}
rename_function() {
local from="$1"
local to="$2"
copy_function "$@" || return
unset -f "$1"
}
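The copy_function trick above deserves a note: after the `test -n "$(declare -f "$1")"` guard, bash's `$_` holds the last argument of that test, which is the full text of the function definition, so `eval "${_/$1/$2}"` re-evaluates the definition with the name swapped. A standalone demonstration (function names here are made up):

```shell
#!/bin/bash
# $_ expands to the final argument of the previous command; after the
# guard below that is the complete source of the function, so the
# pattern substitution renames it before eval re-defines it.
copy_fn() {
    test -n "$(declare -f "$1")" || return
    eval "${_/$1/$2}"
}
greet() { echo "hello"; }
copy_fn greet salute
salute   # prints: hello
```

The substitution replaces only the first occurrence of the name, which is the `name ()` header line of `declare -f` output, so the body is copied untouched.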
# Assume test script is set +e and this will be the last call
exit_with_status() {
if ((FAILURE_COUNT)); then
@@ -73,7 +88,7 @@ test_cmd() {
echo "# $@" > /dev/stderr
fi
-# Using egrep vs file safer than shell builtin test
+# Using grep vs file safer than shell builtin test
local a_out_f=$(mktemp -p '' "tmp_${FUNCNAME[0]}_XXXXXXXX")
local a_exit=0
@@ -81,18 +96,24 @@
set -o pipefail
( set -e; "$@" 0<&- |& tee "$a_out_f" | tr -s '[:space:]' ' ' &> "${a_out_f}.oneline")
a_exit="$?"
+if ((TEST_DEBUG)); then
+echo "Command/Function call exited with code: $a_exit"
+fi
if [[ -n "$e_exit" ]] && [[ $e_exit -ne $a_exit ]]; then
-_test_report "Expected exit-code $e_exit but received $a_exit while executing $1" 1 "$a_out_f"
+_test_report "Expected exit-code $e_exit but received $a_exit while executing $1" "1" "$a_out_f"
elif [[ -z "$e_out_re" ]] && [[ -n "$(<$a_out_f)" ]]; then
-_test_report "Expecting no output from $@" 1 "$a_out_f"
+_test_report "Expecting no output from $*" "1" "$a_out_f"
elif [[ -n "$e_out_re" ]]; then
-if egrep -q "$e_out_re" "${a_out_f}.oneline"; then
-_test_report "Command $1 exited as expected with expected output" 0 "$a_out_f"
+if ((TEST_DEBUG)); then
+echo "Received $(wc -l $a_out_f | awk '{print $1}') output lines of $(wc -c $a_out_f | awk '{print $1}') bytes total"
+fi
+if grep -Eq "$e_out_re" "${a_out_f}.oneline"; then
+_test_report "Command $1 exited as expected with expected output" "0" "$a_out_f"
else
-_test_report "Expecting regex '$e_out_re' match to (whitespace-squashed) output" 1 "$a_out_f"
+_test_report "Expecting regex '$e_out_re' match to (whitespace-squashed) output" "1" "$a_out_f"
fi
else # Pass
-_test_report "Command $1 exited as expected ($a_exit)" 0 "$a_out_f"
+_test_report "Command $1 exited as expected ($a_exit)" "0" "$a_out_f"
fi
}
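The test_cmd contract — an expected exit code plus an extended regex matched against whitespace-squashed output — can be illustrated with a stripped-down stand-in (none of testlib.sh's reporting machinery is used here; `mini_test_cmd` is illustrative, not the library function):

```shell
#!/bin/bash
# Minimal stand-in for the test_cmd contract: description, expected
# exit code, output regex, then the command under test. Output is
# squashed to single spaces before matching, as testlib.sh does.
mini_test_cmd() {
    local desc=$1 e_exit=$2 e_re=$3
    shift 3
    local out a_exit
    out=$(set -o pipefail; "$@" 2>&1 | tr -s '[:space:]' ' ')
    a_exit=$?
    if [[ $a_exit -eq $e_exit ]] && { [[ -z $e_re ]] || grep -Eq "$e_re" <<<"$out"; }; then
        echo "PASS: $desc"
    else
        echo "FAIL: $desc (exit $a_exit, output '$out')"
    fi
}
mini_test_cmd "multi-space output is squashed" 0 "hello world" echo "hello   world"
```

The `tr -s '[:space:]' ' '` step is why test regexes in these scripts never need to worry about line breaks in command output.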

default.json Normal file

@@ -0,0 +1,6 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"github>containers/automation//renovate/defaults.json5"
]
}

github/.install.sh Executable file

@@ -0,0 +1,34 @@
#!/bin/bash
# Installs common Github Action utilities system-wide. NOT intended to be used directly
# by humans, should only be used indirectly by running
# ../bin/install_automation.sh <ver> github
set -eo pipefail
source "$AUTOMATION_LIB_PATH/anchors.sh"
source "$AUTOMATION_LIB_PATH/console_output.sh"
INSTALL_PREFIX=$(realpath $AUTOMATION_LIB_PATH/..)
# Assume the directory this script is in, represents what is being installed
INSTALL_NAME=$(basename $(dirname ${BASH_SOURCE[0]}))
AUTOMATION_VERSION=$(automation_version)
[[ -n "$AUTOMATION_VERSION" ]] || \
die "Could not determine version of common automation libs, was 'install_automation.sh' successful?"
echo "Installing $INSTALL_NAME version $(automation_version) into $INSTALL_PREFIX"
unset INST_PERM_ARG
if [[ $UID -eq 0 ]]; then
INST_PERM_ARG="-o root -g root"
fi
cd $(dirname $(realpath "${BASH_SOURCE[0]}"))
install -v $INST_PERM_ARG -D -t "$INSTALL_PREFIX/lib" ./lib/*
# Needed for installer testing
cat <<EOF>>"./environment"
# Added on $(date --iso-8601=minutes) by 'github' subcomponent installer
export GITHUB_ACTION_LIB=$INSTALL_PREFIX/lib/github.sh
EOF
echo "Successfully installed $INSTALL_NAME"

github/README.md Normal file

@@ -0,0 +1,5 @@
## Common Github Action scripts/libraries
This subdirectory contains scripts, libraries, and tests for common
Github Action operations. They depend heavily on the `common`
subdirectory in the repository root.

github/lib/github.sh Normal file

@@ -0,0 +1,82 @@
# This file is intended for sourcing by the cirrus-ci_retrospective workflow
# It should not be used under any other context.
source $(dirname ${BASH_SOURCE[0]})/github_common.sh || exit 1
# Cirrus-CI Build status codes that represent completion
COMPLETE_STATUS_RE='FAILED|COMPLETED|ABORTED|ERRORED'
# Shell variables used throughout this workflow
prn=
tid=
sha=
tst=
was_pr='false'
do_intg='false'
dbg_ccir() {
dbg "Shell variables set:"
dbg "Cirrus-CI ran on pr: $was_pr"
dbg "Monitor PR Number: ${prn}"
dbg "Monitor SHA: ${sha}"
dbg "Action Task ID was: ${tid}"
dbg "Action Task Status: ${tst}"
dbg "Do integration testing: ${do_intg}"
}
# usage: load_ccir <path to cirrus-ci_retrospective.json>
load_ccir() {
local dirpath="$1"
local ccirjson="$1/cirrus-ci_retrospective.json"
[[ -d "$dirpath" ]] || \
die "Expecting a directory path '$dirpath'"
[[ -r "$ccirjson" ]] || \
die "Can't read file '$ccirjson'"
[[ -n "$MONITOR_TASK" ]] || \
die "Expecting \$MONITOR_TASK to be non-empty"
[[ -n "$ACTION_TASK" ]] || \
die "Expecting \$ACTION_TASK to be non-empty"
dbg "--Loading Cirrus-CI monitoring task $MONITOR_TASK--"
dbg "$(jq --indent 4 '.[] | select(.name == "'${MONITOR_TASK}'")' $ccirjson)"
bst=$(jq --raw-output '.[] | select(.name == "'${MONITOR_TASK}'") | .build.status' "$ccirjson")
prn=$(jq --raw-output '.[] | select(.name == "'${MONITOR_TASK}'") | .build.pullRequest' "$ccirjson")
sha=$(jq --raw-output '.[] | select(.name == "'${MONITOR_TASK}'") | .build.changeIdInRepo' "$ccirjson")
dbg "--Loading Cirrus-CI action task $ACTION_TASK--"
dbg "$(jq --indent 4 '.[] | select(.name == "'${ACTION_TASK}'")' $ccirjson)"
tid=$(jq --raw-output '.[] | select(.name == "'${ACTION_TASK}'") | .id' "$ccirjson")
tst=$(jq --raw-output '.[] | select(.name == "'${ACTION_TASK}'") | .status' "$ccirjson")
for var in bst prn sha; do
[[ -n "${!var}" ]] || \
die "Expecting \$$var to be non-empty after loading $ccirjson" 42
done
was_pr='false'
do_intg='false'
if [[ -n "$prn" ]] && [[ "$prn" != "null" ]] && [[ $prn -gt 0 ]]; then
dbg "Detected pull request $prn"
was_pr='true'
# Don't race vs another cirrus-ci build triggered _after_ GH action workflow started
# since both may share the same check_suite. e.g. task re-run or manual-trigger
if echo "$bst" | grep -E -q "$COMPLETE_STATUS_RE"; then
if [[ -n "$tst" ]] && [[ "$tst" == "PAUSED" ]]; then
dbg "Detected action status $tst"
do_intg='true'
fi
else
warn "Unexpected build status '$bst', was a task re-run or manually triggered?"
fi
fi
dbg_ccir
}
set_ccir() {
for varname in prn tid sha tst was_pr do_intg; do
set_out_var $varname "${!varname}"
done
}
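Every query in load_ccir follows the same jq shape: select the array element whose `.name` matches a task, then project one field. The pattern in isolation, against a toy two-task document (task names here are made up), passing the name via `--arg` rather than interpolating it into the program text:

```shell
#!/bin/bash
# Same select-by-name / project-a-field shape as load_ccir's queries,
# against an inline document with hypothetical task names.
json='[{"name":"monitor","status":"COMPLETED"},{"name":"action","status":"PAUSED"}]'
status=$(jq --raw-output --arg n action \
    '.[] | select(.name == $n) | .status' <<<"$json")
echo "$status"   # PAUSED
```

Using `--arg` sidesteps the quoting pitfalls of building the jq program from shell strings, as load_ccir does with `'"'${MONITOR_TASK}'"'`.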


@@ -0,0 +1,61 @@
# This file is intended for sourcing by github action workflows
# It should not be used under any other context.
# Important paths defined here
AUTOMATION_LIB_PATH="${AUTOMATION_LIB_PATH:-$(realpath $(dirname ${BASH_SOURCE[0]})/../../common/lib)}"
source $AUTOMATION_LIB_PATH/common_lib.sh || exit 1
# Wrap the die() function to add github-action sugar that identifies file
# & line number within the UI, before exiting non-zero.
rename_function die _die
die() {
# https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-error-message
local ERROR_MSG_PREFIX
ERROR_MSG_PREFIX="::error file=${BASH_SOURCE[1]},line=${BASH_LINENO[0]}::"
_die "$@"
}
# Wrap the warn() function to add github-action sugar that identifies file
# & line number within the UI.
rename_function warn _warn
warn() {
local WARNING_MSG_PREFIX
# https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-a-warning-message
WARNING_MSG_PREFIX="::warning file=${BASH_SOURCE[1]},line=${BASH_LINENO[0]}::"
_warn "$@"
}
# Idiomatic debug messages in github-actions are worse than useless. They do
# not embed file/line information. They are completely hidden unless
# the $ACTIONS_STEP_DEBUG step or job variable is set 'true'. Setting
# this variable as a secret can have unintended consequences:
# https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/using-workflow-run-logs#viewing-logs-to-diagnose-failures
# Wrap the dbg() function to add github-action sugar at the "notice" level
# so that it may be observed in output by regular users without danger.
rename_function dbg _dbg
dbg() {
# When set true, simply enable automation library debugging.
if [[ "${ACTIONS_STEP_DEBUG:-false}" == 'true' ]]; then export A_DEBUG=1; fi
# notice-level messages actually show up in the UI; use them for debugging
# https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-a-notice-message
local DEBUG_MSG_PREFIX
DEBUG_MSG_PREFIX="::notice file=${BASH_SOURCE[1]},line=${BASH_LINENO[0]}::"
_dbg "$@"
}
# usage: set_out_var <name> [value...]
set_out_var() {
A_DEBUG=0 req_env_vars GITHUB_OUTPUT
name=$1
shift
value="$@"
[[ -n $name ]] || \
die "Expecting first parameter to be non-empty value for the output variable name"
dbg "Setting Github Action step output variable '$name' to '$value'"
# Special string recognized by Github Actions
# Ref: https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-output-parameter
echo "$name=$value" >> $GITHUB_OUTPUT
}
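set_out_var relies on the `$GITHUB_OUTPUT` mechanism: the runner exposes a file path, and each `name=value` line appended to that file becomes a step output. Outside Actions the file can simply be faked, as testlib-github_common.sh also does; a minimal demonstration (the `emit_output` helper is an illustrative stand-in, not the library function):

```shell
#!/bin/bash
# Fake the runner-provided output file, then append a step output the
# same way set_out_var does.
GITHUB_OUTPUT=$(mktemp)
emit_output() {   # simplified stand-in for set_out_var
    local name=$1
    shift
    echo "$name=$*" >> "$GITHUB_OUTPUT"
}
emit_output greeting hello world
recorded=$(<"$GITHUB_OUTPUT")
echo "$recorded"   # greeting=hello world
rm -f "$GITHUB_OUTPUT"
```

Because the value is everything after the name, multi-word values need no quoting by the caller, which matches how set_out_var collects `value="$@"`.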

github/test/README.md Normal file

@@ -0,0 +1,4 @@
# WARNING
These tests absolutely must be run by github actions. They will
not function outside of that specific environment.


@@ -0,0 +1 @@
../../common/test/run_all_tests.sh

github/test/testlib-github.sh Executable file

@@ -0,0 +1,128 @@
#!/bin/bash
source $(dirname ${BASH_SOURCE[0]})/testlib.sh
# This is necessary when executing from a Github Action workflow so it ignores
# all magic output tokens
echo "::stop-commands::TESTING"
trap "echo '::TESTING::'" EXIT
test_cmd "The library $TEST_DIR/$SUBJ_FILENAME loads" \
0 '' \
source $TEST_DIR/$SUBJ_FILENAME
source $TEST_DIR/$SUBJ_FILENAME || exit 1 # can't continue w/o loaded library
test_cmd 'These tests are running in a github actions workflow environment' \
0 '' \
test "$GITHUB_ACTIONS" == "true"
test_cmd 'Default shell variables are initialized empty/false' \
0 '^falsefalse$' \
echo -n "${prn}${tid}${sha}${tst}${was_pr}${do_intg}"
# Remaining tests all require debugging output to be enabled
A_DEBUG=1
test_cmd 'The debugging function does not throw any errors and redirects to notice-level output' \
0 '::notice' \
dbg_ccir
test_cmd "The \$MONITOR_TASK variable is defined and non-empty" \
0 '^.+' \
echo -n "$MONITOR_TASK"
test_cmd "The \$ACTION_TASK variable is defined and non-empty" \
0 '^.+' \
echo -n "$ACTION_TASK"
MONITOR_TASK=TEST_MONITOR_TASK_NAME
ACTION_TASK=TEST_ACTION_TASK_NAME
TESTTEMPDIR=$(mktemp -p '' -d "tmp_${SUBJ_FILENAME}_XXXXXXXX")
trap "rm -rf $TESTTEMPDIR" EXIT
# usage: write_ccir <id> <build_pullRequest> <build_changeIdInRepo> <action_status> <monitor_status>
write_ccir() {
local id=$1
local pullRequest=$2
local changeIdInRepo=$3
local action_status=$4
local monitor_status=$5
build_section="\"build\": {
\"id\": \"1234567890\",
\"changeIdInRepo\": \"$changeIdInRepo\",
\"branch\": \"pull/$pullRequest\",
\"pullRequest\": $pullRequest,
\"status\": \"COMPLETED\"
}"
cat << EOF > $TESTTEMPDIR/cirrus-ci_retrospective.json
[
{
"id": "$id",
"name": "$MONITOR_TASK",
"status": "$monitor_status",
"automaticReRun": false,
$build_section
},
{
"id": "$id",
"name": "$ACTION_TASK",
"status": "$action_status",
"automaticReRun": false,
$build_section
}
]
EOF
if ((TEST_DEBUG)); then
echo "Wrote JSON:"
cat $TESTTEMPDIR/cirrus-ci_retrospective.json
fi
}
write_ccir 10 12 13 14 15
# usage: write_ccir <id> <build_pullRequest> <build_changeIdInRepo> <action_status> <monitor_status>
for regex in '"id": "10"' $MONITOR_TASK $ACTION_TASK '"branch": "pull/12"' \
'"changeIdInRepo": "13"' '"pullRequest": 12' '"status": "14"' \
'"status": "15"'; do
test_cmd "Verify test JSON can load with test values from $TESTTEMPDIR, and match '$regex'" \
0 "$regex" \
load_ccir "$TESTTEMPDIR"
done
# Remaining tests all require debugging output disabled
A_DEBUG=0
write_ccir 1 2 3 PAUSED COMPLETED
load_ccir "$TESTTEMPDIR"
for var in was_pr do_intg; do
test_cmd "Verify JSON for a pull request sets \$$var=true" \
0 '^true' \
echo ${!var}
done
for stat in COMPLETED ABORTED FAILED YOMAMA SUCCESS SUCCESSFUL FAILURE; do
write_ccir 1 2 3 $stat COMPLETED
load_ccir "$TESTTEMPDIR"
test_cmd "Verify JSON for a pull request sets \$do_intg=false when action status is $stat" \
0 '^false' \
echo $do_intg
write_ccir 1 2 3 PAUSED $stat
load_ccir "$TESTTEMPDIR"
test_cmd "Verify JSON for a pull request sets \$do_intg=true when monitor status is $stat" \
0 '^true' \
echo $do_intg
done
for pr in "true" "false" "null" "0"; do
write_ccir 1 "$pr" 3 PAUSED COMPLETED
load_ccir "$TESTTEMPDIR"
test_cmd "Verify \$do_intg=false and \$was_pr=false when JSON sets pullRequest=$pr" \
0 '^falsefalse' \
echo ${do_intg}${was_pr}
done
# Must be the last command in this file
exit_with_status


@@ -0,0 +1,63 @@
#!/bin/bash
source $(dirname ${BASH_SOURCE[0]})/testlib.sh
# This is necessary when executing from a Github Action workflow so it ignores
# all magic output sugar.
_MAGICTOKEN="TEST${RANDOM}TEST" # must be randomly generated / unguessable
echo "::stop-commands::$_MAGICTOKEN"
trap "echo '::$_MAGICTOKEN::'" EXIT
unset ACTIONS_STEP_DEBUG
unset A_DEBUG
source $TEST_DIR/$SUBJ_FILENAME || exit 1 # can't continue w/o loaded library
test_cmd "No debug message shows when A_DEBUG and ACTIONS_STEP_DEBUG are undefined" \
0 '' \
dbg 'This debug message should not appear'
export A_DEBUG=1
test_cmd "A debug notice message shows when A_DEBUG is true" \
0 '::notice file=.+,line=.+:: This is a debug message' \
dbg "This is a debug message"
unset A_DEBUG
export ACTIONS_STEP_DEBUG="true"
test_cmd "A debug notice message shows when ACTIONS_STEP_DEBUG is true" \
0 '::notice file=.+,line=.+:: This is also a debug message' \
dbg "This is also a debug message"
unset ACTIONS_STEP_DEBUG
unset A_DEBUG
test_cmd "Warning messages contain github-action sugar." \
0 '::warning file=.+,line=.+:: This is a test warning message' \
warn 'This is a test warning message'
test_cmd "Error messages contain github-action sugar." \
0 '::error file=.+,line=.+:: This is a test error message' \
die 'This is a test error message' 0
unset GITHUB_OUTPUT_FUDGED
if [[ -z "$GITHUB_OUTPUT" ]]; then
# Not executing under github-actions
GITHUB_OUTPUT=$(mktemp -p '' tmp_$(basename ${BASH_SOURCE[0]})_XXXX)
GITHUB_OUTPUT_FUDGED=1
fi
test_cmd "The set_out_var function normally produces no output" \
0 '' \
set_out_var TESTING_NAME TESTING VALUE
export A_DEBUG=1
test_cmd "The set_out_var function is debugable" \
0 "::notice file=.+line=.+:: Setting Github.+'DEBUG_TESTING_NAME' to 'DEBUGGING TESTING VALUE'" \
set_out_var DEBUG_TESTING_NAME DEBUGGING TESTING VALUE
unset A_DEBUG
test_cmd "Previous set_out_var function properly sets a step-output value" \
0 'TESTING_NAME=TESTING VALUE' \
cat $GITHUB_OUTPUT
# Must be the last commands in this file
if ((GITHUB_OUTPUT_FUDGED)); then rm -f "$GITHUB_OUTPUT"; fi
exit_with_status
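Both test scripts lean on the `stop-commands` workflow command: everything between `::stop-commands::TOKEN` and `::TOKEN::` is treated by the Actions runner as plain text, so test output full of `::error`-style strings cannot trigger real annotations. The token must be unguessable, or output could emit the resume marker and escape the protection. The mechanics in isolation:

```shell
#!/bin/bash
# Between the stop/resume markers the Actions runner ignores all
# ::workflow:: commands; outside Actions these are just ordinary lines.
token="TEST${RANDOM}TEST"        # unguessable, so output can't fake it
echo "::stop-commands::$token"
echo "::error::inert text, not a real error annotation"
echo "::$token::"                # resume command processing
```

This is why testlib-github_common.sh generates its token from `$RANDOM` rather than using a fixed string.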

github/test/testlib.sh Symbolic link

@@ -0,0 +1 @@
../../common/test/testlib.sh

mac_pw_pool/.gitignore vendored Normal file

@@ -0,0 +1,5 @@
/Cron.log
/utilization.csv
/dh_status.txt*
/pw_status.txt*
/html/utilization.png*

mac_pw_pool/AllocateTestDH.sh Executable file

@@ -0,0 +1,200 @@
#!/bin/bash
# This script is intended for use by humans to allocate a dedicated-host
# and create an instance on it for testing purposes. When executed,
# it will create a temporary clone of the repository with the necessary
# modifications to manipulate the test host. It's the user's responsibility
# to cleanup this directory after manually removing the instance (see below).
#
# **Note**: Due to Apple/Amazon restrictions on the removal of these
# resources, cleanup must be done manually. You will need to shutdown and
# terminate the instance, then wait 24-hours before releasing the
# dedicated-host. The hosts cost money whether or not an instance is running.
#
# The script assumes:
#
# * The current $USER value reflects your actual identity such that
# the test instance may be labeled appropriately for auditing.
# * The `aws` CLI tool is installed on $PATH.
# * Appropriate `~/.aws/credentials` credentials are setup.
# * The us-east-1 region is selected in `~/.aws/config`.
# * The $POOLTOKEN env. var. is set to value available from
# https://cirrus-ci.com/pool/1cf8c7f7d7db0b56aecd89759721d2e710778c523a8c91c7c3aaee5b15b48d05
# * The local ssh-agent is able to supply the appropriate private key (stored in BW).
set -eo pipefail
# shellcheck source-path=SCRIPTDIR
source $(dirname ${BASH_SOURCE[0]})/pw_lib.sh
# Support debugging all mac_pw_pool scripts or only this one
I_DEBUG="${I_DEBUG:-0}"
if ((I_DEBUG)); then
X_DEBUG=1
warn "Debugging enabled."
fi
dbg "\$USER=$USER"
[[ -n "$USER" ]] || \
die "The variable \$USER must not be empty"
[[ -n "$POOLTOKEN" ]] || \
die "The variable \$POOLTOKEN must not be empty"
INST_NAME="${USER}Testing"
LIB_DIRNAME=$(realpath --relative-to=$REPO_DIRPATH $LIB_DIRPATH)
# /tmp is usually a tmpfs, don't let an accidental reboot ruin
# access to a test DH/instance for a developer.
TMP_CLONE_DIRPATH="/var/tmp/${LIB_DIRNAME}_${INST_NAME}"
dbg "\$TMP_CLONE_DIRPATH=$TMP_CLONE_DIRPATH"
if [[ -d "$TMP_CLONE_DIRPATH" ]]; then
die "Found existing '$TMP_CLONE_DIRPATH', assuming in-use/relevant; If not, manual cleanup is required."
fi
msg "Creating temporary clone dir and transferring any uncommitted files."
git clone --no-local --no-hardlinks --depth 1 --single-branch --no-tags --quiet "file://$REPO_DIRPATH" "$TMP_CLONE_DIRPATH"
declare -a uncommited_filepaths
readarray -t uncommited_filepaths <<<$(
pushd "$REPO_DIRPATH" &> /dev/null
# Obtaining uncommitted relative staged filepaths
git diff --name-only HEAD
# Obtaining uncommitted relative unstaged filepaths
git ls-files . --exclude-standard --others
popd &> /dev/null
)
dbg "Copying \$uncommited_filepaths[*]=${uncommited_filepaths[*]}"
for uncommited_file in "${uncommited_filepaths[@]}"; do
uncommited_file_src="$REPO_DIRPATH/$uncommited_file"
uncommited_file_dest="$TMP_CLONE_DIRPATH/$uncommited_file"
uncommited_file_dest_parent=$(dirname "$uncommited_file_dest")
#dbg "Working on uncommited file '$uncommited_file_src'"
if [[ -r "$uncommited_file_src" ]]; then
mkdir -p "$uncommited_file_dest_parent"
#dbg "$uncommited_file_src -> $uncommited_file_dest"
cp -a "$uncommited_file_src" "$uncommited_file_dest"
fi
done
declare -a modargs
# Format: <pw_lib.sh var name> <new value> <old value>
modargs=(
# Necessary to prevent in-production macs from trying to use testing instance
"DH_REQ_VAL $INST_NAME $DH_REQ_VAL"
# Necessary to make test dedicated host stand out when auditing the set in the console
"DH_PFX $INST_NAME $DH_PFX"
# The default launch template name includes $DH_PFX, ensure the production template name is used.
# N/B: The old/unmodified pw_lib.sh is still loaded for the running script
"TEMPLATE_NAME $TEMPLATE_NAME Cirrus${DH_PFX}PWinstance"
# Permit developer to use instance for up to 3 days max (orphan vm cleaning process will nail it after that).
"PW_MAX_HOURS 72 $PW_MAX_HOURS"
# Permit developer to execute as many Cirrus-CI tasks as they want w/o automatic shutdown.
"PW_MAX_TASKS 9999 $PW_MAX_TASKS"
)
for modarg in "${modargs[@]}"; do
set -- $modarg # Convert the "tuple" into the param args $1 $2...
dbg "Modifying pw_lib.sh \$$1 definition to '$2' (was '$3')"
sed -i -r -e "s/^$1=.*/$1=\"$2\"/" "$TMP_CLONE_DIRPATH/$LIB_DIRNAME/pw_lib.sh"
# Ensure future script invocations use the new values
unset $1
done
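The `set -- $modarg` line is a cheap tuple-unpacking idiom: the unquoted expansion word-splits, and `set --` loads the resulting words into `$1`, `$2`, `$3`. It only works because each field is known to be whitespace-free. In isolation:

```shell
#!/bin/bash
# Word-split a space-separated "tuple" into positional parameters.
# Safe only when no field can contain whitespace or glob characters.
tuple="PW_MAX_HOURS 72 24"
set -- $tuple
line="name=$1 new=$2 old=$3"
echo "$line"   # name=PW_MAX_HOURS new=72 old=24
```

The same constraint is why the modargs entries above store only single-word values.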
cd "$TMP_CLONE_DIRPATH/$LIB_DIRNAME"
source ./pw_lib.sh
# Before going any further, make sure there isn't an existing
# dedicated-host named ${INST_NAME}-0. If there is, it can
# be re-used instead of failing the script outright.
existing_dh_json=$(mktemp -p "." dh_allocate_XXXXX.json)
$AWS ec2 describe-hosts --filter "Name=tag:Name,Values=${INST_NAME}-0" --query 'Hosts[].HostId' > "$existing_dh_json"
if grep -Fqx '[]' "$existing_dh_json"; then
msg "Creating the dedicated host '${INST_NAME}-0'"
declare dh_allocate_json
dh_allocate_json=$(mktemp -p "." dh_allocate_XXXXX.json)
declare -a awsargs
# Word-splitting of $AWS is desirable
# shellcheck disable=SC2206
awsargs=(
$AWS
ec2 allocate-hosts
--availability-zone us-east-1a
--instance-type mac2.metal
--auto-placement off
--host-recovery off
--host-maintenance off
--quantity 1
--tag-specifications
"ResourceType=dedicated-host,Tags=[{Key=Name,Value=${INST_NAME}-0},{Key=$DH_REQ_TAG,Value=$DH_REQ_VAL},{Key=PWPoolReady,Value=true},{Key=automation,Value=false}]"
)
# N/B: Apple/Amazon require min allocation time of 24hours!
dbg "Executing: ${awsargs[*]}"
"${awsargs[@]}" > "$dh_allocate_json" || \
die "Provisioning new dedicated host $INST_NAME failed. Manual debugging & cleanup required."
dbg $(jq . "$dh_allocate_json")
dhid=$(jq -r -e '.HostIds[0]' "$dh_allocate_json")
[[ -n "$dhid" ]] || \
die "Failed to obtain DH ID of new host. Manual debugging & cleanup required."
# There's a small delay between allocating the dedicated host and LaunchInstances.sh
# being able to interact with it. There's no sensible way to monitor for this state :(
sleep 3s
else # A dedicated host already exists
dhid=$(jq -r -e '.[0]' "$existing_dh_json")
fi
# Normally allocation is fairly instant, but not always. Confirm we're able to actually
# launch a mac instance onto the dedicated host.
for ((attempt=1 ; attempt < 11 ; attempt++)); do
msg "Attempt #$attempt launching a new instance on dedicated host"
./LaunchInstances.sh --force
if grep -E "^${INST_NAME}-0 i-" dh_status.txt; then
attempt=-1 # signal success
break
fi
sleep 1s
done
[[ "$attempt" -eq -1 ]] || \
die "Failed to use LaunchInstances.sh. Manual debugging & cleanup required."
# At this point the script could call SetupInstances.sh in another loop
# but it takes about 20-minutes to complete. Also, the developer may
# not need it, they may simply want to ssh into the instance to poke
# around. i.e. they don't need to run any Cirrus-CI jobs on the test
# instance.
warn "---"
warn "NOT copying/running setup.sh to new instance (in case manual activities are desired)."
warn "---"
w="PLEASE REMEMBER TO terminate instance, wait two hours, then
remove the dedicated-host in the web console, or run
'aws ec2 release-hosts --host-ids=$dhid'."
msg "---"
msg "Dropping you into a shell inside a temp. repo clone:
($TMP_CLONE_DIRPATH/$LIB_DIRNAME)"
msg "---"
msg "Once it finishes booting (5m), you may use './InstanceSSH.sh ${INST_NAME}-0'
to access it. Otherwise to fully setup the instance for Cirrus-CI, you need
to execute './SetupInstances.sh' repeatedly until the ${INST_NAME}-0 line in
'pw_status.txt' includes the text 'complete alive'. That process can take 20+
minutes. Once alive, you may then use Cirrus-CI to test against this specific
instance with any 'persistent_worker' task having a label of
'$DH_REQ_TAG=$DH_REQ_VAL' set."
msg "---"
warn "$w"
export POOLTOKEN # ensure availability in sub-shell
bash -l
warn "$w"

mac_pw_pool/Cron.sh Executable file

@@ -0,0 +1,70 @@
#!/bin/bash
# Intended to be run from $HOME/devel/automation/mac_pw_pool/
# using a crontab like:
# # Every date/timestamp in PW Pool management is UTC-relative
# # make cron do the same for consistency.
# CRON_TZ=UTC
#
# PATH=/home/shared/.local/bin:/home/shared/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
#
# # Keep log from filling up disk & make sure webserver is running
# # (5am UTC is during CI-activity lull)
# 59 4 * * * $HOME/devel/automation/mac_pw_pool/nightly_maintenance.sh &>> $CRONLOG
#
# # PW Pool management (usage drop-off from 03:00-15:00 UTC)
# POOLTOKEN=<from https://cirrus-ci.com/pool/1cf8c7f7d7db0b56aecd89759721d2e710778c523a8c91c7c3aaee5b15b48d05>
# CRONLOG=/home/shared/devel/automation/mac_pw_pool/Cron.log
# */5 * * * * /home/shared/devel/automation/mac_pw_pool/Cron.sh &>> $CRONLOG
# shellcheck disable=SC2154
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -e -w 300 "$0" "$0" "$@" || :
# shellcheck source=./pw_lib.sh
source $(dirname "${BASH_SOURCE[0]}")/pw_lib.sh
cd $SCRIPT_DIRPATH || die "Cannot enter '$SCRIPT_DIRPATH'"
# SSH agent required to provide key for accessing workers
# Started with `ssh-agent -s > /run/user/$UID/ssh-agent.env`
# followed by adding/unlocking the necessary keys.
# shellcheck disable=SC1090
source /run/user/$UID/ssh-agent.env
date -u -Iminutes
now_minutes=$(date -u +%M)
if (($now_minutes%10==0)); then
$SCRIPT_DIRPATH/LaunchInstances.sh
echo "Exit: $?"
fi
$SCRIPT_DIRPATH/SetupInstances.sh
echo "Exit: $?"
[[ -r "$PWSTATE" ]] || \
die "Can't read $PWSTATE to generate utilization data."
uzn_file="$SCRIPT_DIRPATH/utilization.csv"
# Run input through `date` to validate values are usable timestamps
timestamp=$(date -u -Iseconds -d \
$(grep -E '^# SetupInstances\.sh run ' "$PWSTATE" | \
awk '{print $4}'))
pw_state=$(grep -E -v '^($|#+| +)' "$PWSTATE")
n_workers=$(grep 'complete alive' <<<"$pw_state" | wc -l)
n_tasks=$(awk "BEGIN{B=0} /${DH_PFX}-[0-9]+ complete alive/{B+=\$4} END{print B}" <<<"$pw_state")
n_taskf=$(awk "BEGIN{E=0} /${DH_PFX}-[0-9]+ complete alive/{E+=\$5} END{print E}" <<<"$pw_state")
printf "%s,%i,%i,%i\n" "$timestamp" "$n_workers" "$n_tasks" "$n_taskf" | tee -a "$uzn_file"
# Prevent uncontrolled growth of utilization.csv. Assume this script
# runs every $interval minutes, keep only $history_hours worth of data.
interval_minutes=5
history_hours=36
lines_per_hour=$((60/$interval_minutes))
max_uzn_lines=$(($history_hours * $lines_per_hour))
tail -n $max_uzn_lines "$uzn_file" > "${uzn_file}.tmp"
mv "${uzn_file}.tmp" "$uzn_file"
# If possible, generate the webpage utilization graph
gnuplot -c Utilization.gnuplot || true
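The FLOCKER line near the top of Cron.sh is the locking idiom from the flock(1) man page: when the guard variable does not match, the script re-execs itself under an exclusive lock on its own file, so overlapping cron fires wait (up to 300 seconds here) instead of running concurrently. Demonstrated with a throwaway script (assumes util-linux flock is available):

```shell
#!/bin/bash
# Write the self-locking idiom to a temp script and run it once.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
# Not holding the lock yet: re-exec this same file under flock -e.
# A second concurrent invocation blocks on the lock for up to 300s.
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -e -w 300 "$0" "$0" "$@" || :
echo "running with the lock held"
EOF
chmod +x "$tmp"
out=$("$tmp")
echo "$out"
rm -f "$tmp"
```

Locking the script file itself avoids a separate lock file, and the `|| :` keeps the guard line from failing the script under `set -e`.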

mac_pw_pool/InstanceSSH.sh Executable file

@@ -0,0 +1,39 @@
#!/bin/bash
set -eo pipefail
# Helper for humans to access an existing instance. It depends on:
#
# * You know the instance-id or name.
# * All requirements listed in the top `LaunchInstances.sh` comment.
# * The local ssh-agent is able to supply the appropriate private key.
# shellcheck source-path=SCRIPTDIR
source $(dirname ${BASH_SOURCE[0]})/pw_lib.sh
SSH="ssh $SSH_ARGS" # N/B: library default nulls stdin
if nc -z localhost 5900; then
# Enable access to VNC if it's running
# ref: https://repost.aws/knowledge-center/ec2-mac-instance-gui-access
SSH+=" -L 5900:localhost:5900"
fi
[[ -n "$1" ]] || \
die "Must provide EC2 instance ID as first argument"
case "$1" in
i-*)
inst_json=$($AWS ec2 describe-instances --instance-ids "$1") ;;
*)
inst_json=$($AWS ec2 describe-instances --filter "Name=tag:Name,Values=$1") ;;
esac
pub_dns=$(jq -r -e '.Reservations?[0]?.Instances?[0]?.PublicDnsName?' <<<"$inst_json")
if [[ -z "$pub_dns" ]] || [[ "$pub_dns" == "null" ]]; then
die "Instance '$1' does not exist, or does not have a public DNS address allocated (yet)."
fi
shift
echo "+ $SSH ec2-user@$pub_dns $*" >> /dev/stderr
exec $SSH ec2-user@$pub_dns "$@"

mac_pw_pool/LaunchInstances.sh Executable file

@@ -0,0 +1,310 @@
#!/bin/bash
set -eo pipefail
# Script intended to be executed by humans (and eventually automation) to
# ensure instances are launched from the current template version, on all
# available Cirrus-CI Persistent Worker M1 Mac dedicated hosts. These
dedicated host (slots) are selected at runtime based on their possessing a
# 'true' value for their `PWPoolReady` tag. The script assumes:
#
# * The `aws` CLI tool is installed on $PATH.
# * Appropriate `~/.aws/credentials` credentials are setup.
# * The us-east-1 region is selected in `~/.aws/config`.
#
# N/B: Dedicated Host names and instance names are assumed to be identical,
# only the IDs differ.
# shellcheck source-path=SCRIPTDIR
source $(dirname ${BASH_SOURCE[0]})/pw_lib.sh
L_DEBUG="${L_DEBUG:-0}"
if ((L_DEBUG)); then
X_DEBUG=1
warn "Debugging enabled - temp. dir will not be cleaned up '$TEMPDIR' $(ctx 0)."
trap EXIT
fi
# Helper intended for use inside `name_hostid` loop.
# arg1 either "INST" or "HOST"
# arg2: Brief failure message
# arg3: Failure message details
handle_failure() {
[[ -n "$inststate" ]] || die "Expecting \$inststate to be set $(ctx 2)"
[[ -n "$name" ]] || die "Expecting \$name to be set $(ctx 2)"
if [[ "$1" != "INST" ]] && [[ "$1" != "HOST" ]]; then
die "Expecting either INST or HOST as argument $(ctx 2)"
fi
[[ -n "$2" ]] || die "Expecting brief failure message $(ctx 2)"
[[ -n "$3" ]] || die "Expecting detailed failure message $(ctx 2)"
warn "$2 $(ctx 2)"
(
# Script is sensitive to this first-line format
echo "# $name $1 ERROR: $2"
# Make it obvious which host/instance the details pertain to
awk -e '{print "# "$0}'<<<"$3"
) > "$inststate"
}
# Wrapper around handle_failure()
host_failure() {
[[ -r "$hostoutput" ]] || die "Expecting readable $hostoutput file $(ctx)"
handle_failure HOST "$1" "aws CLI output: $(<$hostoutput)"
}
inst_failure() {
[[ -r "$instoutput" ]] || die "Expecting readable $instoutput file $(ctx)"
handle_failure INST "$1" "aws CLI output: $(<$instoutput)"
}
# Find dedicated hosts to operate on.
dh_name_flt="Name=tag:Name,Values=${DH_PFX}-*"
dh_tag_flt="Name=tag:$DH_REQ_TAG,Values=$DH_REQ_VAL"
dh_qry='Hosts[].{HostID:HostId, Name:[Tags[?Key==`Name`].Value][] | [0]}'
dh_searchout="$TEMPDIR/hosts.output" # JSON or error message
if ! $AWS ec2 describe-hosts --filter "$dh_name_flt" "$dh_tag_flt" --query "$dh_qry" &> "$dh_searchout"; then
die "Searching for dedicated hosts $(ctx 0):
$(<$dh_searchout)"
fi
# Array item format: "<Name> <ID>"
dh_fmt='.[] | .Name +" "+ .HostID'
# Avoid always processing hosts in the same alpha-sorted order, as that would
# mean hosts at the end of the list consistently wait the longest for new
# instances to be created (see creation-stagger code below).
if ! readarray -t NAME2HOSTID <<<$(json_query "$dh_fmt" "$dh_searchout" | sort --random-sort); then
die "Extracting dedicated host 'Name' and 'HostID' fields $(ctx 0):
$(<$dh_searchout)"
fi
n_dh=0
n_dh_total=${#NAME2HOSTID[@]}
if [[ -z "${NAME2HOSTID[*]}" ]] || ! ((n_dh_total)); then
msg "No dedicated hosts found"
exit 0
fi
latest_launched="1970-01-01T00:00+00:00" # in case $DHSTATE is missing
dcmpfmt="+%Y%m%d%H%M" # date comparison format compatible with numeric 'test'
# To find the latest instance launch time, script can't rely on reading
# $DHSTATE or $PWSTATE because they may not exist or be out of date.
# Search for all running instances by name and running state, returning
# their launch timestamps.
declare -a pw_filt
pw_filts=(
"Name=tag:Name,Values=${DH_PFX}-*"
'Name=tag:PWPoolReady,Values=true'
"Name=tag:$DH_REQ_TAG,Values=$DH_REQ_VAL"
'Name=instance-state-name,Values=running'
)
pw_query='Reservations[].Instances[].LaunchTime'
inst_lt_f=$TEMPDIR/inst_launch_times
dbg "Obtaining launch times for all running ${DH_PFX}-* instances"
dbg "$AWS ec2 describe-instances --filters '${pw_filts[*]}' --query '$pw_query' &> '$inst_lt_f'"
if ! $AWS ec2 describe-instances --filters "${pw_filts[@]}" --query "$pw_query" &> "$inst_lt_f"; then
die "Can not query instances:
$(<$inst_lt_f)"
else
declare -a launchtimes
if ! readarray -t launchtimes<<<$(json_query '.[]?' "$inst_lt_f") ||
[[ "${#launchtimes[@]}" -eq 0 ]] ||
[[ "${launchtimes[0]}" == "" ]]; then
warn "Found no running instances, this should not happen."
else
dbg "launchtimes=[${launchtimes[*]}]"
for launch_time in "${launchtimes[@]}"; do
if [[ "$launch_time" == "" ]] || [[ "$launch_time" == "null" ]]; then
warn "Ignoring empty/null instance launch time."
continue
fi
# Assume launch_time is never malformed
launched_hour=$(date -u -d "$launch_time" "$dcmpfmt")
latest_launched_hour=$(date -u -d "$latest_launched" "$dcmpfmt")
dbg "instance launched on $launched_hour; latest launch hour: $latest_launched_hour"
if [[ $launched_hour -gt $latest_launched_hour ]]; then
dbg "Updating latest launched timestamp"
latest_launched="$launch_time"
fi
done
fi
fi
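The hour-granularity comparison above works because `$dcmpfmt` renders timestamps as purely numeric strings, so plain integer `test` ordering matches chronological ordering. A standalone sketch with made-up timestamps:

```shell
#!/bin/bash
# Sketch: compare ISO-8601 launch times by rendering them with a
# numeric date format, as the launch-time loop above does with $dcmpfmt.
set -euo pipefail
dcmpfmt="+%Y%m%d%H%M"
a="2024-01-02T03:04:05+00:00"
b="2024-01-02T09:00:00+00:00"
a_hour=$(date -u -d "$a" "$dcmpfmt")   # 202401020304
b_hour=$(date -u -d "$b" "$dcmpfmt")   # 202401020900
if [[ $b_hour -gt $a_hour ]]; then
    echo "b launched later"
fi
```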
# Increase readability for humans by always ensuring the two important
# date stamps line up regardless of the length of $n_dh_total.
_n_dh_sp=$(printf ' %.0s' $(seq 1 ${#n_dh_total}))
msg "Operating on $n_dh_total dedicated hosts at $(date -u -Iseconds)"
msg " ${_n_dh_sp}Last instance launch on $latest_launched"
echo -e "# $(basename ${BASH_SOURCE[0]}) run $(date -u -Iseconds)\n#" > "$TEMPDIR/$(basename $DHSTATE)"
# When initializing a new pool of workers, it would take many hours
# to wait for the staggered creation mechanism on each host. This
# would negatively impact worker utilization. Provide a workaround.
force=0
# shellcheck disable=SC2199
if [[ "$@" =~ --force ]]; then
warn "Forcing instance creation: Ignoring staggered creation limits."
force=1
fi
for name_hostid in "${NAME2HOSTID[@]}"; do
n_dh=$(($n_dh+1))
_I=" "
msg " " # make output easier to read
read -r name hostid junk<<<"$name_hostid"
msg "Working on Dedicated Host #$n_dh/$n_dh_total '$name' for HostID '$hostid'."
hostoutput="$TEMPDIR/${name}_host.output" # JSON or error message from aws describe-hosts
instoutput="$TEMPDIR/${name}_inst.output" # JSON or error message from aws describe-instance or run-instance
inststate="$TEMPDIR/${name}_inst.state" # Line to append to $DHSTATE
if ! $AWS ec2 describe-hosts --host-ids $hostid &> "$hostoutput"; then
host_failure "Failed to look up dedicated host."
continue
# Allow hosts to be taken out of service easily/manually by editing its tags.
# Also detect any JSON parsing problems in the output.
elif ! PWPoolReady=$(json_query '.Hosts?[0]?.Tags? | map(select(.Key == "PWPoolReady")) | .[].Value' "$hostoutput"); then
host_failure "Empty/null/failed JSON query of PWPoolReady tag."
continue
elif [[ "$PWPoolReady" != "true" ]]; then
msg "Dedicated host tag 'PWPoolReady' == '$PWPoolReady' != 'true'."
echo "# $name HOST DISABLED: PWPoolReady==$PWPoolReady" > "$inststate"
continue
fi
if ! hoststate=$(json_query '.Hosts?[0]?.State?' "$hostoutput"); then
host_failure "Empty/null/failed JSON query of dedicated host state."
continue
fi
if [[ "$hoststate" == "pending" ]] || \
[[ "$hoststate" == "under-assessment" ]] || \
[[ "$hoststate" == "released" ]]
then
# When an instance is terminated, its dedicated host goes into an unusable state
# for about 1-1/2 hours. There's absolutely nothing that can be done to avoid
# this or work around it. Ignore hosts in this state, assuming a later run of the
# script will start an instance on the (hopefully) available host.
#
# I have no idea what 'under-assessment' means, and it doesn't last as long as 'pending',
# but functionally it behaves the same.
#
# Hosts in 'released' state are about to go away, hopefully due to operator action.
# Don't treat this as an error.
msg "Dedicated host is untouchable due to '$hoststate' state."
# Reference the actual output text, in case of false-match or unexpected contents.
echo "# $name HOST BUSY: $hoststate" > "$inststate"
continue
elif [[ "$hoststate" != "available" ]]; then
# The "available" state means the host is ready for zero or more instances to be created.
# Detect all other states (they should be extremely rare).
host_failure "Unsupported dedicated host state '$hoststate'."
continue
fi
# Counter-intuitively, dedicated hosts can support more than one running instance,
# except for Mac instances; that exception is not reflected anywhere in the JSON.
# Trying to start a new Mac instance on an already occupied host is bound to fail.
# Inconveniently, this error will look an awful lot like many other types of errors,
# confusing any human examining $DHSTATE. Detect dedicated-hosts with existing instances.
InstanceId=$(set +e; jq -r '.Hosts?[0]?.Instances?[0].InstanceId?' "$hostoutput")
dbg "InstanceId='$InstanceId'"
# Stagger creation of instances by $CREATE_STAGGER_HOURS
launch_new=0
if [[ "$InstanceId" == "null" ]] || [[ "$InstanceId" == "" ]]; then
launch_threshold=$(date -u -Iseconds -d "$latest_launched + $CREATE_STAGGER_HOURS hours")
launch_threshold_hour=$(date -u -d "$launch_threshold" "$dcmpfmt")
now_hour=$(date -u "$dcmpfmt")
dbg "launch_threshold_hour=$launch_threshold_hour"
dbg " now_hour=$now_hour"
if [[ "$force" -eq 0 ]] && [[ $now_hour -lt $launch_threshold_hour ]]; then
msg "Cannot launch new instance until $launch_threshold"
echo "# $name HOST THROTTLE: Inst. creation delayed until $launch_threshold" > "$inststate"
continue
else
launch_new=1
fi
fi
if ((launch_new)); then
msg "Creating new $name instance on $name host."
if ! $AWS ec2 run-instances \
--launch-template LaunchTemplateName=${TEMPLATE_NAME} \
--tag-specifications \
"ResourceType=instance,Tags=[{Key=Name,Value=$name},{Key=$DH_REQ_TAG,Value=$DH_REQ_VAL},{Key=PWPoolReady,Value=true},{Key=automation,Value=true}]" \
--placement "HostId=$hostid" &> "$instoutput"; then
inst_failure "Failed to create new instance on available host."
continue
else
# Block further launches (assumes script is running in a 10m while loop).
latest_launched=$(date -u -Iseconds)
msg "Successfully created new instance; Waiting for 'running' state (~1m typical)..."
# N/B: New Mac instances take ~5-10m to actually become ssh-able
if ! InstanceId=$(json_query '.Instances?[0]?.InstanceId' "$instoutput"); then
inst_failure "Empty/null/failed JSON query of brand-new InstanceId"
continue
fi
# Instance "running" status is good enough for this script, since network
# accessibility can take 5-20m post-creation anyway.
# Polls 40 times with 15-second delay (non-configurable).
if ! $AWS ec2 wait instance-running \
--instance-ids $InstanceId &> "${instoutput}.wait"; then
# inst_failure() would include unhelpful $instoutput detail
(
echo "# $name INST ERROR: Running-state timeout."
awk -e '{print "# "$0}' "${instoutput}.wait"
) > "$inststate"
continue
fi
fi
fi
# If an instance was created, $instoutput contents are already obsolete.
# If an existing instance, $instoutput doesn't exist.
if ! $AWS ec2 describe-instances --instance-ids $InstanceId &> "$instoutput"; then
inst_failure "Failed to describe host instance."
continue
fi
# Describe-instance has unnecessarily complex structure, simplify it.
if ! json_query '.Reservations?[0]?.Instances?[0]?' "$instoutput" > "${instoutput}.simple"; then
inst_failure "Empty/null/failed JSON simplification of describe-instances."
fi
mv "$instoutput" "${instoutput}.describe" # leave for debugging
mv "${instoutput}.simple" "${instoutput}"
msg "Parsing new or existing instance ($InstanceId) details."
if ! InstanceId=$(json_query '.InstanceId' $instoutput); then
inst_failure "Empty/null/failed JSON query of InstanceId"
continue
elif ! InstName=$(json_query '.Tags | map(select(.Key == "Name")) | .[].Value' $instoutput) || \
[[ "$InstName" != "$name" ]]; then
inst_failure "Inst. name '$InstName' != DH name '$name'"
elif ! LaunchTime=$(json_query '.LaunchTime' $instoutput); then
inst_failure "Empty/null/failed JSON query of LaunchTime"
continue
fi
echo "$name $InstanceId $LaunchTime" > "$inststate"
done
_I=""
msg " "
msg "Processing all dedicated host and instance states."
# Consuming state file in alpha-order is easier on human eyes
readarray -t NAME2HOSTID <<<$(json_query "$dh_fmt" "$dh_searchout" | sort)
for name_hostid in "${NAME2HOSTID[@]}"; do
read -r name hostid<<<"$name_hostid"
inststate="$TEMPDIR/${name}_inst.state"
[[ -r "$inststate" ]] || \
die "Expecting to find instance-state file $inststate for host '$name' $(ctx 0)."
cat "$inststate" >> "$TEMPDIR/$(basename $DHSTATE)"
done
dbg "Creating/updating state file"
if [[ -r "$DHSTATE" ]]; then
cp "$DHSTATE" "${DHSTATE}~"
fi
mv "$TEMPDIR/$(basename $DHSTATE)" "$DHSTATE"

mac_pw_pool/README.md Normal file

@ -0,0 +1,138 @@
# Cirrus-CI persistent worker maintenance
These scripts are intended to be used from a repository clone,
by cron, on an always-on cloud machine. They make a lot of
other assumptions, some of which may not be well documented.
Please see the comments at the top of each script for more
detailed/specific information.
## Prerequisites
* The `aws` binary present somewhere on `$PATH`.
* Standard AWS `credentials` and `config` files exist under `~/.aws`
and set the region to `us-east-1`.
* A copy of the ssh-key referenced by `CirrusMacM1PWinstance` launch template
under "Assumptions" below.
* The ssh-key has been added to a running ssh-agent.
* The running ssh-agent's sh-compatible env. vars. are stored in
`/run/user/$UID/ssh-agent.env`
* The env. var. `POOLTOKEN` is set to the Cirrus-CI persistent worker pool
token value.
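The agent-related prerequisites above amount to a one-time setup on the management host. A minimal sketch (the key file name is an assumption; substitute your own):

```shell
# One-time setup sketch for the ssh-agent prerequisites (key path assumed):
ssh-agent -s > "/run/user/$UID/ssh-agent.env"    # store sh-compatible env. vars.
source "/run/user/$UID/ssh-agent.env"
ssh-add ~/.ssh/cirrus_mac_pw_key                 # hypothetical key file name
export POOLTOKEN='<cirrus persistent worker pool token>'
```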
## Assumptions
* You've read all scripts in this directory, generally follow
their purpose, and meet any requirements stated within the
header comment.
* You've read the [private documentation](https://docs.google.com/document/d/1PX6UyqDDq8S72Ko9qe_K3zoV2XZNRQjGxPiWEkFmQQ4/edit)
and understand the safety/security section.
* You have permissions to access all referenced AWS resources.
* There are one or more dedicated hosts allocated and have set:
* A name tag like `MacM1-<some number>` (NO SPACES!)
* The `mac2` instance family
* The `mac2.metal` instance type
* Disabled "Instance auto-placement", "Host recovery", and "Host maintenance"
* Quantity: 1
* Tags: `automation=false`, `purpose=prod`, and `PWPoolReady=true`
* The EC2 `CirrusMacM1PWinstance` instance-template exists and sets:
* Shutdown-behavior: terminate
* Same "key pair" referenced under `Prerequisites`
* All other required instance parameters complete
* A user-data script that shuts down the instance after 2 days.
## Operation (Theory)
The goal is to maintain sufficient alive/running/working instances
to service most Cirrus-CI tasks pointing at the pool. This is
best achieved with slower maintenance of hosts compared to setup
of ready instances. This is because hosts can be inaccessible for
up to 2 hours, but instances come up in ~10-20m, ready to run tasks.
Either hosts and/or instances may be removed from management by
setting "false" or removing their `PWPoolReady=true` tag. Otherwise,
the pool should be maintained by installing the crontab lines
indicated in the `Cron.sh` script.
Cirrus-CI will assign tasks (specially targeted at the pool) to an
instance with a running listener (`cirrus worker run` process). If
there are none, the task will queue forever (there might be a 24-hour
timeout, I can't remember). From a PR perspective, there is little
control over which instance you get. It could easily be one where
a previous task barfed all over and rendered it unusable.
## Initialization
It is assumed that neither the `Cron.sh` nor any related maintenance
scripts are installed (in crontab) or currently running.
Once several dedicated hosts have been manually created, they
should initially have no instances on them. If left alone, the
maintenance scripts will eventually bring them all up; however,
complete creation and setup will take many hours. This may be
bypassed by *manually* running `LaunchInstances.sh --force`.
In order to prevent all the instances from being recycled at the same
(future) time, the shutdown time installed by `SetupInstances.sh` also
needs to be adjusted. The operator should first wait about 20 minutes
for all new instances to fully boot, then run
`SetupInstances.sh --force`.
Now the `Cron.sh` cron-job may be installed, enabled and started.
## Manual Testing
Verifying changes to these scripts / cron-job must be done manually.
To support this, every dedicated host and instance has a `purpose`
tag, which must correspond to the value indicated in `pw_lib.sh`
and in the target repo `.cirrus.yml`. To test script and/or
CI changes:
1. Make sure you have locally met all requirements spelled out in the
header-comment of `AllocateTestDH.sh`.
1. Execute `AllocateTestDH.sh`. It will operate out of a temporary
clone of the repository to prevent pushing required test-modifications
upstream.
1. Repeatedly execute `SetupInstances.sh`. It will update `pw_status.txt`
with any warnings/errors. When successful, lines will include
the host name, "complete", and "alive" status strings.
1. If instance debugging is needed, the `InstanceSSH.sh` script may be
used. Simply pass the name of the host you want to access. Every
instance should have a `setup.log` file in the `ec2-user` homedir. There
should also be `/private/tmp/<name>-worker.log` with entries from the
pool listener process.
1. To test CI changes against the test instance(s), push a PR that includes
`.cirrus.yml` changes to the task's `persistent_worker` dictionary's
`purpose` attribute. Set the value the same as the tag in step 1.
1. When you're done with all testing, terminate the instance. Then wait
a full 24-hours before "releasing" the dedicated host. Both operations
can be performed using the AWS EC2 WebUI. Please remember to do the
release step, as the $-clock continues to run while it's allocated.
Note: Instances are set to auto-terminate on shutdown. They should
self shutdown after 24-hours automatically. After termination for
any cause, there's about a 2-hour waiting period before a new instance
can be allocated. The `LaunchInstances.sh` script is able to deal with this
properly.
## Script Debugging Hints
* On each MacOS instance:
* The pool listener process (running as the worker user) keeps a log under `/private/tmp`. The
file includes the registered name of the worker. For example, on MacM1-7 you would find `/private/tmp/MacM1-7-worker.log`.
This log shows tasks taken on, completed, and any errors reported back from Cirrus-CI internals.
* In the ec2-user's home directory is a `setup.log` file. This stores the output from executing
`setup.sh`. It also contains any warnings/errors from the (very important) `service_pool.sh` script - which should
_always_ be running in the background.
* There are several drop-files in the `ec2-user` home directory which are checked by `SetupInstances.sh`
to record state. If removed, along with `setup.log`, the script will re-execute (a possibly newer version of) `setup.sh`.
* On the management host:
* Automated operations are setup and run by `Cron.sh`, and logged to `Cron.log`. When running scripts manually, `Cron.sh`
can serve as a template for the intended order of operations.
* Critical operations are protected by a mandatory, exclusive file lock on `mac_pw_pool/Cron.sh`. Should
there be a deadlock, management of the pool (by `Cron.sh`) will stop. However the effects of this will not be observed
until workers begin hitting their lifetime and/or task limits.
* Without intervention, the `nightly_maintenance.sh` script will update the containers/automation repo clone on the
management VM. This happens if the repo becomes out of sync by more than 7 days (or as defined in the script).
When the repo is updated, the `pw_pool_web` container will be restarted. The container will also be restarted if it's
found not to be running.

mac_pw_pool/SetupInstances.sh Executable file

@ -0,0 +1,463 @@
#!/bin/bash
set -eo pipefail
# Script intended to be executed by humans (and eventually automation)
# to provision any/all accessible Cirrus-CI Persistent Worker instances
# as they become available. This is intended to operate independently
# from `LaunchInstances.sh` so as to "hide" the nearly 2 hours of cumulative
# startup and termination wait times. This script depends on:
#
# * All requirements listed in the top `LaunchInstances.sh` comment.
# * The $DHSTATE file created/updated by `LaunchInstances.sh`.
# * The $POOLTOKEN env. var. is defined
# * The local ssh-agent is able to supply the appropriate private key.
# shellcheck source-path=SCRIPTDIR
source $(dirname ${BASH_SOURCE[0]})/pw_lib.sh
# Update temporary-dir status file for instance $name
# status type $1 and value $2. Where status type is
# 'setup', 'listener', 'tasks', 'taskf' or 'comment'.
set_pw_status() {
[[ -n "$name" ]] || \
die "Expecting \$name to be set"
case $1 in
setup) ;;
listener) ;;
tasks) ;; # started
taskf) ;; # finished
ftasks) ;;
comment) ;;
*) die "Status type must be 'setup', 'listener', 'tasks', 'taskf' or 'comment'"
esac
if [[ "$1" != "comment" ]] && [[ -z "$2" ]]; then
die "Expecting comment text (status argument) to be non-empty."
fi
echo -n "$2" > $TEMPDIR/${name}.$1
}
# Wrapper around msg() and warn() which also set_pw_status() comment.
pwst_msg() { set_pw_status comment "$1"; msg "$1"; }
pwst_warn() { set_pw_status comment "$1"; warn "$1"; }
# Attempt to signal $SPOOL_SCRIPT to stop picking up new CI tasks but
# support PWPoolReady being reset to 'true' in the future to signal
# a new $SETUP_SCRIPT run. Cancel future $SHDWN_SCRIPT action.
# Requires both $pub_dns and $name are set
stop_listener(){
dbg "Attempting to stop pool listener and reset setup state"
$SSH ec2-user@$pub_dns rm -f \
"/private/tmp/${name}_cfg_*" \
"./.setup.done" \
"./.setup.started" \
"/var/tmp/shutdown.sh"
}
# Forcibly shutdown an instance immediately, printing warning and status
# comment from first argument. Requires $name, $instance_id, and $pub_dns
# to be set.
force_term(){
local varname
local termoutput
termoutput="$TEMPDIR/${name}_term.output"
local term_msg
term_msg="${1:-no inst_panic() message provided} Terminating immediately! $(ctx)"
for varname in name instance_id pub_dns; do
[[ -n "${!varname}" ]] || \
die "Expecting \$$varname to be set/non-empty."
done
# $SSH has built-in -n; ignore failure, inst may be in broken state already
echo "$term_msg" | ssh $SSH_ARGS ec2-user@$pub_dns sudo wall || true
# Set status and print warning message
pwst_warn "$term_msg"
# Instance is going to be terminated, immediately stop any attempts to
# restart listening for jobs. Ignore failure if unreachable for any reason -
# we/something else could have already started termination previously
stop_listener || true
# Termination can take a few minutes, block further use of instance immediately.
$AWS ec2 create-tags --resources $instance_id --tags "Key=PWPoolReady,Value=false" || true
# Prefer possibly recovering a broken pool over debug-ability.
if ! $AWS ec2 terminate-instances --instance-ids $instance_id &> "$termoutput"; then
# Possible if the instance recently/previously started termination process.
warn "Could not terminate instance $instance_id $(ctx 0):
$(<$termoutput)"
fi
}
# Set non-zero to enable debugging / prevent removal of temp. dir.
S_DEBUG="${S_DEBUG:-0}"
if ((S_DEBUG)); then
X_DEBUG=1
warn "Debugging enabled - temp. dir will not be cleaned up '$TEMPDIR' $(ctx 0)."
trap EXIT
fi
[[ -n "$POOLTOKEN" ]] || \
die "Expecting \$POOLTOKEN to be defined/non-empty $(ctx 0)."
[[ -r "$DHSTATE" ]] || \
die "Can't read from state file: $DHSTATE"
if [[ -z "$SSH_AUTH_SOCK" ]] || [[ -z "$SSH_AGENT_PID" ]]; then
die "Cannot access an ssh-agent. Please run 'ssh-agent -s > /run/user/$UID/ssh-agent.env' and 'ssh-add /path/to/required/key'."
fi
declare -a _dhstate
readarray -t _dhstate <<<$(grep -E -v '^($|#+| +)' "$DHSTATE" | sort)
n_inst=0
n_inst_total="${#_dhstate[@]}"
if [[ -z "${_dhstate[*]}" ]] || ! ((n_inst_total)); then
msg "No operable hosts found in $DHSTATE:
$(<$DHSTATE)"
# Assume this script is running in a loop, and unf. there are
# simply no dedicated-hosts in 'available' state.
exit 0
fi
# N/B: Assumes $DHSTATE represents reality
msg "Operating on $n_inst_total instances from $(head -1 $DHSTATE)"
echo -e "# $(basename ${BASH_SOURCE[0]}) run $(date -u -Iseconds)\n#" > "$TEMPDIR/$(basename $PWSTATE)"
# Previous instance state needed for some optional checks
declare -a _pwstate
n_pw_total=0
if [[ -r "$PWSTATE" ]]; then
readarray -t _pwstate <<<$(grep -E -v '^($|#+| +)' "$PWSTATE" | sort)
n_pw_total="${#_pwstate[@]}"
# Handle single empty-item array
if [[ -z "${_pwstate[*]}" ]] || ! ((n_pw_total)); then
_pwstate=()
n_pw_total=0
fi
fi
# Assuming the `--force` option was used to initialize a new pool of
# workers, then instances need to be configured with a staggered
# self-termination shutdown delay. This prevents all the instances
# from being terminated at the same time, potentially impacting
# CI usage.
runtime_hours_reduction=0
# shellcheck disable=SC2199
if [[ "$@" =~ --force ]]; then
warn "Forcing instance creation w/ staggered existence limits."
runtime_hours_reduction=$CREATE_STAGGER_HOURS
fi
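With `--force`, each successive instance receives a shorter self-shutdown delay, so recycling is staggered rather than simultaneous. A sketch of that arithmetic, mirroring the `runtime_hours_reduction` bookkeeping in the loop below (the hour values are assumptions; the real ones come from `pw_lib.sh`):

```shell
#!/bin/bash
# Sketch: staggered shutdown delays under --force.
set -euo pipefail
PW_MAX_HOURS=12            # assumed; real value defined in pw_lib.sh
CREATE_STAGGER_HOURS=2     # assumed; real value defined in pw_lib.sh
runtime_hours_reduction=$CREATE_STAGGER_HOURS
delays=()
for inst in 1 2 3 4 5 6; do
    # Keep the reduction within sensible, positive bounds.
    if [[ $runtime_hours_reduction -ge $((PW_MAX_HOURS - CREATE_STAGGER_HOURS)) ]]; then
        runtime_hours_reduction=$CREATE_STAGGER_HOURS
    fi
    delays+=($((PW_MAX_HOURS - runtime_hours_reduction)))
    runtime_hours_reduction=$((runtime_hours_reduction + CREATE_STAGGER_HOURS))
done
echo "shutdown delays (hours): ${delays[*]}"   # 10 8 6 4 10 8
```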
for _dhentry in "${_dhstate[@]}"; do
read -r name instance_id launch_time junk<<<"$_dhentry"
_I=" "
msg " "
n_inst=$(($n_inst+1))
msg "Working on Instance #$n_inst/$n_inst_total '$name' with ID '$instance_id'."
# Clear buffers used for updating status files
n_started_tasks=0
n_finished_tasks=0
instoutput="$TEMPDIR/${name}_inst.output"
ncoutput="$TEMPDIR/${name}_nc.output"
logoutput="$TEMPDIR/${name}_log.output"
# Most operations below 'continue' looping on error. Ensure status files match.
set_pw_status tasks 0
set_pw_status taskf 0
set_pw_status setup error
set_pw_status listener error
set_pw_status comment ""
if ! $AWS ec2 describe-instances --instance-ids $instance_id &> "$instoutput"; then
pwst_warn "Could not query instance $instance_id $(ctx 0)."
continue
fi
dbg "Verifying required $DH_REQ_TAG=$DH_REQ_VAL"
tagq=".Reservations?[0]?.Instances?[0]?.Tags | map(select(.Key == \"$DH_REQ_TAG\")) | .[].Value"
if ! inst_tag=$(json_query "$tagq" "$instoutput"); then
pwst_warn "Could not look up instance $DH_REQ_TAG tag"
continue
fi
if [[ "$inst_tag" != "$DH_REQ_VAL" ]]; then
pwst_warn "Required inst. '$DH_REQ_TAG' tag != '$DH_REQ_VAL'"
continue
fi
dbg "Looking up instance name"
nameq='.Reservations?[0]?.Instances?[0]?.Tags | map(select(.Key == "Name")) | .[].Value'
if ! inst_name=$(json_query "$nameq" "$instoutput"); then
pwst_warn "Could not look up instance Name tag"
continue
fi
if [[ "$inst_name" != "$name" ]]; then
pwst_warn "Inst. name '$inst_name' != DH name '$name'"
continue
fi
dbg "Looking up public DNS"
if ! pub_dns=$(json_query '.Reservations?[0]?.Instances?[0]?.PublicDnsName?' "$instoutput"); then
pwst_warn "Could not lookup of public DNS for instance $instance_id $(ctx 0)"
continue
fi
# It's really important that instances have a defined and risk-relative
# short lifespan. Multiple mechanisms are in place to assist, but none
# are perfect. Ensure instances running for an excessive time are forcefully
# terminated as soon as possible from this script.
launch_epoch=$(date -u -d "$launch_time" +%s)
now_epoch=$(date -u +%s)
age_sec=$((now_epoch-launch_epoch))
hard_max_sec=$((PW_MAX_HOURS*60*60*2)) # double PW_MAX_HOURS
dbg "launch_epoch=$launch_epoch"
dbg " now_epoch=$now_epoch"
dbg " age_sec=$age_sec"
dbg "hard_max_sec=$hard_max_sec"
# Soft time limit is enforced via 'sleep $PW_MAX_HOURS && shutdown' started during instance setup (below).
msg "Instance alive for $((age_sec/60/60)) hours (soft max: $PW_MAX_HOURS hard: $((hard_max_sec/60/60)))"
if [[ $age_sec -gt $hard_max_sec ]]; then
force_term "Excess instance lifetime; $(((age_sec - hard_max_sec)/60))m past hard max limit."
continue
elif [[ $age_sec -gt $((PW_MAX_HOURS*60*60)) ]]; then
pwst_warn "Instance alive longer than soft max. Investigation recommended."
fi
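The soft/hard lifetime math above in isolation, with a fixed "now" so the numbers are reproducible (the `PW_MAX_HOURS` value is an assumption; `pw_lib.sh` defines the real one):

```shell
#!/bin/bash
# Sketch of the lifetime check: hard max is double PW_MAX_HOURS.
set -euo pipefail
PW_MAX_HOURS=12                                            # assumed value
launch_time="2024-01-01T00:00:00+00:00"
now_epoch=$(date -u -d "2024-01-02T02:00:00+00:00" +%s)    # fixed 'now' for the example
launch_epoch=$(date -u -d "$launch_time" +%s)
age_sec=$((now_epoch - launch_epoch))                      # 26 hours
soft_max_sec=$((PW_MAX_HOURS * 60 * 60))                   # 12h soft limit
hard_max_sec=$((soft_max_sec * 2))                         # 24h hard limit
if [[ $age_sec -gt $hard_max_sec ]]; then
    echo "terminate: $(( (age_sec - hard_max_sec) / 60 ))m past hard max"
elif [[ $age_sec -gt $soft_max_sec ]]; then
    echo "warn: past soft max"
fi
```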
dbg "Attempting to contact '$name' at $pub_dns"
if ! nc -z -w 13 $pub_dns 22 &> "$ncoutput"; then
pwst_warn "Could not connect to port 22 on '$pub_dns' $(ctx 0)."
continue
fi
if ! $SSH ec2-user@$pub_dns true; then
pwst_warn "Could not ssh to 'ec2-user@$pub_dns' $(ctx 0)."
continue
fi
dbg "Check if instance should be managed"
if ! PWPoolReady=$(json_query '.Reservations?[0]?.Instances?[0]?.Tags? | map(select(.Key == "PWPoolReady")) | .[].Value' "$instoutput"); then
pwst_warn "Instance does not have a PWPoolReady tag"
PWPoolReady="absent"
fi
# Mechanism for a developer to manually debug operations w/o fear of new tasks or instance shutdown.
if [[ "$PWPoolReady" != "true" ]]; then
pwst_msg "Instance disabled via tag 'PWPoolReady' == '$PWPoolReady'."
set_pw_status setup disabled
set_pw_status listener disabled
(
set +e # All commands below are best-effort only!
dbg "Attempting to stop any pending shutdowns"
$SSH ec2-user@$pub_dns sudo pkill shutdown
stop_listener
dbg "Attempting to stop shutdown sleep "
$SSH ec2-user@$pub_dns pkill -u ec2-user -f "'bash -c sleep'"
if $SSH ec2-user@$pub_dns pgrep -u ec2-user -f service_pool.sh; then
sleep 10s # Allow service_pool to exit gracefully
fi
# N/B: This will not stop any currently running CI tasks.
dbg "Guarantee pool listener is dead"
$SSH ec2-user@$pub_dns sudo pkill -u ${name}-worker -f "'cirrus worker run'"
)
continue
fi
if ! $SSH ec2-user@$pub_dns test -r .setup.done; then
if ! $SSH ec2-user@$pub_dns test -r .setup.started; then
if $SSH ec2-user@$pub_dns test -r setup.log; then
# Can be caused by operator flipping PWPoolReady value on instance for debugging
pwst_warn "Setup log found, prior executions may have failed $(ctx 0)."
fi
pwst_msg "Setting up new instance"
# Ensure bash used for consistency && some ssh commands below
# don't play nicely with zsh.
$SSH ec2-user@$pub_dns sudo chsh -s /bin/bash ec2-user &> /dev/null
if ! $SCP $SETUP_SCRIPT $SPOOL_SCRIPT $SHDWN_SCRIPT ec2-user@$pub_dns:/var/tmp/; then
pwst_warn "Could not scp scripts to instance $(ctx 0)."
continue # try again next loop
fi
if ! $SCP $CIENV_SCRIPT ec2-user@$pub_dns:./; then
pwst_warn "Could not scp CI Env. script to instance $(ctx 0)."
continue # try again next loop
fi
if ! $SSH ec2-user@$pub_dns chmod +x "/var/tmp/*.sh" "./ci_env.sh"; then
pwst_warn "Could not chmod scripts $(ctx 0)."
continue # try again next loop
fi
# Keep runtime_hours_reduction w/in sensible, positive bounds.
if [[ $runtime_hours_reduction -ge $((PW_MAX_HOURS - CREATE_STAGGER_HOURS)) ]]; then
runtime_hours_reduction=$CREATE_STAGGER_HOURS
fi
shutdown_seconds=$((60*60*PW_MAX_HOURS - 60*60*runtime_hours_reduction))
[[ $shutdown_seconds -gt $((60*60*CREATE_STAGGER_HOURS)) ]] || \
die "Detected unacceptably short \$shutdown_seconds ($shutdown_seconds) value."
pwst_msg "Starting automatic instance recycling in $((shutdown_seconds/60/60)) hours"
# Darwin is really weird WRT active terminals and the shutdown
# command. Instead of installing a future shutdown, stick an
# immediate shutdown at the end of a long sleep. This is the
# simplest workaround I could find :S
# Darwin sleep only accepts seconds.
if ! $SSH ec2-user@$pub_dns bash -c \
"'sleep $shutdown_seconds && /var/tmp/shutdown.sh' </dev/null >>setup.log 2>&1 &"; then
pwst_warn "Could not start automatic instance recycling."
continue # try again next loop
fi
pwst_msg "Executing setup script."
# Run setup script in background b/c it takes ~10m to complete.
# N/B: This drops .setup.started and eventually (hopefully) .setup.done
if ! $SSH ec2-user@$pub_dns \
env POOLTOKEN=$POOLTOKEN \
bash -c "'/var/tmp/setup.sh $DH_REQ_TAG:\ $DH_REQ_VAL' </dev/null >>setup.log 2>&1 &"; then
# This is critical, no easy way to determine what broke.
force_term "Failed to start background setup script"
continue
fi
msg "Setup script started."
set_pw_status setup started
# No sense in incrementing if there was a failure running setup
# shellcheck disable=SC2199
if [[ "$@" =~ --force ]]; then
runtime_hours_reduction=$((runtime_hours_reduction + CREATE_STAGGER_HOURS))
fi
# Let setup run in the background
continue
fi
# Setup started in previous loop. Set to epoch on error.
since_timestamp=$($SSH ec2-user@$pub_dns tail -1 .setup.started || echo "@0")
since_epoch=$(date -u -d "$since_timestamp" +%s)
running_seconds=$((now_epoch-since_epoch))
# Be helpful to human monitors, show the last few lines from the log to help
# track progress and/or any errors/warnings.
pwst_msg "Setup incomplete; running for $((running_seconds/60)) minutes (~10 minutes typical)"
msg "setup.log tail: $($SSH ec2-user@$pub_dns tail -n 1 setup.log)"
if [[ $running_seconds -gt $SETUP_MAX_SECONDS ]]; then
force_term "Setup running for ${running_seconds}s, max ${SETUP_MAX_SECONDS}s."
fi
continue
fi
dbg "Instance setup has completed"
set_pw_status setup complete
# Spawned by setup.sh
dbg "Checking service_pool.sh script"
if ! $SSH ec2-user@$pub_dns pgrep -u ec2-user -q -f service_pool.sh; then
# This should not happen at this stage; Nefarious or uncontrolled activity?
force_term "Pool servicing script (service_pool.sh) is not running."
continue
fi
dbg "Checking cirrus listener"
state_fault=0
if ! $SSH ec2-user@$pub_dns pgrep -u "${name}-worker" -q -f "'cirrus worker run'"; then
# Don't try to examine prior state if there was none.
if ((n_pw_total)); then
for _pwentry in "${_pwstate[@]}"; do
read -r _name _setup_state _listener_state _tasks _taskf _junk <<<"$_pwentry"
dbg "Examining pw_state.txt entry '$_name' with listener state '$_listener_state'"
if [[ "$_name" == "$name" ]] && [[ "$_listener_state" != "alive" ]]; then
# service_pool.sh did not restart listener since last loop
# and node is not in maintenance mode (PWPoolReady == 'true')
force_term "Pool listener '$_listener_state' state fault."
state_fault=1
break
fi
done
fi
# The instance is in the process of shutting-down/terminating, move on to next instance.
if ((state_fault)); then
continue
fi
# Previous state didn't exist, or listener status was 'alive'.
# Process may have simply crashed, allow service_pool.sh time to restart it.
pwst_warn "Cirrus worker listener process NOT running, will recheck again $(ctx 0)."
# service_pool.sh should catch this and restart the listener. If not, the next time
# through this loop will force_term() the instance.
set_pw_status listener dead # service_pool.sh should restart listener
continue
else
set_pw_status listener alive
fi
dbg "Checking worker log"
logpath="/private/tmp/${name}-worker.log" # set in setup.sh
if ! $SSH ec2-user@$pub_dns cat "'$logpath'" &> "$logoutput"; then
# The "${name}-worker" user has write access to this log
force_term "Missing worker log $logpath."
continue
fi
dbg "Checking worker registration"
# First lines of log should always match this
if ! head -10 "$logoutput" | grep -q 'worker successfully registered'; then
# This could signal log manipulation by worker user, or it could be harmless.
pwst_warn "Missing registration log entry"
fi
# The CI user has write-access to this log file on the instance,
# make this known to humans in case they care.
n_started_tasks=$(grep -Ei 'started task [0-9]+' "$logoutput" | wc -l) || true
n_finished_tasks=$(grep -Ei 'task [0-9]+ completed' "$logoutput" | wc -l) || true
set_pw_status tasks $n_started_tasks
set_pw_status taskf $n_finished_tasks
msg "Apparent tasks started/finished/running: $n_started_tasks $n_finished_tasks $((n_started_tasks-n_finished_tasks)) (max $PW_MAX_TASKS)"
dbg "Checking apparent task limit"
# N/B: This is only enforced based on the _previous_ run of this script worker-count.
# Doing this on the _current_ alive worker count would add a lot of complexity.
if [[ "$n_finished_tasks" -gt $PW_MAX_TASKS ]] && [[ $n_pw_total -gt $PW_MIN_ALIVE ]]; then
# N/B: Termination based on _finished_ tasks, so if a task happens to be currently running
# it will very likely have _just_ started in the last few seconds. Cirrus will retry
# automatically on another worker.
force_term "Instance exceeded $PW_MAX_TASKS apparent tasks."
elif [[ $n_pw_total -le $PW_MIN_ALIVE ]]; then
pwst_warn "Not enforcing max-tasks limit, only $n_pw_total workers online last run."
fi
done
_I=""
msg " "
msg "Processing all persistent worker states."
for _dhentry in "${_dhstate[@]}"; do
read -r name otherstuff <<<"$_dhentry"
_f1=$name
_f2=$(<$TEMPDIR/${name}.setup)
_f3=$(<$TEMPDIR/${name}.listener)
_f4=$(<$TEMPDIR/${name}.tasks)
_f5=$(<$TEMPDIR/${name}.taskf)
_f6=$(<$TEMPDIR/${name}.comment)
[[ -z "$_f6" ]] || _f6=" # $_f6"
printf '%s %s %s %s %s%s\n' \
"$_f1" "$_f2" "$_f3" "$_f4" "$_f5" "$_f6" >> "$TEMPDIR/$(basename $PWSTATE)"
done
dbg "Creating/updating state file"
if [[ -r "$PWSTATE" ]]; then
cp "$PWSTATE" "${PWSTATE}~"
fi
mv "$TEMPDIR/$(basename $PWSTATE)" "$PWSTATE"
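The "apparent tasks" counters above only grep the worker log, so they are easy to exercise in isolation. A minimal sketch with an invented log (the file contents are fabricated for illustration, but the grep patterns mirror the ones used in the monitoring loop):

```shell
# Fabricated worker log for illustration; the patterns match the
# 'started task'/'task ... completed' counters in the loop above.
log=$(mktemp)
cat > "$log" <<'EOF'
worker successfully registered
started task 101
task 101 completed
started task 102
EOF
n_started=$(grep -Eic 'started task [0-9]+' "$log")
n_finished=$(grep -Eic 'task [0-9]+ completed' "$log")
# Running = started minus finished, same arithmetic as the status message
echo "started=$n_started finished=$n_finished running=$((n_started - n_finished))"
rm -f "$log"
```

Note the real script uses `wc -l` after `grep` with `|| true` to survive a zero-match exit status; `grep -c` here is an equivalent shorthand for the sketch.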

Utilization.gnuplot
# Intended to be run like: `gnuplot -p -c Utilization.gnuplot`
# Requires a file named `utilization.csv` produced by commands
# in `Cron.sh`.
#
# Format Ref: http://gnuplot.info/docs_5.5/Overview.html
set terminal png enhanced rounded size 1400,800 nocrop
set output 'html/utilization.png'
set title "Persistent Workers & Utilization"
set xdata time
set timefmt "%Y-%m-%dT%H:%M:%S+00:00"
set xtics nomirror rotate timedate
set xlabel "time/date"
set xrange [(system("date -u -Iseconds -d '26 hours ago'")):(system("date -u -Iseconds"))]
set ylabel "Workers Online"
set ytics border nomirror numeric
# Not practical to lookup $DH_PFX from pw_lib.sh
set yrange [0:(system("grep -E '^[a-zA-Z0-9]+-[0-9]' dh_status.txt | wc -l") * 1.5)]
set y2label "Worker Utilization"
set y2tics border nomirror numeric
set y2range [0:100]
set datafile separator comma
set grid
plot 'utilization.csv' using 1:2 axis x1y1 title "Workers" pt 7 ps 2, \
'' using 1:((($3-$4)/$2)*100) axis x1y2 title "Utilization" with lines lw 2
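The `using` clauses imply each `utilization.csv` row is `timestamp,workers,started,finished` (this column layout is inferred from the plot expressions, not confirmed by the source). A sketch of the percentage the second plot line computes, with an invented sample row:

```shell
# Invented sample row; column meaning inferred from 'using 1:((($3-$4)/$2)*100)':
# $1 timestamp, $2 workers online, $3 tasks started, $4 tasks finished
row="2025-08-12T12:00:00+00:00,4,10,8"
oldIFS=$IFS; IFS=,; set -- $row; IFS=$oldIFS
workers=$2; started=$3; finished=$4
# Utilization: share of workers with an apparently-running task, as a percent
util=$(( (started - finished) * 100 / workers ))
echo "${util}%"
```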

mac_pw_pool/ci_env.sh Executable file

#!/bin/bash
# This script drops the caller into a bash shell inside an environment
# substantially similar to a Cirrus-CI task running on this host.
# The envars below may require adjustment to better fit them to
# current/ongoing development in podman's .cirrus.yml
set -eo pipefail
# When invoked as ec2-user (not the pool worker user), re-exec this script as that user.
if [[ "$USER" == "ec2-user" ]]; then
PWINST=$(curl -sSLf http://instance-data/latest/meta-data/tags/instance/Name)
PWUSER=$PWINST-worker
if [[ ! -d "/Users/$PWUSER" ]]; then
echo "Warning: Instance hasn't been set up. Assuming caller will tend to this."
sudo sysadminctl -addUser $PWUSER
fi
sudo install -o $PWUSER "${BASH_SOURCE[0]}" "/Users/$PWUSER/"
exec sudo su -c "/Users/$PWUSER/$(basename ${BASH_SOURCE[0]})" - $PWUSER
fi
# Export all CI-critical envars defined below
set -a
CIRRUS_SHELL="/bin/bash"
CIRRUS_TASK_ID="0123456789"
CIRRUS_WORKING_DIR="$HOME/ci/task-${CIRRUS_TASK_ID}"
GOPATH="$CIRRUS_WORKING_DIR/.go"
GOCACHE="$CIRRUS_WORKING_DIR/.go/cache"
GOENV="$CIRRUS_WORKING_DIR/.go/support"
CONTAINERS_MACHINE_PROVIDER="applehv"
MACHINE_IMAGE="https://fedorapeople.org/groups/podman/testing/applehv/arm64/fedora-coreos-38.20230925.dev.0-applehv.aarch64.raw.gz"
GINKGO_TAGS="remote exclude_graphdriver_btrfs btrfs_noversion exclude_graphdriver_devicemapper containers_image_openpgp remote"
DEBUG_MACHINE="1"
ORIGINAL_HOME="$HOME"
HOME="$HOME/ci"
TMPDIR="/private/tmp/ci"
mkdir -p "$TMPDIR" "$CIRRUS_WORKING_DIR"
# Drop caller into the CI-like environment
cd "$CIRRUS_WORKING_DIR"
bash -il
