Compare commits

..

12 Commits

Author SHA1 Message Date
Joe Kimmel 02e9a563eb force pack acceptance tests to build with a version of go that can still make HTTP requests to docker daemon (#1158)
Signed-off-by: Joe Kimmel <jkimmel@vmware.com>
2023-08-03 16:11:18 -04:00
Natalie Arellano bfc5cda949
Field renames per spec review (#1170)
* Rename distributions -> distros in the buildpack spec

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Rename distributions -> distros in the platform spec

Signed-off-by: Natalie Arellano <narellano@vmware.com>

---------

Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-08-03 15:06:47 -04:00
Natalie Arellano 6bc53d6f70
Remove CNB_TARGET_ID according to https://github.com/buildpacks/spec/pull/374 and https://github.com/buildpacks/spec/pull/375 (#1175)
Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-08-03 14:37:21 -04:00
Natalie Arellano b363b2a3b2
Add -daemon to restorer (#1168)
This is needed when extensions were used to switch (but not extend) the run image
and we need to re-read the target data from the image config.

In such cases, we don't need the run image to exist in a registry,
because we don't need a manifest for kaniko.

Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-07-31 11:48:58 -04:00
Natalie Arellano 87d4f057b8
Simplifies target matching logic per spec PR review (#1166)
* Update units without updating code

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Update code

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Unpend test

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Add units for rebase without updating code

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Update rebase code

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Fix lint

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* When we read the descriptor file, don't fill in "*" as a magic value as missing values are wildcard matches

Signed-off-by: Natalie Arellano <narellano@vmware.com>

* Stricter validation for rebase

Signed-off-by: Natalie Arellano <narellano@vmware.com>

---------

Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-07-31 09:42:18 -04:00
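The matching rule this commit series lands (a field missing from the target data is a wildcard, so there is no need to fill in "*" as a magic value when reading the descriptor file) can be sketched in Go. The `target` struct and helpers below are illustrative stand-ins, not the lifecycle's actual types:

```go
package main

import "fmt"

// target holds the fields compared during target matching; an empty field
// means "no constraint". Illustrative only -- not the lifecycle's actual types.
type target struct {
	OS, Arch, Distro, Version string
}

// fieldsMatch treats a missing (empty) value on either side as a wildcard,
// so the descriptor reader never needs a "*" placeholder.
func fieldsMatch(a, b string) bool {
	return a == "" || b == "" || a == b
}

// matches reports whether two targets are compatible under the wildcard rule.
func matches(a, b target) bool {
	return fieldsMatch(a.OS, b.OS) &&
		fieldsMatch(a.Arch, b.Arch) &&
		fieldsMatch(a.Distro, b.Distro) &&
		fieldsMatch(a.Version, b.Version)
}

func main() {
	run := target{OS: "linux", Arch: "arm64", Distro: "ubuntu", Version: "22.04"}
	fmt.Println(matches(target{OS: "linux"}, run))   // true: empty fields match anything
	fmt.Println(matches(target{OS: "windows"}, run)) // false: OS conflict
}
```

The stricter rebase validation mentioned above would layer additional checks on top of this permissive base rule.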
Joe Kimmel dc6af53456
timestamp logs and phase error message cherry-picks (#1164)
* timestamp logs for entry/exit for all the top-level Lifecycle package functions

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>

fixing names

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>

using defer to make one-liners for fun and profit

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>

and today we thank our brave linters for preventing critical defects such as unnecessary trailing newlines from being merged. It's about time somebody thought of the children.

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>

* be more helpful when you don't recognize the phase

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>

---------

Signed-off-by: Joe Kimmel <jkimmel@vmware.com>
2023-07-28 12:14:17 -04:00
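The "defer to make one-liners" trick mentioned in the commit message is a common Go idiom for entry/exit timestamp logging; a minimal sketch (the function name and log format are assumptions, not the lifecycle's actual code):

```go
package main

import (
	"log"
	"time"
)

// timed logs entry immediately and returns a closure that logs exit with the
// elapsed time when invoked.
func timed(phase string) func() {
	start := time.Now()
	log.Printf("Starting %s", phase)
	return func() { log.Printf("Finished %s in %s", phase, time.Since(start)) }
}

// detect stands in for a top-level lifecycle package function.
func detect() {
	// The trailing () matters: it calls timed now (logging entry) and
	// defers only the returned closure (logging exit).
	defer timed("detect")()
	time.Sleep(10 * time.Millisecond) // placeholder for real work
}

func main() {
	detect()
}
```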
Joe Kimmel 40ac1ee05e
Merge pull request #1159 from buildpacks/fix/restorer-target-update
Fix restorer target update
2023-07-27 11:16:46 -07:00
Natalie Arellano fbe3cb075d Fix acceptance by providing a base image when we instantiate the remote run image
Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-07-25 15:45:37 -04:00
Natalie Arellano c5b14c5bbc Add test for empty digest not returned
Signed-off-by: Natalie Arellano <narellano@vmware.com>
2023-07-25 15:31:36 -04:00
Joe Kimmel 0580136826 warn when a positional argument might have been a flag (#1147)
Signed-off-by: Joe Kimmel <jkimmel@vmware.com>
2023-07-20 09:28:31 -07:00
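Go's flag package stops parsing at the first positional argument, so a flag typed after a positional is silently treated as data; a warning like the one this commit adds can be sketched as follows (a hypothetical helper for illustration, not the lifecycle's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// warnIfFlagLike returns a warning for each positional argument that looks
// like a flag, which usually means the user placed it after the positionals
// where flag parsing has already stopped.
func warnIfFlagLike(positionals []string) []string {
	var warnings []string
	for _, arg := range positionals {
		if strings.HasPrefix(arg, "-") {
			warnings = append(warnings,
				fmt.Sprintf("Warning: positional argument %q looks like a flag; flags must precede positional arguments", arg))
		}
	}
	return warnings
}

func main() {
	for _, w := range warnIfFlagLike([]string{"my-image", "-daemon"}) {
		fmt.Println(w)
	}
}
```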
Joe Kimmel 495eefbd77 add explanatory debug logs so a reader knows why the buildpacks are read twice.
Signed-off-by: Joe Kimmel <jkimmel@vmware.com>
2023-07-20 09:16:47 -07:00
Joe Kimmel 147890216d restorer gets layers flag again
Signed-off-by: Joe Kimmel <jkimmel@vmware.com>
2023-07-20 09:16:39 -07:00
416 changed files with 10160 additions and 8813 deletions

View File

@@ -2,14 +2,12 @@
 name: Bug
 about: Bug report
 title: ''
-labels: type/bug, status/triage
+labels: status/triage, type/bug
 assignees: ''
 ---
 ### Summary
-<!-- Please provide a general summary of the issue. -->
+<!--- Please provide a general summary of the issue. -->
 ---
@@ -17,20 +15,17 @@ assignees: ''
 ### Reproduction
 ##### Steps
-<!-- What steps should be taken to reproduce the issue? -->
+<!--- What steps should be taken to reproduce the issue? -->
 1.
 2.
 3.
 ##### Current behavior
-<!-- What happened? Logs, etc. could go here. -->
+<!--- What happened? Logs, etc. could go here. -->
-##### Expected behavior
-<!-- What did you expect to happen? -->
+##### Expected
+<!--- What did you expect to happen? -->
 ---
@@ -38,15 +33,10 @@ assignees: ''
 ### Context
 ##### lifecycle version
-<!-- If you can find this, it helps us pin down the issue. For example, run `pack builder inspect <builder name>` which should report the lifecycle version in question. -->
+<!--- If you can find this, it helps us pin down the issue. For example, run `pack inspect-builder BUILDER` which should report the lifecycle version in question. -->
 ##### platform version(s)
-<!-- For example run `pack report` and `docker info` and copy output here, redacting any sensitive information. -->
+<!--- For example run `pack report` and `docker info` and copy output here. -->
 ##### anything else?
-<!-- Add any other context that may help (e.g., Tekton task version, kpack version, etc.). -->
+<!--- Tekton task version, kpack version, etc. -->

View File

@@ -7,20 +7,11 @@ assignees: ''
 ---
-### Summary
-<!-- Please describe why this chore matters, who will enjoy it and how. -->
----
-### Proposal
-<!-- How do you think the chore should be implemented? -->
----
-### Context
-<!-- Add any other context that may help. -->
+### Description
+<!-- A concise description of why this chore matters, who will enjoy it and how. -->
+### Proposed solution
+<!-- A clear and concise description of how you think the chore should be implemented. -->
+### Additional context
+<!-- Add any other context or screenshots about the chore that may help. -->

View File

@@ -7,27 +7,14 @@ assignees: ''
 ---
-### Summary
-<!-- Please describe the feature and why it matters. -->
----
-### Proposal
-<!-- How do you think the feature should be implemented? -->
----
-### Related
-<!-- If this feature addresses an RFC, please provide the RFC number below. -->
-RFC #___
----
-### Context
-<!-- Add any other context that may help. -->
+### Description
+<!-- A concise description of what problem the feature solves and why solving it matters. -->
+### Proposed solution
+<!-- A clear and concise description of what you want to happen. -->
+### Describe alternatives you've considered
+<!-- A clear and concise description of any alternative solutions or features you've considered. -->
+### Additional context
+<!-- Add any other context or screenshots about the feature request here. -->

View File

@@ -3,14 +3,5 @@ updates:
 - package-ecosystem: gomod
   directory: "/"
   schedule:
-    interval: weekly
-  groups:
-    # Group all minor/patch go dependencies into a single PR.
-    go-dependencies:
-      update-types:
-        - "minor"
-        - "patch"
-- package-ecosystem: "github-actions"
-  directory: "/"
-  schedule:
-    interval: weekly
+    interval: daily
+  open-pull-requests-limit: 10

View File

@@ -1,25 +0,0 @@
-<!-- 🎉🎉🎉 Thank you for the PR!!! 🎉🎉🎉 -->
-### Summary
-<!-- Please describe your changes at a high level. -->
-#### Release notes
-<!-- Please provide 1-2 sentences for release notes. -->
-<!-- Example: When using platform API `0.7` or greater, the `creator` logs the expected phase header for the analyze phase -->
----
-### Related
-<!-- If this PR addresses an issue, please provide the issue number below. -->
-Resolves #___
----
-### Context
-<!-- Add any other context that may help reviewers (e.g., code that requires special attention, etc.). -->

View File

@@ -14,11 +14,11 @@ jobs:
   test-linux-amd64:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v2
        with:
          fetch-depth: '0'
      - name: Set up go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
          check-latest: true
          go-version-file: 'go.mod'
@@ -33,9 +33,8 @@
          TEST_COVERAGE: 1
        run: make test
      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@v3
        with:
-          token: ${{ secrets.CODECOV_TOKEN }}
          file: ./out/tests/coverage-unit.txt
          flags: unit,os_linux
          fail_ci_if_error: true
@@ -43,11 +42,11 @@
   test-linux-arm64:
     runs-on: linux-arm64
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v2
        with:
          fetch-depth: '0'
      - name: Set up go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
          check-latest: true
          go-version-file: 'go.mod'
@@ -55,35 +54,87 @@
        run: |
          make format || true
          make test
+  test-windows:
+    runs-on: windows-2019
+    steps:
+      - name: Set git to use LF and symlinks
+        run: |
+          git config --global core.autocrlf false
+          git config --global core.eol lf
+          git config --global core.symlinks true
+      - uses: actions/checkout@v2
+        with:
+          fetch-depth: '0'
+      - name: Set up go
+        uses: actions/setup-go@v3
+        with:
+          check-latest: true
+          go-version-file: 'go.mod'
+      - name: Add runner IP to daemon insecure-registries and firewall
+        shell: powershell
+        run: |
+          # Get IP from default gateway interface
+          $IPAddress=(Get-NetIPAddress -InterfaceAlias ((Get-NetRoute "0.0.0.0/0").InterfaceAlias) -AddressFamily IPv4)[0].IPAddress
+          # Allow container-to-host registry traffic (from public interface, to the same interface)
+          New-NetfirewallRule -DisplayName test-registry -LocalAddress $IPAddress -RemoteAddress $IPAddress
+          # create or update daemon config to allow host as insecure-registry
+          $config=@{}
+          if (Test-Path C:\ProgramData\docker\config\daemon.json) {
+            $config=(Get-Content C:\ProgramData\docker\config\daemon.json | ConvertFrom-json)
+          }
+          $config | Add-Member -Force -Name "insecure-registries" -value @("$IPAddress/32") -MemberType NoteProperty
+          $config | Add-Member -Force -Name "allow-nondistributable-artifacts" -value @("$IPAddress/32") -MemberType NoteProperty
+          ConvertTo-json $config | Out-File -Encoding ASCII C:\ProgramData\docker\config\daemon.json
+          Restart-Service docker
+          # dump docker info for auditing
+          docker version
+          docker info
+      - name: Test
+        env:
+          TEST_COVERAGE: 1
+        run: |
+          make test
+      - name: Prepare Codecov
+        uses: crazy-max/ghaction-chocolatey@v2
+        with:
+          args: install codecov -y
+      - name: Run Codecov
+        run: |
+          codecov.exe -f .\out\tests\coverage-unit.txt -v --flag os_windows
   build-and-publish:
     needs:
       - test-linux-amd64
       - test-linux-arm64
+      - test-windows
     runs-on: ubuntu-latest
-    permissions:
-      id-token: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v2
        with:
          fetch-depth: 0 # fetch all history for all branches and tags
      - name: Set up go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
          check-latest: true
          go-version-file: 'go.mod'
      - name: Install Cosign
-        uses: sigstore/cosign-installer@v3
+        uses: sigstore/cosign-installer@v1.0.0
+        with:
+          cosign-release: 'v1.0.0'
      - name: Set version
        run: |
          echo "LIFECYCLE_VERSION=$(go run tools/version/main.go)" | tee -a $GITHUB_ENV version.txt
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: version
          path: version.txt
      - name: Set tag
        run: |
          echo "LIFECYCLE_IMAGE_TAG=$(git describe --always --abbrev=7)" >> tag.txt
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: tag
          path: tag.txt
@@ -92,65 +143,71 @@
          make clean
          make build
          make package
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-linux-x86-64
          path: out/lifecycle-v*+linux.x86-64.tgz
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-linux-x86-64-sha256
          path: out/lifecycle-v*+linux.x86-64.tgz.sha256
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-linux-arm64
          path: out/lifecycle-v*+linux.arm64.tgz
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-linux-arm64-sha256
          path: out/lifecycle-v*+linux.arm64.tgz.sha256
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
-          name: lifecycle-linux-ppc64le
-          path: out/lifecycle-v*+linux.ppc64le.tgz
+          name: lifecycle-windows-x86-64
+          path: out/lifecycle-v*+windows.x86-64.tgz
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
-          name: lifecycle-linux-ppc64le-sha256
-          path: out/lifecycle-v*+linux.ppc64le.tgz.sha256
+          name: lifecycle-windows-x86-64-sha256
+          path: out/lifecycle-v*+windows.x86-64.tgz.sha256
-      - uses: actions/upload-artifact@v4
-        with:
-          name: lifecycle-linux-s390x
-          path: out/lifecycle-v*+linux.s390x.tgz
-      - uses: actions/upload-artifact@v4
-        with:
-          name: lifecycle-linux-s390x-sha256
-          path: out/lifecycle-v*+linux.s390x.tgz.sha256
      - name: Generate SBOM JSON
-        uses: CycloneDX/gh-gomod-generate-sbom@v2
+        uses: CycloneDX/gh-gomod-generate-sbom@v1
        with:
          args: mod -licenses -json -output lifecycle-v${{ env.LIFECYCLE_VERSION }}-bom.cdx.json
          version: ^v1
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-bom-cdx
          path: lifecycle-v*-bom.cdx.json
      - name: Calculate SBOM sha
        run: |
          shasum -a 256 lifecycle-v${{ env.LIFECYCLE_VERSION }}-bom.cdx.json > lifecycle-v${{ env.LIFECYCLE_VERSION }}-bom.cdx.json.sha256
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v2
        with:
          name: lifecycle-bom-cdx-sha256
          path: lifecycle-v*-bom.cdx.json.sha256
-      - uses: azure/docker-login@v2
+      - uses: azure/docker-login@v1
        if: github.event_name == 'push'
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v2
        with:
          name: tag
      - name: Set env
        run: |
          cat tag.txt >> $GITHUB_ENV
+      - name: Rename cosign public key
+        run: |
+          cp cosign.pub lifecycle-v${{ env.LIFECYCLE_VERSION }}-cosign.pub
+      - uses: actions/upload-artifact@v2
+        with:
+          name: lifecycle-cosign-public-key
+          path: lifecycle-v${{ env.LIFECYCLE_VERSION }}-cosign.pub
+      - name: Calculate cosign sha
+        run: |
+          shasum -a 256 lifecycle-v${{ env.LIFECYCLE_VERSION }}-cosign.pub > lifecycle-v${{ env.LIFECYCLE_VERSION }}-cosign.pub.sha256
+      - uses: actions/upload-artifact@v2
+        with:
+          name: lifecycle-cosign-public-key-sha256
+          path: lifecycle-v${{ env.LIFECYCLE_VERSION }}-cosign.pub.sha256
      - name: Publish images
        if: github.event_name == 'push'
        run: |
@@ -163,32 +220,25 @@
          LINUX_ARM64_SHA=$(go run ./tools/image/main.go -lifecyclePath ./out/lifecycle-v*+linux.arm64.tgz -tag buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-arm64 -arch arm64 | awk '{print $NF}')
          echo "LINUX_ARM64_SHA: $LINUX_ARM64_SHA"
-          LINUX_PPC64LE_SHA=$(go run ./tools/image/main.go -lifecyclePath ./out/lifecycle-v*+linux.ppc64le.tgz -tag buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-ppc64le -arch ppc64le | awk '{print $NF}')
-          echo "LINUX_PPC64LE_SHA: LINUX_PPC64LE_SHA"
-          LINUX_S390X_SHA=$(go run ./tools/image/main.go -lifecyclePath ./out/lifecycle-v*+linux.s390x.tgz -tag buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-s390x -arch s390x | awk '{print $NF}')
-          echo "LINUX_S390X_SHA: $LINUX_S390X_SHA"
+          WINDOWS_AMD64_SHA=$(go run ./tools/image/main.go -lifecyclePath ./out/lifecycle-v*+windows.x86-64.tgz -tag buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-windows -os windows | awk '{print $NF}')
+          echo "WINDOWS_AMD64_SHA: $WINDOWS_AMD64_SHA"
          docker manifest create buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG} \
            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-x86-64@${LINUX_AMD64_SHA} \
            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-arm64@${LINUX_ARM64_SHA} \
-            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-ppc64le@${LINUX_PPC64LE_SHA} \
-            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-linux-s390x@${LINUX_S390X_SHA}
+            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}-windows@${WINDOWS_AMD64_SHA}
          MANIFEST_SHA=$(docker manifest push buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG})
          echo "MANIFEST_SHA: $MANIFEST_SHA"
-          cosign sign -r -y \
+          COSIGN_PASSWORD=${{ secrets.COSIGN_PASSWORD }} cosign sign -r \
+            -key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}" | base64 --decode) \
            -a tag=${LIFECYCLE_IMAGE_TAG} \
            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}@${MANIFEST_SHA}
-          cosign verify \
-            --certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/build.yml" \
-            --certificate-oidc-issuer https://token.actions.githubusercontent.com \
-            -a tag=${LIFECYCLE_IMAGE_TAG} \
-            buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}
+          cosign verify -key cosign.pub -a tag=${LIFECYCLE_IMAGE_TAG} buildpacksio/lifecycle:${LIFECYCLE_IMAGE_TAG}
      - name: Scan image
        if: github.event_name == 'push'
-        uses: anchore/scan-action@v6
+        uses: anchore/scan-action@v3
        with:
          image: buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}
   pack-acceptance-linux:
@@ -196,34 +246,106 @@
     needs: build-and-publish
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v2
        with:
          repository: 'buildpacks/pack'
          path: 'pack'
          ref: 'main'
          fetch-depth: 0 # fetch all history for all branches and tags
      - name: Set up go
-        uses: actions/setup-go@v5
+        uses: actions/setup-go@v3
        with:
-          go-version-file: 'pack/go.mod'
+          go-version: '1.20.5'
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v2
        with:
          name: version
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v2
        with:
          name: tag
      - name: Set env
        run: |
          cat version.txt >> $GITHUB_ENV
          cat tag.txt >> $GITHUB_ENV
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v2
        with:
          name: lifecycle-linux-x86-64
          path: pack
      - name: Run pack acceptance
        run: |
          cd pack
-          git checkout $(git describe --abbrev=0 --tags) # check out the latest tag
+          git checkout v0.28.0 # FIXME: let the pack version float again when pack 0.30.0-pre2 is out
          LIFECYCLE_PATH="../lifecycle-v${{ env.LIFECYCLE_VERSION }}+linux.x86-64.tgz" \
          LIFECYCLE_IMAGE="buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}" \
          make acceptance
+  pack-acceptance-windows:
+    if: github.event_name == 'push'
+    needs: build-and-publish
+    runs-on: windows-2019
+    steps:
+      - name: Set git to use LF and symlinks
+        run: |
+          git config --global core.autocrlf false
+          git config --global core.eol lf
+          git config --global core.symlinks true
+      - uses: actions/checkout@v2
+        with:
+          repository: 'buildpacks/pack'
+          path: 'pack'
+          ref: 'main'
+          fetch-depth: 0 # fetch all history for all branches and tags
+      - name: Set up go
+        uses: actions/setup-go@v3
+        with:
+          go-version: '1.20.5'
+      - name: Add runner IP to daemon insecure-registries and firewall
+        shell: powershell
+        run: |
+          # Get IP from default gateway interface
+          $IPAddress=(Get-NetIPAddress -InterfaceAlias ((Get-NetRoute "0.0.0.0/0").InterfaceAlias) -AddressFamily IPv4)[0].IPAddress
+          # Allow container-to-host registry traffic (from public interface, to the same interface)
+          New-NetfirewallRule -DisplayName test-registry -LocalAddress $IPAddress -RemoteAddress $IPAddress
+          # create or update daemon config to allow host as insecure-registry
+          $config=@{}
+          if (Test-Path C:\ProgramData\docker\config\daemon.json) {
+            $config=(Get-Content C:\ProgramData\docker\config\daemon.json | ConvertFrom-json)
+          }
+          $config | Add-Member -Force -Name "insecure-registries" -value @("$IPAddress/32") -MemberType NoteProperty
+          ConvertTo-json $config | Out-File -Encoding ASCII C:\ProgramData\docker\config\daemon.json
+          Restart-Service docker
+          # dump docker info for auditing
+          docker version
+          docker info
+      - name: Modify etc\hosts to include runner IP
+        shell: powershell
+        run: |
+          $IPAddress=(Get-NetIPAddress -InterfaceAlias ((Get-NetRoute "0.0.0.0/0").InterfaceAlias) -AddressFamily IPv4)[0].IPAddress
+          "# Modified by CNB: https://github.com/buildpacks/ci/tree/main/gh-runners/windows
+          ${IPAddress} host.docker.internal
+          ${IPAddress} gateway.docker.internal
+          " | Out-File -Filepath C:\Windows\System32\drivers\etc\hosts -Encoding utf8
+      - uses: actions/download-artifact@v2
+        with:
+          name: version
+      - uses: actions/download-artifact@v2
+        with:
+          name: tag
+      - name: Set env
+        run: |
+          cat version.txt >> $env:GITHUB_ENV
+          cat tag.txt >> $env:GITHUB_ENV
+      - uses: actions/download-artifact@v2
+        with:
+          name: lifecycle-windows-x86-64
+          path: pack
+      - name: Run pack acceptance
+        run: |
+          cd pack
+          git checkout v0.28.0 # FIXME: let the pack version float again when pack 0.30.0-pre2 is out
+          $env:LIFECYCLE_PATH="..\lifecycle-v${{ env.LIFECYCLE_VERSION }}+windows.x86-64.tgz"
+          $env:LIFECYCLE_IMAGE="buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}"
+          make acceptance

View File

@@ -1,4 +1,4 @@
-name: check-latest-release
+name: Check latest lifecycle release
 on:
   schedule:
@@ -9,29 +9,14 @@ jobs:
   check-release:
     runs-on:
       - ubuntu-latest
-    permissions:
-      issues: write
     steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
+      - uses: actions/checkout@v3
+      - uses: actions/setup-go@v3
        with:
          check-latest: true
          go-version-file: 'go.mod'
-      - name: Get previous release tag
-        id: get-previous-release-tag
-        uses: actions/github-script@v6
-        with:
-          github-token: ${{secrets.GITHUB_TOKEN}}
-          result-encoding: string
-          script: |
-            return github.rest.repos.getLatestRelease({
-              owner: "buildpacks",
-              repo: "lifecycle",
-            }).then(result => {
-              return result.data.tag_name
-            })
-      - name: Read go and release versions
-        id: read-versions
+      - name: Read go versions
+        id: read-go
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
@@ -41,7 +26,7 @@ jobs:
          LATEST_GO_VERSION=$(go version | cut -d ' ' -f 3)
-          LATEST_RELEASE_VERSION=${{ steps.get-previous-release-tag.outputs.result }}
+          LATEST_RELEASE_VERSION=$(gh release list -L 1 | cut -d $'\t' -f 1 | cut -d ' ' -f 2)
          wget https://github.com/buildpacks/lifecycle/releases/download/$LATEST_RELEASE_VERSION/lifecycle-$LATEST_RELEASE_VERSION+linux.x86-64.tgz -O lifecycle.tgz
          tar xzf lifecycle.tgz
@@ -53,7 +38,7 @@ jobs:
          LATEST_RELEASE_VERSION=$(echo $LATEST_RELEASE_VERSION | cut -d \v -f 2)
          echo "latest-release-version=${LATEST_RELEASE_VERSION}" >> "$GITHUB_OUTPUT"
      - name: Create issue if needed
-        if: ${{ steps.read-versions.outputs.latest-go-version != steps.read-versions.outputs.latest-release-go-version }}
+        if: ${{ steps.read-go.outputs.latest-go-version != steps.read-go.outputs.latest-release-go-version }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
@@ -61,15 +46,15 @@
          set -euo pipefail
-          title="Upgrade lifecycle to ${{ steps.read-versions.outputs.latest-go-version }}"
-          label=${{ steps.read-versions.outputs.latest-go-version }}
+          title="Upgrade lifecycle to ${{ steps.read-go.outputs.latest-go-version }}"
+          label=${{ steps.read-go.outputs.latest-go-version }}
          # Create label to use for exact search
          gh label create "$label" || true
          search_output=$(gh issue list --search "$title" --label "$label")
-          body="Latest lifecycle release v${{ steps.read-versions.outputs.latest-release-version }} is built with Go version ${{ steps.read-versions.outputs.latest-release-go-version }}; newer version ${{ steps.read-versions.outputs.latest-go-version }} is available."
+          body="Latest lifecycle release v${{ steps.read-go.outputs.latest-release-version }} is built with Go version ${{ steps.read-go.outputs.latest-release-go-version }}; newer version ${{ steps.read-go.outputs.latest-go-version }} is available."
          if [ -z "${search_output// }" ]
          then
@@ -86,12 +71,9 @@
          fi
      - name: Scan latest release image
        id: scan-image
-        uses: anchore/scan-action@v6
+        uses: anchore/scan-action@v3
        with:
-          image: buildpacksio/lifecycle:${{ steps.read-versions.outputs.latest-release-version }}
-          fail-build: true
-          severity-cutoff: medium
-          output-format: json
+          image: buildpacksio/lifecycle:${{ steps.read-go.outputs.latest-release-version }}
      - name: Create issue if needed
        if: failure() && steps.scan-image.outcome == 'failure'
        env:
@@ -101,7 +83,7 @@
          set -euo pipefail
-          title="CVE(s) found in v${{ steps.read-versions.outputs.latest-release-version }}"
+          title="CVE(s) found"
          label=cve
          # Create label to use for exact search
@@ -110,7 +92,7 @@
          search_output=$(gh issue list --search "$title" --label "$label")
          GITHUB_WORKFLOW_URL=https://github.com/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID
-          body="Latest lifecycle release v${{ steps.read-versions.outputs.latest-release-version }} triggered CVE(s) from Grype. For further details, see: $GITHUB_WORKFLOW_URL json: $(cat ${{ steps.scan-image.outputs.json }} | jq '.matches[] | .vulnerability | {id, severity, description}' )"
+          body="Latest lifecycle release v${{ steps.read-go.outputs.latest-release-version }} triggered CVE(s) from Grype. For further details, see: $GITHUB_WORKFLOW_URL"
          if [ -z "${search_output// }" ]
          then

View File

@@ -6,10 +6,8 @@ on:
 jobs:
   draft-release:
     runs-on: ubuntu-latest
-    permissions:
-      contents: write
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v2
      - name: Install jq
        run: |
          mkdir -p deps/bin
@@ -24,10 +22,9 @@
          exit 1
        fi
        echo "LIFECYCLE_VERSION=$version" >> $GITHUB_ENV
-      - name: Determine download urls for linux-x86-64, linux-arm64, linux-ppc64le, linux-s390x
+      - name: Determine download urls for linux-x86-64, linux-arm64 and windows
        id: artifact-urls
-        # FIXME: this script should be updated to work with actions/github-script@v6
-        uses: actions/github-script@v3
+        uses: actions/github-script@v3.0.0
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
@@ -83,10 +80,7 @@
              throw "no artifacts found"
            }
            if (urlList.length != 10) {
-              // found too many artifacts
-              // list them and throw
-              console.log(urlList);
-              throw "there should be exactly 10 artifacts, found " + urlList.length + " artifacts"
+              throw "there should be exactly ten artifacts"
            }
            return urlList.join(",")
          })
@ -117,72 +111,26 @@ jobs:
if: "!contains(env.LIFECYCLE_VERSION, 'rc') && !contains(env.LIFECYCLE_VERSION, 'pre')" if: "!contains(env.LIFECYCLE_VERSION, 'rc') && !contains(env.LIFECYCLE_VERSION, 'pre')"
run: | run: |
echo "RELEASE_KIND=release" >> $GITHUB_ENV echo "RELEASE_KIND=release" >> $GITHUB_ENV
- name: Get previous release tag
id: get-previous-release-tag
uses: actions/github-script@v6
with:
github-token: ${{secrets.GITHUB_TOKEN}}
result-encoding: string
script: |
return github.rest.repos.getLatestRelease({
owner: "buildpacks",
repo: "lifecycle",
}).then(result => {
return result.data.tag_name
})
- name: Setup go
uses: actions/setup-go@v5
with:
check-latest: true
go-version-file: 'go.mod'
- name: Get go version
id: get-go-version
run: |
mkdir tmp
tar xzvf ${{ env.ARTIFACTS_PATH }}/lifecycle-v${{ env.LIFECYCLE_VERSION }}+linux.x86-64.tgz -C tmp/
echo "GO_VERSION=$(go version tmp/lifecycle/lifecycle | cut -d ' ' -f 2 | sed -e 's/^go//')" >> $GITHUB_ENV
- name: Set release body text - name: Set release body text
run: | run: |
cat << EOF > body.txt cat << EOF > body.txt
# lifecycle v${{ env.LIFECYCLE_VERSION }} # lifecycle v${{ env.LIFECYCLE_VERSION }}
Welcome to v${{ env.LIFECYCLE_VERSION }}, a ${{ env.RELEASE_KIND }} of the Cloud Native Buildpacks Lifecycle. Welcome to v${{ env.LIFECYCLE_VERSION }}, a **beta** ${{ env.RELEASE_KIND }} of the Cloud Native Buildpacks Lifecycle.
## Prerequisites ## Prerequisites
The lifecycle runs as a normal user in a series of unprivileged containers. To export images and cache image layers, it requires access to a Docker (compatible) daemon **or** an OCI registry. The lifecycle runs as a normal user in a series of unprivileged containers. To export images and cache image layers, it requires access to a Docker daemon **or** Docker registry.
## Install
Extract the .tgz file and copy the lifecycle binaries into a [build image](https://github.com/buildpacks/spec/blob/main/platform.md#build-image). The build image can then be orchestrated by a platform implementation such as the [pack CLI](https://github.com/buildpack/pack) or [tekton](https://github.com/tektoncd/catalog/tree/main/task/buildpacks). Extract the .tgz file and copy the lifecycle binaries into a [build stack base image](https://github.com/buildpack/spec/blob/master/platform.md#stacks). The build image can then be orchestrated by a platform implementation such as the [pack CLI](https://github.com/buildpack/pack) or [tekton](https://github.com/tektoncd/catalog/blob/master/task/buildpacks/0.1/README.md).
## Lifecycle Image
An OCI image containing the lifecycle binaries is available at buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}.
## Features
* TODO
* Updates go to version ${{ env.GO_VERSION }}
## Bugfixes
* TODO
## Chores
* TODO
**Full Changelog**: https://github.com/buildpacks/lifecycle/compare/${{ steps.get-previous-release-tag.outputs.result }}...release/${{ env.LIFECYCLE_VERSION }}
## Contributors
We'd like to acknowledge that this release wouldn't be as good without the help of the following amazing contributors:
TODO
EOF
- name: Create pre-release - name: Create Pre Release
if: "contains(env.LIFECYCLE_VERSION, 'rc') || contains(env.LIFECYCLE_VERSION, 'pre')" # e.g., 0.99.0-rc.1
run: |
cd ${{ env.ARTIFACTS_PATH }}
@@ -191,11 +139,11 @@ jobs:
--draft \
--notes-file ../body.txt \
--prerelease \
--target $GITHUB_REF_NAME \ --target $GITHUB_REF \
--title "lifecycle v${{ env.LIFECYCLE_VERSION }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create release - name: Create Release
if: "!contains(env.LIFECYCLE_VERSION, 'rc') && !contains(env.LIFECYCLE_VERSION, 'pre')"
run: |
cd ${{ env.ARTIFACTS_PATH }}
@@ -203,7 +151,7 @@ jobs:
$(ls | sort | paste -sd " " -) \
--draft \
--notes-file ../body.txt \
--target $GITHUB_REF_NAME \ --target $GITHUB_REF \
--title "lifecycle v${{ env.LIFECYCLE_VERSION }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -8,12 +8,10 @@
jobs:
retag-lifecycle-images:
runs-on: ubuntu-latest
permissions:
id-token: write
steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v2
- name: Set up go
uses: actions/setup-go@v5 uses: actions/setup-go@v3
with:
check-latest: true
go-version-file: 'go.mod'
@@ -21,8 +19,10 @@ jobs:
run: |
go install github.com/google/go-containerregistry/cmd/crane@latest
- name: Install cosign - name: Install cosign
uses: sigstore/cosign-installer@v3 uses: sigstore/cosign-installer@main
- uses: azure/docker-login@v2 with:
cosign-release: 'v1.2.0'
- uses: azure/docker-login@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -32,22 +32,17 @@ jobs:
echo "LIFECYCLE_IMAGE_TAG=$(git describe --always --abbrev=7)" >> $GITHUB_ENV
- name: Verify lifecycle images
run: |
LINUX_AMD64_SHA=$(cosign verify --certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/build.yml" --certificate-oidc-issuer https://token.actions.githubusercontent.com buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-x86-64 | jq -r .[0].critical.image.\"docker-manifest-digest\") LINUX_AMD64_SHA=$(cosign verify -key cosign.pub buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-x86-64 | jq -r .[0].critical.image.\"docker-manifest-digest\")
echo "LINUX_AMD64_SHA: $LINUX_AMD64_SHA"
echo "LINUX_AMD64_SHA=$LINUX_AMD64_SHA" >> $GITHUB_ENV
LINUX_ARM64_SHA=$(cosign verify --certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/build.yml" --certificate-oidc-issuer https://token.actions.githubusercontent.com buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-arm64 | jq -r .[0].critical.image.\"docker-manifest-digest\") LINUX_ARM64_SHA=$(cosign verify -key cosign.pub buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-arm64 | jq -r .[0].critical.image.\"docker-manifest-digest\")
echo "LINUX_ARM64_SHA: $LINUX_ARM64_SHA"
echo "LINUX_ARM64_SHA=$LINUX_ARM64_SHA" >> $GITHUB_ENV
LINUX_PPC64LE_SHA=$(cosign verify --certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/build.yml" --certificate-oidc-issuer https://token.actions.githubusercontent.com buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-ppc64le | jq -r .[0].critical.image.\"docker-manifest-digest\") WINDOWS_AMD64_SHA=$(cosign verify -key cosign.pub buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-windows | jq -r .[0].critical.image.\"docker-manifest-digest\")
echo "LINUX_PPC64LE_SHA: $LINUX_PPC64LE_SHA" echo "WINDOWS_AMD64_SHA: $WINDOWS_AMD64_SHA"
echo "LINUX_PPC64LE_SHA=$LINUX_PPC64LE_SHA" >> $GITHUB_ENV echo "WINDOWS_AMD64_SHA=$WINDOWS_AMD64_SHA" >> $GITHUB_ENV
LINUX_S390X_SHA=$(cosign verify --certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/build.yml" --certificate-oidc-issuer https://token.actions.githubusercontent.com buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-s390x | jq -r .[0].critical.image.\"docker-manifest-digest\")
echo "LINUX_S390X_SHA: $LINUX_S390X_SHA"
echo "LINUX_S390X_SHA=$LINUX_S390X_SHA" >> $GITHUB_ENV
- name: Download SBOM
run: |
gh release download --pattern '*-bom.cdx.json' ${{ github.event.release.tag_name }}
@@ -59,36 +54,28 @@ jobs:
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-x86-64@${{ env.LINUX_AMD64_SHA }} ${{ env.LIFECYCLE_VERSION }}-linux-x86-64
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-arm64@${{ env.LINUX_ARM64_SHA }} ${{ env.LIFECYCLE_VERSION }}-linux-arm64
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-ppc64le@${{ env.LINUX_PPC64LE_SHA }} ${{ env.LIFECYCLE_VERSION }}-linux-ppc64le crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-windows@${{ env.WINDOWS_AMD64_SHA }} ${{ env.LIFECYCLE_VERSION }}-windows
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-s390x@${{ env.LINUX_S390X_SHA }} ${{ env.LIFECYCLE_VERSION }}-linux-s390x
docker manifest create buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }} \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}-linux-x86-64@${{ env.LINUX_AMD64_SHA }} \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}-linux-arm64@${{ env.LINUX_ARM64_SHA }} \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}-linux-ppc64le@${{ env.LINUX_PPC64LE_SHA }} \ buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}-windows@${{ env.WINDOWS_AMD64_SHA }}
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}-linux-s390x@${{ env.LINUX_S390X_SHA }}
MANIFEST_SHA=$(docker manifest push buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }})
echo "MANIFEST_SHA: $MANIFEST_SHA"
cosign sign -r -y \ COSIGN_PASSWORD=${{ secrets.COSIGN_PASSWORD }} cosign sign -r \
-key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}" | base64 --decode) \
-a tag=${{ env.LIFECYCLE_VERSION }} \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}@${MANIFEST_SHA}
cosign verify \ cosign verify -key cosign.pub -a tag=${{ env.LIFECYCLE_VERSION }} buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}
--certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
-a tag=${{ env.LIFECYCLE_VERSION }} \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}
cosign attach sbom --sbom ./*-bom.cdx.json --type cyclonedx buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }} cosign attach sbom -sbom ./*-bom.cdx.json -type cyclonedx buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}
cosign sign -r -y \ COSIGN_PASSWORD=${{ secrets.COSIGN_PASSWORD }} cosign sign -r \
-a tag=${{ env.LIFECYCLE_VERSION }} --attachment sbom \ -key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}" | base64 --decode) \
-a tag=${{ env.LIFECYCLE_VERSION }} -attachment sbom \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}@${MANIFEST_SHA}
cosign verify \ cosign verify -key cosign.pub -a tag=${{ env.LIFECYCLE_VERSION }} -attachment sbom buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}@${MANIFEST_SHA}
--certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
-a tag=${{ env.LIFECYCLE_VERSION }} --attachment sbom \
buildpacksio/lifecycle:${{ env.LIFECYCLE_VERSION }}
- name: Retag lifecycle images & create manifest list - latest
if: "!contains(env.LIFECYCLE_VERSION, 'rc') && !contains(env.LIFECYCLE_VERSION, 'pre')"
run: |
@@ -96,33 +83,25 @@ jobs:
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-x86-64@${{ env.LINUX_AMD64_SHA }} latest-linux-x86-64
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-arm64@${{ env.LINUX_ARM64_SHA }} latest-linux-arm64
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-ppc64le@${{ env.LINUX_PPC64LE_SHA }} latest-linux-ppc64le crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-windows@${{ env.WINDOWS_AMD64_SHA }} latest-windows
crane tag buildpacksio/lifecycle:${{ env.LIFECYCLE_IMAGE_TAG }}-linux-s390x@${{ env.LINUX_S390X_SHA }} latest-linux-s390x
docker manifest create buildpacksio/lifecycle:latest \
buildpacksio/lifecycle:latest-linux-x86-64@${{ env.LINUX_AMD64_SHA }} \
buildpacksio/lifecycle:latest-linux-arm64@${{ env.LINUX_ARM64_SHA }} \
buildpacksio/lifecycle:latest-linux-ppc64le@${{ env.LINUX_PPC64LE_SHA }} \ buildpacksio/lifecycle:latest-windows@${{ env.WINDOWS_AMD64_SHA }}
buildpacksio/lifecycle:latest-linux-s390x@${{ env.LINUX_S390X_SHA }}
MANIFEST_SHA=$(docker manifest push buildpacksio/lifecycle:latest)
echo "MANIFEST_SHA: $MANIFEST_SHA"
cosign sign -r -y \ COSIGN_PASSWORD=${{ secrets.COSIGN_PASSWORD }} cosign sign -r \
-key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}" | base64 --decode) \
-a tag=latest \
buildpacksio/lifecycle:latest@${MANIFEST_SHA}
cosign verify \ cosign verify -key cosign.pub -a tag=latest buildpacksio/lifecycle:latest
--certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
-a tag=latest \
buildpacksio/lifecycle:latest
cosign attach sbom --sbom ./*-bom.cdx.json --type cyclonedx buildpacksio/lifecycle:latest cosign attach sbom -sbom ./*-bom.cdx.json -type cyclonedx buildpacksio/lifecycle:latest
cosign sign -r -y \ COSIGN_PASSWORD=${{ secrets.COSIGN_PASSWORD }} cosign sign -r \
-a tag=${{ env.LIFECYCLE_VERSION }} --attachment sbom \ -key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}" | base64 --decode) \
-a tag=${{ env.LIFECYCLE_VERSION }} -attachment sbom \
buildpacksio/lifecycle:latest@${MANIFEST_SHA}
cosign verify \ cosign verify -key cosign.pub -a tag=${{ env.LIFECYCLE_VERSION }} -attachment sbom buildpacksio/lifecycle:latest@${MANIFEST_SHA}
--certificate-identity-regexp "https://github.com/${{ github.repository_owner }}/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
-a tag=${{ env.LIFECYCLE_VERSION }} --attachment sbom \
buildpacksio/lifecycle:latest


@@ -1,87 +0,0 @@
name: test-s390x
on:
push:
branches:
- main
- 'release/**'
pull_request:
branches:
- main
- 'release/**'
jobs:
test-linux-s390x:
if: (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/release*')
runs-on: ubuntu-latest
env:
ZVSI_FP_NAME: bp-floating-ci-${{ github.run_id }}
ZVSI_INSTANCE_NAME: bp-zvsi-ci-${{ github.run_id }}
ZVSI_ZONE_NAME: ca-tor-1
ZVSI_PROFILE_NAME: bz2-4x16
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v4
- name: install ibmcli and setup ibm login
run: |
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
ibmcloud login -q --apikey ${{ secrets.IBMCLOUD_API_KEY }} -r ca-tor
ibmcloud plugin install vpc-infrastructure
- name: Creation of ZVSI
id: ZVSI
run: |
#creation of zvsi
ibmcloud is instance-create $ZVSI_INSTANCE_NAME ${{ secrets.ZVSI_VPC }} $ZVSI_ZONE_NAME $ZVSI_PROFILE_NAME ${{ secrets.ZVSI_SUBNET }} --image ${{ secrets.ZVSI_IMAGE }} --keys ${{ secrets.ZVSI_KEY }} --resource-group-id ${{ secrets.ZVSI_RG_ID }} --primary-network-interface "{\"name\":\"eth0\",\"allow_ip_spoofing\":false,\"subnet\": {\"name\":\"${{ secrets.ZVSI_SUBNET }}\"},\"security_groups\":[{\"id\":\"${{ secrets.ZVSI_SG }}\"}]}"
#Reserving a floating ip to the ZVSI
ibmcloud is floating-ip-reserve $ZVSI_FP_NAME --zone $ZVSI_ZONE_NAME --resource-group-id ${{ secrets.ZVSI_RG_ID }} --in $ZVSI_INSTANCE_NAME
#Binding the floating IP to the ZVSI
ibmcloud is floating-ip-update $ZVSI_FP_NAME --nic eth0 --in $ZVSI_INSTANCE_NAME
sleep 60
#Saving the floating IP used to log in to the ZVSI
ZVSI_HOST=$(ibmcloud is floating-ip $ZVSI_FP_NAME | awk '/Address/{print $2}')
echo $ZVSI_HOST
echo "IP=${ZVSI_HOST}" >> $GITHUB_OUTPUT
- name: Status of ZVSI
run: |
check=$(ibmcloud is ins| awk '/'$ZVSI_INSTANCE_NAME'/{print $3}')
while [[ $check != "running" ]]
do
check=$(ibmcloud is ins | awk '/'$ZVSI_INSTANCE_NAME'/{print $3}')
if [[ $check == 'failed' ]]
then
echo "Failed to run the ZVSI"
break
fi
done
- name: Install dependencies and run all tests on s390x ZVSI
uses: appleboy/ssh-action@v1.2.2
env:
GH_REPOSITORY: ${{ github.server_url }}/${{ github.repository }}
GH_REF: ${{ github.ref }}
with:
host: ${{ steps.ZVSI.outputs.IP }}
username: ${{ secrets.ZVSI_SSH_USER }}
key: ${{ secrets.ZVSI_PR_KEY }}
envs: GH_REPOSITORY,GH_REF
command_timeout: 100m
script: |
apt-get update -y
apt-get install -y wget curl git make gcc jq docker.io
wget https://go.dev/dl/go1.24.6.linux-s390x.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.24.6.linux-s390x.tar.gz
export PATH=$PATH:/usr/local/go/bin
git clone ${GH_REPOSITORY} lifecycle
cd lifecycle && git checkout ${GH_REF}
go env
export PATH=$PATH:~/go/bin
make format || true
make test
- name: Cleanup ZVSI
if: ${{ steps.ZVSI.conclusion == 'success' && always() }}
run: |
#Delete the created ZVSI
ibmcloud is instance-delete $ZVSI_INSTANCE_NAME --force
sleep 20
#Release the created FP
ibmcloud is floating-ip-release $ZVSI_FP_NAME --force

.gitignore

@@ -6,12 +6,9 @@
.tool-versions
/out
.vscode
acceptance/testdata/*/**/container/cnb/lifecycle/*
acceptance/testdata/*/**/container/docker-config/*
acceptance/testdata/exporter/container/cnb/run.toml
acceptance/testdata/exporter/container/layers/*analyzed.toml
acceptance/testdata/exporter/container/other_layers/*analyzed.toml
acceptance/testdata/restorer/container/layers/*analyzed.toml


@@ -1,5 +1,5 @@
ignore:
- vulnerability: CVE-2015-5237 # false positive, see https://github.com/anchore/grype/issues/558
- vulnerability: CVE-2021-22570 # false positive, see https://github.com/anchore/grype/issues/558
- vulnerability: CVE-2024-41110 # non-impactful as we only use docker as a client - vulnerability: GHSA-vpvm-3wq2-2wvm # unpatched as of 3/28/23, non-impactful as the lifecycle doesn't create containers
- vulnerability: GHSA-v23v-6jw2-98fq # non-impactful as we only use docker as a client - vulnerability: GHSA-hqxw-f8mx-cpmw # unpatched in this release line, non-impactful because the lifecycle is not an http server


@@ -32,22 +32,35 @@
]
```
* Some of the Windows acceptance tests use license restricted base images. By default, the Docker daemon will not publish layers from these images when pushing to a registry, which can result in test failures with error messages such as: `Ignoring image "X" because it was corrupt`. To fix these failures you must [enable pushing nondistributable artifacts](https://docs.docker.com/engine/reference/commandline/dockerd/#allow-push-of-nondistributable-artifacts) to the test registry by adding the following to your Docker Desktop Engine config:
* `%programdata%\docker\config\daemon.json`:
```
{
"allow-nondistributable-artifacts": [
"<my-host-ip>/32"
]
}
```
### Testing GitHub actions on forks
The lifecycle release process involves chaining a series of GitHub actions together such that:
* The "build" workflow creates the artifacts
* .tgz files containing the lifecycle binaries, shasums for the .tgz files, an SBOM, etc. * .tgz files containing the lifecycle binaries, shasums for the .tgz files, a cosign public key, an SBOM, etc.
* OCI images containing the lifecycle binaries, tagged with their commit sha (for more information, see RELEASE.md)
* The "draft-release" workflow finds the artifacts and downloads them, creating the draft release
* The "post-release" workflow re-tags the OCI images that were created during the "build" workflow with the release version
It can be rather cumbersome to test changes to these workflows, as they are heavily intertwined. Thus we recommend forking the buildpacks/lifecycle repository in GitHub and running through the entire release process end-to-end.
For the fork, it is necessary to add the following secrets:
* COSIGN_PASSWORD (see [cosign](https://github.com/sigstore/cosign#generate-a-keypair))
* COSIGN_PRIVATE_KEY
* DOCKER_PASSWORD (if not using ghcr.io)
* DOCKER_USERNAME (if not using ghcr.io)
The tools/test-fork.sh script can be used to update the source code to reflect the state of the fork.
It can be invoked like so: `./tools/test-fork.sh <registry repo name>` It can be invoked like so: `./tools/test-fork.sh <registry repo name> <path to cosign public key>`
## Tasks


@@ -7,28 +7,23 @@ This image is maintained by the [Cloud Native Buildpacks project](https://buildp
Supported tags are semver-versioned manifest lists - e.g., `0.12.0` or `0.12.0-rc.1`, pointing to one of the following os/architectures:
* `linux/amd64`
* `linux/arm64`
* `windows/amd64`
# About this image
Images are built in [GitHub actions](https://github.com/buildpacks/lifecycle/actions) and signed with [`cosign`](https://github.com/sigstore/cosign). To verify:
* Locate the public key `lifecycle-v<tag>-cosign.pub` on the [releases page](https://github.com/buildpacks/lifecycle/releases)
* Run:
```
cosign version # must be at least 2.0.0 cosign verify -key lifecycle-v<tag>-cosign.pub buildpacksio/lifecycle:<tag>
cosign verify \
--certificate-identity-regexp "https://github.com/buildpacks/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
buildpacksio/lifecycle:<tag>
```
A CycloneDX SBOM is "attached" to the image and signed with [`cosign`](https://github.com/sigstore/cosign). To verify:
* Locate the public key `lifecycle-v<tag>-cosign.pub` on the [releases page](https://github.com/buildpacks/lifecycle/releases)
* Run:
```
cosign version # must be at least 2.0.0 cosign version # must be at least 1.2.0
cosign verify \ cosign verify -key cosign.pub -a tag=<tag> -attachment sbom buildpacksio/lifecycle:<tag>
--certificate-identity-regexp "https://github.com/buildpacks/lifecycle/.github/workflows/post-release.yml" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
-a tag=<tag> -attachment sbom \
buildpacksio/lifecycle:<tag>
cosign download sbom buildpacksio/lifecycle:<tag>
```

Makefile

@@ -30,6 +30,7 @@ LDFLAGS+=-X 'github.com/buildpacks/lifecycle/cmd.Version=$(LIFECYCLE_VERSION)'
GOBUILD:=go build $(GOFLAGS) -ldflags "$(LDFLAGS)"
GOTEST=$(GOCMD) test $(GOFLAGS)
BUILD_DIR?=$(PWD)$/out
WINDOWS_COMPILATION_IMAGE?=golang:1.20-windowsservercore-1809
SOURCE_COMPILATION_IMAGE?=lifecycle-img
BUILD_CTR?=lifecycle-ctr
DOCKER_CMD?=make test
@@ -38,9 +39,11 @@ GOFILES := $(shell $(GOCMD) run tools$/lister$/main.go)
all: test build package
GOOS_ARCHS = linux/amd64 linux/arm64 linux/ppc64le linux/s390x darwin/amd64 darwin/arm64 build: build-linux-amd64 build-linux-arm64 build-windows-amd64
build: build-linux-amd64 build-linux-arm64 build-linux-ppc64le build-linux-s390x build-linux-amd64: build-linux-amd64-lifecycle build-linux-amd64-symlinks build-linux-amd64-launcher
build-linux-arm64: build-linux-arm64-lifecycle build-linux-arm64-symlinks build-linux-arm64-launcher
build-windows-amd64: build-windows-amd64-lifecycle build-windows-amd64-symlinks build-windows-amd64-launcher
build-image-linux-amd64: build-linux-amd64 package-linux-amd64
build-image-linux-amd64: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+linux.x86-64.tgz
@@ -52,60 +55,173 @@ build-image-linux-arm64: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSIO
build-image-linux-arm64:
$(GOCMD) run ./tools/image/main.go -daemon -lifecyclePath $(ARCHIVE_PATH) -os linux -arch arm64 -tag lifecycle:$(LIFECYCLE_IMAGE_TAG)
build-image-linux-ppc64le: build-linux-ppc64le package-linux-ppc64le build-image-windows-amd64: build-windows-amd64 package-windows-amd64
build-image-linux-ppc64le: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+linux.ppc64le.tgz build-image-windows-amd64: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+windows.x86-64.tgz
build-image-linux-ppc64le: build-image-windows-amd64:
$(GOCMD) run ./tools/image/main.go -daemon -lifecyclePath $(ARCHIVE_PATH) -os linux -arch ppc64le -tag lifecycle:$(LIFECYCLE_IMAGE_TAG) $(GOCMD) run ./tools/image/main.go -daemon -lifecyclePath $(ARCHIVE_PATH) -os windows -arch amd64 -tag lifecycle:$(LIFECYCLE_IMAGE_TAG)
build-image-linux-s390x: build-linux-s390x package-linux-s390x build-linux-amd64-lifecycle: $(BUILD_DIR)/linux-amd64/lifecycle/lifecycle
build-image-linux-s390x: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+linux.s390x.tgz
build-image-linux-s390x:
$(GOCMD) run ./tools/image/main.go -daemon -lifecyclePath $(ARCHIVE_PATH) -os linux -arch s390x -tag lifecycle:$(LIFECYCLE_IMAGE_TAG)
define build_targets build-linux-arm64-lifecycle: $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle
build-$(1)-$(2): build-$(1)-$(2)-lifecycle build-$(1)-$(2)-symlinks build-$(1)-$(2)-launcher
build-$(1)-$(2)-lifecycle: $(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle $(BUILD_DIR)/linux-amd64/lifecycle/lifecycle: export GOOS:=linux
$(BUILD_DIR)/linux-amd64/lifecycle/lifecycle: export GOARCH:=amd64
$(BUILD_DIR)/linux-amd64/lifecycle/lifecycle: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$(BUILD_DIR)/linux-amd64/lifecycle/lifecycle: $(GOFILES)
$(BUILD_DIR)/linux-amd64/lifecycle/lifecycle:
@echo "> Building lifecycle/lifecycle for $(GOOS)/$(GOARCH)..."
mkdir -p $(OUT_DIR)
$(GOENV) $(GOBUILD) -o $(OUT_DIR)/lifecycle -a ./cmd/lifecycle
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle: export GOOS:=$(1) $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle: export GOOS:=linux
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle: export GOARCH:=$(2) $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle: export GOARCH:=arm64
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle: OUT_DIR?=$$(BUILD_DIR)/$$(GOOS)-$$(GOARCH)/lifecycle $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle: $$(GOFILES) $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle: $(GOFILES)
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/lifecycle: $(BUILD_DIR)/linux-arm64/lifecycle/lifecycle:
@echo "> Building lifecycle/lifecycle for $$(GOOS)/$$(GOARCH)..." @echo "> Building lifecycle/lifecycle for $(GOOS)/$(GOARCH)..."
mkdir -p $$(OUT_DIR) mkdir -p $(OUT_DIR)
$$(GOENV) $$(GOBUILD) -o $$(OUT_DIR)/lifecycle -a ./cmd/lifecycle $(GOENV) $(GOBUILD) -o $(OUT_DIR)/lifecycle -a ./cmd/lifecycle
build-$(1)-$(2)-symlinks: export GOOS:=$(1) build-linux-amd64-launcher: $(BUILD_DIR)/linux-amd64/lifecycle/launcher
build-$(1)-$(2)-symlinks: export GOARCH:=$(2)
build-$(1)-$(2)-symlinks: OUT_DIR?=$$(BUILD_DIR)/$$(GOOS)-$$(GOARCH)/lifecycle
build-$(1)-$(2)-symlinks:
@echo "> Creating phase symlinks for $$(GOOS)/$$(GOARCH)..."
ln -sf lifecycle $$(OUT_DIR)/detector
ln -sf lifecycle $$(OUT_DIR)/analyzer
ln -sf lifecycle $$(OUT_DIR)/restorer
ln -sf lifecycle $$(OUT_DIR)/builder
ln -sf lifecycle $$(OUT_DIR)/exporter
ln -sf lifecycle $$(OUT_DIR)/rebaser
ln -sf lifecycle $$(OUT_DIR)/creator
ln -sf lifecycle $$(OUT_DIR)/extender
build-$(1)-$(2)-launcher: $$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher $(BUILD_DIR)/linux-amd64/lifecycle/launcher: export GOOS:=linux
$(BUILD_DIR)/linux-amd64/lifecycle/launcher: export GOARCH:=amd64
$(BUILD_DIR)/linux-amd64/lifecycle/launcher: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$(BUILD_DIR)/linux-amd64/lifecycle/launcher: $(GOFILES)
$(BUILD_DIR)/linux-amd64/lifecycle/launcher:
@echo "> Building lifecycle/launcher for $(GOOS)/$(GOARCH)..."
mkdir -p $(OUT_DIR)
$(GOENV) $(GOBUILD) -o $(OUT_DIR)/launcher -a ./cmd/launcher
test $$(du -m $(OUT_DIR)/launcher|cut -f 1) -le 3
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher: export GOOS:=$(1) build-linux-arm64-launcher: $(BUILD_DIR)/linux-arm64/lifecycle/launcher
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher: export GOARCH:=$(2)
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher: OUT_DIR?=$$(BUILD_DIR)/$$(GOOS)-$$(GOARCH)/lifecycle
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher: $$(GOFILES)
$$(BUILD_DIR)/$(1)-$(2)/lifecycle/launcher:
@echo "> Building lifecycle/launcher for $$(GOOS)/$$(GOARCH)..."
mkdir -p $$(OUT_DIR)
$$(GOENV) $$(GOBUILD) -o $$(OUT_DIR)/launcher -a ./cmd/launcher
test $$$$(du -m $$(OUT_DIR)/launcher|cut -f 1) -le 3
endef
$(foreach ga,$(GOOS_ARCHS),$(eval $(call build_targets,$(word 1, $(subst /, ,$(ga))),$(word 2, $(subst /, ,$(ga)))))) $(BUILD_DIR)/linux-arm64/lifecycle/launcher: export GOOS:=linux
$(BUILD_DIR)/linux-arm64/lifecycle/launcher: export GOARCH:=arm64
$(BUILD_DIR)/linux-arm64/lifecycle/launcher: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$(BUILD_DIR)/linux-arm64/lifecycle/launcher: $(GOFILES)
$(BUILD_DIR)/linux-arm64/lifecycle/launcher:
@echo "> Building lifecycle/launcher for $(GOOS)/$(GOARCH)..."
mkdir -p $(OUT_DIR)
$(GOENV) $(GOBUILD) -o $(OUT_DIR)/launcher -a ./cmd/launcher
test $$(du -m $(OUT_DIR)/launcher|cut -f 1) -le 3
generate-sbom: run-syft-linux-amd64 run-syft-linux-arm64 run-syft-linux-ppc64le run-syft-linux-s390x build-linux-amd64-symlinks: export GOOS:=linux
build-linux-amd64-symlinks: export GOARCH:=amd64
build-linux-amd64-symlinks: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
build-linux-amd64-symlinks:
@echo "> Creating phase symlinks for $(GOOS)/$(GOARCH)..."
ln -sf lifecycle $(OUT_DIR)/detector
ln -sf lifecycle $(OUT_DIR)/analyzer
ln -sf lifecycle $(OUT_DIR)/restorer
ln -sf lifecycle $(OUT_DIR)/builder
ln -sf lifecycle $(OUT_DIR)/exporter
ln -sf lifecycle $(OUT_DIR)/rebaser
ln -sf lifecycle $(OUT_DIR)/creator
ln -sf lifecycle $(OUT_DIR)/extender
build-linux-arm64-symlinks: export GOOS:=linux
build-linux-arm64-symlinks: export GOARCH:=arm64
build-linux-arm64-symlinks: OUT_DIR?=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
build-linux-arm64-symlinks:
@echo "> Creating phase symlinks for $(GOOS)/$(GOARCH)..."
ln -sf lifecycle $(OUT_DIR)/detector
ln -sf lifecycle $(OUT_DIR)/analyzer
ln -sf lifecycle $(OUT_DIR)/restorer
ln -sf lifecycle $(OUT_DIR)/builder
ln -sf lifecycle $(OUT_DIR)/exporter
ln -sf lifecycle $(OUT_DIR)/rebaser
ln -sf lifecycle $(OUT_DIR)/creator
ln -sf lifecycle $(OUT_DIR)/extender
build-windows-amd64-lifecycle: $(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe
$(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe: export GOOS:=windows
$(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe: export GOARCH:=amd64
$(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe: OUT_DIR?=$(BUILD_DIR)$/$(GOOS)-$(GOARCH)$/lifecycle
$(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe: $(GOFILES)
$(BUILD_DIR)/windows-amd64/lifecycle/lifecycle.exe:
@echo "> Building lifecycle/lifecycle for $(GOOS)/$(GOARCH)..."
$(GOBUILD) -o $(OUT_DIR)$/lifecycle.exe -a .$/cmd$/lifecycle
build-windows-amd64-launcher: $(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe
$(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe: export GOOS:=windows
$(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe: export GOARCH:=amd64
$(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe: OUT_DIR?=$(BUILD_DIR)$/$(GOOS)-$(GOARCH)$/lifecycle
$(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe: $(GOFILES)
$(BUILD_DIR)/windows-amd64/lifecycle/launcher.exe:
@echo "> Building lifecycle/launcher for $(GOOS)/$(GOARCH)..."
$(GOBUILD) -o $(OUT_DIR)$/launcher.exe -a .$/cmd$/launcher
build-windows-amd64-symlinks: export GOOS:=windows
build-windows-amd64-symlinks: export GOARCH:=amd64
build-windows-amd64-symlinks: OUT_DIR?=$(BUILD_DIR)$/$(GOOS)-$(GOARCH)$/lifecycle
build-windows-amd64-symlinks:
@echo "> Creating phase symlinks for Windows..."
ifeq ($(OS),Windows_NT)
call del $(OUT_DIR)$/detector.exe
call del $(OUT_DIR)$/analyzer.exe
call del $(OUT_DIR)$/restorer.exe
call del $(OUT_DIR)$/builder.exe
call del $(OUT_DIR)$/exporter.exe
call del $(OUT_DIR)$/rebaser.exe
call del $(OUT_DIR)$/creator.exe
call mklink $(OUT_DIR)$/detector.exe lifecycle.exe
call mklink $(OUT_DIR)$/analyzer.exe lifecycle.exe
call mklink $(OUT_DIR)$/restorer.exe lifecycle.exe
call mklink $(OUT_DIR)$/builder.exe lifecycle.exe
call mklink $(OUT_DIR)$/exporter.exe lifecycle.exe
call mklink $(OUT_DIR)$/rebaser.exe lifecycle.exe
call mklink $(OUT_DIR)$/creator.exe lifecycle.exe
else
ln -sf lifecycle.exe $(OUT_DIR)$/detector.exe
ln -sf lifecycle.exe $(OUT_DIR)$/analyzer.exe
ln -sf lifecycle.exe $(OUT_DIR)$/restorer.exe
ln -sf lifecycle.exe $(OUT_DIR)$/builder.exe
ln -sf lifecycle.exe $(OUT_DIR)$/exporter.exe
ln -sf lifecycle.exe $(OUT_DIR)$/rebaser.exe
ln -sf lifecycle.exe $(OUT_DIR)$/creator.exe
endif
build-darwin-amd64: build-darwin-amd64-lifecycle build-darwin-amd64-launcher
build-darwin-amd64-lifecycle: $(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle
$(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle: export GOOS:=darwin
$(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle: export GOARCH:=amd64
$(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle: OUT_DIR:=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle: $(GOFILES)
$(BUILD_DIR)/darwin-amd64/lifecycle/lifecycle:
@echo "> Building lifecycle for darwin/amd64..."
$(GOENV) $(GOBUILD) -o $(OUT_DIR)/lifecycle -a ./cmd/lifecycle
@echo "> Creating lifecycle symlinks for darwin/amd64..."
ln -sf lifecycle $(OUT_DIR)/detector
ln -sf lifecycle $(OUT_DIR)/analyzer
ln -sf lifecycle $(OUT_DIR)/restorer
ln -sf lifecycle $(OUT_DIR)/builder
ln -sf lifecycle $(OUT_DIR)/exporter
ln -sf lifecycle $(OUT_DIR)/rebaser
build-darwin-amd64-launcher: $(BUILD_DIR)/darwin-amd64/lifecycle/launcher
$(BUILD_DIR)/darwin-amd64/lifecycle/launcher: export GOOS:=darwin
$(BUILD_DIR)/darwin-amd64/lifecycle/launcher: export GOARCH:=amd64
$(BUILD_DIR)/darwin-amd64/lifecycle/launcher: OUT_DIR:=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
$(BUILD_DIR)/darwin-amd64/lifecycle/launcher: $(GOFILES)
$(BUILD_DIR)/darwin-amd64/lifecycle/launcher:
@echo "> Building launcher for darwin/amd64..."
mkdir -p $(OUT_DIR)
$(GOENV) $(GOBUILD) -o $(OUT_DIR)/launcher -a ./cmd/launcher
test $$(du -m $(OUT_DIR)/launcher|cut -f 1) -le 4
generate-sbom: run-syft-windows run-syft-linux-amd64 run-syft-linux-arm64
run-syft-windows: install-syft
run-syft-windows: export GOOS:=windows
run-syft-windows: export GOARCH:=amd64
run-syft-windows:
@echo "> Running syft..."
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.exe -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.cdx.json
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.exe -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.cdx.json
run-syft-linux-amd64: install-syft
run-syft-linux-amd64: export GOOS:=linux
@@ -123,46 +239,25 @@ run-syft-linux-arm64:
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.cdx.json
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.cdx.json
run-syft-linux-ppc64le: install-syft
run-syft-linux-ppc64le: export GOOS:=linux
run-syft-linux-ppc64le: export GOARCH:=ppc64le
run-syft-linux-ppc64le:
@echo "> Running syft..."
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.cdx.json
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.cdx.json
run-syft-linux-s390x: install-syft
run-syft-linux-s390x: export GOOS:=linux
run-syft-linux-s390x: export GOARCH:=s390x
run-syft-linux-s390x:
@echo "> Running syft..."
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/lifecycle.sbom.cdx.json
syft $(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher -o json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.syft.json -o spdx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.spdx.json -o cyclonedx-json=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle/launcher.sbom.cdx.json
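Each `run-syft-*` target is expected to leave three SBOM documents next to each binary. A hedged sketch of a post-run check (the helper is ours; the file-naming scheme comes from the syft commands above):

```shell
# Verify that syft produced all three SBOM flavors for a binary, using the
# naming scheme from the Makefile: <name>.sbom.{syft,spdx,cdx}.json.
check_sboms() {
  dir="$1"
  name="$2"
  missing=0
  for ext in sbom.syft.json sbom.spdx.json sbom.cdx.json; do
    if [ ! -f "$dir/$name.$ext" ]; then
      echo "missing: $dir/$name.$ext" >&2
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all SBOMs present for $name"
}
```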
install-syft:
@echo "> Installing syft..."
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
define install-go-tool
@echo "> Installing $(1)..."
$(GOCMD) install $(1)@$(shell $(GOCMD) list -m -f '{{.Version}}' $(2))
endef
install-goimports:
@echo "> Installing goimports..."
$(call install-go-tool,golang.org/x/tools/cmd/goimports,golang.org/x/tools)
install-yj:
@echo "> Installing yj..."
$(call install-go-tool,github.com/sclevine/yj,github.com/sclevine/yj)
install-mockgen:
@echo "> Installing mockgen..."
$(call install-go-tool,github.com/golang/mock/mockgen,github.com/golang/mock)
install-golangci-lint:
@echo "> Installing golangci-lint..."
$(call install-go-tool,github.com/golangci/golangci-lint/v2/cmd/golangci-lint,github.com/golangci/golangci-lint/v2)
lint: install-golangci-lint
@echo "> Linting code..."
@@ -205,7 +300,7 @@ clean:
@echo "> Cleaning workspace..."
rm -rf $(BUILD_DIR)
package: generate-sbom package-linux-amd64 package-linux-arm64 package-linux-ppc64le package-linux-s390x
package-linux-amd64: GOOS:=linux
package-linux-amd64: GOARCH:=amd64
@@ -225,20 +320,25 @@ package-linux-arm64:
@echo "> Packaging lifecycle for $(GOOS)/$(GOARCH)..."
$(GOCMD) run $(PACKAGER) --inputDir $(INPUT_DIR) -archivePath $(ARCHIVE_PATH) -descriptorPath $(LIFECYCLE_DESCRIPTOR_PATH) -version $(LIFECYCLE_VERSION)
package-linux-ppc64le: GOOS:=linux
package-linux-ppc64le: GOARCH:=ppc64le
package-linux-ppc64le: INPUT_DIR:=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
package-linux-ppc64le: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+$(GOOS).ppc64le.tgz
package-linux-ppc64le: PACKAGER=./tools/packager/main.go
package-linux-ppc64le:
@echo "> Packaging lifecycle for $(GOOS)/$(GOARCH)..."
$(GOCMD) run $(PACKAGER) --inputDir $(INPUT_DIR) -archivePath $(ARCHIVE_PATH) -descriptorPath $(LIFECYCLE_DESCRIPTOR_PATH) -version $(LIFECYCLE_VERSION)
package-linux-s390x: GOOS:=linux
package-linux-s390x: GOARCH:=s390x
package-linux-s390x: INPUT_DIR:=$(BUILD_DIR)/$(GOOS)-$(GOARCH)/lifecycle
package-linux-s390x: ARCHIVE_PATH=$(BUILD_DIR)/lifecycle-v$(LIFECYCLE_VERSION)+$(GOOS).s390x.tgz
package-linux-s390x: PACKAGER=./tools/packager/main.go
package-linux-s390x:
@echo "> Packaging lifecycle for $(GOOS)/$(GOARCH)..."
$(GOCMD) run $(PACKAGER) --inputDir $(INPUT_DIR) -archivePath $(ARCHIVE_PATH) -descriptorPath $(LIFECYCLE_DESCRIPTOR_PATH) -version $(LIFECYCLE_VERSION)
package-windows-amd64: GOOS:=windows
package-windows-amd64: GOARCH:=amd64
package-windows-amd64: INPUT_DIR:=$(BUILD_DIR)$/$(GOOS)-$(GOARCH)$/lifecycle
package-windows-amd64: ARCHIVE_PATH=$(BUILD_DIR)$/lifecycle-v$(LIFECYCLE_VERSION)+$(GOOS).x86-64.tgz
package-windows-amd64: PACKAGER=.$/tools$/packager$/main.go
package-windows-amd64:
@echo "> Packaging lifecycle for $(GOOS)/$(GOARCH)..."
$(GOCMD) run $(PACKAGER) --inputDir $(INPUT_DIR) -archivePath $(ARCHIVE_PATH) -descriptorPath $(LIFECYCLE_DESCRIPTOR_PATH) -version $(LIFECYCLE_VERSION)
# Ensure workdir is clean and build image from .git
docker-build-source-image-windows: $(GOFILES)
docker-build-source-image-windows:
$(if $(shell git status --short), @echo Uncommitted changes. Refusing to run. && exit 1)
docker build .git -f tools/Dockerfile.windows --tag $(SOURCE_COMPILATION_IMAGE) --build-arg image_tag=$(WINDOWS_COMPILATION_IMAGE) --cache-from=$(SOURCE_COMPILATION_IMAGE) --isolation=process --compress
docker-run-windows: docker-build-source-image-windows
docker-run-windows:
@echo "> Running '$(DOCKER_CMD)' in docker windows..."
@docker volume rm -f lifecycle-out
docker run -v lifecycle-out:c:/lifecycle/out -e LIFECYCLE_VERSION -e PLATFORM_API -e BUILDPACK_API -v gopathcache:c:/gopath -v '\\.\pipe\docker_engine:\\.\pipe\docker_engine' --isolation=process --interactive --tty --rm $(SOURCE_COMPILATION_IMAGE) $(DOCKER_CMD)
docker run -v lifecycle-out:c:/lifecycle/out --rm $(SOURCE_COMPILATION_IMAGE) tar -cf- out | tar -xf-
@docker volume rm -f lifecycle-out


@@ -11,14 +11,18 @@ A reference implementation of the [Cloud Native Buildpacks specification](https:
## Supported APIs
| Lifecycle Version | Platform APIs | Buildpack APIs |
|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
| 0.20.x | [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10], [0.11][p/0.11], [0.12][p/0.12], [0.13][p/0.13], [0.14][p/0.14] | [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9], [0.10][b/0.10], [0.11][b/0.11] |
| 0.19.x | [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10], [0.11][p/0.11], [0.12][p/0.12], [0.13][p/0.13] | [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9], [0.10][b/0.10], [0.11][b/0.11] |
| 0.18.x | [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10], [0.11][p/0.11], [0.12][p/0.12] | [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9], [0.10][b/0.10] |
| 0.17.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10], [0.11][p/0.11], [0.12][p/0.12] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6], [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9], [0.10][b/0.10] |
| 0.16.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10], [0.11][p/0.11] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6], [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9] |
| 0.15.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9], [0.10][p/0.10] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6], [0.7][b/0.7], [0.8][b/0.8], [0.9][b/0.9] |
| 0.14.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7], [0.8][p/0.8], [0.9][p/0.9] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6], [0.7][b/0.7], [0.8][b/0.8] |
| 0.13.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7], [0.8][p/0.8] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6], [0.7][b/0.7] |
| 0.12.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6], [0.7][p/0.7] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6] |
| 0.11.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5], [0.6][p/0.6] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5], [0.6][b/0.6] |
| 0.10.x | [0.3][p/0.3], [0.4][p/0.4], [0.5][p/0.5] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4], [0.5][b/0.5] |
| 0.9.x | [0.3][p/0.3], [0.4][p/0.4] | [0.2][b/0.2], [0.3][b/0.3], [0.4][b/0.4] |
| 0.8.x | [0.3][p/0.3] | [0.2][b/0.2] |
| 0.7.x | [0.2][p/0.2] | [0.2][b/0.2] |
| 0.6.x | [0.2][p/0.2] | [0.2][b/0.2] |
[b/0.2]: https://github.com/buildpacks/spec/blob/buildpack/v0.2/buildpack.md
[b/0.3]: https://github.com/buildpacks/spec/tree/buildpack/v0.3/buildpack.md
@@ -29,7 +33,6 @@ A reference implementation of the [Cloud Native Buildpacks specification](https:
[b/0.8]: https://github.com/buildpacks/spec/tree/buildpack/v0.8/buildpack.md
[b/0.9]: https://github.com/buildpacks/spec/tree/buildpack/v0.9/buildpack.md
[b/0.10]: https://github.com/buildpacks/spec/tree/buildpack/v0.10/buildpack.md
[b/0.11]: https://github.com/buildpacks/spec/tree/buildpack/v0.11/buildpack.md
[p/0.2]: https://github.com/buildpacks/spec/blob/platform/v0.2/platform.md
[p/0.3]: https://github.com/buildpacks/spec/blob/platform/v0.3/platform.md
[p/0.4]: https://github.com/buildpacks/spec/blob/platform/v0.4/platform.md
@@ -41,8 +44,6 @@ A reference implementation of the [Cloud Native Buildpacks specification](https:
[p/0.10]: https://github.com/buildpacks/spec/blob/platform/v0.10/platform.md
[p/0.11]: https://github.com/buildpacks/spec/blob/platform/v0.11/platform.md
[p/0.12]: https://github.com/buildpacks/spec/blob/platform/v0.12/platform.md
[p/0.13]: https://github.com/buildpacks/spec/blob/platform/v0.13/platform.md
[p/0.14]: https://github.com/buildpacks/spec/blob/platform/v0.14/platform.md
\* denotes unreleased version


@@ -1,73 +1,22 @@
# Release Finalization

## Types of releases

#### New minor
* For newly supported Platform or Buildpack API versions, or breaking changes (e.g., API deprecations).

#### Pre-release aka release candidate
* Ideally we should ship a pre-release (waiting a few days for folks to try it out) before we ship a new minor.
* We typically don't ship pre-releases for patches or backports.

#### New patch
* For go version updates, CVE fixes / dependency bumps, bug fixes, etc.
* Review the latest commits on `main` to determine if any are unacceptable for a patch - if there are commits that should be excluded, branch off the latest tag for the current minor and cherry-pick commits over.

#### Backport
* New patch for an old minor. Typically, to help folks out who haven't yet upgraded from [unsupported APIs](https://github.com/buildpacks/rfcs/blob/main/text/0110-deprecate-apis.md).
* For go version updates, CVE fixes / dependency bumps, bug fixes, etc.
* Branch off the latest tag for the desired minor.

## Release Finalization Steps

### Step 1 - Prepare
Determine the type of release ([new minor](#new-minor), [pre-release](#pre-release-aka-release-candidate), [new patch](#new-patch), or [backport](#backport)) and prepare the branch accordingly.

**To prepare the release branch:**
1. Check open PRs for any dependabot updates that should be merged.
1. Create a release branch in the format `release/0.99.0-rc.1` (for pre-releases) or `release/0.99.0` (for final releases).
   * New commits to this branch will trigger the `build` workflow and produce a lifecycle image: `buildpacksio/lifecycle:<commit sha>`.
1. If applicable, ensure the README is updated with the latest supported apis (example PR: https://github.com/buildpacks/lifecycle/pull/550).
   * For final releases (not pre-releases), remove the pre-release note (`*`) for the latest apis.

**For final releases (not pre-releases):**
1. Ensure the relevant spec APIs have been released.
1. Ensure the `lifecycle/0.99.0` milestone on the [docs repo](https://github.com/buildpacks/docs/blob/main/RELEASE.md#lump-changes) is complete, such that every new feature in the lifecycle is fully explained in the `release/lifecycle/0.99` branch on the docs repo, and [migration guides](https://github.com/buildpacks/docs/tree/main/content/docs/reference/spec/migration) (if relevant) are included.

### Step 2 - Publish the Release
1. Manually trigger the `draft-release` workflow: Actions -> draft-release -> Run workflow -> Use workflow from branch: `release/<release version>`. This will create a draft release on GitHub using the artifacts from the `build` workflow run for the latest commit on the release branch.
1. Edit the release notes as necessary.
1. Perform any manual validation of the artifacts as necessary (usually none).
1. Edit the release page and click "Publish release".
   * This will trigger the `post-release` workflow that will re-tag the lifecycle image from `buildpacksio/lifecycle:<commit sha>` to `buildpacksio/lifecycle:<release version>`.
   * For final releases ONLY, this will also re-tag the lifecycle image from `buildpacksio/lifecycle:<commit sha>` to `buildpacksio/lifecycle:latest`.
### Step 3 - Follow-up
**For pre-releases:**
* Ask the relevant teams to try out the pre-released artifacts.
**For final releases:**
* Update the `main` branch to remove the pre-release note in [README.md](https://github.com/buildpacks/lifecycle/blob/main/README.md) and/or merge `release/0.99.0` into `main`.
* Ask the learning team to merge the `release/lifecycle/0.99` branch into `main` on the docs repo.
## Go version updates
Go version updates should be released as a [new minor](#new-minor) or [new patch](#new-patch) release.
### New Patch
If the go patch is in [actions/go-versions](https://github.com/actions/go-versions/pulls?q=is%3Apr+is%3Aclosed) then CI should pull it in automatically without any action needed.
We simply need to create the release branch and let the pipeline run.
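The branch-cutting step can be sketched as a small helper (the tag and branch names in the comments are hypothetical examples of the `release/<version>` convention):

```shell
# Branch off an already-released tag to prepare a patch release.
# Pushing the branch (commented out) is what triggers the `build` workflow.
cut_patch_branch() {
  base_tag="$1"    # e.g. v0.17.2, the latest tag for the current minor
  new_branch="$2"  # e.g. release/0.17.3
  git checkout -b "$new_branch" "$base_tag"
  # git push origin "$new_branch"
}
```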
### New Minor
We typically do this when the existing patch version exceeds 6 - e.g., `1.22.6`. This means we have about 6 months to upgrade before the current minor becomes unsupported due to the introduction of the new n+2 minor.
#### Steps
1. Update go.mod
1. Search for the old `major.minor`, there are a few files that need to be updated (example PR: https://github.com/buildpacks/lifecycle/pull/1405/files)
1. Update the linter to a version that supports the current `major.minor`
1. Fix any lint errors as necessary
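For step 2, a rough search like the following can surface files that still reference the old `major.minor` (the pattern is a heuristic, not exhaustive — version strings also appear in other forms, e.g. `golang:1.21` in Dockerfiles):

```shell
# List files under a directory tree that still mention an old Go minor.
find_go_version_refs() {
  minor="$1"  # e.g. 1.21
  root="$2"
  grep -rl "go ${minor}" "$root" 2>/dev/null || true
}
```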


@@ -23,6 +23,7 @@ const (
var (
	latestPlatformAPI = api.Platform.Latest().String()
	buildDir          string
	cacheFixtureDir   string
)
func TestVersion(t *testing.T) {
@@ -141,7 +142,6 @@ func testVersion(t *testing.T, when spec.G, it spec.S) {
w(tc.description, func() {
it("only prints the version", func() {
cmd := lifecycleCmd(tc.command, tc.args...)
cmd.Env = []string{fmt.Sprintf("CNB_PLATFORM_API=%s", api.Platform.Latest().String())}
output, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("failed to run %v\n OUTPUT: %s\n ERROR: %s\n", cmd.Args, output, err)


@@ -5,6 +5,7 @@ import (
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"testing"
@@ -36,6 +37,7 @@ func TestAnalyzer(t *testing.T) {
analyzeImage = analyzeTest.testImageRef
analyzerPath = analyzeTest.containerBinaryPath
cacheFixtureDir = filepath.Join("testdata", "cache-dir")
analyzeRegAuthConfig = analyzeTest.targetRegistry.authConfig
analyzeRegNetwork = analyzeTest.targetRegistry.network
analyzeDaemonFixtures = analyzeTest.targetDaemon.fixtures
@@ -67,23 +69,6 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
os.RemoveAll(copyDir)
})
when("CNB_PLATFORM_API not provided", func() {
it("errors", func() {
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API= ",
analyzeImage,
ctrPath(analyzerPath),
"some-image",
) // #nosec G204
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
expected := "please set 'CNB_PLATFORM_API'"
h.AssertStringContains(t, string(output), expected)
})
})
when("called without an app image", func() {
it("errors", func() {
cmd := exec.Command(
@@ -100,34 +85,73 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
when("called with skip layers", func() {
it("writes analyzed.toml and does not restore previous image SBOM", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.9"), "Platform API < 0.9 does not accept a -skip-layers flag")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-run-image", analyzeRegFixtures.ReadOnlyRunImage,
"-skip-layers",
analyzeDaemonFixtures.AppImage,
),
)
assertAnalyzedMetadata(t, filepath.Join(copyDir, "layers", "analyzed.toml"))
h.AssertStringDoesNotContain(t, output, "Restoring data for SBOM from previous image")
})
})
when("called with group", func() {
it("errors", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 accepts a -group flag")
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API="+platformAPI,
analyzeImage,
ctrPath(analyzerPath),
"-group", "group.toml",
"some-image",
) // #nosec G204
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
expected := "flag provided but not defined: -group"
h.AssertStringContains(t, string(output), expected)
})
})
when("called with skip layers", func() {
it("errors", func() {
h.SkipIf(t,
api.MustParse(platformAPI).LessThan("0.7") || api.MustParse(platformAPI).AtLeast("0.9"),
"Platform API < 0.7 or Platform API > 0.9 accepts a -skip-layers flag")
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API="+platformAPI,
analyzeImage,
ctrPath(analyzerPath),
"-skip-layers",
"some-image",
) // #nosec G204
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
expected := "flag provided but not defined: -skip-layers"
h.AssertStringContains(t, string(output), expected)
})
})
when("called with cache dir", func() {
it("errors", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 accepts a -cache-dir flag")
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API="+platformAPI,
analyzeImage,
ctrPath(analyzerPath),
"-cache-dir", "/cache",
"some-image",
) // #nosec G204
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
expected := "flag provided but not defined: -cache-dir"
h.AssertStringContains(t, string(output), expected)
})
})
when("the provided layers directory isn't writeable", func() {
it("recursively chowns the directory", func() {
h.SkipIf(t, runtime.GOOS == "windows", "Not relevant on Windows")
var analyzeFlags []string
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, []string{"-run-image", analyzeRegFixtures.ReadOnlyRunImage}...)
}
output := h.DockerRun(t,
analyzeImage,
@ -149,11 +173,61 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
when("called with group (on older platforms)", func() {
it("uses the provided group.toml path", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not accept a -group flag")
h.DockerSeedRunAndCopy(t,
containerName,
cacheFixtureDir, ctrPath("/cache"),
copyDir, ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-dir", ctrPath("/cache"),
"-group", ctrPath("/layers/other-group.toml"),
"some-image",
),
)
h.AssertPathExists(t, filepath.Join(copyDir, "layers", "some-other-buildpack-id"))
h.AssertPathDoesNotExist(t, filepath.Join(copyDir, "layers", "some-buildpack-id"))
})
when("group contains unsupported buildpacks", func() {
it("errors", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not accept a -group flag")
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API="+platformAPI,
analyzeImage,
ctrPath(analyzerPath),
"-group", ctrPath("/layers/unsupported-group.toml"),
"some-image",
) // #nosec G204
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
failErr, ok := err.(*exec.ExitError)
if !ok {
t.Fatalf("expected an error of type exec.ExitError")
}
h.AssertEq(t, failErr.ExitCode(), 12) // platform code for buildpack api incompatibility
expected := "buildpack API version '0.1' is incompatible with the lifecycle"
h.AssertStringContains(t, string(output), expected)
})
})
})
when("called with analyzed", func() {
it("uses the provided analyzed.toml path", func() {
analyzeFlags := []string{"-analyzed", ctrPath("/some-dir/some-analyzed.toml")}
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, "-run-image", analyzeRegFixtures.ReadOnlyRunImage)
}
var execArgs []string
@ -199,9 +273,11 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
it("drops privileges", func() {
h.SkipIf(t, runtime.GOOS == "windows", "Not relevant on Windows")
analyzeArgs := []string{"-analyzed", "/some-dir/some-analyzed.toml"}
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeArgs = append(analyzeArgs, "-run-image", analyzeRegFixtures.ReadOnlyRunImage)
}
output := h.DockerRun(t,
@ -226,6 +302,8 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("run image", func() {
when("provided", func() {
it("is recorded in analyzed.toml", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not accept run image")
h.DockerRunAndCopy(t,
containerName,
copyDir,
@ -246,6 +324,8 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("not provided", func() {
it("falls back to CNB_RUN_IMAGE", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not accept run image")
h.DockerRunAndCopy(t,
containerName,
copyDir,
@ -268,9 +348,9 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("daemon case", func() {
it("writes analyzed.toml", func() {
analyzeFlags := []string{"-daemon"}
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, []string{"-run-image", "some-run-image"}...)
}
var execArgs []string
@ -294,6 +374,8 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("app image exists", func() {
it("does not restore app metadata to the layers directory", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 restores app metadata")
analyzeFlags := []string{"-daemon", "-run-image", "some-run-image"}
var execArgs []string
@ -314,12 +396,248 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
assertNoRestoreOfAppMetadata(t, copyDir, output)
})
it("restores app metadata to the layers directory (on older platforms)", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not restore app metadata")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
analyzeDaemonFixtures.AppImage,
),
)
assertLogsAndRestoresAppMetadata(t, copyDir, output)
})
when("skip layers is provided", func() {
it("writes analyzed.toml and does not write buildpack layer metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not accept a -skip-layers flag")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-skip-layers",
analyzeDaemonFixtures.AppImage,
),
)
assertAnalyzedMetadata(t, filepath.Join(copyDir, "layers", "analyzed.toml"))
assertWritesStoreTomlOnly(t, copyDir, output)
})
})
})
when("cache is provided (on older platforms)", func() {
when("cache image case", func() {
when("cache image is in a daemon", func() {
it("ignores the cache", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-cache-image", analyzeDaemonFixtures.CacheImage,
"some-image",
),
)
h.AssertPathDoesNotExist(t, filepath.Join(copyDir, "layers", "some-buildpack-id", "some-layer.sha"))
h.AssertPathDoesNotExist(t, filepath.Join(copyDir, "layers", "some-buildpack-id", "some-layer.toml"))
})
})
when("cache image is in a registry", func() {
when("auth registry", func() {
when("registry creds are provided in CNB_REGISTRY_AUTH", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
"/layers",
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_REGISTRY_AUTH="+analyzeRegAuthConfig,
"--network", analyzeRegNetwork,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-cache-image", analyzeRegFixtures.SomeCacheImage,
analyzeRegFixtures.SomeAppImage,
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
when("registry creds are provided in the docker config.json", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "DOCKER_CONFIG=/docker-config",
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-image",
analyzeRegFixtures.SomeCacheImage,
analyzeRegFixtures.SomeAppImage,
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
})
when("no auth registry", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
"--network", analyzeRegNetwork,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-cache-image",
analyzeRegFixtures.ReadOnlyCacheImage,
analyzeRegFixtures.ReadOnlyAppImage,
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
})
})
when("cache directory case", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerSeedRunAndCopy(t,
containerName,
cacheFixtureDir, ctrPath("/cache"),
copyDir, ctrPath("/layers"),
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
)...),
h.WithArgs(
ctrPath(analyzerPath),
"-daemon",
"-cache-dir", ctrPath("/cache"),
"some-image",
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
when("the provided cache directory isn't writeable by the CNB user's group", func() {
it("recursively chowns the directory", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
h.SkipIf(t, runtime.GOOS == "windows", "Not relevant on Windows")
cacheVolume := h.SeedDockerVolume(t, cacheFixtureDir)
defer h.DockerVolumeRemove(t, cacheVolume)
output := h.DockerRun(t,
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
"--volume", cacheVolume+":/cache",
)...),
h.WithBash(
fmt.Sprintf("chown -R 9999:9999 /cache; chmod -R 775 /cache; %s -daemon -cache-dir /cache some-image; ls -alR /cache", analyzerPath),
),
)
h.AssertMatch(t, output, "2222 3333 .+ \\.")
h.AssertMatch(t, output, "2222 3333 .+ committed")
h.AssertMatch(t, output, "2222 3333 .+ staging")
})
})
when("the provided cache directory is writeable by the CNB user's group", func() {
it("doesn't chown the directory", func() {
h.SkipIf(t, runtime.GOOS == "windows", "Not relevant on Windows")
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
cacheVolume := h.SeedDockerVolume(t, cacheFixtureDir)
defer h.DockerVolumeRemove(t, cacheVolume)
output := h.DockerRun(t,
analyzeImage,
h.WithFlags(append(
dockerSocketMount,
"--env", "CNB_PLATFORM_API="+platformAPI,
"--volume", cacheVolume+":/cache",
)...),
h.WithBash(
fmt.Sprintf("chown -R 9999:3333 /cache; chmod -R 775 /cache; %s -daemon -cache-dir /cache some-image; ls -alR /cache", analyzerPath),
),
)
h.AssertMatch(t, output, "9999 3333 .+ \\.")
h.AssertMatch(t, output, "9999 3333 .+ committed")
h.AssertMatch(t, output, "2222 3333 .+ staging")
})
})
})
})
})
when("registry case", func() {
it("writes analyzed.toml", func() {
var analyzeFlags []string
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, []string{"-run-image", analyzeRegFixtures.ReadOnlyRunImage}...)
}
var execArgs []string
execArgs = append([]string{ctrPath(analyzerPath)}, analyzeFlags...)
@ -341,13 +659,139 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
assertAnalyzedMetadata(t, filepath.Join(copyDir, "analyzed.toml"))
})
when("app image exists", func() {
when("auth registry", func() {
when("registry creds are provided in CNB_REGISTRY_AUTH", func() {
it("restores app metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read app layer metadata")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_REGISTRY_AUTH="+analyzeRegAuthConfig,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
analyzeRegFixtures.SomeAppImage,
),
)
assertLogsAndRestoresAppMetadata(t, copyDir, output)
})
})
when("registry creds are provided in the docker config.json", func() {
it("restores app metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read app layer metadata")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "DOCKER_CONFIG=/docker-config",
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
analyzeRegFixtures.SomeAppImage,
),
)
assertLogsAndRestoresAppMetadata(t, copyDir, output)
})
})
when("skip layers is provided", func() {
it("writes analyzed.toml and does not write buildpack layer metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not accept a -skip-layers flag")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_REGISTRY_AUTH="+analyzeRegAuthConfig,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-skip-layers",
analyzeRegFixtures.SomeAppImage,
),
)
assertAnalyzedMetadata(t, filepath.Join(copyDir, "layers", "analyzed.toml"))
assertWritesStoreTomlOnly(t, copyDir, output)
})
})
})
when("no auth registry", func() {
it("restores app metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read app layer metadata")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
analyzeRegFixtures.ReadOnlyAppImage,
),
)
assertLogsAndRestoresAppMetadata(t, copyDir, output)
})
when("skip layers is provided", func() {
it("writes analyzed.toml and does not write buildpack layer metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not accept a -skip-layers flag")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-skip-layers",
analyzeRegFixtures.ReadOnlyAppImage,
),
)
assertAnalyzedMetadata(t, filepath.Join(copyDir, "layers", "analyzed.toml"))
assertWritesStoreTomlOnly(t, copyDir, output)
})
})
})
})
when("called with previous image", func() {
it.Before(func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not support -previous-image")
})
when("auth registry", func() {
when("the destination image does not exist", func() {
it("writes analyzed.toml with previous image identifier", func() {
analyzeFlags := []string{"-previous-image", analyzeRegFixtures.ReadWriteAppImage}
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, []string{"-run-image", analyzeRegFixtures.ReadOnlyRunImage}...)
}
var execArgs []string
@ -373,9 +817,9 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("the destination image exists", func() {
it("writes analyzed.toml with previous image identifier", func() {
analyzeFlags := []string{"-previous-image", analyzeRegFixtures.ReadWriteAppImage}
if api.MustParse(platformAPI).AtLeast("0.7") {
analyzeFlags = append(analyzeFlags, []string{"-run-image", analyzeRegFixtures.ReadOnlyRunImage}...)
}
var execArgs []string
@ -402,9 +846,111 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
when("cache is provided (on older platforms)", func() {
when("cache image case", func() {
when("auth registry", func() {
when("registry creds are provided in CNB_REGISTRY_AUTH", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_REGISTRY_AUTH="+analyzeRegAuthConfig,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-image", analyzeRegFixtures.SomeCacheImage,
"some-image",
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
when("registry creds are provided in the docker config.json", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "DOCKER_CONFIG=/docker-config",
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-image",
analyzeRegFixtures.SomeCacheImage,
analyzeRegFixtures.SomeAppImage,
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
})
when("no auth registry", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerRunAndCopy(t,
containerName,
copyDir,
ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--network", analyzeRegNetwork,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-image", analyzeRegFixtures.ReadOnlyCacheImage,
analyzeRegFixtures.ReadOnlyAppImage,
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
})
when("cache directory case", func() {
it("restores cache metadata", func() {
h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 does not read from the cache")
output := h.DockerSeedRunAndCopy(t,
containerName,
cacheFixtureDir, ctrPath("/cache"),
copyDir, ctrPath("/layers"),
analyzeImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
),
h.WithArgs(
ctrPath(analyzerPath),
"-cache-dir", ctrPath("/cache"),
"some-image",
),
)
assertLogsAndRestoresCacheMetadata(t, copyDir, output)
})
})
})
when("called with tag", func() {
when("read/write access to registry", func() {
it("passes read/write validation and writes analyzed.toml", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not use tag flag")
execArgs := []string{
ctrPath(analyzerPath),
"-tag", analyzeRegFixtures.ReadWriteOtherAppImage,
@ -430,6 +976,7 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
when("no read/write access to registry", func() {
it("throws read/write error accessing destination tag", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not use tag flag")
cmd := exec.Command(
"docker", "run", "--rm",
"--env", "CNB_PLATFORM_API="+platformAPI,
@ -444,7 +991,7 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
expected := "validating registry write access: ensure registry read/write access to " + analyzeRegFixtures.InaccessibleImage
h.AssertStringContains(t, string(output), expected)
})
})
@ -457,11 +1004,12 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
it("writes analyzed.toml", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "Platform API < 0.12 does not accept a -layout flag")
var analyzeFlags []string
analyzeFlags = append(analyzeFlags, []string{
"-layout",
"-layout-dir", layoutDir,
"-run-image", "busybox",
}...)
var execArgs []string
execArgs = append([]string{ctrPath(analyzerPath)}, analyzeFlags...)
execArgs = append(execArgs, "my-app")
@ -481,7 +1029,7 @@ func testAnalyzerFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
analyzer := assertAnalyzedMetadata(t, filepath.Join(copyDir, "analyzed.toml"))
h.AssertNotNil(t, analyzer.RunImage)
analyzedImagePath := filepath.Join(path.RootDir, "layout-repo", "index.docker.io", "library", "busybox", "latest")
reference := fmt.Sprintf("%s@%s", analyzedImagePath, "sha256:834f8848308af7090ed7b2270071d28411afc42078e3ba395b1b0a78e2f8b0e2")
h.AssertEq(t, analyzer.RunImage.Reference, reference)
})
})
@ -515,12 +1063,29 @@ func assertAnalyzedMetadata(t *testing.T, path string) *files.Analyzed {
h.AssertNil(t, err)
h.AssertEq(t, len(contents) > 0, true)
analyzedMD, err := files.ReadAnalyzed(path, cmd.DefaultLogger)
h.AssertNil(t, err)
return &analyzedMD
}
func assertLogsAndRestoresAppMetadata(t *testing.T, dir, output string) {
layerFilenames := []string{
"launch-layer.sha",
"launch-layer.toml",
"store.toml",
}
for _, filename := range layerFilenames {
h.AssertPathExists(t, filepath.Join(dir, "layers", "some-buildpack-id", filename))
}
layerNames := []string{
"launch-layer",
}
for _, layerName := range layerNames {
h.AssertStringContains(t, output, fmt.Sprintf("Restoring metadata for \"some-buildpack-id:%s\"", layerName))
}
}
func assertNoRestoreOfAppMetadata(t *testing.T, dir, output string) {
layerFilenames := []string{
"launch-build-cache-layer.sha",
@ -536,6 +1101,28 @@ func assertNoRestoreOfAppMetadata(t *testing.T, dir, output string) {
}
}
func assertLogsAndRestoresCacheMetadata(t *testing.T, dir, output string) {
h.AssertPathExists(t, filepath.Join(dir, "layers", "some-buildpack-id", "some-layer.sha"))
h.AssertPathExists(t, filepath.Join(dir, "layers", "some-buildpack-id", "some-layer.toml"))
h.AssertStringContains(t, output, "Restoring metadata for \"some-buildpack-id:some-layer\" from cache")
}
func assertWritesStoreTomlOnly(t *testing.T, dir, output string) {
h.AssertPathExists(t, filepath.Join(dir, "layers", "some-buildpack-id", "store.toml"))
layerFilenames := []string{
"launch-build-cache-layer.sha",
"launch-build-cache-layer.toml",
"launch-cache-layer.sha",
"launch-cache-layer.toml",
"launch-layer.sha",
"launch-layer.toml",
}
for _, filename := range layerFilenames {
h.AssertPathDoesNotExist(t, filepath.Join(dir, "layers", "some-buildpack-id", filename))
}
h.AssertStringContains(t, output, "Skipping buildpack layer analysis")
}
func flatPrint(arr []string) string {
return strings.Join(arr, " ")
}


@ -6,8 +6,10 @@ import (
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
"github.com/BurntSushi/toml"
"github.com/sclevine/spec"
"github.com/sclevine/spec/report"
@ -24,6 +26,9 @@ var (
)
func TestBuilder(t *testing.T) {
h.SkipIf(t, runtime.GOOS == "windows", "Builder acceptance tests are not yet supported on Windows")
h.SkipIf(t, runtime.GOARCH != "amd64", "Builder acceptance tests are not yet supported on non-amd64")
info, err := h.DockerCli(t).Info(context.TODO())
h.AssertNil(t, err)
@ -34,8 +39,6 @@ func TestBuilder(t *testing.T) {
builderDaemonArch = info.Architecture
if builderDaemonArch == "x86_64" {
builderDaemonArch = "amd64"
}
h.MakeAndCopyLifecycle(t, builderDaemonOS, builderDaemonArch, builderBinaryDir)
@ -125,7 +128,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
// check builder metadata.toml for success test
_, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers", "config", "metadata.toml"))
h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1")
})
@ -149,7 +152,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
// prevent regression of inline table serialization
h.AssertStringDoesNotContain(t, contents, "processes =")
h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1")
h.AssertEq(t, len(md.Processes), 1)
@ -158,7 +161,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
h.AssertEq(t, md.Processes[0].Command.Entries[0], "echo world")
h.AssertEq(t, len(md.Processes[0].Args), 1)
h.AssertEq(t, md.Processes[0].Args[0], "arg1")
h.AssertEq(t, md.Processes[0].Direct, false)
h.AssertEq(t, md.Processes[0].WorkingDirectory, "")
h.AssertEq(t, md.Processes[0].Default, false)
})
@ -181,7 +184,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
// prevent regression of inline table serialization
h.AssertStringDoesNotContain(t, contents, "processes =")
h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1")
h.AssertEq(t, len(md.Processes), 1)
@ -190,7 +193,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
h.AssertEq(t, md.Processes[0].Command.Entries[0], "echo world")
h.AssertEq(t, len(md.Processes[0].Args), 1)
h.AssertEq(t, md.Processes[0].Args[0], "arg1")
h.AssertEq(t, md.Processes[0].Direct, false)
h.AssertEq(t, md.Processes[0].WorkingDirectory, "")
h.AssertEq(t, md.Processes[0].Default, false)
})
@ -213,10 +216,11 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
// check builder metadata.toml for success test
_, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers", "config", "metadata.toml"))
h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1")
h.AssertStringContains(t, md.Extensions[0].API, "0.9")
h.AssertEq(t, md.Extensions[0].Extension, false) // this shows that `extension = true` is not redundantly printed in group.toml
h.AssertStringContains(t, md.Extensions[0].ID, "hello_world")
h.AssertStringContains(t, md.Extensions[0].Version, "0.0.1")
})
@ -237,7 +241,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
)
output, err := command.CombinedOutput()
h.AssertNotNil(t, err)
expected := "failed to read buildpack group: open /layers/group.toml: no such file or directory"
h.AssertStringContains(t, string(output), expected)
})
})
@ -274,7 +278,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
) )
output, err := command.CombinedOutput() output, err := command.CombinedOutput()
h.AssertNotNil(t, err) h.AssertNotNil(t, err)
expected := "failed to read group file: toml: line 1: expected '.' or '=', but got 'a' instead" expected := "failed to read buildpack group: toml: line 1: expected '.' or '=', but got 'a' instead"
h.AssertStringContains(t, string(output), expected) h.AssertStringContains(t, string(output), expected)
}) })
}) })
@ -313,7 +317,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
) )
output, err := command.CombinedOutput() output, err := command.CombinedOutput()
h.AssertNotNil(t, err) h.AssertNotNil(t, err)
expected := "failed to read plan file: open /layers/plan.toml: no such file or directory" expected := "failed to parse detect plan: open /layers/plan.toml: no such file or directory"
h.AssertStringContains(t, string(output), expected) h.AssertStringContains(t, string(output), expected)
}) })
}) })
@ -334,7 +338,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
// check builder metadata.toml for success test // check builder metadata.toml for success test
_, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers", "config", "metadata.toml")) _, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers", "config", "metadata.toml"))
h.AssertStringContains(t, md.Buildpacks[0].API, "0.10") h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world") h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1") h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.1")
}) })
@ -353,7 +357,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
) )
output, err := command.CombinedOutput() output, err := command.CombinedOutput()
h.AssertNotNil(t, err) h.AssertNotNil(t, err)
expected := "failed to read plan file: toml: line 1: expected '.' or '=', but got 'a' instead" expected := "failed to parse detect plan: toml: line 1: expected '.' or '=', but got 'a' instead"
h.AssertStringContains(t, string(output), expected) h.AssertStringContains(t, string(output), expected)
}) })
}) })
@ -377,7 +381,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
) )
_, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers/different_layer_dir_from_env/config/metadata.toml")) _, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers/different_layer_dir_from_env/config/metadata.toml"))
h.AssertStringContains(t, md.Buildpacks[0].API, "0.10") h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world_2") h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world_2")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.2") h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.2")
}) })
@ -399,7 +403,7 @@ func testBuilder(t *testing.T, when spec.G, it spec.S) {
) )
_, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers/different_layer_dir_from_env/config/metadata.toml")) _, md := getBuilderMetadata(t, filepath.Join(copyDir, "layers/different_layer_dir_from_env/config/metadata.toml"))
h.AssertStringContains(t, md.Buildpacks[0].API, "0.10") h.AssertStringContains(t, md.Buildpacks[0].API, "0.2")
h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world_2") h.AssertStringContains(t, md.Buildpacks[0].ID, "hello_world_2")
h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.2") h.AssertStringContains(t, md.Buildpacks[0].Version, "0.0.2")
}) })
@ -528,8 +532,9 @@ func getBuilderMetadata(t *testing.T, path string) (string, *files.BuildMetadata
contents, _ := os.ReadFile(path) contents, _ := os.ReadFile(path)
h.AssertEq(t, len(contents) > 0, true) h.AssertEq(t, len(contents) > 0, true)
buildMD, err := files.Handler.ReadBuildMetadata(path, api.MustParse(latestPlatformAPI)) var buildMD files.BuildMetadata
_, err := toml.Decode(string(contents), &buildMD)
h.AssertNil(t, err) h.AssertNil(t, err)
return string(contents), buildMD return string(contents), &buildMD
} }

View File

@@ -1,4 +1,5 @@
 //go:build acceptance
+// +build acceptance

 package acceptance
@@ -6,6 +7,7 @@ import (
 "os"
 "os/exec"
 "path/filepath"
+"runtime"
 "testing"
 "time"
@@ -29,6 +31,8 @@ var (
 )

 func TestCreator(t *testing.T) {
+h.SkipIf(t, runtime.GOOS == "windows", "Creator acceptance tests are not yet supported on Windows")
 testImageDockerContext := filepath.Join("testdata", "creator")
 createTest = NewPhaseTest(t, "creator", testImageDockerContext)
 createTest.Start(t)
@@ -36,6 +40,7 @@ func TestCreator(t *testing.T) {
 createImage = createTest.testImageRef
 creatorPath = createTest.containerBinaryPath
+cacheFixtureDir = filepath.Join("testdata", "creator", "cache-dir")
 createRegAuthConfig = createTest.targetRegistry.authConfig
 createRegNetwork = createTest.targetRegistry.network
 createDaemonFixtures = createTest.targetDaemon.fixtures
@@ -71,29 +76,6 @@ func testCreatorFunc(platformAPI string) func(t *testing.T, when spec.G, it spec
 })
 })

-when("detected order contains extensions", func() {
-it("errors", func() {
-h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.10"), "")
-cmd := exec.Command(
-"docker", "run", "--rm",
-"--env", "CNB_PLATFORM_API="+platformAPI,
-"--env", "CNB_REGISTRY_AUTH="+createRegAuthConfig,
-"--network", createRegNetwork,
-createImage,
-ctrPath(creatorPath),
-"-log-level", "debug",
-"-order", "/cnb/order-with-extensions.toml",
-"-run-image", createRegFixtures.ReadOnlyRunImage,
-createRegFixtures.SomeAppImage,
-) // #nosec G204
-output, err := cmd.CombinedOutput()
-h.AssertNotNil(t, err)
-expected := "detected order contains extensions which is not supported by the creator"
-h.AssertStringContains(t, string(output), expected)
-})
-})

 when("daemon case", func() {
 it.After(func() {
 h.DockerImageRemove(t, createdImageName)

View File

@@ -1,20 +1,22 @@
 //go:build acceptance
+// +build acceptance

 package acceptance

 import (
-"context"
 "fmt"
 "os"
 "os/exec"
 "path/filepath"
+"runtime"
 "testing"

+"github.com/BurntSushi/toml"
 "github.com/sclevine/spec"
 "github.com/sclevine/spec/report"

 "github.com/buildpacks/lifecycle/api"
-"github.com/buildpacks/lifecycle/cmd"
+"github.com/buildpacks/lifecycle/buildpack"
 "github.com/buildpacks/lifecycle/platform/files"
 h "github.com/buildpacks/lifecycle/testhelpers"
 )
@@ -24,23 +26,13 @@ var (
 detectorBinaryDir = filepath.Join("testdata", "detector", "container", "cnb", "lifecycle")
 detectImage = "lifecycle/acceptance/detector"
 userID = "1234"
-detectorDaemonOS, detectorDaemonArch string
 )

 func TestDetector(t *testing.T) {
-info, err := h.DockerCli(t).Info(context.TODO())
-h.AssertNil(t, err)
-detectorDaemonOS = info.OSType
-detectorDaemonArch = info.Architecture
-if detectorDaemonArch == "x86_64" {
-detectorDaemonArch = "amd64"
-}
-if detectorDaemonArch == "aarch64" {
-detectorDaemonArch = "arm64"
-}
-h.MakeAndCopyLifecycle(t, detectorDaemonOS, detectorDaemonArch, detectorBinaryDir)
+h.SkipIf(t, runtime.GOOS == "windows", "Detector acceptance tests are not yet supported on Windows")
+h.SkipIf(t, runtime.GOARCH != "amd64", "Detector acceptance tests are not yet supported on non-amd64")
+h.MakeAndCopyLifecycle(t, "linux", "amd64", detectorBinaryDir)
 h.DockerBuild(t,
 detectImage,
 detectDockerContext,
@@ -48,24 +40,17 @@ func TestDetector(t *testing.T) {
 )
 defer h.DockerImageRemove(t, detectImage)

-for _, platformAPI := range api.Platform.Supported {
-if platformAPI.LessThan("0.12") {
-continue
-}
-spec.Run(t, "acceptance-detector/"+platformAPI.String(), testDetectorFunc(platformAPI.String()), spec.Parallel(), spec.Report(report.Terminal{}))
-}
-}
-
-func testDetectorFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
-return func(t *testing.T, when spec.G, it spec.S) {
+spec.Run(t, "acceptance-detector", testDetector, spec.Parallel(), spec.Report(report.Terminal{}))
+}
+
+func testDetector(t *testing.T, when spec.G, it spec.S) {
 when("called with arguments", func() {
 it("errors", func() {
 command := exec.Command(
 "docker",
 "run",
 "--rm",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 "some-arg",
 )
@@ -84,7 +69,7 @@ func testDetectorFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
 "--rm",
 "--user",
 "root",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 )
 output, err := command.CombinedOutput()
@@ -101,7 +86,7 @@ func testDetectorFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
 "docker",
 "run",
 "--rm",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 )
 output, err := command.CombinedOutput()
@@ -118,7 +103,7 @@ func testDetectorFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
 "run",
 "--rm",
 "--env", "CNB_ORDER_PATH=/cnb/orders/fail_detect_order.toml",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 )
 output, err := command.CombinedOutput()
@@ -132,7 +117,8 @@ func testDetectorFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
 expected1 := `======== Output: fail_detect_buildpack@some_version ========
 Opted out of detection
 ======== Results ========
-fail: fail_detect_buildpack@some_version`
+fail: fail_detect_buildpack@some_version
+`
 h.AssertStringContains(t, string(output), expected1)
 expected2 := "No buildpack groups passed detection."
 h.AssertStringContains(t, string(output), expected2)
@@ -164,21 +150,23 @@ fail: fail_detect_buildpack@some_version`
 detectImage,
 h.WithFlags("--user", userID,
 "--env", "CNB_ORDER_PATH=/cnb/orders/simple_order.toml",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 ),
 h.WithArgs(),
 )

 // check group.toml
 foundGroupTOML := filepath.Join(copyDir, "layers", "group.toml")
-group, err := files.Handler.ReadGroup(foundGroupTOML)
+var buildpackGroup buildpack.Group
+_, err := toml.DecodeFile(foundGroupTOML, &buildpackGroup)
 h.AssertNil(t, err)
-h.AssertEq(t, group.Group[0].ID, "simple_buildpack")
-h.AssertEq(t, group.Group[0].Version, "simple_buildpack_version")
+h.AssertEq(t, buildpackGroup.Group[0].ID, "simple_buildpack")
+h.AssertEq(t, buildpackGroup.Group[0].Version, "simple_buildpack_version")

 // check plan.toml
-foundPlanTOML := filepath.Join(copyDir, "layers", "plan.toml")
-buildPlan, err := files.Handler.ReadPlan(foundPlanTOML)
+tempPlanToml := filepath.Join(copyDir, "layers", "plan.toml")
+var buildPlan files.Plan
+_, err = toml.DecodeFile(tempPlanToml, &buildPlan)
 h.AssertNil(t, err)
 h.AssertEq(t, buildPlan.Entries[0].Providers[0].ID, "simple_buildpack")
 h.AssertEq(t, buildPlan.Entries[0].Providers[0].Version, "simple_buildpack_version")
@@ -222,17 +210,18 @@ fail: fail_detect_buildpack@some_version`
 "--env", "CNB_GROUP_PATH=./custom_group.toml",
 "--env", "CNB_PLAN_PATH=./custom_plan.toml",
 "--env", "CNB_PLATFORM_DIR=/custom_platform",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 ),
 h.WithArgs("-log-level=debug"),
 )

 // check group.toml
 foundGroupTOML := filepath.Join(copyDir, "layers", "custom_group.toml")
-group, err := files.Handler.ReadGroup(foundGroupTOML)
+var buildpackGroup buildpack.Group
+_, err := toml.DecodeFile(foundGroupTOML, &buildpackGroup)
 h.AssertNil(t, err)
-h.AssertEq(t, group.Group[0].ID, "always_detect_buildpack")
-h.AssertEq(t, group.Group[0].Version, "always_detect_buildpack_version")
+h.AssertEq(t, buildpackGroup.Group[0].ID, "always_detect_buildpack")
+h.AssertEq(t, buildpackGroup.Group[0].Version, "always_detect_buildpack_version")

 // check plan.toml - should be empty since we're using always_detect_order.toml so there is no "actual plan"
 tempPlanToml := filepath.Join(copyDir, "layers", "custom_plan.toml")
@@ -279,7 +268,7 @@ fail: fail_detect_buildpack@some_version`
 detectImage,
 h.WithFlags("--user", userID,
 "--volume", expectedOrderTOMLPath+":/custom/order.toml",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 ),
 h.WithArgs(
 "-log-level=debug",
@@ -289,10 +278,11 @@ fail: fail_detect_buildpack@some_version`
 // check group.toml
 foundGroupTOML := filepath.Join(copyDir, "layers", "group.toml")
-group, err := files.Handler.ReadGroup(foundGroupTOML)
+var buildpackGroup buildpack.Group
+_, err := toml.DecodeFile(foundGroupTOML, &buildpackGroup)
 h.AssertNil(t, err)
-h.AssertEq(t, group.Group[0].ID, "simple_buildpack")
-h.AssertEq(t, group.Group[0].Version, "simple_buildpack_version")
+h.AssertEq(t, buildpackGroup.Group[0].ID, "simple_buildpack")
+h.AssertEq(t, buildpackGroup.Group[0].Version, "simple_buildpack_version")
 })
 })
@@ -301,7 +291,7 @@ fail: fail_detect_buildpack@some_version`
 command := exec.Command("docker", "run",
 "--user", userID,
 "--rm",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 "-order=/custom/order.toml")
 output, err := command.CombinedOutput()
@@ -316,7 +306,7 @@ fail: fail_detect_buildpack@some_version`
 command := exec.Command("docker", "run",
 "--user", userID,
 "--rm",
-"--env", "CNB_PLATFORM_API="+platformAPI,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
 detectImage,
 "-order=/cnb/orders/bad_api.toml")
 output, err := command.CombinedOutput()
@@ -352,11 +342,6 @@ fail: fail_detect_buildpack@some_version`
 })

 it("processes the provided order.toml", func() {
-experimentalMode := "warn"
-if api.MustParse(platformAPI).AtLeast("0.13") {
-experimentalMode = "error"
-}
 output := h.DockerRunAndCopy(t,
 containerName,
 copyDir,
@@ -365,8 +350,8 @@ fail: fail_detect_buildpack@some_version`
 h.WithFlags(
 "--user", userID,
 "--volume", orderPath+":/layers/order.toml",
-"--env", "CNB_PLATFORM_API="+platformAPI,
-"--env", "CNB_EXPERIMENTAL_MODE="+experimentalMode,
+"--env", "CNB_PLATFORM_API="+latestPlatformAPI,
+"--env", "CNB_EXPERIMENTAL_MODE=warn", // required as the default is `error` if unset
 ),
 h.WithArgs(
 "-analyzed=/layers/analyzed.toml",
@@ -378,45 +363,64 @@ fail: fail_detect_buildpack@some_version`
 )

 t.Log("runs /bin/detect for buildpacks and extensions")
-if api.MustParse(platformAPI).LessThan("0.13") {
 h.AssertStringContains(t, output, "Platform requested experimental feature 'Dockerfiles'")
-}
 h.AssertStringContains(t, output, "FOO=val-from-build-config")
 h.AssertStringContains(t, output, "simple_extension: output from /bin/detect")
 t.Log("writes group.toml")
 foundGroupTOML := filepath.Join(copyDir, "layers", "group.toml")
-group, err := files.Handler.ReadGroup(foundGroupTOML)
+var buildpackGroup buildpack.Group
+_, err := toml.DecodeFile(foundGroupTOML, &buildpackGroup)
 h.AssertNil(t, err)
-h.AssertEq(t, group.GroupExtensions[0].ID, "simple_extension")
-h.AssertEq(t, group.GroupExtensions[0].Version, "simple_extension_version")
-h.AssertEq(t, group.Group[0].ID, "buildpack_for_ext")
-h.AssertEq(t, group.Group[0].Version, "buildpack_for_ext_version")
-h.AssertEq(t, group.Group[0].Extension, false)
+h.AssertEq(t, buildpackGroup.GroupExtensions[0].ID, "simple_extension")
+h.AssertEq(t, buildpackGroup.GroupExtensions[0].Version, "simple_extension_version")
+h.AssertEq(t, buildpackGroup.GroupExtensions[0].Extension, false) // this shows that `extension = true` is not redundantly printed in group.toml
+h.AssertEq(t, buildpackGroup.Group[0].ID, "buildpack_for_ext")
+h.AssertEq(t, buildpackGroup.Group[0].Version, "buildpack_for_ext_version")
+h.AssertEq(t, buildpackGroup.Group[0].Extension, false)
 t.Log("writes plan.toml")
 foundPlanTOML := filepath.Join(copyDir, "layers", "plan.toml")
-buildPlan, err := files.Handler.ReadPlan(foundPlanTOML)
+var plan files.Plan
+_, err = toml.DecodeFile(foundPlanTOML, &plan)
 h.AssertNil(t, err)
-h.AssertEq(t, len(buildPlan.Entries), 0) // this shows that the plan was filtered to remove `requires` provided by extensions
+h.AssertEq(t, len(plan.Entries), 0) // this shows that the plan was filtered to remove `requires` provided by extensions

 t.Log("runs /bin/generate for extensions")
 h.AssertStringContains(t, output, "simple_extension: output from /bin/generate")
-var dockerfilePath string
-if api.MustParse(platformAPI).LessThan("0.13") {
 t.Log("copies the generated Dockerfiles to the output directory")
-dockerfilePath = filepath.Join(copyDir, "layers", "generated", "run", "simple_extension", "Dockerfile")
-} else {
-dockerfilePath = filepath.Join(copyDir, "layers", "generated", "simple_extension", "run.Dockerfile")
-}
+dockerfilePath := filepath.Join(copyDir, "layers", "generated", "run", "simple_extension", "Dockerfile")
 h.AssertPathExists(t, dockerfilePath)
 contents, err := os.ReadFile(dockerfilePath)
 h.AssertEq(t, string(contents), "FROM some-run-image-from-extension\n")
 t.Log("records the new run image in analyzed.toml")
 foundAnalyzedTOML := filepath.Join(copyDir, "layers", "analyzed.toml")
-analyzedMD, err := files.Handler.ReadAnalyzed(foundAnalyzedTOML, cmd.DefaultLogger)
+var analyzed files.Analyzed
+_, err = toml.DecodeFile(foundAnalyzedTOML, &analyzed)
 h.AssertNil(t, err)
-h.AssertEq(t, analyzedMD.RunImage.Image, "some-run-image-from-extension")
+h.AssertEq(t, analyzed.RunImage.Image, "some-run-image-from-extension")
+})
+})
+
+when("platform api < 0.6", func() {
+when("no buildpack group passed detection", func() {
+it("errors and exits with the expected code", func() {
+command := exec.Command(
+"docker",
+"run",
+"--rm",
+"--env", "CNB_ORDER_PATH=/cnb/orders/empty_order.toml",
+"--env", "CNB_PLATFORM_API=0.5",
+detectImage,
+)
+output, err := command.CombinedOutput()
+h.AssertNotNil(t, err)
+failErr, ok := err.(*exec.ExitError)
+if !ok {
+t.Fatalf("expected an error of type exec.ExitError")
+}
+h.AssertEq(t, failErr.ExitCode(), 100) // platform code for failed detect
+expected := "No buildpack groups passed detection."
+h.AssertStringContains(t, string(output), expected)
+})
+})
 })
 })
 }
-}

View File

@ -1,17 +1,16 @@
//go:build acceptance //go:build acceptance
// +build acceptance
package acceptance package acceptance
import ( import (
"context" "context"
"crypto/sha256"
"encoding/hex"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io"
"os" "os"
"os/exec" "os/exec"
"path/filepath" "path/filepath"
"runtime"
"strings" "strings"
"testing" "testing"
"time" "time"
@ -19,7 +18,6 @@ import (
"github.com/buildpacks/imgutil" "github.com/buildpacks/imgutil"
"github.com/google/go-containerregistry/pkg/authn" "github.com/google/go-containerregistry/pkg/authn"
"github.com/google/go-containerregistry/pkg/v1/remote" "github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/pkg/errors"
"github.com/sclevine/spec" "github.com/sclevine/spec"
"github.com/sclevine/spec/report" "github.com/sclevine/spec/report"
@ -27,7 +25,6 @@ import (
"github.com/buildpacks/lifecycle/auth" "github.com/buildpacks/lifecycle/auth"
"github.com/buildpacks/lifecycle/cache" "github.com/buildpacks/lifecycle/cache"
"github.com/buildpacks/lifecycle/cmd" "github.com/buildpacks/lifecycle/cmd"
"github.com/buildpacks/lifecycle/internal/fsutil"
"github.com/buildpacks/lifecycle/internal/path" "github.com/buildpacks/lifecycle/internal/path"
"github.com/buildpacks/lifecycle/platform/files" "github.com/buildpacks/lifecycle/platform/files"
h "github.com/buildpacks/lifecycle/testhelpers" h "github.com/buildpacks/lifecycle/testhelpers"
@ -44,6 +41,8 @@ var (
) )
func TestExporter(t *testing.T) { func TestExporter(t *testing.T) {
h.SkipIf(t, runtime.GOOS == "windows", "Exporter acceptance tests are not yet supported on Windows")
testImageDockerContext := filepath.Join("testdata", "exporter") testImageDockerContext := filepath.Join("testdata", "exporter")
exportTest = NewPhaseTest(t, "exporter", testImageDockerContext) exportTest = NewPhaseTest(t, "exporter", testImageDockerContext)
@ -52,6 +51,7 @@ func TestExporter(t *testing.T) {
exportImage = exportTest.testImageRef exportImage = exportTest.testImageRef
exporterPath = exportTest.containerBinaryPath exporterPath = exportTest.containerBinaryPath
cacheFixtureDir = filepath.Join("testdata", "exporter", "cache-dir")
exportRegAuthConfig = exportTest.targetRegistry.authConfig exportRegAuthConfig = exportTest.targetRegistry.authConfig
exportRegNetwork = exportTest.targetRegistry.network exportRegNetwork = exportTest.targetRegistry.network
exportDaemonFixtures = exportTest.targetDaemon.fixtures exportDaemonFixtures = exportTest.targetDaemon.fixtures
@ -64,16 +64,21 @@ func TestExporter(t *testing.T) {
func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) { func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
return func(t *testing.T, when spec.G, it spec.S) { return func(t *testing.T, when spec.G, it spec.S) {
when("daemon case", func() {
var exportedImageName string var exportedImageName string
it.After(func() { it.After(func() {
_, _, _ = h.RunE(exec.Command("docker", "rmi", exportedImageName)) // #nosec G204 _, _, _ = h.RunE(exec.Command("docker", "rmi", exportedImageName)) // #nosec G204
}) })
it("app is created", func() { when("daemon case", func() {
when("first build", func() {
when("app", func() {
it("is created", func() {
exportFlags := []string{"-daemon", "-log-level", "debug"} exportFlags := []string{"-daemon", "-log-level", "debug"}
if api.MustParse(platformAPI).LessThan("0.7") {
exportFlags = append(exportFlags, []string{"-run-image", exportRegFixtures.ReadOnlyRunImage}...) // though the run image is registry image, it also exists in the daemon with the same tag
}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...) exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = "some-exported-image-" + h.RandString(10) exportedImageName = "some-exported-image-" + h.RandString(10)
exportArgs = append(exportArgs, exportedImageName) exportArgs = append(exportArgs, exportedImageName)
@ -104,7 +109,6 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
"Buildpacks Application Launcher", "Buildpacks Application Launcher",
"Application Layer", "Application Layer",
"Software Bill-of-Materials", "Software Bill-of-Materials",
"Layer: 'corrupted-layer', Created by buildpack: corrupted_buildpack@corrupted_v1",
"Layer: 'launch-layer', Created by buildpack: cacher_buildpack@cacher_v1", "Layer: 'launch-layer', Created by buildpack: cacher_buildpack@cacher_v1",
"", // run image layer "", // run image layer
} }
@ -115,13 +119,14 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
assertImageOSAndArchAndCreatedAt(t, exportedImageName, exportTest, imgutil.NormalizedDateTime) assertImageOSAndArchAndCreatedAt(t, exportedImageName, exportTest, imgutil.NormalizedDateTime)
}) })
})
when("using extensions", func() { when("using extensions", func() {
it.Before(func() { it.Before(func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "") h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "")
}) })
it("app is created from the extended run image", func() { it("is created from the extended run image", func() {
exportFlags := []string{ exportFlags := []string{
"-analyzed", "/layers/run-image-extended-analyzed.toml", // though the run image is a registry image, it also exists in the daemon with the same tag "-analyzed", "/layers/run-image-extended-analyzed.toml", // though the run image is a registry image, it also exists in the daemon with the same tag
"-daemon", "-daemon",
@ -140,16 +145,11 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
runImageFixtureTopLayerSHA := layers[len(layers)-1] runImageFixtureTopLayerSHA := layers[len(layers)-1]
runImageFixtureSHA := inspect.ID runImageFixtureSHA := inspect.ID
experimentalMode := "warn"
if api.MustParse(platformAPI).AtLeast("0.13") {
experimentalMode = "error"
}
output := h.DockerRun(t, output := h.DockerRun(t,
exportImage, exportImage,
h.WithFlags(append( h.WithFlags(append(
dockerSocketMount, dockerSocketMount,
"--env", "CNB_EXPERIMENTAL_MODE="+experimentalMode, "--env", "CNB_EXPERIMENTAL_MODE=warn",
"--env", "CNB_PLATFORM_API="+platformAPI, "--env", "CNB_PLATFORM_API="+platformAPI,
)...), )...),
h.WithArgs(exportArgs...), h.WithArgs(exportArgs...),
@ -162,35 +162,30 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
"Buildpacks Application Launcher", "Buildpacks Application Launcher",
"Application Layer", "Application Layer",
"Software Bill-of-Materials", "Software Bill-of-Materials",
"Layer: 'corrupted-layer', Created by buildpack: corrupted_buildpack@corrupted_v1",
"Layer: 'launch-layer', Created by buildpack: cacher_buildpack@cacher_v1", "Layer: 'launch-layer', Created by buildpack: cacher_buildpack@cacher_v1",
"Layer: 'RUN mkdir /some-other-dir && echo some-data > /some-other-dir/some-file && echo some-data > /some-other-file', Created by extension: second-extension", "Layer: 'RUN apt-get update && apt-get install -y tree', Created by extension: tree",
"Layer: 'RUN mkdir /some-dir && echo some-data > /some-dir/some-file && echo some-data > /some-file', Created by extension: first-extension", "Layer: 'RUN apt-get update && apt-get install -y curl', Created by extension: curl",
"", // run image layer "", // run image layer
} }
assertDaemonImageHasHistory(t, exportedImageName, expectedHistory)
t.Log("bases the exported image on the extended run image")
inspect, _, err = h.DockerCli(t).ImageInspectWithRaw(context.TODO(), exportedImageName)
h.AssertNil(t, err)
-h.AssertEq(t, inspect.Config.Labels["io.buildpacks.rebasable"], "false") // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<sha>/blobs/sha256/<config>
+h.AssertEq(t, inspect.Config.Labels["io.buildpacks.rebasable"], "false") // from testdata/exporter/container/layers/extended/sha256:<sha>/blobs/sha256/<config>
t.Log("Adds extension layers")
-type testCase struct {
-expectedDiffID string
-layerIndex int
-}
-testCases := []testCase{
-{
-expectedDiffID: "sha256:fb54d2566824d6630d94db0b008d9a544a94d3547a424f52e2fd282b648c0601", // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<c72eda1c>/blobs/sha256/65c2873d397056a5cb4169790654d787579b005f18b903082b177d4d9b4aecf5 after un-compressing and zeroing timestamps
-layerIndex: 1,
-},
-{
-expectedDiffID: "sha256:1018c7d3584c4f7fa3ef4486d1a6a11b93956b9d8bfe0898a3e0fbd248c984d8", // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<c72eda1c>/blobs/sha256/0fb9b88c9cbe9f11b4c8da645f390df59f5949632985a0bfc2a842ef17b2ad18 after un-compressing and zeroing timestamps
-layerIndex: 2,
-},
-}
-for _, tc := range testCases {
-h.AssertEq(t, inspect.RootFS.Layers[tc.layerIndex], tc.expectedDiffID)
-}
+diffIDFromExt1 := "sha256:60600f423214c27fd184ebc96ae765bf2b4703c9981fb4205d28dd35e7eec4ae"
+diffIDFromExt2 := "sha256:1d811b70500e2e9a5e5b8ca7429ef02e091cdf4657b02e456ec54dd1baea0a66"
+var foundFromExt1, foundFromExt2 bool
+for _, layer := range inspect.RootFS.Layers {
+if layer == diffIDFromExt1 {
+foundFromExt1 = true
+}
+if layer == diffIDFromExt2 {
+foundFromExt2 = true
+}
+}
+h.AssertEq(t, foundFromExt1, true)
+h.AssertEq(t, foundFromExt2, true)
t.Log("sets the layers metadata label according to the new spec")
var lmd files.LayersMetadata
lmdJSON := inspect.Config.Labels["io.buildpacks.lifecycle.metadata"]
@@ -201,12 +196,18 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
h.AssertEq(t, lmd.RunImage.Reference, strings.TrimPrefix(runImageFixtureSHA, "sha256:"))
})
})
-})
when("SOURCE_DATE_EPOCH is set", func() {
-it("app is created with config CreatedAt set to SOURCE_DATE_EPOCH", func() {
+it("Image CreatedAt is set to SOURCE_DATE_EPOCH", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.9"), "SOURCE_DATE_EPOCH support added in 0.9")
expectedTime := time.Date(2022, 1, 5, 5, 5, 5, 0, time.UTC)
exportFlags := []string{"-daemon"}
+if api.MustParse(platformAPI).LessThan("0.7") {
+exportFlags = append(exportFlags, []string{"-run-image", exportRegFixtures.ReadOnlyRunImage}...)
+}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = "some-exported-image-" + h.RandString(10)
exportArgs = append(exportArgs, exportedImageName)
@@ -230,14 +231,14 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
when("registry case", func() {
-var exportedImageName string
-when("app", func() {
-it.After(func() {
-_, _, _ = h.RunE(exec.Command("docker", "rmi", exportedImageName)) // #nosec G204
-})
-it("app is created", func() {
+when("first build", func() {
+it("is created", func() {
var exportFlags []string
+if api.MustParse(platformAPI).LessThan("0.7") {
+exportFlags = append(exportFlags, []string{"-run-image", exportRegFixtures.ReadOnlyRunImage}...)
+}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
exportArgs = append(exportArgs, exportedImageName)
@@ -256,40 +257,18 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
h.Run(t, exec.Command("docker", "pull", exportedImageName))
assertImageOSAndArchAndCreatedAt(t, exportedImageName, exportTest, imgutil.NormalizedDateTime)
})
-when("registry is insecure", func() {
-it.Before(func() {
-h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "")
-})
-it("uses http protocol", func() {
-var exportFlags []string
-exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
-exportedImageName = exportTest.RegRepoName("some-insecure-exported-image-" + h.RandString(10))
-exportArgs = append(exportArgs, exportedImageName)
-insecureRegistry := "host.docker.internal/bar"
-insecureAnalyzed := "/layers/analyzed_insecure.toml"
-_, _, err := h.DockerRunWithError(t,
-exportImage,
-h.WithFlags(
-"--env", "CNB_PLATFORM_API="+platformAPI,
-"--env", "CNB_INSECURE_REGISTRIES="+insecureRegistry,
-"--env", "CNB_ANALYZED_PATH="+insecureAnalyzed,
-"--network", exportRegNetwork,
-),
-h.WithArgs(exportArgs...),
-)
-h.AssertStringContains(t, err.Error(), "http://host.docker.internal")
-})
-})
})
when("SOURCE_DATE_EPOCH is set", func() {
-it("app is created with config CreatedAt set to SOURCE_DATE_EPOCH", func() {
+it("Image CreatedAt is set to SOURCE_DATE_EPOCH", func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.9"), "SOURCE_DATE_EPOCH support added in 0.9")
expectedTime := time.Date(2022, 1, 5, 5, 5, 5, 0, time.UTC)
var exportFlags []string
+if api.MustParse(platformAPI).LessThan("0.7") {
+exportFlags = append(exportFlags, []string{"-run-image", exportRegFixtures.ReadOnlyRunImage}...)
+}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
exportArgs = append(exportArgs, exportedImageName)
@@ -311,36 +290,15 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
+// FIXME: move this out of the registry block
when("cache", func() {
-when("image case", func() {
-it("cache is created", func() {
+when("cache image case", func() {
+it("is created", func() {
cacheImageName := exportTest.RegRepoName("some-cache-image-" + h.RandString(10))
exportFlags := []string{"-cache-image", cacheImageName}
-exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
-exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
-exportArgs = append(exportArgs, exportedImageName)
-output := h.DockerRun(t,
-exportImage,
-h.WithFlags(
-"--env", "CNB_PLATFORM_API="+platformAPI,
-"--env", "CNB_REGISTRY_AUTH="+exportRegAuthConfig,
-"--network", exportRegNetwork,
-),
-h.WithArgs(exportArgs...),
-)
-h.AssertStringContains(t, output, "Saving "+exportedImageName)
-// To detect whether the export of cacheImage and exportedImage is successful
-h.Run(t, exec.Command("docker", "pull", exportedImageName))
-assertImageOSAndArchAndCreatedAt(t, exportedImageName, exportTest, imgutil.NormalizedDateTime)
-h.Run(t, exec.Command("docker", "pull", cacheImageName))
-})
-when("parallel export is enabled", func() {
-it("cache is created", func() {
-cacheImageName := exportTest.RegRepoName("some-cache-image-" + h.RandString(10))
-exportFlags := []string{"-cache-image", cacheImageName, "-parallel"}
+if api.MustParse(platformAPI).LessThan("0.7") {
+exportFlags = append(exportFlags, "-run-image", exportRegFixtures.ReadOnlyRunImage)
+}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
exportArgs = append(exportArgs, exportedImageName)
@@ -358,14 +316,15 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
h.Run(t, exec.Command("docker", "pull", exportedImageName))
assertImageOSAndArchAndCreatedAt(t, exportedImageName, exportTest, imgutil.NormalizedDateTime)
-h.Run(t, exec.Command("docker", "pull", cacheImageName))
-})
})
-when("cache is provided but no data was cached", func() {
-it("cache is created with an empty layer", func() {
+it("is created with empty layer", func() {
cacheImageName := exportTest.RegRepoName("some-empty-cache-image-" + h.RandString(10))
exportFlags := []string{"-cache-image", cacheImageName, "-layers", "/other_layers"}
+if api.MustParse(platformAPI).LessThan("0.7") {
+exportFlags = append(exportFlags, "-run-image", exportRegFixtures.ReadOnlyRunImage)
+}
exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
exportArgs = append(exportArgs, exportedImageName)
@@ -385,9 +344,7 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
// Retrieve the cache image from the ephemeral registry
h.Run(t, exec.Command("docker", "pull", cacheImageName))
-logger := cmd.DefaultLogger
-subject, err := cache.NewImageCacheFromName(cacheImageName, authn.DefaultKeychain, logger, cache.NewImageDeleter(cache.NewImageComparer(), logger, api.MustParse(platformAPI).LessThan("0.13")))
+subject, err := cache.NewImageCacheFromName(cacheImageName, authn.DefaultKeychain, cmd.DefaultLogger)
h.AssertNil(t, err)
//Assert the cache image was created with an empty layer
@@ -398,90 +355,12 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
-when("directory case", func() {
-when("original cache was corrupted", func() {
-var cacheDir string
-it.Before(func() {
-var err error
-cacheDir, err = os.MkdirTemp("", "cache")
-h.AssertNil(t, err)
-h.AssertNil(t, os.Chmod(cacheDir, 0777)) // Override umask
-cacheFixtureDir := filepath.Join("testdata", "exporter", "cache-dir")
-h.AssertNil(t, fsutil.Copy(cacheFixtureDir, cacheDir))
-// We have to pre-create the tar files so that their digests do not change due to timestamps
-// But, ':' in the filepath on Windows is not allowed
-h.AssertNil(t, os.Rename(
-filepath.Join(cacheDir, "committed", "sha256_258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59.tar"),
-filepath.Join(cacheDir, "committed", "sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59.tar"),
-))
-})
-it.After(func() {
-_ = os.RemoveAll(cacheDir)
-})
-it("overwrites the original layer", func() {
-exportFlags := []string{
-"-cache-dir", "/cache",
-"-log-level", "debug",
-}
-exportArgs := append([]string{ctrPath(exporterPath)}, exportFlags...)
-exportedImageName = exportTest.RegRepoName("some-exported-image-" + h.RandString(10))
-exportArgs = append(exportArgs, exportedImageName)
-output := h.DockerRun(t,
-exportImage,
-h.WithFlags(
-"--env", "CNB_PLATFORM_API="+platformAPI,
-"--env", "CNB_REGISTRY_AUTH="+exportRegAuthConfig,
-"--network", exportRegNetwork,
-"--volume", fmt.Sprintf("%s:/cache", cacheDir),
-),
-h.WithArgs(exportArgs...),
-)
-h.AssertStringContains(t, output, "Skipping reuse for layer corrupted_buildpack:corrupted-layer: expected layer contents to have SHA 'sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59'; found 'sha256:9e0b77ed599eafdab8611f7eeefef084077f91f02f1da0a3870c7ff20a08bee8'")
-h.AssertStringContains(t, output, "Saving "+exportedImageName)
-h.Run(t, exec.Command("docker", "pull", exportedImageName))
-defer h.Run(t, exec.Command("docker", "image", "rm", exportedImageName))
-// Verify the app has the correct sha for the layer
-inspect, _, err := h.DockerCli(t).ImageInspectWithRaw(context.TODO(), exportedImageName)
-h.AssertNil(t, err)
-var lmd files.LayersMetadata
-lmdJSON := inspect.Config.Labels["io.buildpacks.lifecycle.metadata"]
-h.AssertNil(t, json.Unmarshal([]byte(lmdJSON), &lmd))
-h.AssertEq(t, lmd.Buildpacks[2].Layers["corrupted-layer"].SHA, "sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59")
-// Verify the cache has correct contents now
-foundDiffID, err := func() (string, error) {
-layerPath := filepath.Join(cacheDir, "committed", "sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59.tar")
-layerRC, err := os.Open(layerPath)
-if err != nil {
-return "", err
-}
-defer func() {
-_ = layerRC.Close()
-}()
-hasher := sha256.New()
-if _, err = io.Copy(hasher, layerRC); err != nil {
-return "", errors.Wrap(err, "hashing layer")
-}
-foundDiffID := "sha256:" + hex.EncodeToString(hasher.Sum(make([]byte, 0, hasher.Size())))
-return foundDiffID, nil
-}()
-h.AssertNil(t, err)
-h.AssertEq(t, foundDiffID, "sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59")
-})
-})
-})
-})
when("using extensions", func() {
it.Before(func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "")
})
-it("app is created from the extended run image", func() {
+it("is created from the extended run image", func() {
exportFlags := []string{
"-analyzed", "/layers/run-image-extended-analyzed.toml",
"-extended", "/layers/some-extended-dir",
@@ -504,15 +383,10 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
runImageFixtureSHA, err := remoteImage.Digest()
h.AssertNil(t, err)
-experimentalMode := "warn"
-if api.MustParse(platformAPI).AtLeast("0.13") {
-experimentalMode = "error"
-}
output := h.DockerRun(t,
exportImage,
h.WithFlags(
-"--env", "CNB_EXPERIMENTAL_MODE="+experimentalMode,
+"--env", "CNB_EXPERIMENTAL_MODE=warn",
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_REGISTRY_AUTH="+exportRegAuthConfig,
"--network", exportRegNetwork,
@@ -530,30 +404,25 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
h.AssertNil(t, err)
configFile, err := remoteImage.ConfigFile()
h.AssertNil(t, err)
-h.AssertEq(t, configFile.Config.Labels["io.buildpacks.rebasable"], "false") // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<sha>/blobs/sha256/<config>
+h.AssertEq(t, configFile.Config.Labels["io.buildpacks.rebasable"], "false") // from testdata/exporter/container/layers/extended/sha256:<sha>/blobs/sha256/<config>
t.Log("Adds extension layers")
layers, err = remoteImage.Layers()
h.AssertNil(t, err)
-type testCase struct {
-expectedDigest string
-layerIndex int
-}
-testCases := []testCase{
-{
-expectedDigest: "sha256:08e7ad5ce17cf5e5f70affe68b341a93de86ee2ba074932c3a05b8770f66d772", // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<c72eda1c>/blobs/sha256/65c2873d397056a5cb4169790654d787579b005f18b903082b177d4d9b4aecf5 after un-compressing, zeroing timestamps, and re-compressing
-layerIndex: 1,
-},
-{
-expectedDigest: "sha256:0e74ef444ea437147e3fa0ce2aad371df5380c26b96875ae07b9b67f44cdb2ee", // from testdata/exporter/container/layers/some-extended-dir/run/sha256_<c72eda1c>/blobs/sha256/0fb9b88c9cbe9f11b4c8da645f390df59f5949632985a0bfc2a842ef17b2ad18 after un-compressing, zeroing timestamps, and re-compressing
-layerIndex: 2,
-},
-}
-for _, tc := range testCases {
-layer := layers[tc.layerIndex]
+digestFromExt1 := "sha256:0c5f7a6fe14dbd19670f39e7466051cbd40b3a534c0812659740fb03e2137c1a"
+digestFromExt2 := "sha256:482346d1e0c7afa2514ec366d2e000e0667d0a6664690aab3c8ad51c81915b91"
+var foundFromExt1, foundFromExt2 bool
+for _, layer := range layers {
digest, err := layer.Digest()
h.AssertNil(t, err)
-h.AssertEq(t, digest.String(), tc.expectedDigest)
+if digest.String() == digestFromExt1 {
+foundFromExt1 = true
}
+if digest.String() == digestFromExt2 {
+foundFromExt2 = true
+}
+}
+h.AssertEq(t, foundFromExt1, true)
+h.AssertEq(t, foundFromExt2, true)
t.Log("sets the layers metadata label according to the new spec")
var lmd files.LayersMetadata
lmdJSON := configFile.Config.Labels["io.buildpacks.lifecycle.metadata"]
@@ -565,6 +434,7 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
})
+})
when("layout case", func() {
var (
@@ -572,12 +442,11 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
err error
layoutDir string
tmpDir string
-exportedImageName string
)
when("experimental mode is enabled", func() {
it.Before(func() {
-// create the directory to save all OCI images on disk
+// creates the directory to save all the OCI images on disk
tmpDir, err = os.MkdirTemp("", "layout")
h.AssertNil(t, err)
@@ -592,13 +461,15 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
os.RemoveAll(tmpDir)
})
-when("using a custom layout directory", func() {
+when("custom layout directory", func() {
+when("first build", func() {
+when("app", func() {
it.Before(func() {
exportedImageName = "my-custom-layout-app"
layoutDir = filepath.Join(path.RootDir, "my-layout-dir")
})
-it("app is created", func() {
+it("is created", func() {
var exportFlags []string
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "Platform API < 0.12 does not accept a -layout flag")
exportFlags = append(exportFlags, []string{"-layout", "-layout-dir", layoutDir, "-analyzed", "/layers/layout-analyzed.toml"}...)
@@ -620,6 +491,8 @@ func testExporterFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
})
})
})
+})
+})
when("experimental mode is not enabled", func() {
it.Before(func() {


@@ -1,4 +1,5 @@
//go:build acceptance
+// +build acceptance
package acceptance
@@ -7,11 +8,12 @@ import (
"os"
"os/exec"
"path/filepath"
+"runtime"
"testing"
-"github.com/buildpacks/imgutil/layout/sparse"
"github.com/google/go-containerregistry/pkg/authn"
v1 "github.com/google/go-containerregistry/pkg/v1"
+"github.com/google/go-containerregistry/pkg/v1/empty"
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/sclevine/spec"
@@ -19,7 +21,8 @@ import (
"github.com/buildpacks/lifecycle/api"
"github.com/buildpacks/lifecycle/auth"
-"github.com/buildpacks/lifecycle/cmd"
+"github.com/buildpacks/lifecycle/internal/encoding"
+"github.com/buildpacks/lifecycle/internal/selective"
"github.com/buildpacks/lifecycle/platform/files"
h "github.com/buildpacks/lifecycle/testhelpers"
)
@@ -44,6 +47,8 @@ const (
)
func TestExtender(t *testing.T) {
+h.SkipIf(t, runtime.GOOS == "windows", "Extender is not supported on Windows")
testImageDockerContext := filepath.Join("testdata", "extender")
extendTest = NewPhaseTest(t, "extender", testImageDockerContext)
extendTest.Start(t)
@@ -66,13 +71,8 @@ func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
return func(t *testing.T, when spec.G, it spec.S) {
-var generatedDir = "/layers/generated"
it.Before(func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.10"), "")
-if api.MustParse(platformAPI).AtLeast("0.13") {
-generatedDir = "/layers/generated-with-contexts"
-}
})
when("kaniko case", func() {
@@ -105,10 +105,9 @@ func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
baseCacheDir := filepath.Join(kanikoDir, "cache", "base")
h.AssertNil(t, os.MkdirAll(baseCacheDir, 0755))
-// write sparse image
-layoutImage, err := sparse.NewImage(filepath.Join(baseCacheDir, baseImageDigest), remoteImage)
+layoutPath, err := selective.Write(filepath.Join(baseCacheDir, baseImageDigest), empty.Index)
h.AssertNil(t, err)
-h.AssertNil(t, layoutImage.Save())
+h.AssertNil(t, layoutPath.AppendImage(remoteImage))
// write image reference in analyzed.toml
analyzedMD := files.Analyzed{
@@ -121,7 +120,7 @@ func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
},
}
analyzedPath = h.TempFile(t, "", "analyzed.toml")
-h.AssertNil(t, files.Handler.WriteAnalyzed(analyzedPath, &analyzedMD, cmd.DefaultLogger))
+h.AssertNil(t, encoding.WriteTOML(analyzedPath, analyzedMD))
})
it.After(func() {
@@ -134,7 +133,7 @@ func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
extendArgs := []string{
ctrPath(extenderPath),
"-analyzed", "/layers/analyzed.toml",
-"-generated", generatedDir,
+"-generated", "/layers/generated",
"-log-level", "debug",
"-gid", "1000",
"-uid", "1234",
@@ -187,7 +186,7 @@ func testExtenderFunc(platformAPI string) func(t *testing.T, when spec.G, it spe
ctrPath(extenderPath),
"-analyzed", "/layers/analyzed.toml",
"-extended", "/layers/extended",
-"-generated", generatedDir,
+"-generated", "/layers/generated",
"-kind", "run",
"-log-level", "debug",
"-gid", "1000",
@@ -256,16 +255,10 @@ func assertExpectedImage(t *testing.T, imagePath, platformAPI string) {
h.AssertEq(t, configFile.Config.Labels["io.buildpacks.rebasable"], "false")
layers, err := image.Layers()
h.AssertNil(t, err)
-if api.MustParse(platformAPI).AtLeast("0.12") {
+h.AssertEq(t, len(layers), 5) // base (3), curl (1), tree (1)
history := configFile.History
h.AssertEq(t, len(history), len(configFile.RootFS.DiffIDs))
-if api.MustParse(platformAPI).AtLeast("0.13") {
-h.AssertEq(t, len(layers), 7) // base (3), curl (2), tree (2)
-h.AssertEq(t, history[3].CreatedBy, "Layer: 'RUN apt-get update && apt-get install -y curl', Created by extension: curl")
-h.AssertEq(t, history[4].CreatedBy, "Layer: 'COPY run-file /', Created by extension: curl")
-h.AssertEq(t, history[5].CreatedBy, "Layer: 'RUN apt-get update && apt-get install -y tree', Created by extension: tree")
-h.AssertEq(t, history[6].CreatedBy, "Layer: 'COPY shared-file /shared-run', Created by extension: tree")
-} else {
-h.AssertEq(t, len(layers), 5) // base (3), curl (1), tree (1)
h.AssertEq(t, history[3].CreatedBy, "Layer: 'RUN apt-get update && apt-get install -y curl', Created by extension: curl")
h.AssertEq(t, history[4].CreatedBy, "Layer: 'RUN apt-get update && apt-get install -y tree', Created by extension: tree")
}


@@ -4,6 +4,7 @@ import (
"fmt"
"os/exec"
"path/filepath"
+"runtime"
"strings"
"testing"
@@ -24,6 +25,9 @@ func TestLauncher(t *testing.T) {
launchTest = NewPhaseTest(t, "launcher", testImageDockerContext, withoutDaemonFixtures, withoutRegistry)
containerBinaryDir := filepath.Join("testdata", "launcher", "linux", "container", "cnb", "lifecycle")
+if launchTest.targetDaemon.os == "windows" {
+containerBinaryDir = filepath.Join("testdata", "launcher", "windows", "container", "cnb", "lifecycle")
+}
withCustomContainerBinaryDir := func(_ *testing.T, phaseTest *PhaseTest) {
phaseTest.containerBinaryDir = containerBinaryDir
}
@@ -37,9 +41,10 @@ func TestLauncher(t *testing.T) {
}
func testLauncher(t *testing.T, when spec.G, it spec.S) {
+when("Buildpack API >= 0.5", func() {
when("exec.d", func() {
it("executes the binaries and modifies env before running profiles", func() {
-cmd := exec.Command("docker", "run", "--rm", //nolint
+cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.7",
"--entrypoint=exec.d-checker"+exe,
"--env=VAR_FROM_EXEC_D=orig-val",
@@ -61,10 +66,12 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
assertOutput(t, cmd, expected)
})
})
+})
+when("Platform API >= 0.4", func() {
when("entrypoint is a process", func() {
it("launches that process", func() {
-cmd := exec.Command("docker", "run", "--rm", //nolint
+cmd := exec.Command("docker", "run", "--rm",
"--entrypoint=web",
"--env=CNB_PLATFORM_API="+latestPlatformAPI,
launchImage)
@@ -82,25 +89,31 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
})
it("appends any args to the process args", func() {
-cmd := exec.Command( //nolint
-"docker", "run", "--rm",
+cmd := exec.Command("docker", "run", "--rm",
"--entrypoint=web",
"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-launchImage, "with user provided args",
-)
+launchImage, "with user provided args")
+if runtime.GOOS == "windows" {
+assertOutput(t, cmd, `Executing web process-type "with user provided args"`)
+} else {
assertOutput(t, cmd, "Executing web process-type with user provided args")
+}
})
})
when("entrypoint is a not a process", func() {
it("builds a process from the arguments", func() {
-cmd := exec.Command( //nolint
-"docker", "run", "--rm",
+cmd := exec.Command("docker", "run", "--rm",
"--entrypoint=launcher",
"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-launchImage, "--",
-"env",
+launchImage, "--", "env")
+if runtime.GOOS == "windows" {
+cmd = exec.Command("docker", "run", "--rm",
+`--entrypoint=launcher`,
+"--env=CNB_PLATFORM_API=0.4",
+launchImage, "--", "cmd", "/c", "set",
)
+}
assertOutput(t, cmd,
"SOME_VAR=some-bp-val",
@ -124,36 +137,145 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
h.AssertStringContains(t, string(out), "ERROR: failed to launch: determine start command: when there is no default process a command is required") h.AssertStringContains(t, string(out), "ERROR: failed to launch: determine start command: when there is no default process a command is required")
}) })
}) })
})
when("Platform API < 0.4", func() {
when("there is no CMD provided", func() {
when("CNB_PROCESS_TYPE is NOT set", func() {
it("web is the default process-type", func() {
cmd := exec.Command("docker", "run", "--rm", "--env=CNB_PLATFORM_API=0.3", launchImage)
assertOutput(t, cmd, "Executing web process-type")
})
})
when("CNB_PROCESS_TYPE is set", func() {
it("should run the specified CNB_PROCESS_TYPE", func() {
cmd := exec.Command("docker", "run", "--rm", "--env=CNB_PLATFORM_API=0.3", "--env=CNB_PROCESS_TYPE=direct-process", launchImage)
if runtime.GOOS == "windows" {
assertOutput(t, cmd, "Usage: ping")
} else {
assertOutput(t, cmd, "Executing direct-process process-type")
}
})
})
})
when("process-type provided in CMD", func() {
it("launches that process-type", func() {
cmd := exec.Command("docker", "run", "--rm", "--env=CNB_PLATFORM_API=0.3", launchImage, "direct-process")
expected := "Executing direct-process process-type"
if runtime.GOOS == "windows" {
expected = "Usage: ping"
}
assertOutput(t, cmd, expected)
})
it("sets env vars from process specific directories", func() {
cmd := exec.Command("docker", "run", "--rm", "--env=CNB_PLATFORM_API=0.3", launchImage, "worker")
expected := "worker-process-val"
assertOutput(t, cmd, expected)
})
})
when("process is direct=false", func() {
when("the process type has no args", func() {
it("runs command as script", func() {
h.SkipIf(t, runtime.GOOS == "windows", "scripts are unsupported on windows")
cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.3",
"--env", "VAR1=val1",
"--env", "VAR2=val with space",
launchImage, "indirect-process-with-script",
)
assertOutput(t, cmd, "'val1' 'val with space'")
})
})
when("the process type has args", func() {
when("buildpack API 0.4", func() {
// buildpack API is determined by looking up the API of the process buildpack in metadata.toml
it("command and args become shell-parsed tokens in a script", func() {
var val2 string
if runtime.GOOS == "windows" {
val2 = `"val with space"` // windows values with spaces must contain quotes
} else {
val2 = "val with space"
}
cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.3",
"--env", "VAR1=val1",
"--env", "VAR2="+val2,
launchImage, "indirect-process-with-args",
) // #nosec G204
assertOutput(t, cmd, "'val1' 'val with space'")
})
})
when("buildpack API < 0.4", func() {
// buildpack API is determined by looking up the API of the process buildpack in metadata.toml
it("args become arguments to bash", func() {
h.SkipIf(t, runtime.GOOS == "windows", "scripts are unsupported on windows")
cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.3", launchImage, "legacy-indirect-process-with-args",
)
assertOutput(t, cmd, "'arg' 'arg with spaces'")
})
it("script must be explicitly written to accept bash args", func() {
h.SkipIf(t, runtime.GOOS == "windows", "scripts are unsupported on windows")
cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.3", launchImage, "legacy-indirect-process-with-incorrect-args",
)
output, err := cmd.CombinedOutput()
h.AssertNotNil(t, err)
h.AssertStringContains(t, string(output), "printf: usage: printf [-v var] format [arguments]")
})
})
})
it("sources scripts from process specific directories", func() {
cmd := exec.Command("docker", "run", "--rm", "--env=CNB_PLATFORM_API=0.3", launchImage, "profile-checker")
expected := "sourced bp profile\nsourced bp profile-checker profile\nsourced app profile\nval-from-profile"
assertOutput(t, cmd, expected)
})
})
it("respects CNB_APP_DIR and CNB_LAYERS_DIR environment variables", func() {
cmd := exec.Command("docker", "run", "--rm",
"--env=CNB_PLATFORM_API=0.3",
"--env", "CNB_APP_DIR="+ctrPath("/other-app"),
"--env", "CNB_LAYERS_DIR=/other-layers",
launchImage) // #nosec G204
assertOutput(t, cmd, "sourced other app profile\nExecuting other-layers web process-type")
})
})
 when("provided CMD is not a process-type", func() {
 	it("sources profiles and executes the command in a shell", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage,
-			"echo", "something",
-		)
+		cmd := exec.Command("docker", "run", "--rm", launchImage, "echo", "something")
 		assertOutput(t, cmd, "sourced bp profile\nsourced app profile\nsomething")
 	})
 	it("sets env vars from layers", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage,
-			"echo", "$SOME_VAR", "$OTHER_VAR", "$WORKER_VAR",
-		)
+		cmd := exec.Command("docker", "run", "--rm", launchImage, "echo", "$SOME_VAR", "$OTHER_VAR", "$WORKER_VAR")
+		if runtime.GOOS == "windows" {
+			cmd = exec.Command("docker", "run", "--rm", launchImage, "echo", "%SOME_VAR%", "%OTHER_VAR%", "%WORKER_VAR%")
+		}
 		assertOutput(t, cmd, "sourced bp profile\nsourced app profile\nsome-bp-val other-bp-val worker-no-process-val")
 	})
 	it("passes through env vars from user, excluding excluded vars", func() {
 		args := []string{"echo", "$SOME_USER_VAR, $CNB_APP_DIR, $OTHER_VAR"}
+		if runtime.GOOS == "windows" {
+			args = []string{"echo", "%SOME_USER_VAR%, %CNB_APP_DIR%, %OTHER_VAR%"}
+		}
 		cmd := exec.Command("docker",
 			append(
 				[]string{
 					"run", "--rm",
 					"--env", "CNB_APP_DIR=" + ctrPath("/workspace"),
-					"--env=CNB_PLATFORM_API=" + latestPlatformAPI,
 					"--env", "SOME_USER_VAR=some-user-val",
 					"--env", "OTHER_VAR=other-user-val",
 					launchImage,
@@ -161,38 +283,37 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
 				args...)...,
 		) // #nosec G204
+		if runtime.GOOS == "windows" {
+			// windows values with spaces will contain quotes
+			// empty values on windows preserve variable names instead of interpolating to empty strings
+			assertOutput(t, cmd, "sourced bp profile\nsourced app profile\n\"some-user-val, %CNB_APP_DIR%, other-user-val**other-bp-val\"")
+		} else {
 			assertOutput(t, cmd, "sourced bp profile\nsourced app profile\nsome-user-val, , other-user-val**other-bp-val")
+		}
 	})
 	it("adds buildpack bin dirs to the path", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage,
-			"bp-executable",
-		)
+		cmd := exec.Command("docker", "run", "--rm", launchImage, "bp-executable")
 		assertOutput(t, cmd, "bp executable")
 	})
 })
 when("CMD provided starts with --", func() {
 	it("launches command directly", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage, "--",
-			"echo", "something",
-		)
-		assertOutput(t, cmd, "something")
+		if runtime.GOOS == "windows" {
+			cmd := exec.Command("docker", "run", "--rm", launchImage, "--", "ping", "/?")
+			assertOutput(t, cmd, "Usage: ping")
+		} else {
+			cmd := exec.Command("docker", "run", "--rm", launchImage, "--", "echo", "something")
+			assertOutput(t, cmd, "something")
+		}
 	})
 	it("sets env vars from layers", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage, "--",
-			"env",
-		)
+		cmd := exec.Command("docker", "run", "--rm", launchImage, "--", "env")
+		if runtime.GOOS == "windows" {
+			cmd = exec.Command("docker", "run", "--rm", launchImage, "--", "cmd", "/c", "set")
+		}
 		assertOutput(t, cmd,
 			"SOME_VAR=some-bp-val",
@@ -201,14 +322,20 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
 	})
 	it("passes through env vars from user, excluding excluded vars", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
+		cmd := exec.Command("docker", "run", "--rm",
 			"--env", "CNB_APP_DIR=/workspace",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
 			"--env", "SOME_USER_VAR=some-user-val",
 			launchImage, "--",
 			"env",
 		)
+		if runtime.GOOS == "windows" {
+			cmd = exec.Command("docker", "run", "--rm",
+				"--env", "CNB_APP_DIR=/workspace",
+				"--env", "SOME_USER_VAR=some-user-val",
+				launchImage, "--",
+				"cmd", "/c", "set",
+			)
+		}
 		output, err := cmd.CombinedOutput()
 		if err != nil {
@@ -225,12 +352,7 @@ func testLauncher(t *testing.T, when spec.G, it spec.S) {
 	})
 	it("adds buildpack bin dirs to the path before looking up command", func() {
-		cmd := exec.Command( //nolint
-			"docker", "run", "--rm",
-			"--env=CNB_PLATFORM_API="+latestPlatformAPI,
-			launchImage, "--",
-			"bp-executable",
-		)
+		cmd := exec.Command("docker", "run", "--rm", launchImage, "--", "bp-executable")
 		assertOutput(t, cmd, "bp executable")
 	})
 })

View File

@@ -2,8 +2,6 @@ package acceptance
 import (
 	"context"
-	"crypto/sha256"
-	"encoding/hex"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -17,14 +15,12 @@ import (
 	"testing"
 	"time"
-	"github.com/docker/docker/api/types/image"
+	"github.com/BurntSushi/toml"
 	ih "github.com/buildpacks/imgutil/testhelpers"
 	"github.com/google/go-containerregistry/pkg/authn"
 	"github.com/google/go-containerregistry/pkg/registry"
 	"github.com/buildpacks/lifecycle/auth"
-	"github.com/buildpacks/lifecycle/cmd"
 	"github.com/buildpacks/lifecycle/internal/encoding"
 	"github.com/buildpacks/lifecycle/platform"
 	"github.com/buildpacks/lifecycle/platform/files"
@@ -130,29 +126,8 @@ func (p *PhaseTest) Start(t *testing.T, phaseOp ...func(*testing.T, *PhaseTest)) {
 	}
 	h.MakeAndCopyLifecycle(t, p.targetDaemon.os, p.targetDaemon.arch, p.containerBinaryDir)
-	// calculate lifecycle digest
-	hasher := sha256.New()
-	f, err := os.Open(filepath.Join(p.containerBinaryDir, "lifecycle"+exe)) //#nosec G304
-	h.AssertNil(t, err)
-	_, err = io.Copy(hasher, f)
-	h.AssertNil(t, err)
-	t.Logf("Built lifecycle binary with digest: %s", hex.EncodeToString(hasher.Sum(nil)))
 	copyFakeSboms(t)
-	h.DockerBuild(
-		t,
-		p.testImageRef,
-		p.testImageDockerContext,
-		h.WithArgs("-f", filepath.Join(p.testImageDockerContext, dockerfileName)),
-	)
-	t.Logf("Using image %s with lifecycle version %s",
-		p.testImageRef,
-		h.DockerRun(
-			t,
-			p.testImageRef,
-			h.WithFlags("--env", "CNB_PLATFORM_API="+latestPlatformAPI, "--entrypoint", ctrPath("/cnb/lifecycle/lifecycle"+exe)),
-			h.WithArgs("-version"),
-		))
+	h.DockerBuild(t, p.testImageRef, p.testImageDockerContext, h.WithArgs("-f", filepath.Join(p.testImageDockerContext, dockerfileName)))
 }
 func (p *PhaseTest) Stop(t *testing.T) {
@@ -428,30 +403,14 @@ func SBOMComponents() []string {
 }
 func assertImageOSAndArch(t *testing.T, imageName string, phaseTest *PhaseTest) { //nolint - these functions are in fact used, i promise
-	inspect, err := h.DockerCli(t).ImageInspect(context.TODO(), imageName)
+	inspect, _, err := h.DockerCli(t).ImageInspectWithRaw(context.TODO(), imageName)
 	h.AssertNil(t, err)
 	h.AssertEq(t, inspect.Os, phaseTest.targetDaemon.os)
 	h.AssertEq(t, inspect.Architecture, phaseTest.targetDaemon.arch)
 }
 func assertImageOSAndArchAndCreatedAt(t *testing.T, imageName string, phaseTest *PhaseTest, expectedCreatedAt time.Time) { //nolint
-	inspect, err := h.DockerCli(t).ImageInspect(context.TODO(), imageName)
-	if err != nil {
-		list, _ := h.DockerCli(t).ImageList(context.TODO(), image.ListOptions{})
-		fmt.Println("Error encountered running ImageInspectWithRaw. imageName: ", imageName)
-		fmt.Println(err)
-		for _, value := range list {
-			fmt.Println("Image Name: ", value)
-		}
-		if strings.Contains(err.Error(), "No such image") {
-			t.Log("Image not found, retrying...")
-			time.Sleep(1 * time.Second)
-			inspect, err = h.DockerCli(t).ImageInspect(context.TODO(), imageName)
-		}
-	}
+	inspect, _, err := h.DockerCli(t).ImageInspectWithRaw(context.TODO(), imageName)
 	h.AssertNil(t, err)
 	h.AssertEq(t, inspect.Os, phaseTest.targetDaemon.os)
 	h.AssertEq(t, inspect.Architecture, phaseTest.targetDaemon.arch)
@@ -463,7 +422,8 @@ func assertRunMetadata(t *testing.T, path string) *files.Run { //nolint
 	h.AssertNil(t, err)
 	h.AssertEq(t, len(contents) > 0, true)
-	runMD, err := files.Handler.ReadRun(path, cmd.DefaultLogger)
+	var runMD files.Run
+	_, err = toml.Decode(string(contents), &runMD)
 	h.AssertNil(t, err)
 	return &runMD

View File

@ -1,58 +0,0 @@
//go:build acceptance
package acceptance
import (
"path/filepath"
"testing"
"github.com/sclevine/spec"
"github.com/sclevine/spec/report"
"github.com/buildpacks/lifecycle/api"
h "github.com/buildpacks/lifecycle/testhelpers"
)
var (
rebaserTest *PhaseTest
rebaserPath string
rebaserImage string
)
func TestRebaser(t *testing.T) {
testImageDockerContextFolder := filepath.Join("testdata", "rebaser")
rebaserTest = NewPhaseTest(t, "rebaser", testImageDockerContextFolder)
rebaserTest.Start(t, updateTOMLFixturesWithTestRegistry)
defer rebaserTest.Stop(t)
rebaserImage = rebaserTest.testImageRef
rebaserPath = rebaserTest.containerBinaryPath
for _, platformAPI := range api.Platform.Supported {
spec.Run(t, "acceptance-rebaser/"+platformAPI.String(), testRebaser(platformAPI.String()), spec.Sequential(), spec.Report(report.Terminal{}))
}
}
func testRebaser(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
return func(t *testing.T, when spec.G, it spec.S) {
when("called with insecure registry flag", func() {
it.Before(func() {
h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "")
})
it("should do an http request", func() {
insecureRegistry := "host.docker.internal"
rebaserOutputImageName := insecureRegistry + "/bar"
_, _, err := h.DockerRunWithError(t,
rebaserImage,
h.WithFlags(
"--env", "CNB_PLATFORM_API="+platformAPI,
"--env", "CNB_INSECURE_REGISTRIES="+insecureRegistry,
),
h.WithArgs(ctrPath(rebaserPath), rebaserOutputImageName),
)
h.AssertStringContains(t, err.Error(), "http://host.docker.internal")
})
})
}
}

View File

@@ -1,4 +1,5 @@
 //go:build acceptance
+// +build acceptance
 package acceptance
@@ -6,15 +7,16 @@ import (
 	"os"
 	"os/exec"
 	"path/filepath"
+	"runtime"
 	"testing"
 	"github.com/google/go-containerregistry/pkg/name"
 	"github.com/sclevine/spec"
 	"github.com/sclevine/spec/report"
+	"github.com/buildpacks/lifecycle"
 	"github.com/buildpacks/lifecycle/api"
 	"github.com/buildpacks/lifecycle/cmd"
-	"github.com/buildpacks/lifecycle/platform/files"
 	h "github.com/buildpacks/lifecycle/testhelpers"
 )
@@ -31,6 +33,9 @@ var (
 )
 func TestRestorer(t *testing.T) {
+	h.SkipIf(t, runtime.GOOS == "windows", "Restorer acceptance tests are not yet supported on Windows")
+	h.SkipIf(t, runtime.GOARCH != "amd64", "Restorer acceptance tests are not yet supported on non-amd64")
 	testImageDockerContext := filepath.Join("testdata", "restorer")
 	restoreTest = NewPhaseTest(t, "restorer", testImageDockerContext)
 	restoreTest.Start(t, updateTOMLFixturesWithTestRegistry)
@@ -67,7 +72,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 	when("called with arguments", func() {
 		it("errors", func() {
-			command := exec.Command("docker", "run", "--rm", "--env", "CNB_PLATFORM_API="+platformAPI, restoreImage, "some-arg")
+			command := exec.Command("docker", "run", "--rm", restoreImage, "some-arg")
 			output, err := command.CombinedOutput()
 			h.AssertNotNil(t, err)
 			expected := "failed to parse arguments: received unexpected Args"
@@ -75,6 +80,28 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 		})
 	})
+	when("called with -analyzed (on older platforms)", func() {
+		it("errors", func() {
+			h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 supports -analyzed flag")
+			command := exec.Command("docker", "run", "--rm", restoreImage, "-analyzed some-file-location")
+			output, err := command.CombinedOutput()
+			h.AssertNotNil(t, err)
+			expected := "flag provided but not defined: -analyzed"
+			h.AssertStringContains(t, string(output), expected)
+		})
+	})
+	when("called with -skip-layers (on older platforms)", func() {
+		it("errors", func() {
+			h.SkipIf(t, api.MustParse(platformAPI).AtLeast("0.7"), "Platform API >= 0.7 supports -skip-layers flag")
+			command := exec.Command("docker", "run", "--rm", restoreImage, "-skip-layers true")
+			output, err := command.CombinedOutput()
+			h.AssertNotNil(t, err)
+			expected := "flag provided but not defined: -skip-layers"
+			h.AssertStringContains(t, string(output), expected)
+		})
+	})
 	when("called without any cache flag", func() {
 		it("outputs it will not restore cache layer data", func() {
 			command := exec.Command("docker", "run", "--rm", "--env", "CNB_PLATFORM_API="+platformAPI, restoreImage)
@@ -87,6 +114,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 	when("analyzed.toml exists with app metadata", func() {
 		it("restores app metadata", func() {
+			h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not restore app metadata")
 			output := h.DockerRunAndCopy(t,
 				containerName,
 				copyDir,
@@ -101,27 +129,6 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 			h.AssertStringContains(t, output, "Restoring metadata for \"some-buildpack-id:launch-layer\"")
 		})
-		when("restores app metadata using an insecure registry", func() {
-			it.Before(func() {
-				h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.12"), "")
-			})
-			it("does an http request ", func() {
-				insecureRegistry := "host.docker.internal"
-				_, _, err := h.DockerRunWithError(t,
-					restoreImage,
-					h.WithFlags(append(
-						dockerSocketMount,
-						"--env", "CNB_PLATFORM_API="+platformAPI,
-						"--env", "CNB_INSECURE_REGISTRIES="+insecureRegistry,
-						"--env", "CNB_BUILD_IMAGE="+insecureRegistry+"/bar",
-					)...),
-				)
-				h.AssertStringContains(t, err.Error(), "http://host.docker.internal")
-			})
-		})
 	})
 	when("using cache-dir", func() {
@@ -147,7 +154,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 		})
 		it("does not restore cache=true layers not in cache", func() {
-			h.DockerRunAndCopy(t,
+			output := h.DockerRunAndCopy(t,
 				containerName,
 				copyDir,
 				"/layers",
@@ -159,9 +166,12 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 			// check uncached layer is not restored
 			uncachedFile := filepath.Join(copyDir, "layers", "cacher_buildpack", "uncached-layer")
 			h.AssertPathDoesNotExist(t, uncachedFile)
+			// check output to confirm why this layer was not restored from cache
+			h.AssertStringContains(t, string(output), "Removing \"cacher_buildpack:layer-not-in-cache\", not in cache")
 		})
-		it("does not restore layer data from unused buildpacks", func() {
+		it("does not restore unused buildpack layer data", func() {
 			h.DockerRunAndCopy(t,
 				containerName,
 				copyDir,
@@ -175,21 +185,6 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 			unusedBpLayer := filepath.Join(copyDir, "layers", "unused_buildpack")
 			h.AssertPathDoesNotExist(t, unusedBpLayer)
 		})
-		it("does not restore corrupted layer data", func() {
-			h.DockerRunAndCopy(t,
-				containerName,
-				copyDir,
-				"/layers",
-				restoreImage,
-				h.WithFlags("--env", "CNB_PLATFORM_API="+platformAPI),
-				h.WithArgs("-cache-dir", "/cache"),
-			)
-			// check corrupted layer is not restored
-			corruptedFile := filepath.Join(copyDir, "layers", "corrupted_buildpack", "corrupted-layer")
-			h.AssertPathDoesNotExist(t, corruptedFile)
-		})
 	})
 })
@@ -209,7 +204,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 				h.WithArgs("-build-image", restoreRegFixtures.SomeCacheImage), // some-cache-image simulates a builder image in a registry
 			)
 			t.Log("records builder image digest in analyzed.toml")
-			analyzedMD, err := files.Handler.ReadAnalyzed(filepath.Join(copyDir, "layers", "analyzed.toml"), cmd.DefaultLogger)
+			analyzedMD, err := lifecycle.Config.ReadAnalyzed(filepath.Join(copyDir, "layers", "analyzed.toml"), cmd.DefaultLogger)
 			h.AssertNil(t, err)
 			h.AssertStringContains(t, analyzedMD.BuildImage.Reference, restoreRegFixtures.SomeCacheImage+"@sha256:")
 			t.Log("writes builder manifest and config to the kaniko cache")
@@ -241,7 +236,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 				),
 			)
 			t.Log("updates run image reference in analyzed.toml to include digest and target data")
-			analyzedMD, err := files.Handler.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-true-analyzed.toml"), cmd.DefaultLogger)
+			analyzedMD, err := lifecycle.Config.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-true-analyzed.toml"), cmd.DefaultLogger)
 			h.AssertNil(t, err)
 			h.AssertStringContains(t, analyzedMD.RunImage.Reference, restoreRegFixtures.ReadOnlyRunImage+"@sha256:")
 			h.AssertEq(t, analyzedMD.RunImage.Image, restoreRegFixtures.ReadOnlyRunImage)
@@ -260,7 +255,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 	when("target data", func() {
 		it("updates run image reference in analyzed.toml to include digest and target data on newer platforms", func() {
-			h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.10"), "")
+			h.SkipIf(t, api.MustParse(platformAPI).LessThan("0.7"), "Platform API < 0.7 does not support -analyzed flag")
 			h.DockerRunAndCopy(t,
 				containerName,
 				copyDir,
@@ -278,7 +273,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 			)
 			if api.MustParse(platformAPI).AtLeast("0.12") {
 				t.Log("updates run image reference in analyzed.toml to include digest and target data")
-				analyzedMD, err := files.Handler.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
+				analyzedMD, err := lifecycle.Config.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
 				h.AssertNil(t, err)
 				h.AssertStringContains(t, analyzedMD.RunImage.Reference, restoreRegFixtures.ReadOnlyRunImage+"@sha256:")
 				h.AssertEq(t, analyzedMD.RunImage.Image, restoreRegFixtures.ReadOnlyRunImage)
@@ -290,14 +285,10 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 				h.AssertNil(t, err)
 				h.AssertEq(t, len(fis), 1) // .gitkeep
 			} else {
-				t.Log("updates run image reference in analyzed.toml to include digest only")
-				analyzedMD, err := files.Handler.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
+				t.Log("doesn't update analyzed.toml")
+				analyzedMD, err := lifecycle.Config.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
 				h.AssertNil(t, err)
-				h.AssertStringContains(t, analyzedMD.RunImage.Reference, restoreRegFixtures.ReadOnlyRunImage+"@sha256:")
-				h.AssertEq(t, analyzedMD.RunImage.Image, restoreRegFixtures.ReadOnlyRunImage)
 				h.AssertNil(t, analyzedMD.RunImage.TargetMetadata)
-				t.Log("does not return the digest for an empty image")
-				h.AssertStringDoesNotContain(t, analyzedMD.RunImage.Reference, restoreRegFixtures.ReadOnlyRunImage+"@sha256:"+emptyImageSHA)
 			}
 		})
@@ -322,7 +313,7 @@ func testRestorerFunc(platformAPI string) func(t *testing.T, when spec.G, it spec.S) {
 				),
 			)
 			t.Log("updates run image reference in analyzed.toml to include digest and target data")
-			analyzedMD, err := files.Handler.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
+			analyzedMD, err := lifecycle.Config.ReadAnalyzed(filepath.Join(copyDir, "layers", "some-extend-false-analyzed.toml"), cmd.DefaultLogger)
 			h.AssertNil(t, err)
 			h.AssertStringDoesNotContain(t, analyzedMD.RunImage.Reference, "@sha256:") // daemon image ID
 			h.AssertEq(t, analyzedMD.RunImage.Image, restoreRegFixtures.ReadOnlyRunImage)

View File

@@ -1,4 +1,5 @@
 FROM ubuntu:bionic
+ARG cnb_platform_api
 RUN apt-get update && apt-get install -y ca-certificates
@@ -10,7 +11,6 @@ ENV CNB_USER_ID=2222
 ENV CNB_GROUP_ID=3333
-ARG cnb_platform_api
 ENV CNB_PLATFORM_API=${cnb_platform_api}
 RUN chown -R $CNB_USER_ID:$CNB_GROUP_ID /some-dir

View File

@ -0,0 +1,12 @@
FROM mcr.microsoft.com/windows/nanoserver:1809
USER ContainerAdministrator
COPY container /
WORKDIR /layers
ENV CNB_USER_ID=1
ENV CNB_GROUP_ID=1
ENV CNB_PLATFORM_API=${cnb_platform_api}

View File

@@ -1,4 +1,4 @@
 [[group]]
 id = "some-buildpack-id"
 version = "some-buildpack-version"
-api = "0.10"
+api = "0.2"

View File

@@ -1,4 +1,4 @@
 [[group]]
 id = "some-other-buildpack-id"
 version = "some-other-buildpack-version"
-api = "0.10"
+api = "0.3"

View File

@@ -1,4 +1,4 @@
 [[group]]
 id = "another-buildpack-id"
 version = "another-buildpack-version"
-api = "0.10"
+api = "0.2"

View File

@ -0,0 +1,14 @@
FROM mcr.microsoft.com/windows/nanoserver:1809
USER ContainerAdministrator
COPY container /
ENTRYPOINT ["/cnb/lifecycle/builder"]
WORKDIR /layers
ENV CNB_USER_ID=1
ENV CNB_GROUP_ID=1
ENV CNB_PLATFORM_API=${cnb_platform_api}

View File

@@ -28,8 +28,9 @@ echo
 cat > "${layers_dir}/launch.toml" << EOL
 [[processes]]
 type = "hello"
-command = ["echo world"]
+command = "echo world"
 args = ["arg1"]
+direct = false
 EOL
 echo "---> Done"

View File

@@ -1,5 +1,5 @@
 # Buildpack API version
-api = "0.10"
+api = "0.2"
 # Buildpack ID and metadata
 [buildpack]

View File

@@ -1,5 +1,5 @@
 # Buildpack API version
-api = "0.10"
+api = "0.2"
 # Buildpack ID and metadata
 [buildpack]

View File

@@ -1,4 +1,4 @@
 [[group]]
-api = "0.10"
+api = "0.2"
 id = "hello_world"
 version = "0.0.1"

View File

@@ -1,4 +1,4 @@
 [[group]]
-api = "0.10"
+api = "0.3"
 id = "hello_world_2"
 version = "0.0.2"

View File

@@ -1,9 +1,9 @@
 [[group]]
-api = "0.10"
+api = "0.2"
 id = "hello_world"
 version = "0.0.1"
 [[group-extensions]]
-api = "0.10"
+api = "0.9"
 id = "hello_world"
 version = "0.0.1"

View File

@@ -1,5 +1,5 @@
 # Buildpack API version
-api = "0.10"
+api = "0.2"
 # Buildpack ID and metadata
 [buildpack]

View File

@@ -1,4 +1,4 @@
 [[group]]
-api = "0.10"
+api = "0.2"
 id = "hello_world_2"
 version = "0.0.2"

View File

@ -1,8 +0,0 @@
# Buildpack API version
api = "0.9"
# Extension ID and metadata
[extension]
id = "samples/hello-world"
version = "0.0.1"
name = "Hello World Extension"

View File

@ -1,9 +0,0 @@
[[order]]
[[order.group]]
id = "samples/hello-world"
version = "0.0.1"
[[order-extensions]]
[[order-extensions.group]]
id = "samples/hello-world"
version = "0.0.1"

View File

@@ -1,4 +1,4 @@
-FROM ubuntu:jammy
+FROM ubuntu:bionic
 ARG cnb_uid=1234
 ARG cnb_gid=1000

View File

@@ -1,5 +1,4 @@
-api = "0.10"
+api = "0.3"
 [buildpack]
 id = "buildpack_for_ext"
 version = "buildpack_for_ext_version"

View File

@@ -1,12 +1,5 @@
-api = "0.9"
+api = "0.3"
 [buildpack]
 id = "simple_buildpack"
 version = "simple_buildpack_version"
 name = "Simple Buildpack"
-[[stacks]]
-id = "io.buildpacks.stacks.bionic"
-[[stacks]]
-id = "io.buildpacks.stacks.jammy"

View File

@@ -1,5 +1,4 @@
-api = "0.10"
+api = "0.6"
 [buildpack]
 id = "always_detect_buildpack"
 version = "always_detect_buildpack_version"

View File

@ -1,17 +0,0 @@
{
"buildpacks": [
{
"key": "corrupted_buildpack",
"version": "corrupted_v1",
"layers": {
"corrupted-layer": {
"sha": "sha256:258dfa0cc987efebc17559694866ebc91139e7c0e574f60d1d4092f53d7dff59",
"data": null,
"build": false,
"launch": true,
"cache": true
}
}
}
]
}

View File

@ -1,2 +0,0 @@
[run-image]
reference = "host.docker.internal/bar"

View File

@@ -1 +1 @@
-sha256:2d9c9c638d5c4f0df067eeae7b9c99ad05776a89d19ab863c28850a91e5f2944
+sha256:b89860e2f9c62e6b5d66d3ce019e18cdabae30273c25150b7f20a82f7a70e494

View File

@@ -1,4 +1,3 @@
-[types]
 build = false
 launch = false
 cache = true

View File

@@ -1,2 +1 @@
-[types]
 launch = true

View File

@ -1,3 +0,0 @@
[types]
cache = true
launch = true

View File

@ -1 +0,0 @@
digest-not-match-data

View File

@@ -1,14 +1,9 @@
 [[group]]
 id = "some-buildpack-id"
 version = "some-buildpack-version"
-api = "0.7"
+api = "0.2"
 [[group]]
 id = "cacher_buildpack"
 version = "cacher_v1"
-api = "0.8"
+api = "0.3"
-[[group]]
-id = "corrupted_buildpack"
-version = "corrupted_v1"
-api = "0.8"

View File

@ -0,0 +1,107 @@
{
"architecture": "amd64",
"created": "0001-01-01T00:00:00Z",
"history": [
{
"created": "2023-03-01T03:18:00.301777816Z",
"created_by": "/bin/sh -c #(nop) ARG RELEASE",
"empty_layer": true
},
{
"created": "2023-03-01T03:18:00.360617499Z",
"created_by": "/bin/sh -c #(nop) ARG LAUNCHPAD_BUILD_ARCH",
"empty_layer": true
},
{
"created": "2023-03-01T03:18:00.41927153Z",
"created_by": "/bin/sh -c #(nop) LABEL org.opencontainers.image.ref.name=ubuntu",
"empty_layer": true
},
{
"created": "2023-03-01T03:18:00.485275751Z",
"created_by": "/bin/sh -c #(nop) LABEL org.opencontainers.image.version=18.04",
"empty_layer": true
},
{
"created": "2023-03-01T03:18:02.163454314Z",
"created_by": "/bin/sh -c #(nop) ADD file:66eb2ef5574cdf80bc0cb3af1637407620c1869f58cc7514395e3f5aea45cc3b in / "
},
{
"created": "2023-03-01T03:18:02.508220832Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
"empty_layer": true
},
{
"created": "2023-03-06T17:34:37.7930108Z",
"created_by": "ARG cnb_uid=1234",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2023-03-06T17:34:37.7930108Z",
"created_by": "ARG cnb_gid=1000",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2023-03-06T17:34:37.7930108Z",
"created_by": "ENV CNB_USER_ID=1234",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2023-03-06T17:34:37.7930108Z",
"created_by": "ENV CNB_GROUP_ID=1000",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2023-03-06T17:34:37.7930108Z",
"created_by": "COPY ./container/ / # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2023-03-06T17:34:39.0316521Z",
"created_by": "RUN |2 cnb_uid=1234 cnb_gid=1000 /bin/sh -c groupadd cnb --gid ${cnb_gid} && useradd --uid ${cnb_uid} --gid ${cnb_gid} -m -s /bin/bash cnb # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"author": "kaniko",
"created": "0001-01-01T00:00:00Z",
"created_by": "Layer: 'RUN apt-get update && apt-get install -y curl', Created by extension: curl"
},
{
"author": "kaniko",
"created": "0001-01-01T00:00:00Z",
"created_by": "Layer: 'RUN apt-get update && apt-get install -y tree', Created by extension: tree"
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:52c5ca3e9f3bf4c13613fb3269982734b189e1e09563b65b670fc8be0e223e03",
"sha256:a44ae1c29c4868890bf64d0f9627291e9b80d6a80d8938c942c6a43b2f0cfde1",
"sha256:049685392ebd2ca7b262f19e854d6fd3ff60bcb7a96394422b528ea30d04ca67",
"sha256:60600f423214c27fd184ebc96ae765bf2b4703c9981fb4205d28dd35e7eec4ae",
"sha256:1d811b70500e2e9a5e5b8ca7429ef02e091cdf4657b02e456ec54dd1baea0a66"
]
},
"config": {
"Cmd": [
"/bin/bash"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"CNB_USER_ID=1234",
"CNB_GROUP_ID=1000",
"CNB_STACK_ID=stack-id-from-ext-tree"
],
"Labels": {
"io.buildpacks.rebasable": "false",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "18.04"
},
"User": "root"
}
}

View File

@@ -1,47 +0,0 @@
{
"architecture": "amd64",
"created": "0001-01-01T00:00:00Z",
"history": [
{
"author": "some-base-image-author",
"created": "2023-03-06T17:34:39.0316521Z",
"created_by": "FROM some-base-image"
},
{
"author": "kaniko",
"created": "0001-01-01T00:00:00Z",
"created_by": "Layer: 'RUN mkdir /some-dir && echo some-data > /some-dir/some-file && echo some-data > /some-file', Created by extension: first-extension"
},
{
"author": "kaniko",
"created": "0001-01-01T00:00:00Z",
"created_by": "Layer: 'RUN mkdir /some-other-dir && echo some-data > /some-other-dir/some-file && echo some-data > /some-other-file', Created by extension: second-extension"
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
"sha256:d8dea3a780ba766c08bd11800809652ce5e9eba50b7b94ac09cb7f5e98e07f08",
"sha256:36f3735021a89a605c3da10b9659f0ec69e7c4c72abc802dc32471f1b080fd78"
]
},
"config": {
"Cmd": [
"/bin/bash"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"CNB_USER_ID=1234",
"CNB_GROUP_ID=1000",
"CNB_STACK_ID=some-stack-id"
],
"Labels": {
"io.buildpacks.rebasable": "false",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "18.04"
},
"User": "root"
}
}

View File

@@ -1,26 +0,0 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 2771,
"digest": "sha256:2dc6ef9f627c01f3f9e4f735c90f0251b5adaf6ad5685c5afb5cf638412fad67"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 26711153,
"digest": "sha256:0064b1b97ec0775813740e8cb92821a6d84fd38eee70bafba9c12d9c37534661"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 38445484,
"digest": "sha256:65c2873d397056a5cb4169790654d787579b005f18b903082b177d4d9b4aecf5"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 146545,
"digest": "sha256:0fb9b88c9cbe9f11b4c8da645f390df59f5949632985a0bfc2a842ef17b2ad18"
}
]
}

View File

@@ -0,0 +1,36 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 2771,
"digest": "sha256:259d97828cc7af9f2f9c62ce4caf6c9eac9d0898b114734ee259a698858debd1"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 26711153,
"digest": "sha256:0064b1b97ec0775813740e8cb92821a6d84fd38eee70bafba9c12d9c37534661"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 10083299,
"digest": "sha256:b10e5b54dba2a40f7d836f210ea43d7ae6a908f125374b2f70a5ac538b59e8a6"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 4698,
"digest": "sha256:b9f98a18bb36447e4f8a0420bcdb6f8f7a7494036a42a83265daf3e53f593d20"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 38445484,
"digest": "sha256:482346d1e0c7afa2514ec366d2e000e0667d0a6664690aab3c8ad51c81915b91"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 146545,
"digest": "sha256:0c5f7a6fe14dbd19670f39e7466051cbd40b3a534c0812659740fb03e2137c1a"
}
]
}

View File

@@ -4,7 +4,7 @@
 {
 "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
 "size": 1083,
-"digest": "sha256:40007d6086160bcdf45770ed12d23f0c594013cf0cd5e65ffc67be8f46e0d9c9"
+"digest": "sha256:c72eda1cc5b6c41360b95d73f881eff9d0655e56c693a2bc6cc9312c9e70aa24"
 }
 ]
 }

View File

@@ -1,9 +0,0 @@
ARG base_image
FROM ${base_image}
USER root
RUN apt-get update && apt-get install -y curl
COPY build-file /
ARG build_id=0
RUN echo ${build_id}

View File

@@ -1,15 +0,0 @@
ARG base_image
FROM ${base_image}
USER root
RUN apt-get update && apt-get install -y curl
COPY run-file /
ARG build_id=0
RUN echo ${build_id}
# This should not create a layer as the path should be ignored by kaniko
RUN echo "some-content" > /workspace/some-file
ARG user_id
USER ${user_id}

View File

@@ -1,11 +0,0 @@
ARG base_image
FROM ${base_image}
USER root
RUN apt-get update && apt-get install -y tree
COPY shared-file /shared-build
ENV CNB_STACK_ID=stack-id-from-ext-tree
ARG build_id=0
RUN echo ${build_id}

View File

@@ -1,17 +0,0 @@
ARG base_image
FROM ${base_image}
USER root
RUN apt-get update && apt-get install -y tree
COPY shared-file /shared-run
ENV CNB_STACK_ID=stack-id-from-ext-tree
ARG build_id=0
RUN echo ${build_id}
# tree is not really rebasable, but we set the label here to test that the label in the extended image is set correctly
LABEL io.buildpacks.rebasable=true
ARG user_id
USER ${user_id}

View File

@@ -1,4 +1,4 @@
-FROM golang:1.24.6 as builder
+FROM golang:1.20 as builder
 COPY exec.d/ /go/src/exec.d
 RUN GO111MODULE=off go build -o helper ./src/exec.d
@@ -7,9 +7,9 @@ FROM ubuntu:bionic
 COPY linux/container /
-RUN rm /layers/0.9_buildpack/some_layer/exec.d/exec.d-checker/.gitkeep
+RUN rm /layers/0.5_buildpack/some_layer/exec.d/exec.d-checker/.gitkeep
-COPY --from=builder /go/helper /layers/0.9_buildpack/some_layer/exec.d/helper
+COPY --from=builder /go/helper /layers/0.5_buildpack/some_layer/exec.d/helper
-COPY --from=builder /go/helper /layers/0.9_buildpack/some_layer/exec.d/exec.d-checker/helper
+COPY --from=builder /go/helper /layers/0.5_buildpack/some_layer/exec.d/exec.d-checker/helper
 ENV PATH="/cnb/process:/cnb/lifecycle:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

View File

@@ -0,0 +1,16 @@
FROM golang:1.20-nanoserver-1809
COPY exec.d/ /go/src/exec.d
WORKDIR /go/src
ENV GO111MODULE=off
RUN go build -o helper.exe exec.d
COPY windows/container /
RUN mkdir c:\layers\0.6_buildpack\some_layer\exec.d\exec.d-checker
RUN copy helper.exe c:\layers\0.6_buildpack\some_layer\exec.d\helper.exe
RUN copy helper.exe c:\layers\0.6_buildpack\some_layer\exec.d\exec.d-checker\helper.exe
ENV PATH="c:\cnb\process;c:\cnb\lifecycle;C:\Windows\system32;C:\Windows;"
ENTRYPOINT ["c:\\cnb\\lifecycle\\launcher"]

View File

@@ -1,4 +1,5 @@
-//go:build unix
+//go:build linux || darwin
+// +build linux darwin
 package main

View File

@@ -1,83 +1,83 @@
 [[buildpacks]]
-id = "0.9/buildpack"
+id = "0.5/buildpack"
 version = "0.0.1"
-api = "0.9"
+api = "0.5"
 [[buildpacks]]
-id = "0.8/buildpack"
+id = "0.4/buildpack"
 version = "0.0.1"
-api = "0.8"
+api = "0.4"
 [[buildpacks]]
-id = "0.7/buildpack"
+id = "0.3/buildpack"
 version = "0.0.1"
-api = "0.7"
+api = "0.3"
 [[processes]]
 type = "web"
 command = "echo"
 args = ["Executing web process-type"]
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "direct-process"
 command = "echo"
 args = ["Executing direct-process process-type"]
 direct = true
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "indirect-process-with-args"
 command = "printf"
 args = ["'%s' '%s'", "$VAR1", "$VAR2"]
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "legacy-indirect-process-with-args"
 command = "printf \"'%s' '%s'\" \"$0\" \"$1\""
 args = ["arg", "arg with spaces"]
 direct = false
-buildpack-id = "0.7/buildpack"
+buildpack-id = "0.3/buildpack"
 [[processes]]
 type = "legacy-indirect-process-with-incorrect-args"
 command = "printf"
 args = ["'%s' '%s'", "arg", "arg with spaces"]
 direct = false
-buildpack-id = "0.7/buildpack"
+buildpack-id = "0.3/buildpack"
 [[processes]]
 type = "indirect-process-with-script"
 command = "printf \"'%s' '%s'\" \"$VAR1\" \"$VAR2\""
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "profile-checker"
 command = "echo"
 args = ["$VAR_FROM_PROFILE"]
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "exec.d-checker"
 command = "printf"
 args = ['VAR_FROM_EXEC_D: "%s"', "$VAR_FROM_EXEC_D"]
 direct = false
-buildpack-id = "0.9/buildpack"
+buildpack-id = "0.5/buildpack"
 [[processes]]
 type = "worker"
 command = "echo"
 args = ["$WORKER_VAR"]
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"
 [[processes]]
 type = "process.with.period"
 command = "echo"
 args = ["Executing process.with.period process-type"]
 direct = false
-buildpack-id = "0.8/buildpack"
+buildpack-id = "0.4/buildpack"

View File

@@ -1,6 +1,6 @@
 [[buildpacks]]
 id = "some-buildpack"
-api = "0.9"
+api = "0.5"
 [[processes]]
 type = "web"

View File

@@ -1,12 +1,12 @@
 [[buildpacks]]
-id = "0.9/buildpack"
+id = "0.6/buildpack"
 version = "0.0.1"
-api = "0.9"
+api = "0.6"
 [[buildpacks]]
 id = "some/buildpack"
 version = "0.0.1"
-api = "0.8"
+api = "0.4"
 [[processes]]
 type = "web"
@@ -48,7 +48,7 @@
 command = "cmd"
 args = ["/c", "echo VAR_FROM_EXEC_D: %VAR_FROM_EXEC_D%"]
 direct = false
-buildpack-id = "0.9/buildpack"
+buildpack-id = "0.5/buildpack"
 [[processes]]
 type = "process.with.period"

View File

@@ -1,6 +1,6 @@
 [[buildpacks]]
 id = "some-buildpack"
-api = "0.9"
+api = "0.5"
 [[processes]]
 type = "web"

View File

@@ -1,3 +0,0 @@
FROM ubuntu:bionic
COPY ./container/ /

View File

@@ -8,10 +8,10 @@ ENV CNB_GROUP_ID=${cnb_gid}
 COPY ./container/ /
-# We have to pre-create the tar files so that their digests do not change due to timestamps
-# But, ':' in the filepath on Windows is not allowed
+# turn /to_cache/<buildpack> directories into cache tarballs
+# these are referenced by sha in /cache/committed/io.buildpacks.lifecycle.cache.metadata
-RUN mv /cache/committed/sha256_2d9c9c638d5c4f0df067eeae7b9c99ad05776a89d19ab863c28850a91e5f2944.tar /cache/committed/sha256:2d9c9c638d5c4f0df067eeae7b9c99ad05776a89d19ab863c28850a91e5f2944.tar
+RUN tar cvf /cache/committed/sha256:b89860e2f9c62e6b5d66d3ce019e18cdabae30273c25150b7f20a82f7a70e494.tar -C /to_cache/cacher_buildpack layers
-RUN mv /cache/committed/sha256_430338f576c11e5236669f9c843599d96afe28784cffcb2d46ddb07beb00df78.tar /cache/committed/sha256:430338f576c11e5236669f9c843599d96afe28784cffcb2d46ddb07beb00df78.tar
+RUN tar cvf /cache/committed/sha256:58bafa1e79c8e44151141c95086beb37ca85b69578fc890bce33bb4c6c8e851f.tar -C /to_cache/unused_buildpack layers
 ENTRYPOINT ["/cnb/lifecycle/restorer"]

View File

@@ -1,43 +1 @@
-{
-"buildpacks": [
-{
-"key": "cacher_buildpack",
-"version": "cacher_v1",
-"layers": {
-"cached-layer": {
-"sha": "sha256:2d9c9c638d5c4f0df067eeae7b9c99ad05776a89d19ab863c28850a91e5f2944",
-"data": null,
-"build": false,
-"launch": false,
-"cache": true
-}
-}
-},
-{
-"key": "corrupted_buildpack",
-"version": "corrupted_v1",
-"layers": {
-"corrupted-layer": {
-"sha": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
-"data": null,
-"build": false,
-"launch": false,
-"cache": true
-}
-}
-},
-{
-"key": "unused_buildpack",
-"version": "v1",
-"layers": {
-"cached-layer": {
-"sha": "sha256:430338f576c11e5236669f9c843599d96afe28784cffcb2d46ddb07beb00df78",
-"data": null,
-"build": false,
-"launch": false,
-"cache": true
-}
-}
-}
-]
-}
+{"buildpacks":[{"key":"cacher_buildpack","version":"cacher_v1","layers":{"cached-layer":{"sha":"sha256:b89860e2f9c62e6b5d66d3ce019e18cdabae30273c25150b7f20a82f7a70e494","data":null,"build":false,"launch":false,"cache":true}}},{"key":"unused_buildpack","version":"v1","layers":{"cached-layer":{"sha":"sha256:58bafa1e79c8e44151141c95086beb37ca85b69578fc890bce33bb4c6c8e851f","data":null,"build":false,"launch":false,"cache":true}}}]}

View File

@@ -1 +1 @@
-sha256:2d9c9c638d5c4f0df067eeae7b9c99ad05776a89d19ab863c28850a91e5f2944
+sha256:b89860e2f9c62e6b5d66d3ce019e18cdabae30273c25150b7f20a82f7a70e494

View File

@@ -1,4 +1,3 @@
-[types]
 build = false
 launch = false
 cache = true

View File

@@ -0,0 +1,3 @@
build = false
launch = false
cache = true

View File

@@ -1,19 +1,9 @@
 [[group]]
 id = "some-buildpack-id"
 version = "some-buildpack-version"
-api = "0.10"
+api = "0.2"
 [[group]]
 id = "cacher_buildpack"
 version = "cacher_v1"
-api = "0.10"
+api = "0.3"
-[[group]]
-id = "corrupted_buildpack"
-version = "corrupted_v1"
-api = "0.11"
-[[group-extensions]]
-id = "some-extension-id"
-version = "v1"
-api = "0.10"

View File

@@ -1,3 +1,3 @@
 [run-image]
-reference = "some-reference-without-digest"
+reference = ""
 image = "REPLACE"

Some files were not shown because too many files have changed in this diff.