Compare commits

..

38 Commits

Author SHA1 Message Date
Daniel Jiang aac2468a8a
Pin trivy adapter to v0.32.4 (#21967)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-05-06 19:20:01 +08:00
Wang Yan ff62f305d8
bump up the base image for v2.12.3 (#21948)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-29 19:04:29 +08:00
Wang Yan 4b3d095df4
build base for patch release (#21934)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-25 19:12:46 +08:00
Daniel Jiang f85f718af4
Bump up trivy adapter (#21917)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-04-23 16:36:31 +08:00
Wang Yan c4249ad2ad
build base for v2.12.3 (#21901)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-18 19:57:40 +08:00
Wang Yan e5a8bd9013
upgrade the build machine to ubuntu 22 (#21899)
Per https://github.com/actions/runner-images/issues/11101, Ubuntu 20.04 is out of support. This change bumps the GitHub Actions machine to 22.04.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-18 18:37:47 +08:00
Wang Yan 9068bfad87
upgrade dependencies version (#21897)
* upgrade dependencies version

1. refresh base photon images.
2. update the golang version.
3. update the dependencies.

Signed-off-by: wang yan <wangyan@vmware.com>

* cherry-pick session updates

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-18 16:31:21 +08:00
Prasanth Baskar 182ab72521
Update UI Version to 2.12.0 in package.json (#21607)
version update to 2.12.0 in package.json

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-03-24 16:42:30 +01:00
Prasanth Baskar 7a64a14ede
[cherry-pick] Fix Incorrect Data Display in Replications Table (#21461) (#21604)
Fix: Incorrect Data Display in Replications Table (#21461)

fix incorrect data display in replications table

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-02-10 16:45:49 +08:00
stonezdj(Daojun Zhang) 73072d0d88
Change current version to v2.12.2 (#21426)
Change current patch version to v2.12.2

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-16 21:53:45 +08:00
Daniel Jiang fab5f8b07a
Bump up trivy to v0.58.2, trivy adapter to v0.32.3 (#21417)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-01-15 09:46:24 +08:00
Wang Yan fc6f5d33dd
bump base images (#21412)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-14 09:58:55 +00:00
stonezdj(Daojun Zhang) c92d4cfe2d
Update the ping registry endpoint to harbor in check permission script (#21393)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-09 12:48:37 +08:00
stonezdj(Daojun Zhang) 4f65b4a642
Update the robot account testcase (#21374)
Because the permission of export-cve moved from system to project

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-06 14:17:26 +08:00
Wang Yan 80219b7d47
bump go dep (#21338)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-20 13:27:05 +08:00
Daniel Jiang 7d7a415e05
Pin trivy adapter v0.32.2 (#21337)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-12-19 08:33:36 +00:00
stonezdj(Daojun Zhang) 3c53c3451c
[cherry-pick] Update the testcase for v2.12.0, changes include (#21114)
Update the testcase for v2.12.0, changes include

* pull image from registry.goharbor.io instead of dockerhub
* Update testcase to support Docker Image Can Be Pulled With Credential
* Change gitlab project name when user changed.
* Update permissions count and permission count total
* Change webhook_endpoint_ui

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-12-19 10:14:28 +08:00
Wang Yan 42041280fb
fix export cve permission issue (#21327)
The export CVE permission should be included in the project scope, as the API relies on project-level judgment.

Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-17 15:41:07 +08:00
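
To illustrate the project-level judgment described above: the CVE export API is invoked against specific projects, so the caller needs the export-cve permission inside those projects rather than a system-wide grant. A rough sketch of such a call follows; the endpoint path, header, and body fields are assumptions based on the v2.x API, not part of this changeset.

```sh
# Hypothetical export-CVE request; authorization is evaluated against the
# projects listed in the body, matching the project-scoped permission above.
curl -u "$HARBOR_USER:$HARBOR_PASS" \
  -H "Content-Type: application/json" \
  -H "X-Scan-Data-Type: application/vnd.security.vulnerability.report; version=1.1" \
  -d '{"job_name": "weekly-cve-export", "projects": [1]}' \
  "https://$HARBOR_HOST/api/v2.0/export/cve"
```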
Daniel Jiang d26da85251
Bump up trivy adapter to fix a CVE (#21322)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-12-16 10:24:37 +08:00
stonezdj(Daojun Zhang) b173bfedf1
chore(deps): bump golang.org/x/crypto from 0.29.0 to 0.31.0 in /src (#21316)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-13 09:28:17 +00:00
stonezdj(Daojun Zhang) 67147ea3d2
refresh base image (#21314)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-12-13 16:42:55 +08:00
Wang Yan d1350cd4cd
fix robot account creation issue (#21313)
fixes #21251

Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-13 06:24:38 +00:00
Daniel Jiang 6eea45d9fb
Bump up to use trivy adapter v0.32.1 (GAed version) (#21308)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-12-13 11:12:06 +08:00
Wang Yan db7ec52838
bump go version (#21305)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-10 16:07:11 +08:00
stonezdj(Daojun Zhang) 5dc78bc735
Bump up base version (#21295)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-12-09 13:53:05 +08:00
Daniel Jiang fc0482ae73
Bump up trivy to v0.57.1 and trivy-adapter to v0.32.1-rc.1 (#21289)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-12-06 12:54:36 +08:00
Wang Yan cdb0d2fc31
revert change of artifact event (#21278)
fixes #20897
Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-04 14:11:55 +08:00
Wang Yan b72b5cec13
[cherry-pick] fix robot deletion event (#21234) (#21272)
fix robot deletion event (#21234)

* fix robot deletion event



* resolve comments



---------

Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-03 16:44:42 +08:00
Wang Yan 7201b3bd5a
remove asc files handling (#21216)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-11-19 16:06:44 +08:00
miner 61d0796bc8
remove slack notification release-2.12.0 (#21186)
remove slack notification

Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2024-11-14 14:01:12 +08:00
Daniel Jiang bb20c648e9
Bump up to trivy adapter v0.32.0 (#21135)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-11-14 04:10:51 +08:00
Wang Yan 9da38ae048
refresh base (#21126)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-11-05 14:59:44 +08:00
Daniel Jiang aaf23a8994
Bump up trivy adapter to v0.32.0-rc.2 (#21129)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-11-04 19:13:01 +08:00
Wang Yan cfe421ebdd
fix release script (#21101)
Since we will not ship the asc files as of v2.12, the steps to handle signatures need to be removed.

Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-28 19:27:44 +08:00
Wang Yan b3dab7a93c
[cherry-pick] Update fr-fr-lang.json (#21082) (#21097)
Update fr-fr-lang.json (#21082)

Updated untranslated French strings

Signed-off-by: tostt <tostt@users.noreply.github.com>
Co-authored-by: tostt <tostt@users.noreply.github.com>
2024-10-25 17:53:56 +08:00
Wang Yan b11237ccdc
refresh base for v2.12 (#21095)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-25 06:50:43 +00:00
miner 098e79a376
bump up dependencies (#21092)
Signed-off-by: yminer <yminer@vmware.com>
2024-10-25 05:39:52 +00:00
Wang Yan 08fe4d3a0f
fix build package issue (#21090)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-24 17:38:21 +08:00
1328 changed files with 10290 additions and 25683 deletions

View File

@ -8,6 +8,27 @@
* Add date here... Add signature here...
- Add your reason here...
* Apr 29 2025 <yan-yw.wang@broadcom.com>
- Refresh base image
* Apr 25 2025 <yan-yw.wang@broadcom.com>
- Refresh base image
* Apr 18 2025 <yan-yw.wang@broadcom.com>
- Refresh base image
* Jul 15 2021 <stone.zhang@broadcom.com>
- refresh base image
* Jan 14 2025 <yan-yw.wang@broadcom.com>
- Refresh base image
* Nov 04 2024 <yan-yw.wang@broadcom.com>
- Refresh base image
* Oct 25 2024 <yan-yw.wang@broadcom.com>
- Refresh base image
* Oct 24 2024 <yan-yw.wang@broadcom.com>
- Refresh base image
@ -30,4 +51,4 @@
- Refresh base image
* Jul 15 2021 <danfengl@vmware.com>
- Create this file to trigger build base action in buld-package workflow
- Create this file to trigger build base action in buld-package workflow

View File

@ -10,6 +10,8 @@ assignees:
- OrlinVasilev
- stonezdj
- chlins
- zyyw
- MinerYang
- AllForNothing
numberOfAssignees: 3

View File

@ -44,7 +44,7 @@ jobs:
- name: Set up Go 1.23
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -89,9 +89,9 @@ jobs:
bash ./tests/showtime.sh ./tests/ci/ut_run.sh $IP
df -h
- name: Codecov For BackEnd
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@v4
with:
files: ./src/github.com/goharbor/harbor/profile.cov
file: ./src/github.com/goharbor/harbor/profile.cov
flags: unittests
APITEST_DB:
@ -105,7 +105,7 @@ jobs:
- name: Set up Go 1.23
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -160,7 +160,7 @@ jobs:
- name: Set up Go 1.23
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -215,7 +215,7 @@ jobs:
- name: Set up Go 1.23
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -268,7 +268,7 @@ jobs:
- name: Set up Go 1.23
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -331,7 +331,7 @@ jobs:
bash ./tests/showtime.sh ./tests/ci/ui_ut_run.sh
df -h
- name: Codecov For UI
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@v4
with:
files: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
file: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
flags: unittests

View File

@ -15,16 +15,18 @@ jobs:
runs-on:
- ubuntu-22.04
steps:
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
- uses: actions/checkout@v3
- uses: 'google-github-actions/auth@v2'
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
- run: gcloud info
- name: Set up Go 1.22
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- name: Setup Docker
uses: docker-practice/actions-setup-docker@master
@ -87,8 +89,8 @@ jobs:
else
build_base_params=" BUILD_BASE=true PUSHBASEIMAGE=true REGISTRYUSER=\"${{ secrets.DOCKER_HUB_USERNAME }}\" REGISTRYPASSWORD=\"${{ secrets.DOCKER_HUB_PASSWORD }}\""
fi
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
harbor_offline_build_bundle=$(basename harbor-offline-installer-*.tgz)
harbor_online_build_bundle=$(basename harbor-online-installer-*.tgz)
echo "Package name is: $harbor_offline_build_bundle"

View File

@ -17,16 +17,18 @@ jobs:
#- self-hosted
- ubuntu-latest
steps:
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
- uses: actions/checkout@v3
- id: 'auth'
name: 'Authenticate to Google Cloud'
uses: google-github-actions/auth@v2
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
- run: gcloud info
- name: Set up Go 1.21
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.8
id: go
- uses: actions/checkout@v3
with:
@ -63,5 +65,6 @@ jobs:
- name: upload test result to gs
run: |
cd src/github.com/goharbor/harbor
aws s3 cp ./distribution-spec/conformance/report.html s3://harbor-conformance-test/report.html
gsutil cp ./distribution-spec/conformance/report.html gs://harbor-conformance-test/report.html
gsutil acl ch -u AllUsers:R gs://harbor-conformance-test/report.html
if: always()

View File

@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.1.0
- uses: actions/stale@v9.0.0
with:
stale-issue-message: 'This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.'
stale-pr-message: 'This PR is being marked stale due to a period of inactivty. If this PR is still relevant, please comment or remove the stale label. Otherwise, this PR will close in 30 days.'

View File

@ -12,7 +12,7 @@ jobs:
matrix:
# maintain the versions of harbor that need to be actively
# security scanned
versions: [dev, v2.12.0-dev]
versions: [dev, v2.11.0-dev]
# list of images that need to be scanned
images: [harbor-core, harbor-db, harbor-exporter, harbor-jobservice, harbor-log, harbor-portal, harbor-registryctl, prepare]
permissions:
@ -30,11 +30,7 @@ jobs:
format: 'template'
template: '@/contrib/sarif.tpl'
output: 'trivy-results.sarif'
env:
# Use AWS' ECR mirror for the trivy-db image, as GitHub's Container
# Registry is returning a TOOMANYREQUESTS error.
# Ref: https://github.com/aquasecurity/trivy-action/issues/389
TRIVY_DB_REPOSITORY: 'public.ecr.aws/aquasecurity/trivy-db:2'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v3
with:

View File

@ -9,9 +9,6 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
push:
paths:
- 'docs/**'
@ -20,9 +17,6 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
jobs:
UTTEST:

View File

@ -19,12 +19,12 @@ jobs:
echo "PRE_TAG=$(echo $release | jq -r '.body' | jq -r '.preTag')" >> $GITHUB_ENV
echo "BRANCH=$(echo $release | jq -r '.target_commitish')" >> $GITHUB_ENV
echo "PRERELEASE=$(echo $release | jq -r '.prerelease')" >> $GITHUB_ENV
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
- uses: 'google-github-actions/auth@v2'
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
- name: Prepare Assets
run: |
if [ ! ${{ env.BUILD_NO }} -o ${{ env.BUILD_NO }} = "null" ]
@ -39,8 +39,8 @@ jobs:
src_online_package=harbor-online-installer-${{ env.BASE_TAG }}-${{ env.BUILD_NO }}.tgz
dst_offline_package=harbor-offline-installer-${{ env.CUR_TAG }}.tgz
dst_online_package=harbor-online-installer-${{ env.CUR_TAG }}.tgz
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
assets_path=$(pwd)/assets
source tools/release/release_utils.sh && getAssets ${{ secrets.HARBOR_RELEASE_BUILD }} ${{ env.BRANCH }} $dst_offline_package $dst_online_package ${{ env.PRERELEASE }} $assets_path

.gitignore
View File

@ -50,7 +50,6 @@ src/portal/cypress/screenshots
**/aot
**/dist
**/.bin
**/robotvars.py
src/core/conf/app.conf
src/server/v2.0/models/
@ -59,4 +58,3 @@ src/server/v2.0/restapi/
harborclient/
openapi-generator-cli.jar
tests/e2e_setup/robotvars.py

View File

@ -31,10 +31,10 @@ API explorer integration. End users can now explore and trigger Harbor's API v
* Support Image Retag, enables the user to tag image to different repositories and projects, this is particularly useful in cases when images need to be retagged programmatically in a CI pipeline.
* Support Image Build History, makes it easy to see the contents of a container image, refer to the [User Guide](https://github.com/goharbor/harbor/blob/release-1.7.0/docs/user_guide.md#build-history).
* Support Logger customization, enables the user to customize STDOUT / STDERR / FILE / DB logger of running jobs.
* Improve the user experience of Helm Chart Repository:
- Chart searching is included in the global search results
- Show the total number of chart versions in the chart list
- Mark labels in helm charts
* Improve user experience of Helm Chart Repository:
- Chart searching included in the global search results
- Show chart versions total number in the chart list
- Mark labels to helm charts
- The latest version can be downloaded as default one on the chart list view
- The chart can be deleted by deleting all the versions under it
@ -58,7 +58,7 @@ API explorer integration. End users can now explore and trigger Harbor's API v
- Replication policy rework to support wildcard, scheduled replication.
- Support repository level description.
- Batch operation on projects/repositories/users from UI.
- On board LDAP user when adding a member to a project.
- On board LDAP user when adding member to a project.
## v1.3.0 (2018-01-04)
@ -75,11 +75,11 @@ API explorer integration. End users can now explore and trigger Harbor's API v
## v1.1.0 (2017-04-18)
- Add in Notary support
- User can update the configuration through Harbor UI
- User can update configuration through Harbor UI
- Redesign of Harbor's UI using Clarity
- Some changes to API
- Fix some security issues in the token service
- Upgrade the base image of nginx to the latest openssl version
- Fix some security issues in token service
- Upgrade base image of nginx for latest openssl version
- Various bug fixes.
## v0.5.0 (2016-12-6)
@ -88,7 +88,7 @@ API explorer integration. End users can now explore and trigger Harbor's API v
- Easier configuration for HTTPS in prepare script
- Script to collect logs of a Harbor deployment
- User can view the storage usage (default location) of Harbor.
- Add an attribute to disable normal users from creating projects.
- Add an attribute to disable normal user to create project
- Various bug fixes.
For Harbor virtual appliance:

View File

@ -14,7 +14,7 @@ Contributors are encouraged to collaborate using the following resources in addi
* Chat with us on the CNCF Slack ([get an invitation here][cncf-slack] )
* [#harbor][users-slack] for end-user discussions
* [#harbor-dev][dev-slack] for development of Harbor
* Want long-form communication instead of Slack? We have two distribution lists:
* Want long-form communication instead of Slack? We have two distributions lists:
* [harbor-users][users-dl] for end-user discussions
* [harbor-dev][dev-dl] for development of Harbor
@ -49,7 +49,7 @@ To build the project, please refer the [build](https://goharbor.io/docs/edge/bui
### Repository Structure
Here is the basic structure of the Harbor code base. Some key folders / files are commented for your reference.
Here is the basic structure of the harbor code base. Some key folders / files are commented for your references.
```
.
...
@ -166,16 +166,14 @@ Harbor backend is written in [Go](http://golang.org/). If you don't have a Harbo
| 2.9 | 1.21.3 |
| 2.10 | 1.21.8 |
| 2.11 | 1.22.3 |
| 2.12 | 1.23.2 |
| 2.13 | 1.23.8 |
| 2.14 | 1.24.5 |
| 2.12 | 1.23.8 |
Ensure your GOPATH and PATH have been configured in accordance with the Go environment instructions.
#### Web
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup a web UI development environment, please make sure that the [npm](https://www.npmjs.com/get-npm) tool is installed first.
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup web UI development environment, please make sure the [npm](https://www.npmjs.com/get-npm) tool is installed first.
| Harbor | Requires Angular | Requires Clarity |
|----------|--------------------|--------------------|
@ -205,7 +203,7 @@ PR are always welcome, even if they only contain small fixes like typos or a few
Please submit a PR broken down into small changes bit by bit. A PR consisting of a lot of features and code changes may be hard to review. It is recommended to submit PRs in an incremental fashion.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature completed.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature complete.
### Fork and clone
@ -279,7 +277,7 @@ To build the code, please refer to [build](https://goharbor.io/docs/edge/build-c
**Note**: from v2.0, Harbor uses [go-swagger](https://github.com/go-swagger/go-swagger) to generate API server from Swagger 2.0 (aka [OpenAPI 2.0](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md)). To add or change the APIs, first update the `api/v2.0/swagger.yaml` file, then run `make gen_apis` to generate the API server, finally, implement or update the API handlers in `src/server/v2.0/handler` package.
As Harbor now uses `controller/manager/dao` programming model, we suggest using [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
As now Harbor uses `controller/manager/dao` programming model, we suggest to use [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
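
As a minimal sketch of the swagger and mockery loop described in the two paragraphs above (targets and file paths are the ones named there; on this release branch `gen_apis` also runs `lint_apis`, per the Makefile change later in this compare):

```sh
# 1. Update the API spec first.
$EDITOR api/v2.0/swagger.yaml

# 2. Regenerate the API server from the spec.
make gen_apis

# 3. Add a mock config entry for the new/changed interface, then regenerate mocks.
$EDITOR src/.mockery.yaml
make gen_mocks
```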
### Keep sync with upstream
@ -314,19 +312,19 @@ The commit message should follow the convention on [How to Write a Git Commit Me
To help write conformant commit messages, it is recommended to set up the [git-good-commit](https://github.com/tommarshall/git-good-commit) commit hook. Run this command in the Harbor repo's root directory:
```sh
curl https://cdn.jsdelivr.net/gh/tommarshall/git-good-commit@v0.6.1/hook.sh > .git/hooks/commit-msg && chmod +x .git/hooks/commit-msg
curl https://cdn.rawgit.com/tommarshall/git-good-commit/v0.6.1/hook.sh > .git/hooks/commit-msg && chmod +x .git/hooks/commit-msg
```
### Automated Testing
Once your pull request has been opened, Harbor will run two CI pipelines against it.
Once your pull request has been opened, harbor will run two CI pipelines against it.
1. In the travis CI, your source code will be checked via `golint`, `go vet` and `go race` that makes sure the code is readable, safe and correct. Also, all of unit tests will be triggered via `go test` against the pull request. What you need to pay attention to is the travis result and the coverage report.
* If any failure in travis, you need to figure out whether it is introduced by your commits.
* If the coverage dramatically declines, then you need to commit a unit test to cover your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in google storage for later analysis. The pipeline is about to build and install harbor from source code, then to run four very basic E2E tests to validate the basic functionalities of Harbor, like:
* Registry Basic Verification, to validate that the image can be pulled and pushed successfully.
* Trivy Basic Verification, to validate that the image can be scanned successfully.
* Notary Basic Verification, to validate that the image can be signed successfully.
* Ldap Basic Verification, to validate that Harbor can work in LDAP environment.
* If the coverage dramatic decline, you need to commit unit test to coverage your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in google storage for later analysis. The pipeline is about to build and install harbor from source code, then to run four very basic E2E tests to validate the basic functionalities of harbor, like:
* Registry Basic Verification, to validate the image can be pulled and pushed successful.
* Trivy Basic Verification, to validate the image can be scanned successful.
* Notary Basic Verification, to validate the image can be signed successful.
* Ldap Basic Verification, to validate harbor can work in LDAP environment.
### Push and Create PR
When ready for review, push your branch to your fork repository on `github.com`:
@ -345,7 +343,7 @@ Commit changes made in response to review comments to the same branch on your fo
It is a great way to contribute to Harbor by reporting an issue. Well-written and complete bug reports are always welcome! Please open an issue on GitHub and follow the template to fill in required information.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplicate.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplication.
If you find a match, you can "subscribe" to it to get notified on updates. If you have additional helpful information about the issue, please leave a comment.
When reporting issues, always include:

View File

@ -78,10 +78,8 @@ REGISTRYSERVER=
REGISTRYPROJECTNAME=goharbor
DEVFLAG=true
TRIVYFLAG=false
EXPORTERFLAG=false
HTTPPROXY=
BUILDREG=true
BUILDTRIVYADP=true
BUILDBIN=true
NPM_REGISTRY=https://registry.npmjs.org
BUILDTARGET=build
GEN_TLS=
@ -93,12 +91,7 @@ VERSIONTAG=dev
BUILD_BASE=true
PUSHBASEIMAGE=false
BASEIMAGETAG=dev
# for skip build prepare and log container while BUILD_INSTALLER=false
BUILD_INSTALLER=true
BUILDBASETARGET=trivy-adapter core db jobservice nginx portal redis registry registryctl exporter
ifeq ($(BUILD_INSTALLER), true)
BUILDBASETARGET += prepare log
endif
BUILDBASETARGET=trivy-adapter core db jobservice log nginx portal prepare redis registry registryctl exporter
IMAGENAMESPACE=goharbor
BASEIMAGENAMESPACE=goharbor
# #input true/false only
@ -111,14 +104,13 @@ PREPARE_VERSION_NAME=versions
#versions
REGISTRYVERSION=v2.8.3-patch-redis
TRIVYVERSION=v0.61.0
TRIVYADAPTERVERSION=v0.33.0-rc.2
NODEBUILDIMAGE=node:16.18.0
TRIVYVERSION=v0.61.1
TRIVYADAPTERVERSION=v0.32.4
# version of registry for pulling the source code
REGISTRY_SRC_TAG=release/2.8
REGISTRY_SRC_TAG=v2.8.3
# source of upstream distribution code
DISTRIBUTION_SRC=https://github.com/goharbor/distribution.git
DISTRIBUTION_SRC=https://github.com/distribution/distribution.git
# dependency binaries
REGISTRYURL=https://storage.googleapis.com/harbor-builds/bin/registry/release-${REGISTRYVERSION}/registry
@ -135,7 +127,6 @@ endef
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build
DOCKERNETWORK=default
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERPULL=$(DOCKERCMD) pull
DOCKERIMAGES=$(DOCKERCMD) images
@ -151,7 +142,7 @@ GOINSTALL=$(GOCMD) install
GOTEST=$(GOCMD) test
GODEP=$(GOTEST) -i
GOFMT=gofmt -w
GOBUILDIMAGE=golang:1.24.5
GOBUILDIMAGE=golang:1.23.8
GOBUILDPATHINCONTAINER=/harbor
# go build
@ -245,27 +236,18 @@ REGISTRYUSER=
REGISTRYPASSWORD=
# cmds
DOCKERSAVE_PARA=$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
DOCKERSAVE_PARA=$(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_CORE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG) \
$(DOCKERIMAGENAME_DB):$(VERSIONTAG) \
$(DOCKERIMAGENAME_JOBSERVICE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_REGCTL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) \
$(IMAGENAMESPACE)/redis-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/nginx-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/registry-photon:$(VERSIONTAG)
ifeq ($(BUILD_INSTALLER), true)
DOCKERSAVE_PARA+= $(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG)
endif
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
ifeq ($(EXPORTERFLAG), true)
DOCKERSAVE_PARA+= $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG)
endif
PACKAGE_OFFLINE_PARA=-zcvf harbor-offline-installer-$(PKGVERSIONTAG).tgz \
$(HARBORPKG)/$(DOCKERIMGFILE).$(VERSIONTAG).tar.gz \
$(HARBORPKG)/prepare \
@ -282,6 +264,11 @@ PACKAGE_ONLINE_PARA=-zcvf harbor-online-installer-$(PKGVERSIONTAG).tgz \
DOCKERCOMPOSE_FILE_OPT=-f $(DOCKERCOMPOSEFILEPATH)/$(DOCKERCOMPOSEFILENAME)
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
RUNCONTAINER=$(DOCKERCMD) run --rm -u $(shell id -u):$(shell id -g) -v $(BUILDPATH):$(BUILDPATH) -w $(BUILDPATH)
# $1 the name of the docker image
@ -295,8 +282,8 @@ endef
# lint swagger doc
SPECTRAL_IMAGENAME=$(IMAGENAMESPACE)/spectral
SPECTRAL_VERSION=v6.14.2
SPECTRAL_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/spectral/Dockerfile --build-arg NODE=${NODEBUILDIMAGE} --build-arg SPECTRAL_VERSION=${SPECTRAL_VERSION} -t ${SPECTRAL_IMAGENAME}:$(SPECTRAL_VERSION) .
SPECTRAL_VERSION=v6.11.1
SPECTRAL_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/spectral/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg SPECTRAL_VERSION=${SPECTRAL_VERSION} -t ${SPECTRAL_IMAGENAME}:$(SPECTRAL_VERSION) .
SPECTRAL=$(RUNCONTAINER) $(SPECTRAL_IMAGENAME):$(SPECTRAL_VERSION)
lint_apis:
@ -304,7 +291,7 @@ lint_apis:
$(SPECTRAL) lint ./api/v2.0/swagger.yaml
SWAGGER_IMAGENAME=$(IMAGENAMESPACE)/swagger
SWAGGER_VERSION=v0.31.0
SWAGGER_VERSION=v0.25.0
SWAGGER=$(RUNCONTAINER) ${SWAGGER_IMAGENAME}:${SWAGGER_VERSION}
SWAGGER_GENERATE_SERVER=${SWAGGER} generate server --template-dir=$(TOOLSPATH)/swagger/templates --exclude-main --additional-initialism=CVE --additional-initialism=GC --additional-initialism=OIDC
SWAGGER_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/swagger/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg SWAGGER_VERSION=${SWAGGER_VERSION} -t ${SWAGGER_IMAGENAME}:$(SWAGGER_VERSION) .
@ -319,13 +306,13 @@ define swagger_generate_server
@$(SWAGGER_GENERATE_SERVER) -f $(1) -A $(3) --target $(2)
endef
gen_apis:
gen_apis: lint_apis
$(call prepare_docker_image,${SWAGGER_IMAGENAME},${SWAGGER_VERSION},${SWAGGER_IMAGE_BUILD_CMD})
$(call swagger_generate_server,api/v2.0/swagger.yaml,src/server/v2.0,harbor)
MOCKERY_IMAGENAME=$(IMAGENAMESPACE)/mockery
MOCKERY_VERSION=v2.53.3
MOCKERY_VERSION=v2.46.2
MOCKERY=$(RUNCONTAINER)/src ${MOCKERY_IMAGENAME}:${MOCKERY_VERSION}
MOCKERY_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/mockery/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg MOCKERY_VERSION=${MOCKERY_VERSION} -t ${MOCKERY_IMAGENAME}:$(MOCKERY_VERSION) .
@ -349,7 +336,7 @@ versions_prepare:
check_environment:
@$(MAKEPATH)/$(CHECKENVCMD)
compile_core: lint_apis gen_apis
compile_core: gen_apis
@echo "compiling binary for core (golang image)..."
@echo $(GOBUILDPATHINCONTAINER)
@$(DOCKERCMD) run --rm -v $(BUILDPATH):$(GOBUILDPATHINCONTAINER) -w $(GOBUILDPATH_CORE) $(GOBUILDIMAGE) $(GOIMAGEBUILD_CORE) -o $(GOBUILDPATHINCONTAINER)/$(GOBUILDMAKEPATH_CORE)/$(CORE_BINARYNAME)
@ -400,19 +387,17 @@ build:
echo Should pull base images from registry in docker configuration since no base images built. ; \
exit 1; \
fi
make -f $(MAKEFILEPATH_PHOTON)/Makefile $(BUILDTARGET) -e DEVFLAG=$(DEVFLAG) -e GOBUILDIMAGE=$(GOBUILDIMAGE) -e NODEBUILDIMAGE=$(NODEBUILDIMAGE) \
make -f $(MAKEFILEPATH_PHOTON)/Makefile $(BUILDTARGET) -e DEVFLAG=$(DEVFLAG) -e GOBUILDIMAGE=$(GOBUILDIMAGE) \
-e REGISTRYVERSION=$(REGISTRYVERSION) -e REGISTRY_SRC_TAG=$(REGISTRY_SRC_TAG) -e DISTRIBUTION_SRC=$(DISTRIBUTION_SRC)\
-e TRIVYVERSION=$(TRIVYVERSION) -e TRIVYADAPTERVERSION=$(TRIVYADAPTERVERSION) \
-e VERSIONTAG=$(VERSIONTAG) \
-e DOCKERNETWORK=$(DOCKERNETWORK) \
-e BUILDREG=$(BUILDREG) -e BUILDTRIVYADP=$(BUILDTRIVYADP) \
-e BUILD_INSTALLER=$(BUILD_INSTALLER) \
-e BUILDBIN=$(BUILDBIN) \
-e NPM_REGISTRY=$(NPM_REGISTRY) -e BASEIMAGETAG=$(BASEIMAGETAG) -e IMAGENAMESPACE=$(IMAGENAMESPACE) -e BASEIMAGENAMESPACE=$(BASEIMAGENAMESPACE) \
-e REGISTRYURL=$(REGISTRYURL) \
-e TRIVY_DOWNLOAD_URL=$(TRIVY_DOWNLOAD_URL) -e TRIVY_ADAPTER_DOWNLOAD_URL=$(TRIVY_ADAPTER_DOWNLOAD_URL) \
-e PULL_BASE_FROM_DOCKERHUB=$(PULL_BASE_FROM_DOCKERHUB) -e BUILD_BASE=$(BUILD_BASE) \
-e REGISTRYUSER=$(REGISTRYUSER) -e REGISTRYPASSWORD=$(REGISTRYPASSWORD) \
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE) -e GOBUILDIMAGE=$(GOBUILDIMAGE)
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE)
build_standalone_db_migrator: compile_standalone_db_migrator
make -f $(MAKEFILEPATH_PHOTON)/Makefile _build_standalone_db_migrator -e BASEIMAGETAG=$(BASEIMAGETAG) -e VERSIONTAG=$(VERSIONTAG)
@ -453,14 +438,7 @@ package_online: update_prepare_version
@rm -rf $(HARBORPKG)
@echo "Done."
.PHONY: check_buildinstaller
check_buildinstaller:
@if [ "$(BUILD_INSTALLER)" != "true" ]; then \
echo "Must set BUILD_INSTALLER as true while triggering package_offline build" ; \
exit 1; \
fi
package_offline: check_buildinstaller update_prepare_version compile build
package_offline: update_prepare_version compile build
@echo "packing offline package ..."
@cp -r make $(HARBORPKG)
@ -490,8 +468,8 @@ misspell:
@echo checking misspell...
@find . -type d \( -path ./tests \) -prune -o -name '*.go' -print | xargs misspell -error
# golangci-lint binary installation or refer to https://golangci-lint.run/usage/install/#local-installation
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.1.2
# golangci-lint binary installation or refer to https://golangci-lint.run/usage/install/#local-installation
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.55.2
GOLANGCI_LINT := $(shell go env GOPATH)/bin/golangci-lint
lint:
@echo checking lint
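
As a usage sketch of the Makefile knobs changed above; the invocation mirrors the build-package workflow earlier in this compare, and the tag values are illustrative:

```sh
# Offline installer build: TRIVYFLAG=true appends the trivy-adapter image to
# DOCKERSAVE_PARA, and BASEIMAGETAG/VERSIONTAG pick the base-image and
# component tags defined in the Makefile.
sudo make package_offline \
  GOBUILDTAGS="include_oss include_gcs" \
  BASEIMAGETAG=dev VERSIONTAG=dev PKGVERSIONTAG=dev \
  TRIVYFLAG=true HTTPPROXY=
```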

View File

@ -9,7 +9,6 @@
[![Nightly Status](https://us-central1-eminent-nation-87317.cloudfunctions.net/harbor-nightly-result)](https://www.googleapis.com/storage/v1/b/harbor-nightly/o)
![CONFORMANCE_TEST](https://github.com/goharbor/harbor/workflows/CONFORMANCE_TEST/badge.svg)
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor?ref=badge_shield)
[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/harbor)](https://artifacthub.io/packages/helm/harbor/harbor)
</br>
|![notification](https://raw.githubusercontent.com/goharbor/website/master/docs/img/readme/bell-outline-badged.svg)Community Meeting|
@ -19,7 +18,7 @@
</br> </br>
**Note**: The `main` branch may be in an *unstable or even broken state* during development.
Please use [releases](https://github.com/goharbor/harbor/releases) instead of the `main` branch in order to get a stable set of binaries.
Please use [releases](https://github.com/vmware/harbor/releases) instead of the `main` branch in order to get a stable set of binaries.
<img alt="Harbor" src="https://raw.githubusercontent.com/goharbor/website/master/docs/img/readme/harbor_logo.png">
@ -58,7 +57,7 @@ For learning the architecture design of Harbor, check the document [Architecture
**On a Linux host:** docker 20.10.10-ce+ and docker-compose 1.18.0+ .
Download binaries of **[Harbor release ](https://github.com/goharbor/harbor/releases)** and follow **[Installation & Configuration Guide](https://goharbor.io/docs/latest/install-config/)** to install Harbor.
Download binaries of **[Harbor release ](https://github.com/vmware/harbor/releases)** and follow **[Installation & Configuration Guide](https://goharbor.io/docs/latest/install-config/)** to install Harbor.
If you want to deploy Harbor on Kubernetes, please use the **[Harbor chart](https://github.com/goharbor/harbor-helm)**.

View File

@ -1,27 +1,27 @@
# Versioning and Release
This document describes the versioning and release process of Harbor. This document is a living document, it's contents will be updated according to each release.
This document describes the versioning and release process of Harbor. This document is a living document, contents will be updated according to each release.
## Releases
Harbor releases will be versioned using dotted triples, similar to [Semantic Version](http://semver.org/). For this specific document, we will refer to the respective components of this triple as `<major>.<minor>.<patch>`. The version number may have additional information, such as "-rc1,-rc2,-rc3" to mark release candidate builds for earlier access. Such releases will be considered as "pre-releases".
### Major and Minor Releases
Major and minor releases of Harbor will be branched from `main` when the release reaches to `RC(release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches to RC, a branch will be created with the format `release-1.0.0`. When the release reaches to `GA(General Available)` state, the tag with format `v<major>.<minor>.<patch>` and should be made with the command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months, might be adjusted based on open source events, but will communicate it clearly.
Major and minor releases of Harbor will be branched from `main` when the release reaches to `RC(release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches to RC, a branch will be created with the format `release-1.0.0`. When the release reaches to `GA(General Available)` state, The tag with format `v<major>.<minor>.<patch>` and should be made with command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months, might be adjusted based on open source event, but will communicate it clearly.
### Patch releases
Patch releases are based on the major/minor release branch, the release cadence for patch release of recent minor release is one month to solve critical community and security issues. The cadence for patch release of recent minus two minor releases are on-demand driven based on the severity of the issue to be fixed.
### Pre-releases
`Pre-releases:mainly the different RC builds` will be compiled from their corresponding branches. Please note that they are done to assist in the stabilization process, no guarantees are provided.
`Pre-releases:mainly the different RC builds` will be compiled from their corresponding branches. Please note they are done to assist in the stabilization process, no guarantees are provided.
### Minor Release Support Matrix
| Version | Supported |
|----------------| ------------------ |
| Harbor v2.13.x | :white_check_mark: |
| Harbor v2.12.x | :white_check_mark: |
| Harbor v2.11.x | :white_check_mark: |
| Harbor v2.10.x | :white_check_mark: |
| Harbor v2.9.x | :white_check_mark: |
### Upgrade path and support policy
The upgrade path for Harbor is (1) 2.2.x patch releases are always compatible with its major and minor versions. For example, previous released 2.2.x can be upgraded to most recent 2.2.3 release. (2) Harbor only supports two previous minor releases to upgrade to current minor release. For example, 2.3.0 will only support 2.1.0 and 2.2.0 to upgrade from, 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The upgrade path for Harbor is (1) 2.2.x patch releases are always compatible with its major and minor version. For example, previous released 2.2.x can be upgraded to most recent 2.2.3 release. (2) Harbor only supports two previous minor releases to upgrade to current minor release. For example, 2.3.0 will only support 2.1.0 and 2.2.0 to upgrade from, 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The Harbor project maintains release branches for the three most recent minor releases, each minor release will be maintained for approximately 9 months.
### Next Release
@ -32,12 +32,12 @@ The activity for next release will be tracked in the [up-to-date project board](
The following steps outline what to do when it's time to plan for and publish a release. Depending on the release (major/minor/patch), not all the following items are needed.
1. Prepare information about what's new in the release.
* For every release, update the documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, update documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, write release notes. See [previous releases](https://github.com/goharbor/harbor/releases) for examples of what to include in release notes.
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this on the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (all blogs don't have to be published all at once).
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (all blogs don't have to be published all at once).
1. Release a new version. Make the new version, docs updates, and blog posts available.
1. Announce the release and thank contributors. We should be doing the following for all releases.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community members' contributions included in the release.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community member contribution included in the release.
* Send an email to the community via the [mailing list](https://lists.cncf.io/g/harbor-users)
* Post a message in the Harbor [slack channel](https://cloud-native.slack.com/archives/CC1E09J6S)
* Post to social media. Maintainers are encouraged to also post or repost from the Harbor account to help spread the word.

View File

@ -9,11 +9,11 @@ This document provides a link to the [Harbor Project board](https://github.com/o
Discussion on the roadmap can take place in threads under [Issues](https://github.com/goharbor/harbor/issues) or in [community meetings](https://goharbor.io/community/). Please open and comment on an issue if you want to provide suggestions and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts on improving Harbor.
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts to improve Harbor.
### Current Roadmap
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors who will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
`Last Updated: June 2022`
@ -49,4 +49,4 @@ The following table includes the current roadmap for Harbor. If you have any que
|I&AM and RBAC|Improved Multi-tenancy through granular access and ability to manage teams of users and robot accounts through workspaces|Dec 2020|
|Observability|Expose Harbor metrics through Prometheus Integration|Mar 2021|
|Tracing|Leverage OpenTelemetry for enhanced tracing capabilities and identify bottlenecks and improve performance |Mar 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persistent image signatures across image replications|Apr 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persisting image signatures across image replications|Apr 2021|

View File

@ -1 +1 @@
v2.14.0
v2.12.3

View File

@ -336,8 +336,6 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'404':
$ref: '#/responses/404'
'500':
@ -999,6 +997,12 @@ paths:
type: boolean
required: false
default: false
- name: with_signature
in: query
description: Specify whether the signature is included inside the tags of the returning artifacts. Only works when setting "with_tag=true"
type: boolean
required: false
default: false
- name: with_immutable_status
in: query
description: Specify whether the immutable status is included inside the tags of the returning artifacts. Only works when setting "with_immutable_status=true"
@ -1189,7 +1193,7 @@ paths:
'404':
$ref: '#/responses/404'
'422':
$ref: '#/responses/422'
$ref: '#/responses/422'
'500':
$ref: '#/responses/500'
/projects/{project_name}/repositories/{repository_name}/artifacts/{reference}/scan/stop:
@ -1222,7 +1226,7 @@ paths:
'404':
$ref: '#/responses/404'
'422':
$ref: '#/responses/422'
$ref: '#/responses/422'
'500':
$ref: '#/responses/500'
/projects/{project_name}/repositories/{repository_name}/artifacts/{reference}/scan/{report_id}/log:
@ -1309,6 +1313,12 @@ paths:
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
- name: with_signature
in: query
description: Specify whether the signature is included inside the returning tags
type: boolean
required: false
default: false
- name: with_immutable_status
in: query
description: Specify whether the immutable status is included inside the returning tags
@ -1451,14 +1461,7 @@ paths:
in: path
description: The type of addition.
type: string
enum:
- build_history
- values.yaml
- readme.md
- dependencies
- sbom
- license
- files
enum: [build_history, values.yaml, readme.md, dependencies, sbom]
required: true
responses:
'200':
@ -1720,9 +1723,9 @@ paths:
$ref: '#/responses/500'
/audit-logs:
get:
summary: Get recent logs of projects which the user is a member with project admin role, or return all audit logs for system admin user (deprecated)
summary: Get recent logs of the projects which the user is a member of
description: |
This endpoint let the user see the recent operation logs of projects which the user is a member with project admin role,, or return all audit logs for system admin user, it only query the audit log in previous version.
This endpoint let user see the recent operation logs of the projects which he is member of
tags:
- auditlog
operationId: listAuditLogs
@ -1752,63 +1755,10 @@ paths:
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/auditlog-exts:
get:
summary: Get recent logs of the projects which the user is a member with project_admin role, or return all audit logs for system admin user
description: |
This endpoint let user see the recent operation logs of the projects which he is member with project_admin role, or return all audit logs for system admin user.
tags:
- auditlog
operationId: listAuditLogExts
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of auditlogs
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/AuditLogExt'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/auditlog-exts/events:
get:
summary: Get all event types of audit log
description: |
Get all event types of audit log
tags:
- auditlog
operationId: listAuditLogEventTypes
parameters:
- $ref: '#/parameters/requestId'
responses:
'200':
description: Success
schema:
type: array
items:
$ref: '#/definitions/AuditLogEventType'
'401':
$ref: '#/responses/401'
/projects/{project_name}/logs:
get:
summary: Get recent logs of the projects (deprecated)
description: Get recent logs of the projects, it only query the previous version's audit log
summary: Get recent logs of the projects
description: Get recent logs of the projects
tags:
- project
operationId: getLogs
@ -1839,40 +1789,6 @@ paths:
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/projects/{project_name}/auditlog-exts:
get:
summary: Get recent logs of the projects
description: Get recent logs of the projects
tags:
- project
operationId: getLogExts
parameters:
- $ref: '#/parameters/projectName'
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of auditlogs
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/AuditLogExt'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/p2p/preheat/providers:
get:
summary: List P2P providers
@ -2435,6 +2351,160 @@ paths:
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
/projects/{project_name_or_id}/robots:
get:
summary: Get all robot accounts of specified project
description: Get all robot accounts of specified project
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
tags:
- robotv1
operationId: ListRobotV1
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of robot accounts
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/Robot'
'400':
$ref: '#/responses/400'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
post:
summary: Create a robot account
description: Create a robot account
tags:
- robotv1
operationId: CreateRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- name: robot
in: body
description: The JSON object of a robot account.
required: true
schema:
$ref: '#/definitions/RobotCreateV1'
responses:
'201':
description: Created
headers:
X-Request-Id:
description: The ID of the corresponding request for the response
type: string
Location:
description: The location of the resource
type: string
schema:
$ref: '#/definitions/RobotCreated'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
/projects/{project_name_or_id}/robots/{robot_id}:
get:
summary: Get a robot account
description: This endpoint returns specific robot account information by robot ID.
tags:
- robotv1
operationId: GetRobotByIDV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
responses:
'200':
description: Return matched robot information.
schema:
$ref: '#/definitions/Robot'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
put:
summary: Update status of robot account.
description: Used to disable/enable a specified robot account.
tags:
- robotv1
operationId: UpdateRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
- name: robot
in: body
description: The JSON object of a robot account.
required: true
schema:
$ref: '#/definitions/Robot'
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'409':
$ref: '#/responses/409'
'500':
$ref: '#/responses/500'
delete:
summary: Delete a robot account
description: This endpoint deletes specific robot account information by robot ID.
tags:
- robotv1
operationId: DeleteRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
'/projects/{project_name_or_id}/immutabletagrules':
get:
summary: List all immutable tag rules of current project
@ -3031,8 +3101,6 @@ paths:
type: string
'401':
$ref: '#/responses/401'
'409':
$ref: '#/responses/409'
'500':
$ref: '#/responses/500'
'/usergroups/{group_id}':
@ -3564,8 +3632,6 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -4004,8 +4070,6 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -4501,7 +4565,7 @@ paths:
description: |
The purge job's schedule, it is a json object. |
The sample format is |
{"parameters":{"audit_retention_hour":168,"dry_run":true,"include_event_types":"create_artifact,delete_artifact,pull_artifact"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
{"parameters":{"audit_retention_hour":168,"dry_run":true, "include_operations":"create,delete,pull"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
the include_operations should be a comma-separated string, e.g. create,delete,pull; if it is empty, no operations will be purged.
tags:
- purge
@ -4531,7 +4595,7 @@ paths:
description: |
The purge job's schedule; it is a JSON object. |
The sample format is |
{"parameters":{"audit_retention_hour":168,"dry_run":true,"include_event_types":"create_artifact,delete_artifact,pull_artifact"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
{"parameters":{"audit_retention_hour":168,"dry_run":true, "include_operations":"create,delete,pull"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
the include_operations should be a comma-separated string, e.g. create,delete,pull; if it is empty, no operations will be purged.
tags:
- purge
@ -6146,7 +6210,6 @@ paths:
cve_id(exact match)
cvss_score_v3(range condition)
severity(exact match)
status(exact match)
repository_name(exact match)
project_id(exact match)
package(exact match)
@ -6490,7 +6553,7 @@ responses:
description: The ID of the corresponding request for the response
type: string
schema:
$ref: '#/definitions/Errors'
'500':
description: Internal server error
headers:
@ -6933,43 +6996,6 @@ definitions:
format: date-time
example: '2006-01-02T15:04:05Z'
description: The time when this operation is triggered.
AuditLogExt:
type: object
properties:
id:
type: integer
description: The ID of the audit log entry.
username:
type: string
description: The username of the operator in this log entry.
resource:
type: string
description: Name of the resource in this log entry.
resource_type:
type: string
description: Type of the resource in this log entry.
operation:
type: string
description: The operation against the resource in this log entry.
operation_description:
type: string
description: The operation's detailed description
operation_result:
type: boolean
x-omitempty: false
description: the operation's result, true for success, false for failure
op_time:
type: string
format: date-time
example: '2006-01-02T15:04:05Z'
description: The time when this operation is triggered.
AuditLogEventType:
type: object
properties:
event_type:
type: string
description: the event type, such as create_user.
example: create_user
Metadata:
type: object
properties:
@ -7069,9 +7095,9 @@ definitions:
type: boolean
description: Whether the preheat policy enabled
x-omitempty: false
extra_attrs:
scope:
type: string
description: The extra attributes of preheat policy
description: The scope of preheat policy
creation_time:
type: string
format: date-time
@ -7911,7 +7937,7 @@ definitions:
properties:
resource:
type: string
description: The resource of the access. Possible resources are listed here for system and project level https://github.com/goharbor/harbor/blob/main/src/common/rbac/const.go
action:
type: string
description: The action of the access. Possible actions are *, pull, push, create, read, update, delete, list, operate, scanner-pull and stop.
@ -9074,9 +9100,6 @@ definitions:
oidc_extra_redirect_parms:
$ref: '#/definitions/StringConfigItem'
description: Extra parameters to add when redirect request to OIDC provider
oidc_logout:
$ref: '#/definitions/BoolConfigItem'
description: Extra parameters to logout user session from the OIDC provider
robot_token_duration:
$ref: '#/definitions/IntegerConfigItem'
description: The robot account token duration in days
@ -9120,9 +9143,6 @@ definitions:
banner_message:
$ref: '#/definitions/StringConfigItem'
description: The banner message for the UI. It is the stringified result of the banner message object
disabled_audit_log_event_types:
$ref: '#/definitions/StringConfigItem'
description: The audit log event types to skip logging in the database
Configurations:
type: object
properties:
@ -9351,11 +9371,6 @@ definitions:
description: Extra parameters to add when redirect request to OIDC provider
x-omitempty: true
x-isnullable: true
oidc_logout:
type: boolean
description: Logout OIDC user session
x-omitempty: true
x-isnullable: true
robot_token_duration:
type: integer
description: The robot account token duration in days
@ -9406,11 +9421,6 @@ definitions:
description: The banner message for the UI. It is the stringified result of the banner message object
x-omitempty: true
x-isnullable: true
disabled_audit_log_event_types:
type: string
description: the list of audit log event types to disable
x-omitempty: true
x-isnullable: true
StringConfigItem:
type: object
properties:
@ -10075,9 +10085,6 @@ definitions:
severity:
type: string
description: the severity of the vulnerability
status:
type: string
description: the status of the vulnerability, example "fixed", "won't fix"
cvss_v3_score:
type: number
format: float
@ -10105,4 +10112,4 @@ definitions:
scan_type:
type: string
description: 'The scan type for the scan request. Two options are currently supported, vulnerability and sbom'
enum: [ vulnerability, sbom ]


@ -0,0 +1,30 @@
# Configuring Harbor as a local registry mirror
Harbor runs as a local registry by default. It can also be configured as a registry mirror,
which caches downloaded images for subsequent use. Note that in this setup the Harbor registry acts only as a mirror server and
no longer accepts image push requests. Edit `Deploy/templates/registry/config.yml` before executing `./prepare`, and append a `proxy` section as follows:
```
proxy:
remoteurl: https://registry-1.docker.io
```
To access private images on Docker Hub, supply a username and password:
```
proxy:
remoteurl: https://registry-1.docker.io
username: [username]
password: [password]
```
You will need to pass the `--registry-mirror` option to your Docker daemon on startup:
```
docker --registry-mirror=https://<my-docker-mirror-host> daemon
```
For example, if your mirror is serving on `https://reg.yourdomain.com`, you would run:
```
docker --registry-mirror=https://reg.yourdomain.com daemon
```
Refer to the [Registry as a pull through cache](https://docs.docker.com/registry/recipes/mirror/) for detailed information.
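If you script your deployment, this edit can be automated as well. The following is a minimal sketch (assuming PyYAML is available and the template path given above) that appends the `proxy` section before `./prepare` is executed:
```
import yaml

# path from the instructions above; adjust if your checkout differs
config_path = "Deploy/templates/registry/config.yml"

with open(config_path) as f:
    config = yaml.safe_load(f)

# append the proxy section that turns the registry into a pull-through mirror
config["proxy"] = {"remoteurl": "https://registry-1.docker.io"}

with open(config_path, "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False)
```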


@ -0,0 +1,29 @@
# registryapi
API for the Docker registry, using token authorization
+ a simple API class, found in registryapi.py, which simulates the interaction
between the Docker registry and a vendor authorization platform such as Harbor.
```
usage:
from registryapi import RegistryApi
api = RegistryApi('username', 'password', 'http://www.your_registry_url.com/')
repos = api.getRepositoryList()
tags = api.getTagList('public/ubuntu')
manifest = api.getManifest('public/ubuntu', 'latest')
res = api.deleteManifest('public/ubuntu', '23424545**4343')
```
+ a simple client tool built on the API class, providing basic read and delete
operations for repos, tags, and manifests
```
usage:
./cli.py --username username --password password --registry_endpoint http://www.your_registry_url.com/ target action params
target can be: repo, tag, manifest
action can be: list, get, delete
params can be: --repo --ref --tag
more see: ./cli.py -h
```

contrib/registryapi/cli.py (executable file)

@ -0,0 +1,135 @@
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# bug-report: feilengcui008@gmail.com
""" cli tool """
import argparse
import sys
import json
from registry import RegistryApi
class ApiProxy(object):
""" user RegistryApi """
def __init__(self, registry, args):
self.registry = registry
self.args = args
self.callbacks = dict()
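# dispatch table mapping (target, action) -> handler method; filled in below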
self.register_callback("repo", "list", self.list_repo)
self.register_callback("tag", "list", self.list_tag)
self.register_callback("tag", "delete", self.delete_tag)
self.register_callback("manifest", "list", self.list_manifest)
self.register_callback("manifest", "delete", self.delete_manifest)
self.register_callback("manifest", "get", self.get_manifest)
def register_callback(self, target, action, func):
""" register real actions """
if not target in self.callbacks.keys():
self.callbacks[target] = {action: func}
return
self.callbacks[target][action] = func
def execute(self, target, action):
""" execute """
print json.dumps(self.callbacks[target][action](), indent=4, sort_keys=True)
def list_repo(self):
""" list repo """
return self.registry.getRepositoryList(self.args.num)
def list_tag(self):
""" list tag """
return self.registry.getTagList(self.args.repo)
def delete_tag(self):
""" delete tag """
(_, ref) = self.registry.existManifest(self.args.repo, self.args.tag)
if ref is not None:
return self.registry.deleteManifest(self.args.repo, ref)
return False
def list_manifest(self):
""" list manifest """
tags = self.registry.getTagList(self.args.repo)["tags"]
manifests = list()
if tags is None:
return None
for i in tags:
content = self.registry.getManifestWithConf(self.args.repo, i)
manifests.append({i: content})
return manifests
def delete_manifest(self):
""" delete manifest """
return self.registry.deleteManifest(self.args.repo, self.args.ref)
def get_manifest(self):
""" get manifest """
return self.registry.getManifestWithConf(self.args.repo, self.args.tag)
# since this is just a script tool, we do not build a full target->action->args
# structure with OO abstractions (which would be more flexible); we simply
# register the parsers directly
def get_parser():
""" return a parser """
parser = argparse.ArgumentParser("cli")
parser.add_argument('--username', action='store', required=True, help='username')
parser.add_argument('--password', action='store', required=True, help='password')
parser.add_argument('--registry_endpoint', action='store', required=True,
help='registry endpoint')
subparsers = parser.add_subparsers(dest='target', help='target to operate on')
# repo target
repo_target_parser = subparsers.add_parser('repo', help='target repository')
repo_target_subparsers = repo_target_parser.add_subparsers(dest='action',
help='repository subcommand')
repo_cmd_parser = repo_target_subparsers.add_parser('list', help='list repositories')
repo_cmd_parser.add_argument('--num', action='store', required=False, default=None,
help='the number of data to return')
# tag target
tag_target_parser = subparsers.add_parser('tag', help='target tag')
tag_target_subparsers = tag_target_parser.add_subparsers(dest='action',
help='tag subcommand')
tag_list_parser = tag_target_subparsers.add_parser('list', help='list tags')
tag_list_parser.add_argument('--repo', action='store', required=True, help='repository name')
tag_delete_parser = tag_target_subparsers.add_parser('delete', help='delete tag')
tag_delete_parser.add_argument('--repo', action='store', required=True, help='repository name')
tag_delete_parser.add_argument('--tag', action='store', required=True,
help='tag reference')
# manifest target
manifest_target_parser = subparsers.add_parser('manifest', help='target manifest')
manifest_target_subparsers = manifest_target_parser.add_subparsers(dest='action',
help='manifest subcommand')
manifest_list_parser = manifest_target_subparsers.add_parser('list', help='list manifests')
manifest_list_parser.add_argument('--repo', action='store', required=True,
help='list manifests')
manifest_delete_parser = manifest_target_subparsers.add_parser('delete', help='delete manifest')
manifest_delete_parser.add_argument('--repo', action='store', required=True,
help='delete manifest')
manifest_delete_parser.add_argument('--ref', action='store', required=True,
help='manifest reference')
manifest_get_parser = manifest_target_subparsers.add_parser('get', help='get manifest content')
manifest_get_parser.add_argument('--repo', action='store', required=True, help='repository name')
manifest_get_parser.add_argument('--tag', action='store', required=True,
help='tag reference')
return parser
def main():
""" main entrance """
parser = get_parser()
options = parser.parse_args(sys.argv[1:])
registry = RegistryApi(options.username, options.password, options.registry_endpoint)
proxy = ApiProxy(registry, options)
proxy.execute(options.target, options.action)
if __name__ == '__main__':
main()


@ -0,0 +1,165 @@
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# bug-report: feilengcui008@gmail.com
""" api for docker registry """
import urllib2
import urllib
import json
import base64
class RegistryException(Exception):
""" registry api related exception """
pass
class RegistryApi(object):
""" interact with docker registry and harbor """
def __init__(self, username, password, registry_endpoint):
self.username = username
self.password = password
self.basic_token = base64.encodestring("%s:%s" % (str(username), str(password)))[0:-1]
self.registry_endpoint = registry_endpoint.rstrip('/')
auth = self.pingRegistry("%s/v2/_catalog" % (self.registry_endpoint,))
if auth is None:
raise RegistryException("get token realm and service failed")
self.token_endpoint = auth[0]
self.service = auth[1]
def pingRegistry(self, registry_endpoint):
""" ping v2 registry and get realm and service """
headers = dict()
try:
res = urllib2.urlopen(registry_endpoint)
except urllib2.HTTPError as e:
headers = e.hdrs.dict
try:
(realm, service, _) = headers['www-authenticate'].split(',')
return (realm[14:-1], service[9:-1])
except Exception as e:
return None
def getBearerTokenForScope(self, scope):
""" get bearer token from harbor """
payload = urllib.urlencode({'service': self.service, 'scope': scope})
url = "%s?%s" % (self.token_endpoint, payload)
req = urllib2.Request(url)
req.add_header('Authorization', 'Basic %s' % (self.basic_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())["token"]
except Exception as e:
return None
def getRepositoryList(self, n=None):
""" get repository list """
scope = "registry:catalog:*"
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/_catalog" % (self.registry_endpoint,)
if n is not None:
url = "%s?n=%s" % (url, str(n))
req = urllib2.Request(url)
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def getTagList(self, repository):
""" get tag list for repository """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/tags/list" % (self.registry_endpoint, repository)
req = urllib2.Request(url)
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def getManifest(self, repository, reference="latest", v1=False):
""" get manifest for tag or digest """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, reference)
req = urllib2.Request(url)
req.get_method = lambda: 'GET'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
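# request a schema 2 manifest by default; the v1 flag switches the Accept header to schema 1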
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
if v1:
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v1+json')
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def existManifest(self, repository, reference, v1=False):
""" check to see it manifest exist """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
raise RegistryException("manifestExist failed due to token error")
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, reference)
req = urllib2.Request(url)
req.get_method = lambda: 'HEAD'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
if v1:
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v1+json')
try:
response = urllib2.urlopen(req)
return (True, response.headers.dict["docker-content-digest"])
except Exception as e:
return (False, None)
def deleteManifest(self, repository, reference):
""" delete manifest by tag """
(is_exist, digest) = self.existManifest(repository, reference)
if not is_exist:
raise RegistryException("manifest not exist")
scope = "repository:%s:pull,push" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
raise RegistryException("delete manifest failed due to token error")
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, digest)
req = urllib2.Request(url)
req.get_method = lambda: 'DELETE'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
urllib2.urlopen(req)
except Exception as e:
return False
return True
def getManifestWithConf(self, repository, reference="latest"):
""" get manifest for tag or digest """
manifest = self.getManifest(repository, reference)
if manifest is None:
raise RegistryException("manifest for %s %s not exist" % (repository, reference))
config_digest = manifest["config"]["digest"]
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/blobs/%s" % (self.registry_endpoint, repository, config_digest)
req = urllib2.Request(url)
req.get_method = lambda: 'GET'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
try:
response = urllib2.urlopen(req)
manifest["configContent"] = json.loads(response.read())
return manifest
except Exception as e:
return None
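The module above targets Python 2 (`urllib2`, `print` statements, `base64.encodestring`). For reference, here is a hedged Python 3 sketch of the same bearer-token handshake using `requests`; the endpoint, username, and password are placeholders, not values from this repository:
```
import requests

REGISTRY = "https://registry.example.com"  # placeholder endpoint

def get_token(username, password, scope):
    # an unauthenticated request to /v2/ yields a 401 whose Www-Authenticate
    # header names the token realm and service
    r = requests.get("%s/v2/" % REGISTRY)
    challenge = r.headers["Www-Authenticate"]  # Bearer realm="...",service="..."
    fields = dict(kv.split("=", 1) for kv in challenge[len("Bearer "):].split(","))
    realm = fields["realm"].strip('"')
    service = fields["service"].strip('"')
    # exchange basic credentials for a scoped bearer token
    r = requests.get(realm, params={"service": service, "scope": scope},
                     auth=(username, password))
    r.raise_for_status()
    return r.json()["token"]

def list_repositories(username, password):
    token = get_token(username, password, "registry:catalog:*")
    r = requests.get("%s/v2/_catalog" % REGISTRY,
                     headers={"Authorization": "Bearer %s" % token})
    r.raise_for_status()
    return r.json()["repositories"]
```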


@ -135,8 +135,6 @@ trivy:
jobservice:
# Maximum number of job workers in job service
max_job_workers: 10
# Maximum hours of task duration in job service, default 24
max_job_duration_hours: 24
# The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
job_loggers:
- STD_OUTPUT
@ -176,7 +174,7 @@ log:
# port: 5140
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.13.0
_version: 2.12.0
# Uncomment external_database if using external database.
# external_database:
@ -215,14 +213,6 @@ _version: 2.13.0
# # username:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection will be disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2


@ -1,23 +0,0 @@
ALTER TABLE p2p_preheat_policy DROP COLUMN IF EXISTS scope;
ALTER TABLE p2p_preheat_policy ADD COLUMN IF NOT EXISTS extra_attrs text;
CREATE TABLE IF NOT EXISTS audit_log_ext
(
id BIGSERIAL PRIMARY KEY NOT NULL,
project_id BIGINT,
operation VARCHAR(50) NULL,
resource_type VARCHAR(255) NULL,
resource VARCHAR(1024) NULL,
username VARCHAR(255) NULL,
op_desc VARCHAR(1024) NULL,
op_result BOOLEAN DEFAULT true,
payload TEXT NULL,
source_ip VARCHAR(50) NULL,
op_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- add index to the audit_log_ext table
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_op_time ON audit_log_ext (op_time);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_optime ON audit_log_ext (project_id, op_time);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_resource_type ON audit_log_ext (project_id, resource_type);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_operation ON audit_log_ext (project_id, operation);


@ -1,9 +0,0 @@
ALTER TABLE role_permission ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE role_permission_id_seq AS BIGINT;
ALTER TABLE permission_policy ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE permission_policy_id_seq AS BIGINT;
ALTER TABLE role_permission ALTER COLUMN permission_policy_id TYPE BIGINT;
ALTER TABLE vulnerability_record ADD COLUMN IF NOT EXISTS status text;


@ -18,7 +18,7 @@ TIMESTAMP=$(shell date +"%Y%m%d")
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build --no-cache --network=$(DOCKERNETWORK)
DOCKERBUILD=$(DOCKERCMD) build --no-cache
DOCKERBUILD_WITH_PULL_PARA=$(DOCKERBUILD) --pull=$(PULL_BASE_FROM_DOCKERHUB)
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERIMAGES=$(DOCKERCMD) images
@ -122,7 +122,7 @@ _build_db:
_build_portal:
@$(call _build_base,$(PORTAL),$(DOCKERFILEPATH_PORTAL))
@echo "building portal container for photon..."
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg NODE=${NODEBUILDIMAGE} --build-arg npm_registry=$(NPM_REGISTRY) -f $(DOCKERFILEPATH_PORTAL)/$(DOCKERFILENAME_PORTAL) -t $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) .
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg npm_registry=$(NPM_REGISTRY) -f $(DOCKERFILEPATH_PORTAL)/$(DOCKERFILENAME_PORTAL) -t $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) .
@echo "Done."
_build_core:
@ -149,12 +149,12 @@ _build_trivy_adapter:
rm -rf $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary && mkdir -p $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary ; \
echo "Downloading Trivy scanner $(TRIVYVERSION)..." ; \
$(call _extract_archive, $(TRIVY_DOWNLOAD_URL), $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary/) ; \
if [ "$(BUILDTRIVYADP)" != "true" ] ; then \
if [ "$(BUILDBIN)" != "true" ] ; then \
echo "Downloading Trivy adapter $(TRIVYADAPTERVERSION)..." ; \
$(call _extract_archive, $(TRIVY_ADAPTER_DOWNLOAD_URL), $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary/) ; \
else \
echo "Building Trivy adapter $(TRIVYADAPTERVERSION) from sources..." ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) && cd - ; \
fi ; \
echo "Building Trivy adapter container for photon..." ; \
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) \
@ -174,11 +174,11 @@ _build_nginx:
_build_registry:
@$(call _build_base,$(REGISTRY),$(DOCKERFILEPATH_REG))
@if [ "$(BUILDREG)" != "true" ] ; then \
@if [ "$(BUILDBIN)" != "true" ] ; then \
rm -rf $(DOCKERFILEPATH_REG)/binary && mkdir -p $(DOCKERFILEPATH_REG)/binary && \
$(call _get_binary, $(REGISTRYURL), $(DOCKERFILEPATH_REG)/binary/registry); \
else \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) && cd - ; \
fi
@echo "building registry container for photon..."
@chmod 655 $(DOCKERFILEPATH_REG)/binary/registry && $(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) -f $(DOCKERFILEPATH_REG)/$(DOCKERFILENAME_REG) -t $(DOCKERIMAGENAME_REG):$(VERSIONTAG) .
@ -205,7 +205,7 @@ _build_standalone_db_migrator:
_compile_and_build_exporter:
@$(call _build_base,$(EXPORTER),$(DOCKERFILEPATH_EXPORTER))
@echo "compiling and building image for exporter..."
@$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg build_image=$(GOBUILDIMAGE) -f ${DOCKERFILEPATH_EXPORTER}/${DOCKERFILENAME_EXPORTER} -t $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) .
@$(DOCKERCMD) build --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg build_image=$(GOBUILDIMAGE) -f ${DOCKERFILEPATH_EXPORTER}/${DOCKERFILENAME_EXPORTER} -t $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) .
@echo "Done."
define _extract_archive
@ -233,17 +233,10 @@ define _build_base
fi
endef
ifeq ($(BUILD_INSTALLER), true)
buildcompt: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
else
buildcompt: _build_db _build_portal _build_core _build_jobservice _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
endif
build: buildcompt
build: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
@if [ -n "$(REGISTRYUSER)" ] && [ -n "$(REGISTRYPASSWORD)" ] ; then \
docker logout ; \
fi
cleanimage:
@echo "cleaning image for photon..."
- $(DOCKERRMIMAGE) -f $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG)
@ -253,3 +246,4 @@ cleanimage:
.PHONY: clean
clean: cleanimage


@ -9,9 +9,8 @@ COPY ./make/photon/db/initdb.sh /initdb.sh
COPY ./make/photon/db/upgrade.sh /upgrade.sh
COPY ./make/photon/db/docker-healthcheck.sh /docker-healthcheck.sh
COPY ./make/photon/db/initial-registry.sql /docker-entrypoint-initdb.d/
RUN chown -R postgres:postgres /docker-entrypoint.sh /initdb.sh /upgrade.sh \
/docker-healthcheck.sh /docker-entrypoint-initdb.d \
&& chmod u+x /initdb.sh /upgrade.sh /docker-entrypoint.sh /docker-healthcheck.sh
RUN chown -R postgres:postgres /docker-entrypoint.sh /docker-healthcheck.sh /docker-entrypoint-initdb.d \
&& chmod u+x /docker-entrypoint.sh /docker-healthcheck.sh
ENTRYPOINT ["/docker-entrypoint.sh", "14", "15"]
HEALTHCHECK CMD ["/docker-healthcheck.sh"]


@ -1,7 +1,6 @@
ARG harbor_base_image_version
ARG harbor_base_namespace
ARG NODE
FROM ${NODE} as nodeportal
FROM node:16.18.0 as nodeportal
WORKDIR /build_dir


@ -1,7 +1,7 @@
FROM photon:5.0
RUN tdnf install -y python3 python3-pip python3-PyYAML python3-jinja2 && tdnf clean all
RUN pip3 install pipenv==2025.0.3
RUN pip3 install pipenv==2022.1.8
#To install only htpasswd binary from photon package httpd
RUN tdnf install -y rpm cpio apr-util


@ -12,4 +12,4 @@ pylint = "*"
pytest = "*"
[requires]
python_version = "3.13"
python_version = "3.9.1"


@ -1,11 +1,11 @@
{
"_meta": {
"hash": {
"sha256": "d3a89b8575c29b9f822b892ffd31fd4a997effb1ebf3e3ed061a41e2d04b4490"
"sha256": "0c84f574a48755d88f78a64d754b3f834a72f2a86808370dd5f3bf3e650bfa13"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.13"
"python_version": "3.9.1"
},
"sources": [
{
@ -18,122 +18,157 @@
"default": {
"click": {
"hashes": [
"sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202",
"sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"
"sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a",
"sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"
],
"index": "pypi",
"markers": "python_version >= '3.10'",
"version": "==8.2.1"
"version": "==8.0.1"
},
"packaging": {
"hashes": [
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==25.0"
"version": "==20.9"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
}
},
"develop": {
"astroid": {
"hashes": [
"sha256:104fb9cb9b27ea95e847a94c003be03a9e039334a8ebca5ee27dafaf5c5711eb",
"sha256:c332157953060c6deb9caa57303ae0d20b0fbdb2e59b4a4f2a6ba49d0a7961ce"
"sha256:4db03ab5fc3340cf619dbc25e42c2cc3755154ce6009469766d7143d1fc2ee4e",
"sha256:8a398dfce302c13f14bab13e2b14fe385d32b73f4e4853b9bdfb64598baa1975"
],
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.10"
"markers": "python_version ~= '3.6'",
"version": "==2.5.6"
},
"dill": {
"attrs": {
"hashes": [
"sha256:0633f1d2df477324f53a895b02c901fb961bdbf65a17122586ea7019292cbcf0",
"sha256:44f54bf6412c2c8464c14e8243eb163690a9800dbe2c367330883b19c7561049"
"sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1",
"sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"
],
"markers": "python_version >= '3.8'",
"version": "==0.4.0"
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==21.2.0"
},
"iniconfig": {
"hashes": [
"sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7",
"sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"
"sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3",
"sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"
],
"markers": "python_version >= '3.8'",
"version": "==2.1.0"
"version": "==1.1.1"
},
"isort": {
"hashes": [
"sha256:1cb5df28dfbc742e490c5e41bad6da41b805b0a8be7bc93cd0fb2a8a890ac450",
"sha256:2dc5d7f65c9678d94c88dfc29161a320eec67328bc97aad576874cb4be1e9615"
"sha256:0a943902919f65c5684ac4e0154b1ad4fac6dcaa5d9f3426b732f1c8b5419be6",
"sha256:2bb1680aad211e3c9944dbce1d4ba09a989f04e238296c87fe2139faa26d655d"
],
"markers": "python_full_version >= '3.9.0'",
"version": "==6.0.1"
"markers": "python_version >= '3.6' and python_version < '4.0'",
"version": "==5.8.0"
},
"lazy-object-proxy": {
"hashes": [
"sha256:17e0967ba374fc24141738c69736da90e94419338fd4c7c7bef01ee26b339653",
"sha256:1fee665d2638491f4d6e55bd483e15ef21f6c8c2095f235fef72601021e64f61",
"sha256:22ddd618cefe54305df49e4c069fa65715be4ad0e78e8d252a33debf00f6ede2",
"sha256:24a5045889cc2729033b3e604d496c2b6f588c754f7a62027ad4437a7ecc4837",
"sha256:410283732af311b51b837894fa2f24f2c0039aa7f220135192b38fcc42bd43d3",
"sha256:4732c765372bd78a2d6b2150a6e99d00a78ec963375f236979c0626b97ed8e43",
"sha256:489000d368377571c6f982fba6497f2aa13c6d1facc40660963da62f5c379726",
"sha256:4f60460e9f1eb632584c9685bccea152f4ac2130e299784dbaf9fae9f49891b3",
"sha256:5743a5ab42ae40caa8421b320ebf3a998f89c85cdc8376d6b2e00bd12bd1b587",
"sha256:85fb7608121fd5621cc4377a8961d0b32ccf84a7285b4f1d21988b2eae2868e8",
"sha256:9698110e36e2df951c7c36b6729e96429c9c32b3331989ef19976592c5f3c77a",
"sha256:9d397bf41caad3f489e10774667310d73cb9c4258e9aed94b9ec734b34b495fd",
"sha256:b579f8acbf2bdd9ea200b1d5dea36abd93cabf56cf626ab9c744a432e15c815f",
"sha256:b865b01a2e7f96db0c5d12cfea590f98d8c5ba64ad222300d93ce6ff9138bcad",
"sha256:bf34e368e8dd976423396555078def5cfc3039ebc6fc06d1ae2c5a65eebbcde4",
"sha256:c6938967f8528b3668622a9ed3b31d145fab161a32f5891ea7b84f6b790be05b",
"sha256:d1c2676e3d840852a2de7c7d5d76407c772927addff8d742b9808fe0afccebdf",
"sha256:d7124f52f3bd259f510651450e18e0fd081ed82f3c08541dffc7b94b883aa981",
"sha256:d900d949b707778696fdf01036f58c9876a0d8bfe116e8d220cfd4b15f14e741",
"sha256:ebfd274dcd5133e0afae738e6d9da4323c3eb021b3e13052d8cbd0e457b1256e",
"sha256:ed361bb83436f117f9917d282a456f9e5009ea12fd6de8742d1a4752c3017e93",
"sha256:f5144c75445ae3ca2057faac03fda5a902eff196702b0a24daf1d6ce0650514b"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'",
"version": "==1.6.0"
},
"mccabe": {
"hashes": [
"sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325",
"sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"markers": "python_version >= '3.6'",
"version": "==0.7.0"
"version": "==0.6.1"
},
"packaging": {
"hashes": [
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==25.0"
},
"platformdirs": {
"hashes": [
"sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc",
"sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4"
],
"markers": "python_version >= '3.9'",
"version": "==4.3.8"
"version": "==20.9"
},
"pluggy": {
"hashes": [
"sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3",
"sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"
"sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0",
"sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"
],
"markers": "python_version >= '3.9'",
"version": "==1.6.0"
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.13.1"
},
"pygments": {
"py": {
"hashes": [
"sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887",
"sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3",
"sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"
],
"markers": "python_version >= '3.8'",
"version": "==2.19.2"
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.10.0"
},
"pylint": {
"hashes": [
"sha256:2b11de8bde49f9c5059452e0c310c079c746a0a8eeaa789e5aa966ecc23e4559",
"sha256:43860aafefce92fca4cf6b61fe199cdc5ae54ea28f9bf4cd49de267b5195803d"
"sha256:586d8fa9b1891f4b725f587ef267abe2a1bad89d6b184520c7f07a253dd6e217",
"sha256:f7e2072654a6b6afdf5e2fb38147d3e2d2d43c89f648637baab63e026481279b"
],
"index": "pypi",
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.7"
"version": "==2.8.2"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
},
"pytest": {
"hashes": [
"sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7",
"sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"
"sha256:50bcad0a0b9c5a72c8e4e7c9855a3ad496ca6a881a3641b4260605450772c54b",
"sha256:91ef2131a9bd6be8f76f1f08eac5c5317221d6ad1e143ae03894b862e8976890"
],
"index": "pypi",
"markers": "python_version >= '3.9'",
"version": "==8.4.1"
"version": "==6.2.4"
},
"tomlkit": {
"toml": {
"hashes": [
"sha256:430cf247ee57df2b94ee3fbe588e71d362a941ebb545dec29b53961d61add2a1",
"sha256:c89c649d79ee40629a9fda55f8ace8c6a1b42deb912b2a8fd8d942ddadb606b0"
"sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b",
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"
],
"markers": "python_version >= '3.8'",
"version": "==0.13.3"
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"wrapt": {
"hashes": [
"sha256:b62ffa81fb85f4332a4f609cab4ac40709470da05643a082ec1eb88e6d9b97d7"
],
"version": "==1.12.1"
}
}
}


@ -10,7 +10,7 @@ from migrations import accept_versions
@click.command()
@click.option('-i', '--input', 'input_', required=True, help="The path of original config file")
@click.option('-o', '--output', default='', help="the path of output config file")
@click.option('-t', '--target', default='2.13.0', help="target version of input path")
@click.option('-t', '--target', default='2.12.0', help="target version of input path")
def migrate(input_, output, target):
"""
migrate command will migrate config file style to specific version

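For a quick in-process check of the command above, click's test runner can be used; a hedged sketch, assuming the module is importable as `cfg` (adjust to the real module name):
```
from click.testing import CliRunner
from cfg import migrate  # assumed import path

runner = CliRunner()
result = runner.invoke(migrate, ["-i", "harbor.yml", "-o", "harbor.new.yml", "-t", "2.12.0"])
print(result.exit_code, result.output)
```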

@ -27,7 +27,6 @@ internal_tls_dir = secret_dir.joinpath('tls')
storage_ca_bundle_filename = 'storage_ca_bundle.crt'
internal_ca_filename = 'harbor_internal_ca.crt'
redis_tls_ca_filename = 'redis_tls_ca.crt'
old_private_key_pem_path = Path('/config/core/private_key.pem')
old_crt_path = Path('/config/registry/root.crt')


@ -2,4 +2,4 @@ import os
MIGRATION_BASE_DIR = os.path.dirname(__file__)
accept_versions = {'1.9.0', '1.10.0', '2.0.0', '2.1.0', '2.2.0', '2.3.0', '2.4.0', '2.5.0', '2.6.0', '2.7.0', '2.8.0', '2.9.0','2.10.0', '2.11.0', '2.12.0', '2.13.0'}
accept_versions = {'1.9.0', '1.10.0', '2.0.0', '2.1.0', '2.2.0', '2.3.0', '2.4.0', '2.5.0', '2.6.0', '2.7.0', '2.8.0', '2.9.0','2.10.0', '2.11.0', '2.12.0'}


@ -1,21 +0,0 @@
import os
from jinja2 import Environment, FileSystemLoader, StrictUndefined, select_autoescape
from utils.migration import read_conf
revision = '2.13.0'
down_revisions = ['2.12.0']
def migrate(input_cfg, output_cfg):
current_dir = os.path.dirname(__file__)
tpl = Environment(
loader=FileSystemLoader(current_dir),
undefined=StrictUndefined,
trim_blocks=True,
lstrip_blocks=True,
autoescape = select_autoescape()
).get_template('harbor.yml.jinja')
config_dict = read_conf(input_cfg)
with open(output_cfg, 'w') as f:
f.write(tpl.render(**config_dict))


@ -1,775 +0,0 @@
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: {{ hostname }}
# http related config
{% if http is defined %}
http:
# port for http, default is 80. If https enabled, this port will redirect to https port
port: {{ http.port }}
{% else %}
# http:
# # port for http, default is 80. If https enabled, this port will redirect to https port
# port: 80
{% endif %}
{% if https is defined %}
# https related config
https:
# https port for harbor, default is 443
port: {{ https.port }}
# The path of cert and key files for nginx
certificate: {{ https.certificate }}
private_key: {{ https.private_key }}
# enable strong ssl ciphers (default: false)
{% if strong_ssl_ciphers is defined %}
strong_ssl_ciphers: {{ strong_ssl_ciphers | lower }}
{% else %}
strong_ssl_ciphers: false
{% endif %}
{% else %}
# https related config
# https:
# # https port for harbor, default is 443
# port: 443
# # The path of cert and key files for nginx
# certificate: /your/certificate/path
# private_key: /your/private/key/path
# enable strong ssl ciphers (default: false)
# strong_ssl_ciphers: false
{% endif %}
# # Harbor will set ipv4 enabled only by default if this block is not configured
# # Otherwise, please uncomment this block to configure your own ip_family stacks
{% if ip_family is defined %}
ip_family:
# ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affects the nginx related component
{% if ip_family.ipv6 is defined %}
ipv6:
enabled: {{ ip_family.ipv6.enabled | lower }}
{% else %}
ipv6:
enabled: false
{% endif %}
# ipv4Enabled set to true by default, currently it affects the nginx related component
{% if ip_family.ipv4 is defined %}
ipv4:
enabled: {{ ip_family.ipv4.enabled | lower }}
{% else %}
ipv4:
enabled: true
{% endif %}
{% else %}
# ip_family:
# # ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affects the nginx related component
# ipv6:
# enabled: false
# # ipv4Enabled set to true by default, currently it affects the nginx related component
# ipv4:
# enabled: true
{% endif %}
{% if internal_tls is defined %}
# Uncommenting the following will enable tls communication between all harbor components
internal_tls:
# set enabled to true means internal tls is enabled
enabled: {{ internal_tls.enabled | lower }}
{% if internal_tls.dir is defined %}
# put your cert and key files on dir
dir: {{ internal_tls.dir }}
{% endif %}
{% else %}
# internal_tls:
# # set enabled to true means internal tls is enabled
# enabled: true
# # put your cert and key files on dir
# dir: /etc/harbor/tls/internal
{% endif %}
# Uncomment external_url if you want to enable external proxy
# And when it is enabled the hostname will no longer be used
{% if external_url is defined %}
external_url: {{ external_url }}
{% else %}
# external_url: https://reg.mydomain.com:8433
{% endif %}
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember to change the admin password from the UI after launching Harbor.
{% if harbor_admin_password is defined %}
harbor_admin_password: {{ harbor_admin_password }}
{% else %}
harbor_admin_password: Harbor12345
{% endif %}
# Harbor DB configuration
database:
{% if database is defined %}
# The password for the root user of Harbor DB. Change this before any production use.
password: {{ database.password}}
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: {{ database.max_idle_conns }}
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 1024 for postgres of harbor.
max_open_conns: {{ database.max_open_conns }}
# The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
{% if database.conn_max_lifetime is defined %}
conn_max_lifetime: {{ database.conn_max_lifetime }}
{% else %}
conn_max_lifetime: 5m
{% endif %}
# The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
{% if database.conn_max_idle_time is defined %}
conn_max_idle_time: {{ database.conn_max_idle_time }}
{% else %}
conn_max_idle_time: 0
{% endif %}
{% else %}
# The password for the root user of Harbor DB. Change this before any production use.
password: root123
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: 100
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 1024 for postgres of harbor.
max_open_conns: 900
# The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
conn_max_lifetime: 5m
# The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
conn_max_idle_time: 0
{% endif %}
{% if data_volume is defined %}
# The default data volume
data_volume: {{ data_volume }}
{% else %}
# The default data volume
data_volume: /data
{% endif %}
# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting if you want to use external storage
{% if storage_service is defined %}
storage_service:
{% for key, value in storage_service.items() %}
{% if key == 'ca_bundle' %}
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's and chart repository's containers. This is usually needed when the user hosts an internal storage with a self-signed certificate.
ca_bundle: {{ value if value is not none else '' }}
{% elif key == 'redirect' %}
# # set disable to true when you want to disable registry redirect
redirect:
{% if storage_service.redirect.disabled is defined %}
disable: {{ storage_service.redirect.disabled | lower}}
{% else %}
disable: {{ storage_service.redirect.disable | lower}}
{% endif %}
{% else %}
# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
# # and https://distribution.github.io/distribution/storage-drivers/
{{ key }}:
{% for k, v in value.items() %}
{{ k }}: {{ v if v is not none else '' }}
{% endfor %}
{% endif %}
{% endfor %}
{% else %}
# storage_service:
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's and chart repository's containers. This is usually needed when the user hosts an internal storage with a self-signed certificate.
# ca_bundle:
# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
# # and https://distribution.github.io/distribution/storage-drivers/
# filesystem:
# maxthreads: 100
# # set disable to true when you want to disable registry redirect
# redirect:
# disable: false
{% endif %}
# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
{% if trivy is defined %}
trivy:
# ignoreUnfixed The flag to display only fixed vulnerabilities
{% if trivy.ignore_unfixed is defined %}
ignore_unfixed: {{ trivy.ignore_unfixed | lower }}
{% else %}
ignore_unfixed: false
{% endif %}
# skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
#
# You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
# If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
# `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
{% if trivy.skip_update is defined %}
skip_update: {{ trivy.skip_update | lower }}
{% else %}
skip_update: false
{% endif %}
{% if trivy.skip_java_db_update is defined %}
# skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
# `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
skip_java_db_update: {{ trivy.skip_java_db_update | lower }}
{% else %}
skip_java_db_update: false
{% endif %}
#
{% if trivy.offline_scan is defined %}
offline_scan: {{ trivy.offline_scan | lower }}
{% else %}
offline_scan: false
{% endif %}
#
# Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
{% if trivy.security_check is defined %}
security_check: {{ trivy.security_check }}
{% else %}
security_check: vuln
{% endif %}
#
# insecure The flag to skip verifying registry certificate
{% if trivy.insecure is defined %}
insecure: {{ trivy.insecure | lower }}
{% else %}
insecure: false
{% endif %}
#
{% if trivy.timeout is defined %}
# timeout The duration to wait for scan completion.
# There is upper bound of 30 minutes defined in scan job. So if this `timeout` is larger than 30m0s, it will also timeout at 30m0s.
timeout: {{ trivy.timeout}}
{% else %}
timeout: 5m0s
{% endif %}
#
# github_token The GitHub access token to download Trivy DB
#
# Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
# for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
# requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
# https://developer.github.com/v3/#rate-limiting
#
# You can create a GitHub token by following the instructions in
# https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
#
{% if trivy.github_token is defined %}
github_token: {{ trivy.github_token }}
{% else %}
# github_token: xxx
{% endif %}
{% else %}
# trivy:
# # ignoreUnfixed The flag to display only fixed vulnerabilities
# ignore_unfixed: false
# # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
# #
# # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
# # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
# # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
# skip_update: false
# #
# # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
# # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
# skip_java_db_update: false
# #
# #The offline_scan option prevents Trivy from sending API requests to identify dependencies.
# # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
# # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
# # exist in the local repositories. It means the number of detected vulnerabilities might be lower in offline mode.
# # It works only if all the dependencies are available locally.
# # This option doesn't affect DB download. You need to specify "skip-update" as well as "offline-scan" in an air-gapped environment.
# offline_scan: false
# #
# # insecure The flag to skip verifying registry certificate
# insecure: false
# # github_token The GitHub access token to download Trivy DB
# #
# # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
# # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
# # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
# # https://developer.github.com/v3/#rate-limiting
# #
# # timeout The duration to wait for scan completion.
# # There is upper bound of 30 minutes defined in scan job. So if this `timeout` is larger than 30m0s, it will also timeout at 30m0s.
# timeout: 5m0s
# #
# # You can create a GitHub token by following the instructions in
# # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
# #
# # github_token: xxx
{% endif %}
jobservice:
# Maximum number of job workers in job service
{% if jobservice is defined %}
max_job_workers: {{ jobservice.max_job_workers }}
# Maximum hours of task duration in job service, default 24
{% if jobservice.max_job_duration_hours is defined %}
max_job_duration_hours: {{ jobservice.max_job_duration_hours }}
{% else %}
max_job_duration_hours: 24
{% endif %}
# The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
{% if jobservice.job_loggers is defined %}
job_loggers:
{% for job_logger in jobservice.job_loggers %}
- {{job_logger}}
{% endfor %}
{% else %}
job_loggers:
- STD_OUTPUT
- FILE
# - DB
{% endif %}
# The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
{% if jobservice.logger_sweeper_duration is defined %}
logger_sweeper_duration: {{ jobservice.logger_sweeper_duration }}
{% else %}
logger_sweeper_duration: 1
{% endif %}
{% else %}
max_job_workers: 10
max_job_duration_hours: 24
# The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
job_loggers:
- STD_OUTPUT
- FILE
# - DB
# The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
logger_sweeper_duration: 1
{% endif %}
notification:
# Maximum retry count for webhook job
{% if notification is defined %}
webhook_job_max_retry: {{ notification.webhook_job_max_retry}}
# HTTP client timeout for webhook job
{% if notification.webhook_job_http_client_timeout is defined %}
webhook_job_http_client_timeout: {{ notification.webhook_job_http_client_timeout }}
{% else %}
webhook_job_http_client_timeout: 3 #seconds
{% endif %}
{% else %}
webhook_job_max_retry: 3
# HTTP client timeout for webhook job
webhook_job_http_client_timeout: 3 #seconds
{% endif %}
# Log configurations
log:
# options are debug, info, warning, error, fatal
{% if log is defined %}
level: {{ log.level }}
# configs for logs in local storage
local:
# Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
rotate_count: {{ log.local.rotate_count }}
# Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
# If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
# are all valid.
rotate_size: {{ log.local.rotate_size }}
# The directory on your host that store log
location: {{ log.local.location }}
{% if log.external_endpoint is defined %}
external_endpoint:
# protocol used to transmit log to external endpoint, options is tcp or udp
protocol: {{ log.external_endpoint.protocol }}
# The host of external endpoint
host: {{ log.external_endpoint.host }}
# Port of external endpoint
port: {{ log.external_endpoint.port }}
{% else %}
# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit log to external endpoint, options is tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140
{% endif %}
{% else %}
level: info
# configs for logs in local storage
local:
# Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
rotate_count: 50
# Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
# If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
# are all valid.
rotate_size: 200M
# The directory on your host that store log
location: /var/log/harbor
# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit log to external endpoint, options is tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140
{% endif %}
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.13.0
{% if external_database is defined %}
# Uncomment external_database if using external database.
external_database:
harbor:
host: {{ external_database.harbor.host }}
port: {{ external_database.harbor.port }}
db_name: {{ external_database.harbor.db_name }}
username: {{ external_database.harbor.username }}
password: {{ external_database.harbor.password }}
ssl_mode: {{ external_database.harbor.ssl_mode }}
max_idle_conns: {{ external_database.harbor.max_idle_conns}}
max_open_conns: {{ external_database.harbor.max_open_conns}}
{% else %}
# Uncomment external_database if using external database.
# external_database:
# harbor:
# host: harbor_db_host
# port: harbor_db_port
# db_name: harbor_db_name
# username: harbor_db_username
# password: harbor_db_password
# ssl_mode: disable
# max_idle_conns: 2
# max_open_conns: 0
{% endif %}
{% if redis is defined %}
redis:
# # db_index 0 is for core, it's unchangeable
{% if redis.registry_db_index is defined %}
registry_db_index: {{ redis.registry_db_index }}
{% else %}
# # registry_db_index: 1
{% endif %}
{% if redis.jobservice_db_index is defined %}
jobservice_db_index: {{ redis.jobservice_db_index }}
{% else %}
# # jobservice_db_index: 2
{% endif %}
{% if redis.trivy_db_index is defined %}
trivy_db_index: {{ redis.trivy_db_index }}
{% else %}
# # trivy_db_index: 5
{% endif %}
{% if redis.harbor_db_index is defined %}
harbor_db_index: {{ redis.harbor_db_index }}
{% else %}
# # optional: the db for miscellaneous harbor business data; the default is 0. Uncomment it if you want to change it.
# # harbor_db_index: 6
{% endif %}
{% if redis.cache_layer_db_index is defined %}
cache_layer_db_index: {{ redis.cache_layer_db_index }}
{% else %}
# # optional: the db for the harbor cache layer; the default is 0. Uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% else %}
# Uncomment redis if you need to customize the redis db indexes
# redis:
# # db_index 0 is for core, it's unchangeable
# # registry_db_index: 1
# # jobservice_db_index: 2
# # trivy_db_index: 5
# # optional: the db for miscellaneous harbor business data; the default is 0. Uncomment it if you want to change it.
# # harbor_db_index: 6
# # optional: the db for the harbor cache layer; the default is 0. Uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% if external_redis is defined %}
external_redis:
# support redis, redis+sentinel
# host for redis: <host_redis>:<port_redis>
# host for redis+sentinel:
# <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
host: {{ external_redis.host }}
password: {{ external_redis.password }}
# The Redis AUTH command was extended in Redis 6; it is possible to use it in the two-argument AUTH <username> <password> form.
{% if external_redis.username is defined %}
username: {{ external_redis.username }}
{% else %}
# username:
{% endif %}
# sentinel_master_set must be set to support redis+sentinel
#sentinel_master_set:
{% if external_redis.tlsOptions is defined %}
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection is disabled by default
tlsOptions:
enable: {{ external_redis.tlsOptions.enable }}
# if it is a self-signed ca, please set the ca path specifically.
{% if external_redis.tlsOptions.rootCA is defined %}
rootCA: {{ external_redis.tlsOptions.rootCA }}
{% else %}
# rootCA:
{% endif %}
{% else %}
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection is disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
{% endif %}
# db_index 0 is for core, it's unchangeable
registry_db_index: {{ external_redis.registry_db_index }}
jobservice_db_index: {{ external_redis.jobservice_db_index }}
trivy_db_index: 5
idle_timeout_seconds: 30
{% if external_redis.harbor_db_index is defined %}
harbor_db_index: {{ external_redis.harbor_db_index }}
{% else %}
# # optional: the db for miscellaneous harbor business data; the default is 0. Uncomment it if you want to change it.
# # harbor_db_index: 6
{% endif %}
{% if external_redis.cache_layer_db_index is defined %}
cache_layer_db_index: {{ external_redis.cache_layer_db_index }}
{% else %}
# # optional: the db for the harbor cache layer; the default is 0. Uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% else %}
# Uncomment external_redis if using an external Redis server
# external_redis:
# # support redis, redis+sentinel
# # host for redis: <host_redis>:<port_redis>
# # host for redis+sentinel:
# # <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
# host: redis:6379
# password:
# # The Redis AUTH command was extended in Redis 6; it is possible to use it in the two-argument AUTH <username> <password> form.
# # username:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection is disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2
# trivy_db_index: 5
# idle_timeout_seconds: 30
# # optional: the db for miscellaneous harbor business data; the default is 0. Uncomment it if you want to change it.
# # harbor_db_index: 6
# # optional: the db for the harbor cache layer; the default is 0. Uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% if uaa is defined %}
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
uaa:
ca_file: {{ uaa.ca_file }}
{% else %}
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
# ca_file: /path/to/ca
{% endif %}
# Global proxy
# Configure an http proxy for components, e.g. http://my.proxy.com:3128
# Components don't need an http proxy to connect to each other.
# Remove a component from the `components` array to disable the proxy
# for it. To use the proxy for replication, you MUST enable the proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add a domain to the `no_proxy` field when you want to disable the proxy
# for a particular registry.
{% if proxy is defined %}
proxy:
http_proxy: {{ proxy.http_proxy or ''}}
https_proxy: {{ proxy.https_proxy or ''}}
no_proxy: {{ proxy.no_proxy or ''}}
{% if proxy.components is defined %}
components:
{% for component in proxy.components %}
{% if component != 'clair' %}
- {{component}}
{% endif %}
{% endfor %}
{% endif %}
{% else %}
proxy:
http_proxy:
https_proxy:
no_proxy:
components:
- core
- jobservice
- trivy
{% endif %}
{% if metric is defined %}
metric:
enabled: {{ metric.enabled }}
port: {{ metric.port }}
path: {{ metric.path }}
{% else %}
# metric:
# enabled: false
# port: 9090
# path: /metrics
{% endif %}
# Trace related config
# only one trace provider (jaeger or otel) can be enabled at a time,
# and when using jaeger as the provider, it can run in either agent mode or collector mode.
# if using jaeger collector mode, uncomment endpoint, and uncomment username and password if needed
# if using jaeger agent mode, uncomment agent_host and agent_port
{% if trace is defined %}
trace:
enabled: {{ trace.enabled | lower}}
sample_rate: {{ trace.sample_rate }}
# # namespace used to differentiate different harbor services
{% if trace.namespace is defined %}
namespace: {{ trace.namespace }}
{% else %}
# namespace:
{% endif %}
# # attributes is a key value dict contains user defined attributes used to initialize trace provider
{% if trace.attributes is defined%}
attributes:
{% for name, value in trace.attributes.items() %}
{{name}}: {{value}}
{% endfor %}
{% else %}
# attributes:
# application: harbor
{% endif %}
{% if trace.jaeger is defined%}
jaeger:
endpoint: {{trace.jaeger.endpoint or '' }}
username: {{trace.jaeger.username or ''}}
password: {{trace.jaeger.password or ''}}
agent_host: {{trace.jaeger.agent_host or ''}}
agent_port: {{trace.jaeger.agent_port or ''}}
{% else %}
# jaeger:
# endpoint:
# username:
# password:
# agent_host:
# agent_port:
{% endif %}
{% if trace.otel is defined %}
otel:
endpoint: {{trace.otel.endpoint or '' }}
url_path: {{trace.otel.url_path or '' }}
compression: {{trace.otel.compression | lower }}
insecure: {{trace.otel.insecure | lower }}
timeout: {{trace.otel.timeout or '' }}
{% else %}
# otel:
# endpoint: hostname:4318
# url_path: /v1/traces
# compression: false
# insecure: true
# # timeout is in seconds
# timeout: 10
{% endif %}
{% else %}
# trace:
# enabled: true
# # set sample_rate to 1 if you want to sample 100% of trace data; set 0.5 to sample 50%, and so forth
# sample_rate: 1
# # # namespace used to differentiate different harbor services
# # namespace:
# # # attributes is a key value dict contains user defined attributes used to initialize trace provider
# # attributes:
# # application: harbor
# # jaeger:
# # endpoint: http://hostname:14268/api/traces
# # username:
# # password:
# # agent_host: hostname
# # agent_port: 6831
# # otel:
# # endpoint: hostname:4318
# # url_path: /v1/traces
# # compression: false
# # insecure: true
# # # timeout is in seconds
# # timeout: 10
{% endif %}
# Enable purging of _upload directories
{% if upload_purging is defined %}
upload_purging:
enabled: {{ upload_purging.enabled | lower}}
age: {{ upload_purging.age }}
interval: {{ upload_purging.interval }}
dryrun: {{ upload_purging.dryrun | lower}}
{% else %}
upload_purging:
enabled: true
# remove files in _upload directories that have existed for a period of time; the default is one week.
age: 168h
# the interval of the purge operations
interval: 24h
dryrun: false
{% endif %}
# Cache layer related config
{% if cache is defined %}
cache:
enabled: {{ cache.enabled | lower}}
expire_hours: {{ cache.expire_hours }}
{% else %}
cache:
enabled: false
expire_hours: 24
{% endif %}
# Harbor core configurations
# Uncomment to enable the following harbor core related configuration items.
{% if core is defined %}
core:
# The provider for updating project quota (usage). There are 2 options, redis or db;
# by default it is implemented by db, but you can switch the update path to redis, which
# can improve the performance of highly concurrent pushes to the same project
# and reduce database connection spikes and usage.
# Redis adds some delay before the displayed quota usage is updated, so only
# switch the provider to redis if you run into database connection spikes around
# the scenario of highly concurrent pushes to the same project; there is no improvement for other scenarios.
quota_update_provider: {{ core.quota_update_provider }}
{% else %}
# core:
# # The provider for updating project quota (usage). There are 2 options, redis or db;
# # by default it is implemented by db, but you can switch the update path to redis, which
# # can improve the performance of highly concurrent pushes to the same project
# # and reduce database connection spikes and usage.
# # Redis adds some delay before the displayed quota usage is updated, so only
# # switch the provider to redis if you run into database connection spikes around
# # the scenario of highly concurrent pushes to the same project; there is no improvement for other scenarios.
# quota_update_provider: redis # Or db
{% endif %}

View File

@ -41,7 +41,6 @@ REGISTRY_CREDENTIAL_PASSWORD={{registry_password}}
CSRF_KEY={{csrf_key}}
ROBOT_SCANNER_NAME_PREFIX={{scan_robot_prefix}}
PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE=docker-hub,harbor,azure-acr,ali-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory
REPLICATION_ADAPTER_WHITELIST=ali-acr,aws-ecr,azure-acr,docker-hub,docker-registry,github-ghcr,google-gcr,harbor,huawei-SWR,jfrog-artifactory,tencent-tcr,volcengine-cr
HTTP_PROXY={{core_http_proxy}}
HTTPS_PROXY={{core_https_proxy}}

View File

@ -67,7 +67,7 @@ metric:
reaper:
# the max time to wait for a task to finish; if still unfinished after max_update_hours, the task is marked as error but continues to run. The default value is 24,
max_update_hours: {{ max_job_duration_hours }}
max_update_hours: 24
# the max time for execution in running state without new task created
max_dangling_hours: 168

View File

@ -6,8 +6,6 @@ REGISTRY_CONTROLLER_URL={{registry_controller_url}}
JOBSERVICE_WEBHOOK_JOB_MAX_RETRY={{notification_webhook_job_max_retry}}
JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT={{notification_webhook_job_http_client_timeout}}
LOG_LEVEL={{log_level}}
{%if internal_tls.enabled %}
INTERNAL_TLS_ENABLED=true
INTERNAL_TLS_TRUST_CA_PATH=/harbor_cust_cert/harbor_internal_ca.crt

View File

@ -40,7 +40,6 @@ redis:
dialtimeout: 10s
password: {{redis_password}}
db: {{redis_db_index_reg}}
enableTLS: {{redis_enableTLS}}
pool:
maxidle: 100
maxactive: 500

View File

@ -4,7 +4,7 @@ from pathlib import Path
from subprocess import DEVNULL
import logging
from g import DEFAULT_GID, DEFAULT_UID, shared_cert_dir, storage_ca_bundle_filename, internal_tls_dir, internal_ca_filename, redis_tls_ca_filename
from g import DEFAULT_GID, DEFAULT_UID, shared_cert_dir, storage_ca_bundle_filename, internal_tls_dir, internal_ca_filename
from .misc import (
mark_file,
generate_random_string,
@ -120,23 +120,18 @@ def prepare_trust_ca(config_dict):
internal_ca_src = internal_tls_dir.joinpath(internal_ca_filename)
ca_bundle_src = config_dict.get('registry_custom_ca_bundle_path')
redis_tls_ca_src = config_dict.get('redis_custom_tls_ca_path')
for src_path, dst_filename in (
(internal_ca_src, internal_ca_filename),
(ca_bundle_src, storage_ca_bundle_filename),
(redis_tls_ca_src, redis_tls_ca_filename)):
print('copy {} to shared trust ca dir as name {} ...'.format(src_path, dst_filename))
(ca_bundle_src, storage_ca_bundle_filename)):
logging.info('copy {} to shared trust ca dir as name {} ...'.format(src_path, dst_filename))
# check if the source file is valid
if not src_path:
continue
real_src_path = get_realpath(str(src_path))
if not real_src_path.exists():
print('ca file {} does not exist'.format(real_src_path))
logging.info('ca file {} does not exist'.format(real_src_path))
continue
if not real_src_path.is_file():
print('{} is not a file'.format(real_src_path))
logging.info('{} is not a file'.format(real_src_path))
continue

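For readers following the prepare-script change above — `print` calls replaced by `logging.info`, and the redis TLS CA dropped from the copy list — here is a loose Go transcription of the remaining copy-and-validate loop. The function name, map shape, and paths are illustrative, not Harbor's actual code:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// copyTrustCAs mirrors prepare_trust_ca: each configured source file is
// copied into the shared trust CA directory under a fixed name, skipping
// unset paths, missing files, and non-regular files.
func copyTrustCAs(sources map[string]string, sharedCertDir string) {
	for dstName, src := range sources {
		log.Printf("copy %s to shared trust ca dir as name %s ...", src, dstName)
		if src == "" {
			continue // source not configured
		}
		info, err := os.Stat(src)
		if err != nil {
			log.Printf("ca file %s does not exist", src)
			continue
		}
		if !info.Mode().IsRegular() {
			log.Printf("%s is not a file", src)
			continue
		}
		data, err := os.ReadFile(src)
		if err != nil {
			log.Printf("read %s: %v", src, err)
			continue
		}
		dst := filepath.Join(sharedCertDir, dstName)
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			log.Printf("write %s: %v", dst, err)
		}
	}
}

func main() {
	// Illustrative source path and destination directory only.
	copyTrustCAs(map[string]string{
		"harbor_internal_ca.crt": "/etc/harbor/tls/internal/ca.crt",
	}, "/shared/trust-certificates")
}
```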
View File

@ -1,4 +1,3 @@
from distutils.command.config import config
import logging
import os
import yaml
@ -223,10 +222,6 @@ def parse_yaml_config(config_file_path, with_trivy):
# jobservice config
js_config = configs.get('jobservice') or {}
config_dict['max_job_workers'] = js_config["max_job_workers"]
config_dict['max_job_duration_hours'] = js_config.get("max_job_duration_hours") or 24
value = config_dict["max_job_duration_hours"]
if not isinstance(value, int) or value < 24:
config_dict["max_job_duration_hours"] = 24
config_dict['job_loggers'] = js_config["job_loggers"]
config_dict['logger_sweeper_duration'] = js_config["logger_sweeper_duration"]
config_dict['jobservice_secret'] = generate_random_string(16)
@ -354,11 +349,6 @@ def parse_yaml_config(config_file_path, with_trivy):
return config_dict
def get_redis_schema(redis=None):
if 'tlsOptions' in redis and redis['tlsOptions'].get('enable'):
return redis.get('sentinel_master_set', None) and 'rediss+sentinel' or 'rediss'
else:
return redis.get('sentinel_master_set', None) and 'redis+sentinel' or 'redis'
def get_redis_url(db, redis=None):
"""Returns redis url with format `redis://[arbitrary_username:password@]ipaddress:port/database_index?idle_timeout_seconds=30`
@ -378,7 +368,7 @@ def get_redis_url(db, redis=None):
'password': '',
}
kwargs.update(redis or {})
kwargs['scheme'] = get_redis_schema(kwargs)
kwargs['scheme'] = kwargs.get('sentinel_master_set', None) and 'redis+sentinel' or 'redis'
kwargs['db_part'] = db and ("/%s" % db) or ""
kwargs['sentinel_part'] = kwargs.get('sentinel_master_set', None) and ("/" + kwargs['sentinel_master_set']) or ''
kwargs['password_part'] = quote(str(kwargs.get('password', None)), safe='') and (':%s@' % quote(str(kwargs['password']), safe='')) or ''
@ -463,8 +453,5 @@ def get_redis_configs(internal_redis=None, external_redis=None, with_trivy=True)
if with_trivy:
configs['trivy_redis_url'] = get_redis_url(redis['trivy_db_index'], redis)
if 'tlsOptions' in redis and redis['tlsOptions'].get('enable'):
configs['redis_custom_tls_ca_path'] = redis['tlsOptions']['rootCA']
return configs

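The deleted `get_redis_schema` helper is worth keeping in mind when reading the simplified line that replaces it: it chose among four URL schemes, pairing TLS (`rediss`) with the optional `+sentinel` suffix, whereas the new code only distinguishes `redis` from `redis+sentinel`. A sketch of the removed selection logic, transcribed to Go for reference (the function name is ours):

```go
package main

import "fmt"

// redisScheme reproduces the removed get_redis_schema logic:
// TLS enablement selects rediss over redis, and a configured
// sentinel master set appends +sentinel.
func redisScheme(tlsEnabled bool, sentinelMasterSet string) string {
	scheme := "redis"
	if tlsEnabled {
		scheme = "rediss"
	}
	if sentinelMasterSet != "" {
		scheme += "+sentinel"
	}
	return scheme
}

func main() {
	fmt.Println(redisScheme(false, ""))         // redis
	fmt.Println(redisScheme(true, ""))          // rediss
	fmt.Println(redisScheme(false, "mymaster")) // redis+sentinel
	fmt.Println(redisScheme(true, "mymaster"))  // rediss+sentinel
}
```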
View File

@ -33,7 +33,6 @@ def prepare_job_service(config_dict):
gid=DEFAULT_GID,
internal_tls=config_dict['internal_tls'],
max_job_workers=config_dict['max_job_workers'],
max_job_duration_hours=config_dict['max_job_duration_hours'],
job_loggers=config_dict['job_loggers'],
logger_sweeper_duration=config_dict['logger_sweeper_duration'],
redis_url=config_dict['redis_url_js'],

View File

@ -48,14 +48,6 @@ def parse_redis(redis_url):
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': u.path and int(u.path[1:]) or 0,
'redis_enableTLS': 'false',
}
elif u.scheme == 'rediss':
return {
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': u.path and int(u.path[1:]) or 0,
'redis_enableTLS': 'true',
}
elif u.scheme == 'redis+sentinel':
return {
@ -63,15 +55,6 @@ def parse_redis(redis_url):
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': len(u.path.split('/')) == 3 and int(u.path.split('/')[2]) or 0,
'redis_enableTLS': 'false',
}
elif u.scheme == 'rediss+sentinel':
return {
'sentinel_master_set': u.path.split('/')[1],
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': len(u.path.split('/')) == 3 and int(u.path.split('/')[2]) or 0,
'redis_enableTLS': 'true',
}
else:
raise Exception('bad redis url for registry:' + redis_url)

View File

@ -1,5 +1,4 @@
ARG golang_image
FROM ${golang_image}
FROM golang:1.23.8
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV BUILDTAGS include_oss include_gcs

View File

@ -14,8 +14,6 @@ fi
VERSION="$1"
DISTRIBUTION_SRC="$2"
GOBUILDIMAGE="$3"
DOCKERNETWORK="$4"
set -e
@ -30,11 +28,14 @@ cur=$PWD
TEMP=`mktemp -d ${TMPDIR-/tmp}/distribution.XXXXXX`
git clone -b $VERSION $DISTRIBUTION_SRC $TEMP
# add patch redis
cd $TEMP
git apply $cur/redis.patch
cd $cur
echo 'build the registry binary ...'
cp Dockerfile.binary $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
docker build -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
echo 'copy the registry binary to local...'
ID=$(docker create registry-golang)

View File

@ -0,0 +1,883 @@
diff --git a/configuration/configuration.go b/configuration/configuration.go
index 7076df85d4..3e74330321 100644
--- a/configuration/configuration.go
+++ b/configuration/configuration.go
@@ -168,6 +168,9 @@ type Configuration struct {
// Addr specifies the the redis instance available to the application.
Addr string `yaml:"addr,omitempty"`
+ // SentinelMasterSet specifies the the redis sentinel master set name.
+ SentinelMasterSet string `yaml:"sentinelMasterSet,omitempty"`
+
// Password string to use when making a connection.
Password string `yaml:"password,omitempty"`
diff --git a/registry/handlers/app.go b/registry/handlers/app.go
index bf56cea22a..4a7cee9a2e 100644
--- a/registry/handlers/app.go
+++ b/registry/handlers/app.go
@@ -3,6 +3,7 @@ package handlers
import (
"context"
"crypto/rand"
+ "errors"
"expvar"
"fmt"
"math"
@@ -16,6 +17,7 @@ import (
"strings"
"time"
+ "github.com/FZambia/sentinel"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/distribution/configuration"
@@ -499,6 +501,45 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
return
}
+ var getRedisAddr func() (string, error)
+ var testOnBorrow func(c redis.Conn, t time.Time) error
+ if configuration.Redis.SentinelMasterSet != "" {
+ sntnl := &sentinel.Sentinel{
+ Addrs: strings.Split(configuration.Redis.Addr, ","),
+ MasterName: configuration.Redis.SentinelMasterSet,
+ Dial: func(addr string) (redis.Conn, error) {
+ c, err := redis.DialTimeout("tcp", addr,
+ configuration.Redis.DialTimeout,
+ configuration.Redis.ReadTimeout,
+ configuration.Redis.WriteTimeout)
+ if err != nil {
+ return nil, err
+ }
+ return c, nil
+ },
+ }
+ getRedisAddr = func() (string, error) {
+ return sntnl.MasterAddr()
+ }
+ testOnBorrow = func(c redis.Conn, t time.Time) error {
+ if !sentinel.TestRole(c, "master") {
+ return errors.New("role check failed")
+ }
+ return nil
+ }
+
+ } else {
+ getRedisAddr = func() (string, error) {
+ return configuration.Redis.Addr, nil
+ }
+ testOnBorrow = func(c redis.Conn, t time.Time) error {
+ // TODO(stevvooe): We can probably do something more interesting
+ // here with the health package.
+ _, err := c.Do("PING")
+ return err
+ }
+ }
+
pool := &redis.Pool{
Dial: func() (redis.Conn, error) {
// TODO(stevvooe): Yet another use case for contextual timing.
@@ -514,8 +555,11 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
}
}
- conn, err := redis.DialTimeout("tcp",
- configuration.Redis.Addr,
+ redisAddr, err := getRedisAddr()
+ if err != nil {
+ return nil, err
+ }
+ conn, err := redis.DialTimeout("tcp", redisAddr,
configuration.Redis.DialTimeout,
configuration.Redis.ReadTimeout,
configuration.Redis.WriteTimeout)
@@ -547,16 +591,11 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
done(nil)
return conn, nil
},
- MaxIdle: configuration.Redis.Pool.MaxIdle,
- MaxActive: configuration.Redis.Pool.MaxActive,
- IdleTimeout: configuration.Redis.Pool.IdleTimeout,
- TestOnBorrow: func(c redis.Conn, t time.Time) error {
- // TODO(stevvooe): We can probably do something more interesting
- // here with the health package.
- _, err := c.Do("PING")
- return err
- },
- Wait: false, // if a connection is not available, proceed without cache.
+ MaxIdle: configuration.Redis.Pool.MaxIdle,
+ MaxActive: configuration.Redis.Pool.MaxActive,
+ IdleTimeout: configuration.Redis.Pool.IdleTimeout,
+ TestOnBorrow: testOnBorrow,
+ Wait: false, // if a connection is not available, proceed without cache.
}
app.redis = pool
diff --git a/registry/handlers/app_test.go b/registry/handlers/app_test.go
index 60a57e6c15..8a644d83d8 100644
--- a/registry/handlers/app_test.go
+++ b/registry/handlers/app_test.go
@@ -140,7 +140,29 @@ func TestAppDispatcher(t *testing.T) {
// TestNewApp covers the creation of an application via NewApp with a
// configuration.
func TestNewApp(t *testing.T) {
- ctx := context.Background()
+
+ config := configuration.Configuration{
+ Storage: configuration.Storage{
+ "testdriver": nil,
+ "maintenance": configuration.Parameters{"uploadpurging": map[interface{}]interface{}{
+ "enabled": false,
+ }},
+ },
+ Auth: configuration.Auth{
+ // For now, we simply test that new auth results in a viable
+ // application.
+ "silly": {
+ "realm": "realm-test",
+ "service": "service-test",
+ },
+ },
+ }
+ runAppWithConfig(t, config)
+}
+
+// TestNewApp covers the creation of an application via NewApp with a
+// configuration(with redis).
+func TestNewAppWithRedis(t *testing.T) {
config := configuration.Configuration{
Storage: configuration.Storage{
"testdriver": nil,
@@ -157,7 +179,38 @@ func TestNewApp(t *testing.T) {
},
},
}
+ config.Redis.Addr = "127.0.0.1:6379"
+ config.Redis.DB = 0
+ runAppWithConfig(t, config)
+}
+// TestNewApp covers the creation of an application via NewApp with a
+// configuration(with redis sentinel cluster).
+func TestNewAppWithRedisSentinelCluster(t *testing.T) {
+ config := configuration.Configuration{
+ Storage: configuration.Storage{
+ "testdriver": nil,
+ "maintenance": configuration.Parameters{"uploadpurging": map[interface{}]interface{}{
+ "enabled": false,
+ }},
+ },
+ Auth: configuration.Auth{
+ // For now, we simply test that new auth results in a viable
+ // application.
+ "silly": {
+ "realm": "realm-test",
+ "service": "service-test",
+ },
+ },
+ }
+ config.Redis.Addr = "192.168.0.11:26379,192.168.0.12:26379"
+ config.Redis.DB = 0
+ config.Redis.SentinelMasterSet = "mymaster"
+ runAppWithConfig(t, config)
+}
+
+func runAppWithConfig(t *testing.T, config configuration.Configuration) {
+ ctx := context.Background()
// Mostly, with this test, given a sane configuration, we are simply
// ensuring that NewApp doesn't panic. We might want to tweak this
// behavior.
diff --git a/vendor.conf b/vendor.conf
index 33fe616b76..a8d8f58bc6 100644
--- a/vendor.conf
+++ b/vendor.conf
@@ -51,3 +51,4 @@ gopkg.in/yaml.v2 v2.2.1
rsc.io/letsencrypt e770c10b0f1a64775ae91d240407ce00d1a5bdeb https://github.com/dmcgowan/letsencrypt.git
github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0
github.com/opencontainers/image-spec 67d2d5658fe0476ab9bf414cec164077ebff3920 # v1.0.2
+github.com/FZambia/sentinel 5585739eb4b6478aa30161866ccf9ce0ef5847c7 https://github.com/jeremyxu2010/sentinel.git
diff --git a/vendor/github.com/FZambia/sentinel/LICENSE b/vendor/github.com/FZambia/sentinel/LICENSE
new file mode 100644
index 0000000000..8dada3edaf
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/FZambia/sentinel/README.md b/vendor/github.com/FZambia/sentinel/README.md
new file mode 100644
index 0000000000..f544c54ef6
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/README.md
@@ -0,0 +1,39 @@
+go-sentinel
+===========
+
+Redis Sentinel support for [redigo](https://github.com/gomodule/redigo) library.
+
+Documentation
+-------------
+
+- [API Reference](http://godoc.org/github.com/FZambia/sentinel)
+
+Alternative solution
+--------------------
+
+You can alternatively configure Haproxy between your application and Redis to proxy requests to Redis master instance if you only need HA:
+
+```
+listen redis
+ server redis-01 127.0.0.1:6380 check port 6380 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2
+ server redis-02 127.0.0.1:6381 check port 6381 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 backup
+ bind *:6379
+ mode tcp
+ option tcpka
+ option tcplog
+ option tcp-check
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
+ balance roundrobin
+```
+
+This way you don't need to use this library.
+
+License
+-------
+
+Library is available under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html).
diff --git a/vendor/github.com/FZambia/sentinel/sentinel.go b/vendor/github.com/FZambia/sentinel/sentinel.go
new file mode 100644
index 0000000000..79209e9f0d
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/sentinel.go
@@ -0,0 +1,426 @@
+package sentinel
+
+import (
+ "errors"
+ "fmt"
+ "net"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/garyburd/redigo/redis"
+)
+
+// Sentinel provides a way to add high availability (HA) to Redis Pool using
+// preconfigured addresses of Sentinel servers and name of master which Sentinels
+// monitor. It works with Redis >= 2.8.12 (mostly because of ROLE command that
+// was introduced in that version, it's possible though to support old versions
+// using INFO command).
+//
+// Example of the simplest usage to contact master "mymaster":
+//
+// func newSentinelPool() *redis.Pool {
+// sntnl := &sentinel.Sentinel{
+// Addrs: []string{":26379", ":26380", ":26381"},
+// MasterName: "mymaster",
+// Dial: func(addr string) (redis.Conn, error) {
+// timeout := 500 * time.Millisecond
+// c, err := redis.DialTimeout("tcp", addr, timeout, timeout, timeout)
+// if err != nil {
+// return nil, err
+// }
+// return c, nil
+// },
+// }
+// return &redis.Pool{
+// MaxIdle: 3,
+// MaxActive: 64,
+// Wait: true,
+// IdleTimeout: 240 * time.Second,
+// Dial: func() (redis.Conn, error) {
+// masterAddr, err := sntnl.MasterAddr()
+// if err != nil {
+// return nil, err
+// }
+// c, err := redis.Dial("tcp", masterAddr)
+// if err != nil {
+// return nil, err
+// }
+// return c, nil
+// },
+// TestOnBorrow: func(c redis.Conn, t time.Time) error {
+// if !sentinel.TestRole(c, "master") {
+// return errors.New("Role check failed")
+// } else {
+// return nil
+// }
+// },
+// }
+// }
+type Sentinel struct {
+ // Addrs is a slice with known Sentinel addresses.
+ Addrs []string
+
+ // MasterName is a name of Redis master Sentinel servers monitor.
+ MasterName string
+
+ // Dial is a user supplied function to connect to Sentinel on given address. This
+ // address will be chosen from Addrs slice.
+ // Note that as per the redis-sentinel client guidelines, a timeout is mandatory
+ // while connecting to Sentinels, and should not be set to 0.
+ Dial func(addr string) (redis.Conn, error)
+
+ // Pool is a user supplied function returning custom connection pool to Sentinel.
+ // This can be useful to tune options if you are not satisfied with what default
+ // Sentinel pool offers. See defaultPool() method for default pool implementation.
+ // In most cases you only need to provide Dial function and let this be nil.
+ Pool func(addr string) *redis.Pool
+
+ mu sync.RWMutex
+ pools map[string]*redis.Pool
+ addr string
+}
+
+// NoSentinelsAvailable is returned when all sentinels in the list are exhausted
+// (or none configured), and contains the last error returned by Dial (which
+// may be nil)
+type NoSentinelsAvailable struct {
+ lastError error
+}
+
+func (ns NoSentinelsAvailable) Error() string {
+ if ns.lastError != nil {
+ return fmt.Sprintf("redigo: no sentinels available; last error: %s", ns.lastError.Error())
+ }
+ return fmt.Sprintf("redigo: no sentinels available")
+}
+
+// putToTop puts Sentinel address to the top of address list - this means
+// that all next requests will use Sentinel on this address first.
+//
+// From Sentinel guidelines:
+//
+// The first Sentinel replying to the client request should be put at the
+// start of the list, so that at the next reconnection, we'll try first
+// the Sentinel that was reachable in the previous connection attempt,
+// minimizing latency.
+//
+// Lock must be held by caller.
+func (s *Sentinel) putToTop(addr string) {
+ addrs := s.Addrs
+ if addrs[0] == addr {
+ // Already on top.
+ return
+ }
+ newAddrs := []string{addr}
+ for _, a := range addrs {
+ if a == addr {
+ continue
+ }
+ newAddrs = append(newAddrs, a)
+ }
+ s.Addrs = newAddrs
+}
+
+// putToBottom puts Sentinel address to the bottom of address list.
+// We call this method internally when see that some Sentinel failed to answer
+// on application request so next time we start with another one.
+//
+// Lock must be held by caller.
+func (s *Sentinel) putToBottom(addr string) {
+ addrs := s.Addrs
+ if addrs[len(addrs)-1] == addr {
+ // Already on bottom.
+ return
+ }
+ newAddrs := []string{}
+ for _, a := range addrs {
+ if a == addr {
+ continue
+ }
+ newAddrs = append(newAddrs, a)
+ }
+ newAddrs = append(newAddrs, addr)
+ s.Addrs = newAddrs
+}
+
+// defaultPool returns a connection pool to one Sentinel. This allows
+// us to call concurrent requests to Sentinel using connection Do method.
+func (s *Sentinel) defaultPool(addr string) *redis.Pool {
+ return &redis.Pool{
+ MaxIdle: 3,
+ MaxActive: 10,
+ Wait: true,
+ IdleTimeout: 240 * time.Second,
+ Dial: func() (redis.Conn, error) {
+ return s.Dial(addr)
+ },
+ TestOnBorrow: func(c redis.Conn, t time.Time) error {
+ _, err := c.Do("PING")
+ return err
+ },
+ }
+}
+
+func (s *Sentinel) get(addr string) redis.Conn {
+ pool := s.poolForAddr(addr)
+ return pool.Get()
+}
+
+func (s *Sentinel) poolForAddr(addr string) *redis.Pool {
+ s.mu.Lock()
+ if s.pools == nil {
+ s.pools = make(map[string]*redis.Pool)
+ }
+ pool, ok := s.pools[addr]
+ if ok {
+ s.mu.Unlock()
+ return pool
+ }
+ s.mu.Unlock()
+ newPool := s.newPool(addr)
+ s.mu.Lock()
+ p, ok := s.pools[addr]
+ if ok {
+ s.mu.Unlock()
+ return p
+ }
+ s.pools[addr] = newPool
+ s.mu.Unlock()
+ return newPool
+}
+
+func (s *Sentinel) newPool(addr string) *redis.Pool {
+ if s.Pool != nil {
+ return s.Pool(addr)
+ }
+ return s.defaultPool(addr)
+}
+
+// close connection pool to Sentinel.
+// Lock must be hold by caller.
+func (s *Sentinel) close() {
+ if s.pools != nil {
+ for _, pool := range s.pools {
+ pool.Close()
+ }
+ }
+ s.pools = nil
+}
+
+func (s *Sentinel) doUntilSuccess(f func(redis.Conn) (interface{}, error)) (interface{}, error) {
+ s.mu.RLock()
+ addrs := s.Addrs
+ s.mu.RUnlock()
+
+ var lastErr error
+
+ for _, addr := range addrs {
+ conn := s.get(addr)
+ reply, err := f(conn)
+ conn.Close()
+ if err != nil {
+ lastErr = err
+ s.mu.Lock()
+ pool, ok := s.pools[addr]
+ if ok {
+ pool.Close()
+ delete(s.pools, addr)
+ }
+ s.putToBottom(addr)
+ s.mu.Unlock()
+ continue
+ }
+ s.putToTop(addr)
+ return reply, nil
+ }
+
+ return nil, NoSentinelsAvailable{lastError: lastErr}
+}
+
+// MasterAddr returns an address of current Redis master instance.
+func (s *Sentinel) MasterAddr() (string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForMaster(c, s.MasterName)
+ })
+ if err != nil {
+ return "", err
+ }
+ return res.(string), nil
+}
+
+// SlaveAddrs returns a slice with known slave addresses of current master instance.
+func (s *Sentinel) SlaveAddrs() ([]string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSlaveAddrs(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]string), nil
+}
+
+// Slave represents a Redis slave instance which is known by Sentinel.
+type Slave struct {
+ ip string
+ port string
+ flags string
+}
+
+// Addr returns an address of slave.
+func (s *Slave) Addr() string {
+ return net.JoinHostPort(s.ip, s.port)
+}
+
+// Available returns if slave is in working state at moment based on information in slave flags.
+func (s *Slave) Available() bool {
+ return !strings.Contains(s.flags, "disconnected") && !strings.Contains(s.flags, "s_down")
+}
+
+// Slaves returns a slice with known slaves of master instance.
+func (s *Sentinel) Slaves() ([]*Slave, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSlaves(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]*Slave), nil
+}
+
+// SentinelAddrs returns a slice of known Sentinel addresses Sentinel server aware of.
+func (s *Sentinel) SentinelAddrs() ([]string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSentinels(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]string), nil
+}
+
+// Discover allows to update list of known Sentinel addresses. From docs:
+//
+// A client may update its internal list of Sentinel nodes following this procedure:
+// 1) Obtain a list of other Sentinels for this master using the command SENTINEL sentinels <master-name>.
+// 2) Add every ip:port pair not already existing in our list at the end of the list.
+func (s *Sentinel) Discover() error {
+ addrs, err := s.SentinelAddrs()
+ if err != nil {
+ return err
+ }
+ s.mu.Lock()
+ for _, addr := range addrs {
+ if !stringInSlice(addr, s.Addrs) {
+ s.Addrs = append(s.Addrs, addr)
+ }
+ }
+ s.mu.Unlock()
+ return nil
+}
+
+// Close closes current connection to Sentinel.
+func (s *Sentinel) Close() error {
+ s.mu.Lock()
+ s.close()
+ s.mu.Unlock()
+ return nil
+}
+
+// TestRole wraps GetRole in a test to verify if the role matches an expected
+// role string. If there was any error in querying the supplied connection,
+// the function returns false. Works with Redis >= 2.8.12.
+// It's not goroutine safe, but if you call this method on pooled connections
+// then you are OK.
+func TestRole(c redis.Conn, expectedRole string) bool {
+ role, err := getRole(c)
+ if err != nil || role != expectedRole {
+ return false
+ }
+ return true
+}
+
+// getRole is a convenience function supplied to query an instance (master or
+// slave) for its role. It attempts to use the ROLE command introduced in
+// redis 2.8.12.
+func getRole(c redis.Conn) (string, error) {
+ res, err := c.Do("ROLE")
+ if err != nil {
+ return "", err
+ }
+ rres, ok := res.([]interface{})
+ if ok {
+ return redis.String(rres[0], nil)
+ }
+ return "", errors.New("redigo: can not transform ROLE reply to string")
+}
+
+func queryForMaster(conn redis.Conn, masterName string) (string, error) {
+ res, err := redis.Strings(conn.Do("SENTINEL", "get-master-addr-by-name", masterName))
+ if err != nil {
+ return "", err
+ }
+ if len(res) < 2 {
+ return "", errors.New("redigo: malformed get-master-addr-by-name reply")
+ }
+ masterAddr := net.JoinHostPort(res[0], res[1])
+ return masterAddr, nil
+}
+
+func queryForSlaveAddrs(conn redis.Conn, masterName string) ([]string, error) {
+ slaves, err := queryForSlaves(conn, masterName)
+ if err != nil {
+ return nil, err
+ }
+ slaveAddrs := make([]string, 0)
+ for _, slave := range slaves {
+ slaveAddrs = append(slaveAddrs, slave.Addr())
+ }
+ return slaveAddrs, nil
+}
+
+func queryForSlaves(conn redis.Conn, masterName string) ([]*Slave, error) {
+ res, err := redis.Values(conn.Do("SENTINEL", "slaves", masterName))
+ if err != nil {
+ return nil, err
+ }
+ slaves := make([]*Slave, 0)
+ for _, a := range res {
+ sm, err := redis.StringMap(a, err)
+ if err != nil {
+ return slaves, err
+ }
+ slave := &Slave{
+ ip: sm["ip"],
+ port: sm["port"],
+ flags: sm["flags"],
+ }
+ slaves = append(slaves, slave)
+ }
+ return slaves, nil
+}
+
+func queryForSentinels(conn redis.Conn, masterName string) ([]string, error) {
+ res, err := redis.Values(conn.Do("SENTINEL", "sentinels", masterName))
+ if err != nil {
+ return nil, err
+ }
+ sentinels := make([]string, 0)
+ for _, a := range res {
+ sm, err := redis.StringMap(a, err)
+ if err != nil {
+ return sentinels, err
+ }
+ sentinels = append(sentinels, fmt.Sprintf("%s:%s", sm["ip"], sm["port"]))
+ }
+ return sentinels, nil
+}
+
+func stringInSlice(str string, slice []string) bool {
+ for _, s := range slice {
+ if s == str {
+ return true
+ }
+ }
+ return false
+}

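For context on how the patched registry is meant to be configured, the tests the patch adds are the best reference: `Redis.Addr` carries the comma-separated sentinel endpoints and `Redis.SentinelMasterSet` names the monitored master. The sketch below condenses `TestNewAppWithRedisSentinelCluster`; the storage, auth, and HTTP sections of a real `config.yml` are elided, so treat this as an illustration rather than a complete registry setup:

```go
package main

import (
	"context"

	"github.com/docker/distribution/configuration"
	"github.com/docker/distribution/registry/handlers"
)

func main() {
	// Condensed from the patch's TestNewAppWithRedisSentinelCluster.
	config := &configuration.Configuration{}
	config.Redis.Addr = "192.168.0.11:26379,192.168.0.12:26379" // sentinel endpoints
	config.Redis.DB = 0
	config.Redis.SentinelMasterSet = "mymaster" // master the sentinels monitor

	// With SentinelMasterSet non-empty, the patched configureRedis dials
	// the sentinels to discover the current master address and checks the
	// role of pooled connections on borrow.
	_ = handlers.NewApp(context.Background(), config)
}
```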
View File

@ -1,5 +1,4 @@
ARG golang_image
FROM ${golang_image}
FROM golang:1.23.8
ADD . /go/src/github.com/goharbor/harbor-scanner-trivy/
WORKDIR /go/src/github.com/goharbor/harbor-scanner-trivy/

View File

@ -8,8 +8,6 @@ if [ -z $1 ]; then
fi
VERSION="$1"
GOBUILDIMAGE="$2"
DOCKERNETWORK="$3"
set -e
@ -21,9 +19,9 @@ TEMP=$(mktemp -d ${TMPDIR-/tmp}/trivy-adapter.XXXXXX)
git clone https://github.com/goharbor/harbor-scanner-trivy.git $TEMP
cd $TEMP; git checkout $VERSION; cd -
echo "Building Trivy adapter binary ..."
echo "Building Trivy adapter binary based on golang:1.23.8..."
cp Dockerfile.binary $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
docker build -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
echo "Copying Trivy adapter binary from the container to the local directory..."
ID=$(docker create trivy-adapter-golang)

View File

@ -1,56 +1,76 @@
version: "2"
linters-settings:
gofmt:
# Simplify code: gofmt with `-s` option.
# Default: true
simplify: false
misspell:
locale: US,UK
goimports:
local-prefixes: github.com/goharbor/harbor
stylecheck:
checks: [
"ST1019", # Importing the same package multiple times.
]
goheader:
template-path: copyright.tmpl
linters:
default: none
enable:
- bodyclose
- errcheck
- goheader
- govet
- ineffassign
- misspell
- revive
- staticcheck
- whitespace
settings:
goheader:
template-path: copyright.tmpl
misspell:
locale: US,UK
staticcheck:
checks:
- ST1019
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go
formatters:
disable-all: true
enable:
- gofmt
- goheader
- misspell
- typecheck
# - dogsled
# - dupl
# - depguard
# - funlen
# - goconst
# - gocritic
# - gocyclo
# - goimports
# - goprintffuncname
- ineffassign
# - nakedret
# - nolintlint
- revive
- whitespace
- bodyclose
- errcheck
# - gosec
- gosimple
- goimports
settings:
gofmt:
simplify: false
goimports:
local-prefixes:
- github.com/goharbor/harbor
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go
- govet
# - noctx
# - rowserrcheck
- staticcheck
- stylecheck
# - unconvert
# - unparam
# - unused // disabled due to too many false positive check and limited support golang 1.19 https://github.com/dominikh/go-tools/issues/1282
run:
skip-files:
- ".*_test.go"
- ".*test.go"
skip-dirs:
- "testing"
timeout: 20m
issue:
max-same-issues: 0
max-per-linter: 0
issues:
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- goimports
- path: src/testing/*.go
linters:
- goimports
- path: src/jobservice/mgt/mock_manager.go
linters:
- goimports

View File

@ -462,16 +462,6 @@ packages:
DAO:
config:
dir: testing/pkg/audit/dao
github.com/goharbor/harbor/src/pkg/auditext:
interfaces:
Manager:
config:
dir: testing/pkg/auditext
github.com/goharbor/harbor/src/pkg/auditext/dao:
interfaces:
DAO:
config:
dir: testing/pkg/auditext/dao
github.com/goharbor/harbor/src/pkg/systemartifact:
interfaces:
Manager:

View File

@ -78,7 +78,7 @@ func (b *BaseAPI) RenderError(code int, text string) {
}
// DecodeJSONReq decodes a json request
func (b *BaseAPI) DecodeJSONReq(v any) error {
func (b *BaseAPI) DecodeJSONReq(v interface{}) error {
err := json.Unmarshal(b.Ctx.Input.CopyBody(1<<35), v)
if err != nil {
log.Errorf("Error while decoding the json request, error: %v, %v",
@ -89,7 +89,7 @@ func (b *BaseAPI) DecodeJSONReq(v any) error {
}
// Validate validates v if it implements interface validation.ValidFormer
func (b *BaseAPI) Validate(v any) (bool, error) {
func (b *BaseAPI) Validate(v interface{}) (bool, error) {
validator := validation.Validation{}
isValid, err := validator.Valid(v)
if err != nil {
@ -108,7 +108,7 @@ func (b *BaseAPI) Validate(v any) (bool, error) {
}
// DecodeJSONReqAndValidate does both decoding and validation
func (b *BaseAPI) DecodeJSONReqAndValidate(v any) (bool, error) {
func (b *BaseAPI) DecodeJSONReqAndValidate(v interface{}) (bool, error) {
if err := b.DecodeJSONReq(v); err != nil {
return false, err
}

View File

@ -119,7 +119,6 @@ const (
OIDCExtraRedirectParms = "oidc_extra_redirect_parms"
OIDCScope = "oidc_scope"
OIDCUserClaim = "oidc_user_claim"
OIDCLogout = "oidc_logout"
CfgDriverDB = "db"
NewHarborAdminName = "admin@harbor.local"
@ -152,7 +151,6 @@ const (
OIDCCallbackPath = "/c/oidc/callback"
OIDCLoginPath = "/c/oidc/login"
OIDCLoginoutPath = "/c/oidc/logout"
AuthProxyRedirectPath = "/c/authproxy/redirect"
@ -210,7 +208,7 @@ const (
// 24h.
DefaultCacheExpireHours = 24
PurgeAuditIncludeEventTypes = "include_event_types"
PurgeAuditIncludeOperations = "include_operations"
PurgeAuditDryRun = "dry_run"
PurgeAuditRetentionHour = "audit_retention_hour"
// AuditLogForwardEndpoint indicate to forward the audit log to an endpoint
@ -222,9 +220,6 @@ const (
// ScannerSkipUpdatePullTime
ScannerSkipUpdatePullTime = "scanner_skip_update_pulltime"
// AuditLogEventsDisabled ...
AuditLogEventsDisabled = "disabled_audit_log_event_types"
// SessionTimeout defines the web session timeout
SessionTimeout = "session_timeout"
@ -252,7 +247,4 @@ const (
// Global Leeway used for token validation
JwtLeeway = 60 * time.Second
// The replication adapter whitelist
ReplicationAdapterWhiteList = "REPLICATION_ADAPTER_WHITELIST"
)

View File

@ -144,6 +144,6 @@ func (l *mLogger) Verbose() bool {
}
// Printf ...
func (l *mLogger) Printf(format string, v ...any) {
func (l *mLogger) Printf(format string, v ...interface{}) {
l.logger.Infof(format, v...)
}

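Several hunks in this compare, including the one above, swap Go 1.18's `any` back to `interface{}`. The two spellings name the same type — `any` is a predeclared alias — so these reverts change no behavior; they only keep the code building on pre-1.18 toolchains. A self-contained illustration (a sketch, not Harbor code):

```go
package main

import "fmt"

// Since Go 1.18 the universe block declares:
//
//	type any = interface{}
//
// so values and signatures using either spelling are interchangeable.
func printAll(vs ...interface{}) {
	for _, v := range vs {
		fmt.Println(v)
	}
}

func main() {
	var x any = "hello"
	printAll(x, 42) // an `any` value flows into ...interface{} directly
}
```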
View File

@ -29,7 +29,7 @@ import (
var testCtx context.Context
func execUpdate(o orm.TxOrmer, sql string, params ...any) error {
func execUpdate(o orm.TxOrmer, sql string, params ...interface{}) error {
p, err := o.Raw(sql).Prepare()
if err != nil {
return err

View File

@ -27,7 +27,7 @@ func TestMaxOpenConns(t *testing.T) {
queryNum := 200
results := make([]bool, queryNum)
for i := range queryNum {
for i := 0; i < queryNum; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()

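The hunk above replaces a Go 1.22 range-over-int loop with the classic three-clause form (the ArrayEqual hunk below receives the same treatment). The two loops are equivalent — `for i := range n` steps i through 0..n-1 — so the rewrite is purely about toolchain compatibility. A minimal demonstration:

```go
package main

import "fmt"

func main() {
	const queryNum = 3

	// Go 1.22+: ranging over an integer yields 0 .. queryNum-1.
	for i := range queryNum {
		fmt.Println("range form:", i)
	}

	// Equivalent classic form, valid on any Go version.
	for i := 0; i < queryNum; i++ {
		fmt.Println("classic form:", i)
	}
}
```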
View File

@ -142,7 +142,7 @@ func ArrayEqual(arrayA, arrayB []int) bool {
return false
}
size := len(arrayA)
for i := range size {
for i := 0; i < size; i++ {
if arrayA[i] != arrayB[i] {
return false
}

View File

@ -69,7 +69,7 @@ func (c *Client) Do(req *http.Request) (*http.Response, error) {
}
// Get ...
func (c *Client) Get(url string, v ...any) error {
func (c *Client) Get(url string, v ...interface{}) error {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return err
@ -98,7 +98,7 @@ func (c *Client) Head(url string) error {
}
// Post ...
func (c *Client) Post(url string, v ...any) error {
func (c *Client) Post(url string, v ...interface{}) error {
var reader io.Reader
if len(v) > 0 {
if r, ok := v[0].(io.Reader); ok {
@ -123,7 +123,7 @@ func (c *Client) Post(url string, v ...any) error {
}
// Put ...
func (c *Client) Put(url string, v ...any) error {
func (c *Client) Put(url string, v ...interface{}) error {
var reader io.Reader
if len(v) > 0 {
data, err := json.Marshal(v[0])
@ -176,7 +176,7 @@ func (c *Client) do(req *http.Request) ([]byte, error) {
// GetAndIteratePagination iterates the pagination header and returns all resources
// The parameter "v" must be a pointer to a slice
func (c *Client) GetAndIteratePagination(endpoint string, v any) error {
func (c *Client) GetAndIteratePagination(endpoint string, v interface{}) error {
url, err := url.Parse(endpoint)
if err != nil {
return err

View File

@ -15,7 +15,7 @@
package models
// Parameters for job execution.
type Parameters map[string]any
type Parameters map[string]interface{}
// JobRequest is the request of launching a job.
type JobRequest struct {
@ -96,5 +96,5 @@ type JobStatusChange struct {
// Message is designed for sub/pub messages
type Message struct {
Event string
Data any // generic format
Data interface{} // generic format
}

View File

@ -23,7 +23,7 @@ type OIDCUser struct {
ID int64 `orm:"pk;auto;column(id)" json:"id"`
UserID int `orm:"column(user_id)" json:"user_id"`
// encrypted secret
Secret string `orm:"column(secret)" filter:"false" json:"-"`
Secret string `orm:"column(secret)" json:"-"`
// secret in plain text
PlainSecret string `orm:"-" json:"secret"`
SubIss string `orm:"column(subiss)" json:"subiss"`

View File

@ -119,7 +119,7 @@ func BenchmarkProjectEvaluator(b *testing.B) {
resource := NewNamespace(public.ProjectID).Resource(rbac.ResourceRepository)
b.ResetTimer()
for b.Loop() {
for i := 0; i < b.N; i++ {
evaluator.HasPermission(context.TODO(), resource, rbac.ActionPull)
}
}

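The benchmark revert above follows the same pattern: `testing.B.Loop` was introduced in Go 1.24 as a successor to the `b.N` idiom and manages the iteration count itself, so restoring the counted loop keeps the benchmark building on earlier Go releases. Both shapes below measure the same body (a sketch, not Harbor's benchmark):

```go
package bench

import "testing"

func work() {}

// Go 1.24+ style: b.Loop reports whether the benchmark should keep
// iterating and handles the iteration accounting internally.
func BenchmarkLoopStyle(b *testing.B) {
	for b.Loop() {
		work()
	}
}

// Classic style, as restored by the hunk above: run the body b.N times.
func BenchmarkClassicStyle(b *testing.B) {
	for i := 0; i < b.N; i++ {
		work()
	}
}
```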
View File

@ -43,7 +43,7 @@ func (ns *projectNamespace) Resource(subresources ...types.Resource) types.Resou
return types.Resource(fmt.Sprintf("/project/%d", ns.projectID)).Subresource(subresources...)
}
func (ns *projectNamespace) Identity() any {
func (ns *projectNamespace) Identity() interface{} {
return ns.projectID
}

View File

@ -127,6 +127,8 @@ var (
{Resource: rbac.ResourceMetadata, Action: rbac.ActionRead},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceQuota, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionCreate},
@ -162,7 +164,6 @@ var (
{Resource: rbac.ResourceRobot, Action: rbac.ActionRead},
{Resource: rbac.ResourceRobot, Action: rbac.ActionList},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionRead},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionList},
{Resource: rbac.ResourceScan, Action: rbac.ActionCreate},
@ -198,6 +199,8 @@ var (
{Resource: rbac.ResourceMember, Action: rbac.ActionRead},
{Resource: rbac.ResourceMember, Action: rbac.ActionList},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceLabel, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionList},
@ -251,6 +254,8 @@ var (
{Resource: rbac.ResourceMember, Action: rbac.ActionRead},
{Resource: rbac.ResourceMember, Action: rbac.ActionList},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceLabel, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionList},

View File

@ -38,7 +38,7 @@ func (ns *systemNamespace) Resource(subresources ...types.Resource) types.Resour
return types.Resource("/system/").Subresource(subresources...)
}
func (ns *systemNamespace) Identity() any {
func (ns *systemNamespace) Identity() interface{} {
return nil
}

View File

@ -63,7 +63,7 @@ func (t *tokenSecurityCtx) GetMyProjects() ([]*models.Project, error) {
return []*models.Project{}, nil
}
func (t *tokenSecurityCtx) GetProjectRoles(_ any) []int {
func (t *tokenSecurityCtx) GetProjectRoles(_ interface{}) []int {
return []int{}
}

View File

@ -18,7 +18,7 @@ import (
"github.com/goharbor/harbor/src/common"
)
var defaultConfig = map[string]any{
var defaultConfig = map[string]interface{}{
common.ExtEndpoint: "https://host01.com",
common.AUTHMode: common.DBAuth,
common.DatabaseType: "postgresql",
@ -66,6 +66,6 @@ var defaultConfig = map[string]any{
}
// GetDefaultConfigMap returns the default config map for easier modification.
func GetDefaultConfigMap() map[string]any {
func GetDefaultConfigMap() map[string]interface{} {
return defaultConfig
}
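
Because `GetDefaultConfigMap` hands back the shared map itself rather than a copy, tests mutate it in place; a sketch, assuming a key from the block above:

```go
// Mutating the returned map changes the shared defaults for subsequent
// callers, which is the intended test-setup pattern here.
cfg := GetDefaultConfigMap()
cfg[common.AUTHMode] = "ldap_auth" // illustrative override of the db_auth default
```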

View File

@ -30,7 +30,7 @@ type GCResult struct {
}
// NewRegistryCtl returns a mock registry server
func NewRegistryCtl(_ map[string]any) (*httptest.Server, error) {
func NewRegistryCtl(_ map[string]interface{}) (*httptest.Server, error) {
m := []*RequestHandlerMapping{}
gcr := GCResult{true, "hello-world", time.Now(), time.Now()}

View File

@ -94,9 +94,9 @@ func NewServer(mappings ...*RequestHandlerMapping) *httptest.Server {
}
// GetUnitTestConfig ...
func GetUnitTestConfig() map[string]any {
func GetUnitTestConfig() map[string]interface{} {
ipAddress := os.Getenv("IP")
return map[string]any{
return map[string]interface{}{
common.ExtEndpoint: fmt.Sprintf("https://%s", ipAddress),
common.AUTHMode: "db_auth",
common.DatabaseType: "postgresql",
@ -130,7 +130,7 @@ func GetUnitTestConfig() map[string]any {
}
// TraceCfgMap ...
func TraceCfgMap(cfgs map[string]any) {
func TraceCfgMap(cfgs map[string]interface{}) {
var keys []string
for k := range cfgs {
keys = append(keys, k)

View File

@ -89,7 +89,7 @@ type SearchUserEntry struct {
ExtID string `json:"externalId"`
UserName string `json:"userName"`
Emails []SearchUserEmailEntry `json:"emails"`
Groups []any
Groups []interface{}
}
// SearchUserRes is the struct to parse the result of search user API of UAA

View File

@ -75,7 +75,7 @@ func GenerateRandomStringWithLen(length int) string {
if err != nil {
log.Warningf("Error reading random bytes: %v", err)
}
for i := range length {
for i := 0; i < length; i++ {
result[i] = chars[int(result[i])%l]
}
return string(result)
@ -140,7 +140,7 @@ func ParseTimeStamp(timestamp string) (*time.Time, error) {
}
// ConvertMapToStruct is used to fill the specified struct with the values from the map.
func ConvertMapToStruct(object any, values any) error {
func ConvertMapToStruct(object interface{}, values interface{}) error {
if object == nil {
return errors.New("nil struct is not supported")
}
@ -158,7 +158,7 @@ func ConvertMapToStruct(object any, values any) error {
}
// ParseProjectIDOrName parses value to ID(int64) or name(string)
func ParseProjectIDOrName(value any) (int64, string, error) {
func ParseProjectIDOrName(value interface{}) (int64, string, error) {
if value == nil {
return 0, "", errors.New("harborIDOrName is nil")
}
@ -177,7 +177,7 @@ func ParseProjectIDOrName(value any) (int64, string, error) {
}
// SafeCastString -- cast an object to string safely
func SafeCastString(value any) string {
func SafeCastString(value interface{}) string {
if result, ok := value.(string); ok {
return result
}
@ -185,7 +185,7 @@ func SafeCastString(value any) string {
}
// SafeCastInt --
func SafeCastInt(value any) int {
func SafeCastInt(value interface{}) int {
if result, ok := value.(int); ok {
return result
}
@ -193,7 +193,7 @@ func SafeCastInt(value any) int {
}
// SafeCastBool --
func SafeCastBool(value any) bool {
func SafeCastBool(value interface{}) bool {
if result, ok := value.(bool); ok {
return result
}
@ -201,7 +201,7 @@ func SafeCastBool(value any) bool {
}
// SafeCastFloat64 --
func SafeCastFloat64(value any) float64 {
func SafeCastFloat64(value interface{}) float64 {
if result, ok := value.(float64); ok {
return result
}
@ -214,9 +214,9 @@ func TrimLower(str string) string {
}
// GetStrValueOfAnyType returns the string format of any value; maps are converted to JSON
func GetStrValueOfAnyType(value any) string {
func GetStrValueOfAnyType(value interface{}) string {
var strVal string
if _, ok := value.(map[string]any); ok {
if _, ok := value.(map[string]interface{}); ok {
b, err := json.Marshal(value)
if err != nil {
log.Errorf("can not marshal json object, error %v", err)
@ -237,18 +237,18 @@ func GetStrValueOfAnyType(value any) string {
}
// IsIllegalLength ...
func IsIllegalLength(s string, minVal int, maxVal int) bool {
if minVal == -1 {
return (len(s) > maxVal)
func IsIllegalLength(s string, min int, max int) bool {
if min == -1 {
return (len(s) > max)
}
if maxVal == -1 {
return (len(s) <= minVal)
if max == -1 {
return (len(s) <= min)
}
return (len(s) < minVal || len(s) > maxVal)
return (len(s) < min || len(s) > max)
}
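
The only change in this hunk is the parameter names: `min` and `max` became predeclared builtins in Go 1.21, and naming parameters after them shadows the builtins inside the function body. A sketch of the hazard:

```go
// With parameters named min/max, the Go 1.21 builtins of the same name
// would be shadowed and unusable in the body; minVal/maxVal avoid that.
func clamp(v, minVal, maxVal int) int {
	return max(minVal, min(v, maxVal)) // builtins still reachable
}
```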
// ParseJSONInt ...
func ParseJSONInt(value any) (int, bool) {
func ParseJSONInt(value interface{}) (int, bool) {
switch v := value.(type) {
case float64:
return int(v), true
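
The `float64` case comes first because `encoding/json` decodes untyped JSON numbers into `float64`; a small standalone demonstration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v map[string]interface{}
	_ = json.Unmarshal([]byte(`{"n": 42}`), &v)
	fmt.Printf("%T\n", v["n"]) // float64, not int
}
```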

View File

@ -216,7 +216,7 @@ type testingStruct struct {
}
func TestConvertMapToStruct(t *testing.T) {
dataMap := make(map[string]any)
dataMap := make(map[string]interface{})
dataMap["Name"] = "testing"
dataMap["Count"] = 100
@ -232,7 +232,7 @@ func TestConvertMapToStruct(t *testing.T) {
func TestSafeCastString(t *testing.T) {
type args struct {
value any
value interface{}
}
tests := []struct {
name string
@ -254,7 +254,7 @@ func TestSafeCastString(t *testing.T) {
func TestSafeCastBool(t *testing.T) {
type args struct {
value any
value interface{}
}
tests := []struct {
name string
@ -276,7 +276,7 @@ func TestSafeCastBool(t *testing.T) {
func TestSafeCastInt(t *testing.T) {
type args struct {
value any
value interface{}
}
tests := []struct {
name string
@ -298,7 +298,7 @@ func TestSafeCastInt(t *testing.T) {
func TestSafeCastFloat64(t *testing.T) {
type args struct {
value any
value interface{}
}
tests := []struct {
name string
@ -342,7 +342,7 @@ func TestTrimLower(t *testing.T) {
func TestGetStrValueOfAnyType(t *testing.T) {
type args struct {
value any
value interface{}
}
tests := []struct {
name string
@ -357,7 +357,7 @@ func TestGetStrValueOfAnyType(t *testing.T) {
{"string", args{"hello world"}, "hello world"},
{"bool", args{true}, "true"},
{"bool", args{false}, "false"},
{"map", args{map[string]any{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
{"map", args{map[string]interface{}{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@ -83,7 +83,7 @@ func (a *abstractor) AbstractMetadata(ctx context.Context, artifact *artifact.Ar
default:
return fmt.Errorf("unsupported manifest media type: %s", artifact.ManifestMediaType)
}
return processor.Get(artifact.ResolveArtifactType()).AbstractMetadata(ctx, artifact, content)
return processor.Get(artifact.MediaType).AbstractMetadata(ctx, artifact, content)
}
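
One side looks the processor up by the resolved artifact type, the other by the raw manifest media type; the registry pattern underneath is the same either way. A minimal sketch of that pattern (names are illustrative, not Harbor's exact API):

```go
// Illustrative registry: processors self-register under a key, and Get
// falls back to a default processor when nothing matches.
type Processor interface{ Name() string }

var (
	registry         = map[string]Processor{}
	defaultProcessor Processor
)

func Register(key string, p Processor) { registry[key] = p }

func Get(key string) Processor {
	if p, ok := registry[key]; ok {
		return p
	}
	return defaultProcessor
}
```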
// the artifact is enveloped by docker manifest v1

View File

@ -66,7 +66,8 @@ func parseV1alpha1SkipList(artifact *artifact.Artifact, manifest *v1.Manifest) {
skipListAnnotationKey := fmt.Sprintf("%s.%s.%s", AnnotationPrefix, V1alpha1, SkipList)
skipList, ok := manifest.Config.Annotations[skipListAnnotationKey]
if ok {
for skipKey := range strings.SplitSeq(skipList, ",") {
skipKeyList := strings.Split(skipList, ",")
for _, skipKey := range skipKeyList {
delete(metadata, skipKey)
}
artifact.ExtraAttrs = metadata
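
`strings.SplitSeq` (Go 1.24) returns an iterator and skips allocating the intermediate slice that `strings.Split` builds; the release branch keeps the portable slice form. Both loops below visit the same keys:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	const skipList = "created,modified,internal"
	for key := range strings.SplitSeq(skipList, ",") { // Go 1.24+
		fmt.Println("seq:", key)
	}
	for _, key := range strings.Split(skipList, ",") { // any version
		fmt.Println("slice:", key)
	}
}
```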

View File

@ -231,7 +231,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err := manifest.Payload()
p.Require().Nil(err)
metadata := map[string]any{}
metadata := map[string]interface{}{}
configBlob := io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -244,7 +244,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutSkipList
@ -255,7 +255,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]any{}
metadata = map[string]interface{}{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -268,7 +268,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 13)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutIcon
@ -279,7 +279,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]any{}
metadata = map[string]interface{}{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -290,7 +290,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("", art.Icon)
}

View File

@ -25,16 +25,13 @@ import (
"github.com/opencontainers/go-digest"
"github.com/goharbor/harbor/src/common/utils"
"github.com/goharbor/harbor/src/controller/artifact/processor"
"github.com/goharbor/harbor/src/controller/artifact/processor/chart"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnab"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnai"
"github.com/goharbor/harbor/src/controller/artifact/processor/image"
"github.com/goharbor/harbor/src/controller/artifact/processor/sbom"
"github.com/goharbor/harbor/src/controller/artifact/processor/wasm"
"github.com/goharbor/harbor/src/controller/event/metadata"
"github.com/goharbor/harbor/src/controller/project"
"github.com/goharbor/harbor/src/controller/tag"
"github.com/goharbor/harbor/src/lib"
"github.com/goharbor/harbor/src/lib/errors"
@ -47,7 +44,7 @@ import (
accessorymodel "github.com/goharbor/harbor/src/pkg/accessory/model"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/artifactrash"
trashmodel "github.com/goharbor/harbor/src/pkg/artifactrash/model"
"github.com/goharbor/harbor/src/pkg/artifactrash/model"
"github.com/goharbor/harbor/src/pkg/blob"
"github.com/goharbor/harbor/src/pkg/immutable/match"
"github.com/goharbor/harbor/src/pkg/immutable/match/rule"
@ -81,7 +78,6 @@ var (
cnab.ArtifactTypeCNAB: icon.DigestOfIconCNAB,
wasm.ArtifactTypeWASM: icon.DigestOfIconWASM,
sbom.ArtifactTypeSBOM: icon.DigestOfIconAccSBOM,
cnai.ArtifactTypeCNAI: icon.DigestOfIconCNAI,
}
)
@ -139,7 +135,6 @@ func NewController() Controller {
regCli: registry.Cli,
abstractor: NewAbstractor(),
accessoryMgr: accessory.Mgr,
proCtl: project.Ctl,
}
}
@ -154,7 +149,6 @@ type controller struct {
regCli registry.Client
abstractor Abstractor
accessoryMgr accessory.Manager
proCtl project.Controller
}
type ArtOption struct {
@ -179,18 +173,6 @@ func (c *controller) Ensure(ctx context.Context, repository, digest string, opti
}
}
}
projectName, _ := utils.ParseRepository(repository)
p, err := c.proCtl.GetByName(ctx, projectName)
if err != nil {
return false, 0, err
}
// Skip firing the event only when the current project is a proxy-cache project and the artifact already exists.
if p.IsProxy() && !created {
return created, artifact.ID, nil
}
// fire event for create
e := &metadata.PushArtifactEventMetadata{
Ctx: ctx,
@ -235,7 +217,7 @@ func (c *controller) ensureArtifact(ctx context.Context, repository, digest stri
}
// populate the artifact type
artifact.Type = processor.Get(artifact.ResolveArtifactType()).GetArtifactType(ctx, artifact)
artifact.Type = processor.Get(artifact.MediaType).GetArtifactType(ctx, artifact)
// create it
// use orm.WithTransaction here to avoid the issue:
@ -313,7 +295,7 @@ func (c *controller) getByTag(ctx context.Context, repository, tag string, optio
return nil, err
}
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]any{
Keywords: map[string]interface{}{
"RepositoryID": repo.RepositoryID,
"Name": tag,
},
@ -342,7 +324,7 @@ func (c *controller) Delete(ctx context.Context, id int64) error {
// the error handling logic for the root parent artifact and others is different
// "isAccessory" is used to specify whether the artifact is an accessory.
func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAccessory bool) error {
art, err := c.Get(ctx, id, &Option{WithTag: true, WithAccessory: true, WithLabel: true})
art, err := c.Get(ctx, id, &Option{WithTag: true, WithAccessory: true})
if err != nil {
// return nil if the nonexistent artifact isn't the root parent
if !isRoot && errors.IsErr(err, errors.NotFoundCode) {
@ -356,7 +338,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
return nil
}
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]any{
Keywords: map[string]interface{}{
"ChildID": id,
},
})
@ -385,7 +367,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
if acc.IsHard() {
// if this acc artifact has parent(is child), set isRoot to false
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]any{
Keywords: map[string]interface{}{
"ChildID": acc.GetData().ArtifactID,
},
})
@ -453,7 +435,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
// use orm.WithTransaction here to avoid the issue:
// https://www.postgresql.org/message-id/002e01c04da9%24a8f95c20%2425efe6c1%40lasting.ro
if err = orm.WithTransaction(func(ctx context.Context) error {
_, err = c.artrashMgr.Create(ctx, &trashmodel.ArtifactTrash{
_, err = c.artrashMgr.Create(ctx, &model.ArtifactTrash{
MediaType: art.MediaType,
ManifestMediaType: art.ManifestMediaType,
RepositoryName: art.RepositoryName,
@ -466,20 +448,14 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
// only fire event for the root parent artifact
if isRoot {
var tags, labels []string
var tags []string
for _, tag := range art.Tags {
tags = append(tags, tag.Name)
}
for _, label := range art.Labels {
labels = append(labels, label.Name)
}
notification.AddEvent(ctx, &metadata.DeleteArtifactEventMetadata{
Ctx: ctx,
Artifact: &art.Artifact,
Tags: tags,
Labels: labels,
})
}
@ -615,7 +591,7 @@ func (c *controller) GetAddition(ctx context.Context, artifactID int64, addition
if err != nil {
return nil, err
}
return processor.Get(artifact.ResolveArtifactType()).AbstractAddition(ctx, artifact, addition)
return processor.Get(artifact.MediaType).AbstractAddition(ctx, artifact, addition)
}
func (c *controller) AddLabel(ctx context.Context, artifactID int64, labelID int64) (err error) {
@ -752,7 +728,7 @@ func (c *controller) populateIcon(art *Artifact) {
func (c *controller) populateTags(ctx context.Context, art *Artifact, option *tag.Option) {
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]any{
Keywords: map[string]interface{}{
"artifact_id": art.ID,
},
}, option)
@ -773,7 +749,7 @@ func (c *controller) populateLabels(ctx context.Context, art *Artifact) {
}
func (c *controller) populateAdditionLinks(ctx context.Context, artifact *Artifact) {
types := processor.Get(artifact.ResolveArtifactType()).ListAdditionTypes(ctx, &artifact.Artifact)
types := processor.Get(artifact.MediaType).ListAdditionTypes(ctx, &artifact.Artifact)
if len(types) > 0 {
version := lib.GetAPIVersion(ctx)
for _, t := range types {

View File

@ -37,10 +37,8 @@ import (
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/blob/models"
"github.com/goharbor/harbor/src/pkg/label/model"
projectModel "github.com/goharbor/harbor/src/pkg/project/models"
repomodel "github.com/goharbor/harbor/src/pkg/repository/model"
model_tag "github.com/goharbor/harbor/src/pkg/tag/model/tag"
projecttesting "github.com/goharbor/harbor/src/testing/controller/project"
tagtesting "github.com/goharbor/harbor/src/testing/controller/tag"
ormtesting "github.com/goharbor/harbor/src/testing/lib/orm"
accessorytesting "github.com/goharbor/harbor/src/testing/pkg/accessory"
@ -77,7 +75,6 @@ type controllerTestSuite struct {
immutableMtr *immutable.FakeMatcher
regCli *registry.Client
accMgr *accessorytesting.Manager
proCtl *projecttesting.Controller
}
func (c *controllerTestSuite) SetupTest() {
@ -91,7 +88,6 @@ func (c *controllerTestSuite) SetupTest() {
c.immutableMtr = &immutable.FakeMatcher{}
c.accMgr = &accessorytesting.Manager{}
c.regCli = &registry.Client{}
c.proCtl = &projecttesting.Controller{}
c.ctl = &controller{
repoMgr: c.repoMgr,
artMgr: c.artMgr,
@ -103,7 +99,6 @@ func (c *controllerTestSuite) SetupTest() {
immutableMtr: c.immutableMtr,
regCli: c.regCli,
accessoryMgr: c.accMgr,
proCtl: c.proCtl,
}
}
@ -272,7 +267,6 @@ func (c *controllerTestSuite) TestEnsure() {
c.abstractor.On("AbstractMetadata").Return(nil)
c.tagCtl.On("Ensure").Return(int64(1), nil)
c.accMgr.On("Ensure").Return(nil)
c.proCtl.On("GetByName", mock.Anything, mock.Anything).Return(&projectModel.Project{ProjectID: 1, Name: "library", RegistryID: 0}, nil)
_, id, err := c.ctl.Ensure(orm.NewContext(nil, &ormtesting.FakeOrmer{}), "library/hello-world", digest, &ArtOption{
Tags: []string{"latest"},
})
@ -493,7 +487,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
// root artifact and doesn't exist
c.artMgr.On("Get", mock.Anything, mock.Anything).Return(nil, errors.NotFoundError(nil))
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err := c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, false)
c.Require().NotNil(err)
c.Assert().True(errors.IsErr(err, errors.NotFoundCode))
@ -504,7 +497,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
// child artifact and doesn't exist
c.artMgr.On("Get", mock.Anything, mock.Anything).Return(nil, errors.NotFoundError(nil))
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, false, false)
c.Require().Nil(err)
@ -524,7 +516,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
c.repoMgr.On("Get", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{}, nil)
c.artrashMgr.On("Create", mock.Anything, mock.Anything).Return(int64(0), nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, false, false)
c.Require().Nil(err)
@ -541,7 +532,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
},
}, nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, false)
c.Require().NotNil(err)
@ -558,7 +548,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
},
}, nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(nil, 1, false, false)
c.Require().Nil(err)
@ -584,7 +573,6 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
c.blobMgr.On("CleanupAssociationsForProject", mock.Anything, mock.Anything, mock.Anything).Return(nil)
c.repoMgr.On("Get", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{}, nil)
c.artrashMgr.On("Create", mock.Anything, mock.Anything).Return(int64(0), nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, true)
c.Require().Nil(err)
@ -595,7 +583,6 @@ func (c *controllerTestSuite) TestCopy() {
ID: 1,
Digest: "sha256:418fb88ec412e340cdbef913b8ca1bbe8f9e8dc705f9617414c1f2c8db980180",
}, nil)
c.proCtl.On("GetByName", mock.Anything, mock.Anything).Return(&projectModel.Project{ProjectID: 1, Name: "library", RegistryID: 0}, nil)
c.repoMgr.On("GetByName", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{
RepositoryID: 1,
Name: "library/hello-world",

View File

@ -56,7 +56,7 @@ func (suite *IteratorTestSuite) TeardownSuite() {
func (suite *IteratorTestSuite) TestIterator() {
suite.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]any{}}
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]interface{}{}}
suite.artMgr.On("List", mock.Anything, q1).Return([]*artifact.Artifact{
{ID: 1},
{ID: 2},
@ -65,7 +65,7 @@ func (suite *IteratorTestSuite) TestIterator() {
{ID: 5},
}, nil)
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]any{}}
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]interface{}{}}
suite.artMgr.On("List", mock.Anything, q2).Return([]*artifact.Artifact{
{ID: 6},
{ID: 7},

View File

@ -40,7 +40,7 @@ func (artifact *Artifact) UnmarshalJSON(data []byte) error {
type Alias Artifact
ali := &struct {
*Alias
AccessoryItems []any `json:"accessories,omitempty"`
AccessoryItems []interface{} `json:"accessories,omitempty"`
}{
Alias: (*Alias)(artifact),
}
@ -94,16 +94,6 @@ func (artifact *Artifact) SetSBOMAdditionLink(sbomDgst string, version string) {
artifact.AdditionLinks[addition] = &AdditionLink{HREF: href, Absolute: false}
}
// AbstractLabelNames abstracts the label names from the artifact.
func (artifact *Artifact) AbstractLabelNames() []string {
var names []string
for _, label := range artifact.Labels {
names = append(names, label.Name)
}
return names
}
// AdditionLink is a link via which the addition can be fetched
type AdditionLink struct {
HREF string `json:"href"`

View File

@ -7,7 +7,6 @@ import (
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/accessory/model/cosign"
"github.com/goharbor/harbor/src/pkg/label/model"
)
func TestUnmarshalJSONWithACC(t *testing.T) {
@ -105,58 +104,3 @@ func TestUnmarshalJSONWithPartial(t *testing.T) {
assert.Equal(t, "", artifact.Type)
assert.Equal(t, "application/vnd.docker.container.image.v1+json", artifact.MediaType)
}
func TestAbstractLabelNames(t *testing.T) {
tests := []struct {
name string
artifact Artifact
want []string
}{
{
name: "Nil labels",
artifact: Artifact{
Labels: nil,
},
want: []string{},
},
{
name: "Single label",
artifact: Artifact{
Labels: []*model.Label{
{Name: "label1"},
},
},
want: []string{"label1"},
},
{
name: "Multiple labels",
artifact: Artifact{
Labels: []*model.Label{
{Name: "label1"},
{Name: "label2"},
{Name: "label3"},
},
},
want: []string{"label1", "label2", "label3"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.artifact.AbstractLabelNames()
// Check if lengths match
if len(got) != len(tt.want) {
t.Errorf("AbstractLabelNames() got length = %v, want length = %v", len(got), len(tt.want))
return
}
// Check if elements match
for i := range got {
if got[i] != tt.want[i] {
t.Errorf("AbstractLabelNames() got[%d] = %v, want[%d] = %v", i, got[i], i, tt.want[i])
}
}
})
}
}

View File

@ -44,7 +44,7 @@ type ManifestProcessor struct {
// AbstractMetadata abstracts metadata of artifact
func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *artifact.Artifact, content []byte) error {
// parse metadata from config layer
metadata := map[string]any{}
metadata := map[string]interface{}{}
if err := m.UnmarshalConfig(ctx, artifact.RepositoryName, content, &metadata); err != nil {
return err
}
@ -55,7 +55,7 @@ func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *arti
}
if artifact.ExtraAttrs == nil {
artifact.ExtraAttrs = map[string]any{}
artifact.ExtraAttrs = map[string]interface{}{}
}
for _, property := range m.properties {
artifact.ExtraAttrs[property] = metadata[property]
@ -80,7 +80,7 @@ func (m *ManifestProcessor) ListAdditionTypes(_ context.Context, _ *artifact.Art
}
// UnmarshalConfig unmarshal the config blob of the artifact into the specified object "v"
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v any) error {
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v interface{}) error {
// unmarshal manifest
mani := &v1.Manifest{}
if err := json.Unmarshal(manifest, mani); err != nil {

View File

@ -89,7 +89,7 @@ func (p *processorTestSuite) TestAbstractAddition() {
Repository: "github.com/goharbor",
},
},
Values: map[string]any{
Values: map[string]interface{}{
"cluster.enable": true,
"cluster.slaveCount": 1,
"image.pullPolicy": "Always",

View File

@ -1,106 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cnai
import (
"context"
"encoding/json"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
ps "github.com/goharbor/harbor/src/controller/artifact/processor"
"github.com/goharbor/harbor/src/controller/artifact/processor/base"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnai/parser"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/lib/log"
"github.com/goharbor/harbor/src/pkg/artifact"
)
// const definitions
const (
// ArtifactTypeCNAI defines the artifact type for CNAI model.
ArtifactTypeCNAI = "CNAI"
// AdditionTypeReadme defines the addition type readme for API.
AdditionTypeReadme = "README.MD"
// AdditionTypeLicense defines the addition type license for API.
AdditionTypeLicense = "LICENSE"
// AdditionTypeFiles defines the addition type files for API.
AdditionTypeFiles = "FILES"
)
func init() {
pc := &processor{
ManifestProcessor: base.NewManifestProcessor(),
}
if err := ps.Register(pc, modelspec.ArtifactTypeModelManifest); err != nil {
log.Errorf("failed to register processor for artifact type %s: %v", modelspec.ArtifactTypeModelManifest, err)
return
}
}
type processor struct {
*base.ManifestProcessor
}
func (p *processor) AbstractAddition(ctx context.Context, artifact *artifact.Artifact, addition string) (*ps.Addition, error) {
var additionParser parser.Parser
switch addition {
case AdditionTypeReadme:
additionParser = parser.NewReadme(p.RegCli)
case AdditionTypeLicense:
additionParser = parser.NewLicense(p.RegCli)
case AdditionTypeFiles:
additionParser = parser.NewFiles(p.RegCli)
default:
return nil, errors.New(nil).WithCode(errors.BadRequestCode).
WithMessagef("addition %s isn't supported for %s", addition, ArtifactTypeCNAI)
}
mf, _, err := p.RegCli.PullManifest(artifact.RepositoryName, artifact.Digest)
if err != nil {
return nil, err
}
_, payload, err := mf.Payload()
if err != nil {
return nil, err
}
manifest := &ocispec.Manifest{}
if err := json.Unmarshal(payload, manifest); err != nil {
return nil, err
}
contentType, content, err := additionParser.Parse(ctx, artifact, manifest)
if err != nil {
return nil, err
}
return &ps.Addition{
ContentType: contentType,
Content: content,
}, nil
}
func (p *processor) GetArtifactType(_ context.Context, _ *artifact.Artifact) string {
return ArtifactTypeCNAI
}
func (p *processor) ListAdditionTypes(_ context.Context, _ *artifact.Artifact) []string {
return []string{AdditionTypeReadme, AdditionTypeLicense, AdditionTypeFiles}
}

View File

@ -1,265 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cnai
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/suite"
"github.com/goharbor/harbor/src/controller/artifact/processor/base"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/distribution"
"github.com/goharbor/harbor/src/testing/mock"
"github.com/goharbor/harbor/src/testing/pkg/registry"
)
type ProcessorTestSuite struct {
suite.Suite
processor *processor
regCli *registry.Client
}
func (p *ProcessorTestSuite) SetupTest() {
p.regCli = &registry.Client{}
p.processor = &processor{}
p.processor.ManifestProcessor = &base.ManifestProcessor{
RegCli: p.regCli,
}
}
func createTarContent(filename, content string) ([]byte, error) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
hdr := &tar.Header{
Name: filename,
Mode: 0600,
Size: int64(len(content)),
}
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tw.Write([]byte(content)); err != nil {
return nil, err
}
if err := tw.Close(); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
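
A hypothetical round-trip for the tar helper above: write a single-entry archive in memory, then read the entry back with `archive/tar` (error handling elided for brevity):

```go
// Round-trip check for createTarContent.
data, _ := createTarContent("README.md", "# Test Model")
tr := tar.NewReader(bytes.NewReader(data))
hdr, _ := tr.Next()
body, _ := io.ReadAll(tr)
fmt.Println(hdr.Name, string(body)) // README.md # Test Model
```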
func (p *ProcessorTestSuite) TestAbstractAddition() {
cases := []struct {
name string
addition string
manifest *ocispec.Manifest
setupMockReg func(*registry.Client, *ocispec.Manifest)
expectErr string
expectContent string
expectType string
}{
{
name: "invalid addition type",
addition: "invalid",
manifest: &ocispec.Manifest{},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectErr: "addition invalid isn't supported for CNAI",
},
{
name: "readme not found",
addition: AdditionTypeReadme,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectErr: "readme layer not found",
},
{
name: "valid readme",
addition: AdditionTypeReadme,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:abc",
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
content := "# Test Model"
tarContent, err := createTarContent("README.md", content)
p.Require().NoError(err)
r.On("PullBlob", mock.Anything, "sha256:abc").Return(
int64(len(tarContent)),
io.NopCloser(bytes.NewReader(tarContent)),
nil,
)
},
expectContent: "# Test Model",
expectType: "text/markdown; charset=utf-8",
},
{
name: "valid license",
addition: AdditionTypeLicense,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:def",
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
content := "MIT License"
tarContent, err := createTarContent("LICENSE", content)
p.Require().NoError(err)
r.On("PullBlob", mock.Anything, "sha256:def").Return(
int64(len(tarContent)),
io.NopCloser(bytes.NewReader(tarContent)),
nil,
)
},
expectContent: "MIT License",
expectType: "text/plain; charset=utf-8",
},
{
name: "valid files list",
addition: AdditionTypeFiles,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "model/weights.bin",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 50,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "config.json",
},
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectContent: `[{"name":"model","type":"directory","children":[{"name":"weights.bin","type":"file","size":100}]},{"name":"config.json","type":"file","size":50}]`,
expectType: "application/json; charset=utf-8",
},
}
for _, tc := range cases {
p.Run(tc.name, func() {
// Reset mock
p.SetupTest()
if tc.setupMockReg != nil {
tc.setupMockReg(p.regCli, tc.manifest)
}
addition, err := p.processor.AbstractAddition(
context.Background(),
&artifact.Artifact{},
tc.addition,
)
if tc.expectErr != "" {
p.Error(err)
p.Contains(err.Error(), tc.expectErr)
return
}
p.NoError(err)
if tc.expectContent != "" {
p.Equal(tc.expectContent, string(addition.Content))
}
if tc.expectType != "" {
p.Equal(tc.expectType, addition.ContentType)
}
})
}
}
func (p *ProcessorTestSuite) TestGetArtifactType() {
p.Equal(ArtifactTypeCNAI, p.processor.GetArtifactType(nil, nil))
}
func (p *ProcessorTestSuite) TestListAdditionTypes() {
additions := p.processor.ListAdditionTypes(nil, nil)
p.ElementsMatch(
[]string{
AdditionTypeReadme,
AdditionTypeLicense,
AdditionTypeFiles,
},
additions,
)
}
func TestProcessorTestSuite(t *testing.T) {
suite.Run(t, &ProcessorTestSuite{})
}

View File

@ -1,99 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"io"
"path/filepath"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
var (
// errFileTooLarge is returned when the file is too large to be processed.
errFileTooLarge = errors.New("The file is too large to be processed")
)
const (
// contentTypeTextPlain is the content type of text/plain.
contentTypeTextPlain = "text/plain; charset=utf-8"
// contentTypeMarkdown is the content type of text/markdown.
contentTypeMarkdown = "text/markdown; charset=utf-8"
// contentTypeJSON is the content type of application/json.
contentTypeJSON = "application/json; charset=utf-8"
// defaultFileSizeLimit is the default file size limit.
defaultFileSizeLimit = 1024 * 1024 * 4 // 4MB
// formatTar is the format of tar file.
formatTar = ".tar"
// formatRaw is the format of raw file.
formatRaw = ".raw"
)
// newBase creates a new base parser.
func newBase(cli registry.Client) *base {
return &base{
regCli: cli,
}
}
// base provides a default implementation for other parsers to build upon.
type base struct {
regCli registry.Client
}
// Parse is the common implementation for parsing a layer.
func (b *base) Parse(_ context.Context, artifact *artifact.Artifact, layer *ocispec.Descriptor) (string, []byte, error) {
if artifact == nil || layer == nil {
return "", nil, fmt.Errorf("artifact or manifest cannot be nil")
}
if layer.Size > defaultFileSizeLimit {
return "", nil, errors.RequestEntityTooLargeError(errFileTooLarge)
}
_, stream, err := b.regCli.PullBlob(artifact.RepositoryName, layer.Digest.String())
if err != nil {
return "", nil, fmt.Errorf("failed to pull blob from registry: %w", err)
}
defer stream.Close()
content, err := decodeContent(layer.MediaType, stream)
if err != nil {
return "", nil, fmt.Errorf("failed to decode content: %w", err)
}
return contentTypeTextPlain, content, nil
}
func decodeContent(mediaType string, reader io.Reader) ([]byte, error) {
format := filepath.Ext(mediaType)
switch format {
case formatTar:
return untar(reader)
case formatRaw:
return io.ReadAll(reader)
default:
return nil, fmt.Errorf("unsupported format: %s", format)
}
}
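
`decodeContent` dispatches on the extension of the media type, relying on `filepath.Ext` returning everything from the last dot onward; for example (media types below are hypothetical):

```go
// ".tar" routes to the untar path, ".raw" is read as-is.
fmt.Println(filepath.Ext("application/vnd.cnai.model.doc.v1.tar")) // .tar
fmt.Println(filepath.Ext("application/vnd.cnai.model.doc.v1.raw")) // .raw
```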

View File

@ -1,142 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mock "github.com/goharbor/harbor/src/testing/pkg/registry"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
)
func TestBaseParse(t *testing.T) {
tests := []struct {
name string
artifact *artifact.Artifact
layer *v1.Descriptor
mockSetup func(*mock.Client)
expectedType string
expectedError string
}{
{
name: "nil artifact",
artifact: nil,
layer: &v1.Descriptor{},
expectedError: "artifact or manifest cannot be nil",
},
{
name: "nil layer",
artifact: &artifact.Artifact{},
layer: nil,
expectedError: "artifact or manifest cannot be nil",
},
{
name: "registry client error",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "successful parse (tar format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.tar",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
tw.WriteHeader(&tar.Header{
Name: "test.txt",
Size: 12,
})
tw.Write([]byte("test content"))
tw.Close()
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
},
{
name: "successful parse (raw format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.raw",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
},
{
name: "error parse (unsupported format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.unknown",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedError: "failed to decode content: unsupported format: .unknown",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockClient := &mock.Client{}
if tt.mockSetup != nil {
tt.mockSetup(mockClient)
}
b := &base{regCli: mockClient}
contentType, _, err := b.Parse(context.Background(), tt.artifact, tt.layer)
if tt.expectedError != "" {
assert.EqualError(t, err, tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
}
mockClient.AssertExpectations(t)
})
}
}
func TestNewBase(t *testing.T) {
b := newBase(registry.Cli)
assert.NotNil(t, b)
assert.Equal(t, registry.Cli, b.regCli)
}

View File

@ -1,113 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"encoding/json"
"fmt"
"sort"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewFiles creates a new files parser.
func NewFiles(cli registry.Client) Parser {
return &files{
base: newBase(cli),
}
}
// files is the parser for listing files in the model artifact.
type files struct {
*base
}
type FileList struct {
Name string `json:"name"`
Type string `json:"type"`
Size int64 `json:"size,omitempty"`
Children []FileList `json:"children,omitempty"`
}
// Parse parses the files list.
func (f *files) Parse(_ context.Context, _ *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, fmt.Errorf("manifest cannot be nil")
}
rootNode, err := walkManifest(*manifest)
if err != nil {
return "", nil, fmt.Errorf("failed to walk manifest: %w", err)
}
fileLists := traverseFileNode(rootNode)
data, err := json.Marshal(fileLists)
if err != nil {
return "", nil, err
}
return contentTypeJSON, data, nil
}
// walkManifest walks the manifest and returns the root file node.
func walkManifest(manifest ocispec.Manifest) (*FileNode, error) {
root := NewDirectory("/")
for _, layer := range manifest.Layers {
if layer.Annotations != nil && layer.Annotations[modelspec.AnnotationFilepath] != "" {
filepath := layer.Annotations[modelspec.AnnotationFilepath]
// mark it as a directory if the file path ends with "/".
isDir := filepath[len(filepath)-1] == '/'
_, err := root.AddNode(filepath, layer.Size, isDir)
if err != nil {
return nil, err
}
}
}
return root, nil
}
// traverseFileNode traverses the file node and returns the file list.
func traverseFileNode(node *FileNode) []FileList {
if node == nil {
return nil
}
var children []FileList
for _, child := range node.Children {
children = append(children, FileList{
Name: child.Name,
Type: child.Type,
Size: child.Size,
Children: traverseFileNode(child),
})
}
// sort the children by type (directories first) and then by name.
sort.Slice(children, func(i, j int) bool {
if children[i].Type != children[j].Type {
return children[i].Type == TypeDirectory
}
return children[i].Name < children[j].Name
})
return children
}

View File

@ -1,229 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"encoding/json"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestFilesParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
expectedType string
expectedOutput []FileList
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedType: contentTypeJSON,
expectedOutput: nil,
},
{
name: "single file",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 100,
},
},
},
{
name: "file in directory",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 200,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v1/model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "models",
Type: TypeDirectory,
Children: []FileList{
{
Name: "v1",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 200,
},
},
},
},
},
},
},
{
name: "multiple files and directories",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 200,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v1/model.bin",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 300,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v2/",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 150,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v2/model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "models",
Type: TypeDirectory,
Children: []FileList{
{
Name: "v1",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 200,
},
},
},
{
Name: "v2",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 150,
},
},
},
},
},
{
Name: "README.md",
Type: TypeFile,
Size: 100,
},
},
},
{
name: "layer without filepath annotation",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
parser := &files{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
var fileList []FileList
err = json.Unmarshal(content, &fileList)
assert.NoError(t, err)
assert.Equal(t, tt.expectedOutput, fileList)
}
})
}
}
func TestNewFiles(t *testing.T) {
parser := NewFiles(registry.Cli)
assert.NotNil(t, parser)
filesParser, ok := parser.(*files)
assert.True(t, ok, "Parser should be of type *files")
assert.Equal(t, registry.Cli, filesParser.base.regCli)
}

View File

@ -1,70 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewLicense creates a new license parser.
func NewLicense(cli registry.Client) Parser {
return &license{
base: newBase(cli),
}
}
// license is the parser for License file.
type license struct {
*base
}
// Parse parses the License file.
func (l *license) Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, errors.New("manifest cannot be nil")
}
// look up the license file layer
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "LICENSE" || filepath == "LICENSE.txt" {
layer = &desc
break
}
}
}
}
if layer == nil {
return "", nil, errors.NotFoundError(fmt.Errorf("license layer not found"))
}
return l.base.Parse(ctx, artifact, layer)
}

View File

@ -1,260 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
"github.com/goharbor/harbor/src/testing/mock"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestLicenseParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
setupMockReg func(*mockregistry.Client)
expectedType string
expectedOutput []byte
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedError: "license layer not found",
},
{
name: "LICENSE parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("MIT License")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("MIT License"))
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE.txt parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE.txt",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("Apache License 2.0")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE.txt",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("Apache License 2.0"),
},
{
name: "registry error",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:ghi789",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
mc.On("PullBlob", mock.Anything, "sha256:ghi789").
Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "multiple layers with license",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:jkl012",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("BSD License")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:jkl012").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("BSD License"),
},
{
name: "wrong media type",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: "wrong/type",
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
},
},
},
expectedError: "license layer not found",
},
{
name: "no matching license file",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "NOT_LICENSE",
},
},
},
},
expectedError: "license layer not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
if tt.setupMockReg != nil {
tt.setupMockReg(mockRegClient)
}
parser := &license{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
assert.Equal(t, tt.expectedOutput, content)
}
mockRegClient.AssertExpectations(t)
})
}
}
func TestNewLicense(t *testing.T) {
parser := NewLicense(registry.Cli)
assert.NotNil(t, parser)
licenseParser, ok := parser.(*license)
assert.True(t, ok, "Parser should be of type *license")
assert.Equal(t, registry.Cli, licenseParser.base.regCli)
}
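
The mock setups in these tests repeat the same tar-in-buffer construction. A hedged sketch of a shared helper (hypothetical, not part of this diff) shows the blob layout the parsers expect for MediaTypeModelDoc layers; MediaTypeModelDocRaw blobs carry the bytes directly:

// tarBlob is a hypothetical test helper: it wraps content in a
// single-entry tar archive, the layout the license and readme
// parsers untar for MediaTypeModelDoc layers.
func tarBlob(name string, content []byte) *bytes.Buffer {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	_ = tw.WriteHeader(&tar.Header{Name: name, Size: int64(len(content))})
	_, _ = tw.Write(content)
	_ = tw.Close()
	return &buf
}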


@@ -1,29 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/pkg/artifact"
)
// Parser is the interface for parsing the content of different addition types.
type Parser interface {
// Parse returns the parsed content type and content.
Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (contentType string, content []byte, err error)
}
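
Since this file carries only the interface, a minimal sketch may help readers of the diff; the fixed type below is hypothetical (not Harbor code) and simply satisfies the contract with a constant payload:

// fixed is a hypothetical Parser that ignores the artifact and manifest
// and returns a constant plain-text payload; the real implementations
// (license, readme) locate a documentation layer in the manifest and
// pull its blob from the registry.
type fixed struct{}

func (f *fixed) Parse(_ context.Context, _ *artifact.Artifact, _ *ocispec.Manifest) (string, []byte, error) {
	return "text/plain; charset=utf-8", []byte("n/a"), nil
}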


@@ -1,75 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewReadme creates a new readme parser.
func NewReadme(cli registry.Client) Parser {
return &readme{
base: newBase(cli),
}
}
// readme is the parser for the README.md file.
type readme struct {
*base
}
// Parse parses the README.md file.
func (r *readme) Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, errors.New("manifest cannot be nil")
}
// look up the readme file layer.
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "README" || filepath == "README.md" {
layer = &desc
break
}
}
}
}
if layer == nil {
return "", nil, errors.NotFoundError(fmt.Errorf("readme layer not found"))
}
_, content, err := r.base.Parse(ctx, artifact, layer)
if err != nil {
return "", nil, err
}
return contentTypeMarkdown, content, nil
}
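
A hedged usage sketch of the readme parser (the showReadme function is illustrative, not part of this diff; error handling is trimmed and the caller is assumed to already hold the manifest):

func showReadme(ctx context.Context, art *artifact.Artifact, mani *ocispec.Manifest) ([]byte, error) {
	p := NewReadme(registry.Cli) // package-level client, as exercised in TestNewReadme below
	_, content, err := p.Parse(ctx, art, mani)
	if err != nil {
		// a NotFound error means no layer is annotated README or README.md
		return nil, err
	}
	return content, nil // markdown bytes, untarred for MediaTypeModelDoc layers
}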


@@ -1,232 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/goharbor/harbor/src/testing/mock"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestReadmeParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
setupMockReg func(*mockregistry.Client)
expectedType string
expectedOutput []byte
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedError: "readme layer not found",
},
{
name: "README.md parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Test README")
_ = tw.WriteHeader(&tar.Header{
Name: "README.md",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "README parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Test README")
_ = tw.WriteHeader(&tar.Header{
Name: "README",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "README parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("# Test README"))
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "registry error",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:ghi789",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
mc.On("PullBlob", mock.Anything, "sha256:ghi789").
Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "multiple layers with README",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:jkl012",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Second README")
_ = tw.WriteHeader(&tar.Header{
Name: "README.md",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:jkl012").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Second README"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
if tt.setupMockReg != nil {
tt.setupMockReg(mockRegClient)
}
parser := &readme{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
assert.Equal(t, tt.expectedOutput, content)
}
mockRegClient.AssertExpectations(t)
})
}
}
func TestNewReadme(t *testing.T) {
parser := NewReadme(registry.Cli)
assert.NotNil(t, parser)
readmeParser, ok := parser.(*readme)
assert.True(t, ok, "Parser should be of type *readme")
assert.Equal(t, registry.Cli, readmeParser.base.regCli)
}


@@ -1,150 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"fmt"
"io"
"path/filepath"
"strings"
"sync"
)
func untar(reader io.Reader) ([]byte, error) {
tr := tar.NewReader(reader)
var buf bytes.Buffer
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, fmt.Errorf("failed to read tar header: %w", err)
}
// skip directory entries.
if header.Typeflag == tar.TypeDir {
continue
}
if _, err := io.Copy(&buf, tr); err != nil {
return nil, fmt.Errorf("failed to copy content to buffer: %w", err)
}
}
return buf.Bytes(), nil
}
// FileType represents the type of a file.
type FileType = string
const (
TypeFile FileType = "file"
TypeDirectory FileType = "directory"
)
type FileNode struct {
Name string
Type FileType
Size int64
Children map[string]*FileNode
mu sync.RWMutex
}
func NewFile(name string, size int64) *FileNode {
return &FileNode{
Name: name,
Type: TypeFile,
Size: size,
}
}
func NewDirectory(name string) *FileNode {
return &FileNode{
Name: name,
Type: TypeDirectory,
Children: make(map[string]*FileNode),
}
}
func (root *FileNode) AddChild(child *FileNode) error {
root.mu.Lock()
defer root.mu.Unlock()
if root.Type != TypeDirectory {
return fmt.Errorf("cannot add child to non-directory node")
}
root.Children[child.Name] = child
return nil
}
func (root *FileNode) GetChild(name string) (*FileNode, bool) {
root.mu.RLock()
defer root.mu.RUnlock()
child, ok := root.Children[name]
return child, ok
}
func (root *FileNode) AddNode(path string, size int64, isDir bool) (*FileNode, error) {
path = filepath.Clean(path)
parts := strings.Split(path, string(filepath.Separator))
current := root
for i, part := range parts {
if part == "" {
continue
}
isLastPart := i == len(parts)-1
child, exists := current.GetChild(part)
if !exists {
var newNode *FileNode
if isLastPart {
if isDir {
newNode = NewDirectory(part)
} else {
newNode = NewFile(part, size)
}
} else {
newNode = NewDirectory(part)
}
if err := current.AddChild(newNode); err != nil {
return nil, err
}
current = newNode
} else {
child.mu.RLock()
nodeType := child.Type
child.mu.RUnlock()
if isLastPart {
if (isDir && nodeType != TypeDirectory) || (!isDir && nodeType != TypeFile) {
return nil, fmt.Errorf("path conflicts: %s exists with different type", part)
}
}
current = child
}
}
return current, nil
}
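
AddNode creates intermediate directories on demand, so mirroring an archive into the tree is a single loop; a hedged sketch follows (the buildTree helper is hypothetical, not part of this diff):

// buildTree is a hypothetical helper that walks a tar stream and adds
// every entry to a FileNode tree rooted at "/", relying on AddNode to
// create the intermediate directories.
func buildTree(r io.Reader) (*FileNode, error) {
	root := NewDirectory("/")
	tr := tar.NewReader(r)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if _, err := root.AddNode(hdr.Name, hdr.Size, hdr.Typeflag == tar.TypeDir); err != nil {
			return nil, err
		}
	}
	return root, nil
}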


@@ -1,173 +0,0 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"path/filepath"
"strings"
"testing"
)
func TestUntar(t *testing.T) {
tests := []struct {
name string
content string
wantErr bool
expected string
}{
{
name: "valid tar file with single file",
content: "test content",
wantErr: false,
expected: "test content",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
hdr := &tar.Header{
Name: "test.txt",
Mode: 0600,
Size: int64(len(tt.content)),
}
if err := tw.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err := tw.Write([]byte(tt.content)); err != nil {
t.Fatal(err)
}
tw.Close()
result, err := untar(&buf)
if (err != nil) != tt.wantErr {
t.Errorf("untar() error = %v, wantErr %v", err, tt.wantErr)
return
}
if string(result) != tt.expected {
t.Errorf("untar() = %v, want %v", string(result), tt.expected)
}
})
}
}
func TestFileNode(t *testing.T) {
t.Run("test file node operations", func(t *testing.T) {
// Test creating root directory.
root := NewDirectory("root")
if root.Type != TypeDirectory {
t.Errorf("Expected directory type, got %s", root.Type)
}
// Test creating file.
file := NewFile("test.txt", 100)
if file.Type != TypeFile {
t.Errorf("Expected file type, got %s", file.Type)
}
// Test adding child to directory.
err := root.AddChild(file)
if err != nil {
t.Errorf("Failed to add child: %v", err)
}
// Test getting child.
child, exists := root.GetChild("test.txt")
if !exists {
t.Error("Expected child to exist")
}
if child.Name != "test.txt" {
t.Errorf("Expected name test.txt, got %s", child.Name)
}
// Test adding child to file (should fail).
err = file.AddChild(NewFile("invalid.txt", 50))
if err == nil {
t.Error("Expected error when adding child to file")
}
})
}
func TestAddNode(t *testing.T) {
tests := []struct {
name string
path string
size int64
isDir bool
wantErr bool
setupFn func(*FileNode)
}{
{
name: "add file",
path: "dir1/dir2/file.txt",
size: 100,
isDir: false,
wantErr: false,
},
{
name: "add directory",
path: "dir1/dir2/dir3",
size: 0,
isDir: true,
wantErr: false,
},
{
name: "add file with conflicting directory",
path: "dir1/dir2",
size: 100,
isDir: false,
wantErr: true,
setupFn: func(node *FileNode) {
node.AddNode("dir1/dir2", 0, true)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
root := NewDirectory("root")
if tt.setupFn != nil {
tt.setupFn(root)
}
_, err := root.AddNode(tt.path, tt.size, tt.isDir)
if (err != nil) != tt.wantErr {
t.Errorf("AddNode() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !tt.wantErr {
// Verify the path exists.
current := root
cleanedPath := filepath.Clean(tt.path)
for part := range strings.SplitSeq(cleanedPath, string(filepath.Separator)) {
if part == "" {
continue
}
child, exists := current.GetChild(part)
if !exists {
t.Errorf("Expected path part %s to exist", part)
return
}
current = child
}
}
})
}
}


@@ -110,7 +110,7 @@ func (d *defaultProcessor) AbstractMetadata(ctx context.Context, artifact *artif
}
defer blob.Close()
// parse metadata from config layer
- metadata := map[string]any{}
+ metadata := map[string]interface{}{}
if err = json.NewDecoder(blob).Decode(&metadata); err != nil {
return err
}
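
This and the following hunks swap any for interface{}. The two spellings are interchangeable (any has been a predeclared alias for interface{} since Go 1.18), so behavior is unchanged; writing it out presumably keeps this backport branch buildable with pre-1.18 toolchains. A one-line check of the alias:

var m map[string]any = map[string]interface{}{"k": 1} // compiles: any and interface{} are identical types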


@@ -268,7 +268,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadata() {
manifestMediaType, content, err := manifest.Payload()
d.Require().Nil(err)
- metadata := map[string]any{}
+ metadata := map[string]interface{}{}
configBlob := io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
d.Require().Nil(err)
@@ -289,7 +289,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadataOfOCIManifesttWithUnknow
d.Require().Nil(err)
configBlob := io.NopCloser(strings.NewReader(UnknownJsonConfig))
- metadata := map[string]any{}
+ metadata := map[string]interface{}{}
err = json.NewDecoder(configBlob).Decode(&metadata)
d.Require().Nil(err)


@@ -44,7 +44,7 @@ func (m *manifestV1Processor) AbstractMetadata(_ context.Context, artifact *arti
return err
}
if artifact.ExtraAttrs == nil {
- artifact.ExtraAttrs = map[string]any{}
+ artifact.ExtraAttrs = map[string]interface{}{}
}
artifact.ExtraAttrs["architecture"] = mani.Architecture
return nil


@@ -59,7 +59,7 @@ func (m *manifestV2Processor) AbstractMetadata(ctx context.Context, artifact *ar
return err
}
if artifact.ExtraAttrs == nil {
- artifact.ExtraAttrs = map[string]any{}
+ artifact.ExtraAttrs = map[string]interface{}{}
}
artifact.ExtraAttrs["created"] = config.Created
artifact.ExtraAttrs["architecture"] = config.Architecture


@@ -62,14 +62,14 @@ type Processor struct {
}
func (m *Processor) AbstractMetadata(ctx context.Context, art *artifact.Artifact, manifestBody []byte) error {
- art.ExtraAttrs = map[string]any{}
+ art.ExtraAttrs = map[string]interface{}{}
manifest := &v1.Manifest{}
if err := json.Unmarshal(manifestBody, manifest); err != nil {
return err
}
if art.ExtraAttrs == nil {
- art.ExtraAttrs = map[string]any{}
+ art.ExtraAttrs = map[string]interface{}{}
}
if manifest.Annotations[AnnotationVariantKey] == AnnotationVariantValue || manifest.Annotations[AnnotationHandlerKey] == AnnotationHandlerValue {
// for annotation way

Some files were not shown because too many files have changed in this diff.