Compare commits

...

223 Commits

Author SHA1 Message Date
stonezdj(Daojun Zhang) 6a1abab687
Fix the tag retention job failing with a 403 error message (#22159)
fixes #22141

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-31 14:43:00 +08:00
Chlins Zhang 70b03c9483
feat: support raw format for CNAI model (#22040)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-29 12:41:29 +00:00
stonezdj(Daojun Zhang) 171d9b4c0e
Add HTTP 409 error when creating robot account (#22201)
fixes #22107

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-29 10:03:00 +00:00
Wang Yan 257afebd5f
bump golang version (#22205)
to the latest golang version v1.24.5 from v1.24.3

Signed-off-by: wy65701436 <wangyan@vmware.com>
2025-07-29 07:21:57 +00:00
Wang Yan f15638c5f3
update the orm filter func (#22208)
to extend the enhancement from https://github.com/goharbor/harbor/pull/21924 to fuzzy and range match. After this enhancement, orm.ExerSep is not supported in any kind of query keyword.

Signed-off-by: wy65701436 <wangyan@vmware.com>
2025-07-29 13:22:35 +08:00
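Harbor is written in Go, but as a language-neutral sketch the keyword validation described above might look like this in Python (the separator value `__` is an assumption based on beego-style ORMs, not taken from Harbor's source):

```python
EXPR_SEP = "__"  # assumed beego-style ORM expression separator

def validate_query_keyword(keyword):
    """Reject query keywords containing the ORM expression separator."""
    if EXPR_SEP in keyword:
        raise ValueError(f"invalid query keyword: {keyword!r}")
    return keyword
```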
Chlins Zhang ebc340a8f7
fix: correct the permission of project maintainer role for webhook policy (#22135)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-25 08:40:40 +00:00
Wang Yan de657686b3
add the replication adapter whitelist (#22198)
fixes #21925

According to https://github.com/goharbor/harbor/wiki/Harbor-Replicaiton-Adapter-Owner, some replication adapters are no longer actively maintained by the Harbor community. To address this, a whitelist environment variable is introduced to define the list of actively supported adapters, which will be used by the Harbor portal and API to display and allow usage.

If you still wish to view and use the unsupported or inactive adapters, you must manually update the whitelist and include the desired adapter names. For the list of adapter names, refer to https://github.com/goharbor/harbor/blob/main/src/pkg/reg/model/registry.go#L22

Signed-off-by: wang yan <wangyan@vmware.com>
2025-07-23 10:25:21 +00:00
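As a rough illustration of the whitelist mechanism described above, here is a minimal Python sketch; the environment variable name, the adapter names, and the "empty means allow all" default are assumptions for illustration, not Harbor's actual configuration:

```python
import os

def supported_adapters(all_adapters, env="REPLICATION_ADAPTER_WHITELIST"):
    """Return only the adapters named in the comma-separated whitelist
    environment variable; treat an unset or empty variable as allow-all."""
    raw = os.environ.get(env, "")
    if not raw:
        return list(all_adapters)
    allowed = {name.strip() for name in raw.split(",") if name.strip()}
    return [a for a in all_adapters if a in allowed]
```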
stonezdj(Daojun Zhang) ea4110c30a
Display download url for BUILD_PACKAGE action (#22197)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-23 08:21:10 +00:00
Daniel Jiang bb7162f5e6
Don't always skip vuln check when artifact is not scannable (#22187)
fixes #22143

This commit updates the vulnerability policy middleware so that it
skips the check only when the artifact is not scannable AND it
does not have a scan report.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-22 09:20:22 +00:00
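The decision described above reduces to a small predicate; a minimal sketch, with illustrative function and parameter names:

```python
def should_skip_vuln_check(scannable, has_scan_report):
    """Skip the vulnerability check only when the artifact is not
    scannable AND it has no scan report."""
    return not scannable and not has_scan_report
```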
stonezdj(Daojun Zhang) e8c2e478b6
Remove testcase Open Image Scanners doc page (#22180)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-22 02:55:11 +00:00
dependabot[bot] 71f2ea84bd
chore(deps): bump helm.sh/helm/v3 from 3.18.3 to 3.18.4 in /src (#22188)
---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-21 07:41:33 +00:00
dependabot[bot] 8007c2e02e
chore(deps): bump helm.sh/helm/v3 from 3.18.2 to 3.18.3 in /src (#22113)
---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-07-21 05:55:08 +00:00
miner 0f67947c87
clean up project metadata for tag retention policy after deletion (#22174)
Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-18 10:52:33 +00:00
stonezdj(Daojun Zhang) ebdfb547ba
Set MAX_JOB_DURATION_SECONDS from jobservice config.yml (#22116)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-18 10:01:18 +00:00
Daniel Jiang 440f53ebbc
Add status field to the API on security hub (#22182)
This commit changes the API GET /api/v2.0/vul to include the
"status" of CVEs in the response.

It also makes update in the UI to add the "Status" column to the data
grids in security hub and artifact details page.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-18 16:02:29 +08:00
Moon c83f2d114f
chore: Updated RELEASE.md by updating Minor Release Support Map (#22145)
Updated the Minor Release Support Matrix to include v2.13

Signed-off-by: Mooneeb Hussain <mooneeb.hussain@gmail.com>
2025-07-17 07:39:01 +00:00
Roger 01dba8ad57
Improve portal README.md formatting and clarity (#22173)
improving the portal readme file

Signed-off-by: rgcr <roger.dev@pm.me>
2025-07-17 05:55:11 +00:00
Daniel Jiang 19f4958ec3
Add "status" of CVEs to artfact scan report (#22177)
This commit adds the field "status" to the struct of a vulnerability and adds
column "status" to vulnerability record table.  It makes sure the statuses
of CVEs returned by trivy scanner are persisted and can be returned via
the vulnerabilities addition API of an artifact.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-16 11:11:40 +08:00
Spyros Trigazis 6c620dc20c
Update FixVersion and ScoreV3 (#22007)
Set Fix and CVE3Score in VulnerabilityRecord from VulnerabilityItem.

Follow-up of #21915
Fixes #21463

Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
2025-07-15 14:17:41 +08:00
yuzhipeng c93da7ff4b
Add 400 code response in swagger.yaml for updateRegistry updateReplicationPolicy and headProject (#22165)
Signed-off-by: yuzhipeng <yuzp1996@gmail.com>
2025-07-14 14:26:12 +08:00
Prasanth Baskar 0cf2d7545d
Fix: Audit Log Eventtype antipattern in System Settings UI (#22147)
fix: Audit Log Eventtype antipattern in System Settings

* update logic from disabled to enabled
* update i18n to reflect the change

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-07-07 09:00:09 +00:00
miner c0a859d538
lazy load v2_swagger_client module (#22154)
Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-07 14:51:09 +08:00
miner 2565491758
add BUILD_INSTALLER parameter to optionally build the prepare and log containers (#22148)
add a BUILD_INSTALLER parameter so the prepare and log containers are built only when the offline installer needs to be built

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-03 18:23:17 +08:00
miner 0a3c06d89c
add dockernetwork parameter for build process (#22138)
add dockernetwork parameter for makefile

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-03 15:33:25 +08:00
Sergey 6be2971941
Add Russian language support (#21083)
* Add Russian language support

Signed-off-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>

* Update ru-ru-lang.json

Made edits to the translation based on comments

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

---------

Signed-off-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>
Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>
Co-authored-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-07-02 10:57:27 +02:00
dependabot[bot] 229ef88684
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.1.17 to 1.1.19 in /src (#22133)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-07-01 08:19:14 +00:00
Chlins Zhang 0f8913bb27
feat: support customize the job execution retention count by env (#22129)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-01 07:33:05 +00:00
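The pattern described above, reading a retention count from an environment variable with a fallback default, can be sketched as follows; the variable name and default value here are hypothetical, not Harbor's actual settings:

```python
import os

def job_execution_retention_count(default=50, env="JOB_EXECUTION_RETENTION_COUNT"):
    """Read the retention count from an environment variable, falling back
    to the default when the variable is unset or not a valid integer."""
    raw = os.environ.get(env)
    if raw is None:
        return default
    try:
        return max(0, int(raw))
    except ValueError:
        return default
```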
miner 0c5d82e9d4
Update pipenv for prepare (#22124)
* update pipenv and lock

Signed-off-by: my036811 <miner.yang@broadcom.com>

* update pipenv

Signed-off-by: my036811 <miner.yang@broadcom.com>

---------

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-01 14:23:05 +08:00
stonezdj(Daojun Zhang) b8e3dd8fa0
change the pass-CI rules to exclude the resources and robot-cases folder (#22121)
Change the pass-CI rules to exclude the resources and robot-cases folder
   Pass HARBOR_ADMIN env to robot testcases

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-26 14:05:02 +08:00
Chethan e1e807072c
Update CHANGELOG.md, RELEASES.md and ROADMAP.md (#22095)
Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-23 07:21:44 +00:00
miner 937e5920a2
update robot case for get harbor version (#22104) 2025-06-23 15:10:51 +08:00
dependabot[bot] 918aac61a6
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.1.11 to 1.1.17 in /src (#22089)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.1.11 to 1.1.17.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.1.11...v1.1.17)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-06-20 12:17:48 +00:00
stonezdj(Daojun Zhang) c0b22d8e24
Add network_type environment variable (#22097)
Use test_network_type to adapt to various network conditions in the test environment.

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-19 12:47:04 +00:00
Chlins Zhang 59c3de10a6
refactor: simplify some implementations by modern go features (#21998)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-06-18 10:22:22 +00:00
Chethan b647032747
Update Swagger 's readme.md (#22087)
Enhance readability of swagger Readme.md file by fixing minor errors

Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-17 10:45:28 +00:00
Prasanth Baskar ec9d13d107
fix: CVE Allowlist Validation (#22077)
fix: empty cve allowlist validation

- fixes empty and cves with only spaces

fix: cve allowlist validation

add: tests for cve allowlist validation

fix: types for projectCVEAllowlist

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-06-17 15:53:00 +08:00
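The validation described above, rejecting an allowlist that is empty or contains only whitespace entries, can be sketched like this (the function name and error message are illustrative, not Harbor's actual API):

```python
def validate_cve_allowlist(items):
    """Drop empty and whitespace-only entries; reject a list that ends up
    empty after cleaning."""
    cleaned = [c.strip() for c in items if c and c.strip()]
    if not cleaned:
        raise ValueError("CVE allowlist must contain at least one non-empty ID")
    return cleaned
```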
Chethan f46ef3b38d
Update contributing.md (#22082)
Fix minor errors in CONTRIBUTING.md

Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-16 09:02:45 +00:00
dependabot[bot] 907c6c0900
chore(deps): bump helm.sh/helm/v3 from 3.17.2 to 3.18.2 in /src (#22060)
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.17.2 to 3.18.2.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.17.2...v3.18.2)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 03:25:35 +00:00
stonezdj(Daojun Zhang) 780a217122
Remove document link from Image Scanner (#22064)
fixes #22001

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-12 15:35:56 +08:00
Prasanth Baskar 145a10a8b9
Refactor: Simplify SearchAndOnBoardGroup Logic (#22058)
refactor: simplify SearchAndOnBoardGroup logic

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-06-10 08:36:06 +02:00
Chlins Zhang f46295aadb
chore: fix the arguments for codecov v5 (#22050) 2025-06-07 02:56:47 +00:00
Mohamed Awnallah e049fcd985
README.md: add artifact hub badge (#20736)
In this commit, we add the artifact hub
badge for the harbor project to improve
its discoverability and its best practices
index on [clomonitor.io](https://clomonitor.io/projects/cncf/harbor)

Signed-off-by: Mohamed Awnallah <mohamedmohey2352@gmail.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
2025-06-05 14:13:34 +03:00
dependabot[bot] 7dcdec94e2
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.185 to 1.1.10 in /src (#22035)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.185 to 1.1.10.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.185...v1.1.10)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.10
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-29 16:56:48 +08:00
dependabot[bot] 3dee318a2e
chore(deps): bump k8s.io/client-go from 0.32.2 to 0.33.1 in /src (#22011)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.2 to 0.33.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.2...v0.33.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 10:54:24 +00:00
Jim Chen a546f99974
feat: add rate limiter for alibaba cloud acr adapter (#21953)
Signed-off-by: njucjc <njucjc@gmail.com>
2025-05-27 08:18:58 +00:00
dependabot[bot] 111fc1c03e
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go from 1.63.84 to 1.63.107 in /src (#21943)
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go in /src

Bumps [github.com/aliyun/alibaba-cloud-sdk-go](https://github.com/aliyun/alibaba-cloud-sdk-go) from 1.63.84 to 1.63.107.
- [Release notes](https://github.com/aliyun/alibaba-cloud-sdk-go/releases)
- [Changelog](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/ChangeLog.txt)
- [Commits](https://github.com/aliyun/alibaba-cloud-sdk-go/compare/v1.63.84...v1.63.107)

---
updated-dependencies:
- dependency-name: github.com/aliyun/alibaba-cloud-sdk-go
  dependency-version: 1.63.107
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-05-27 06:39:52 +00:00
dependabot[bot] 6f856cd6b1
chore(deps): bump aws-actions/configure-aws-credentials from 4.1.0 to 4.2.1 (#22003)
chore(deps): bump aws-actions/configure-aws-credentials

Bumps [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) from 4.1.0 to 4.2.1.
- [Release notes](https://github.com/aws-actions/configure-aws-credentials/releases)
- [Changelog](https://github.com/aws-actions/configure-aws-credentials/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws-actions/configure-aws-credentials/compare/v4.1.0...v4.2.1)

---
updated-dependencies:
- dependency-name: aws-actions/configure-aws-credentials
  dependency-version: 4.2.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 13:56:35 +08:00
Wang Yan 2faff8e6af
update storage to s3 (#21999)
move the build storage from google storage to the CNCF S3 storage

Currently, we use the internal GCR to store all dev builds for nightly testing, development, and as candidates for RC and GA releases. However, this internal Google storage will no longer be available, so this pull request moves the builds to the CNCF-hosted S3 storage.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-16 10:23:32 +00:00
Mile Druzijanic 424cdd8828
Test update for adding nil value to list (#21990)
test list update

Signed-off-by: miledxz <zedsprogramms@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-05-15 06:28:49 +00:00
miner 3df34c5735
increase docker client timeout for robot case (#21994)
* increase docker client timeout for robot case

Signed-off-by: my036811 <miner.yang@broadcom.com>
Signed-off-by: miner <miner.yang@broadcom.com>
2025-05-13 11:45:09 +00:00
Wang Yan b4ba918118
bump up golang version to v1.24.3 (#21993)
* bump up golang version to v1.24.3

Signed-off-by: wang yan <wangyan@vmware.com>

* bump mockery version to support golang v1.24

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 17:09:35 +08:00
Wang Yan 85f3f792e4
update robot permission table (#21989)
fixes #21947

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 05:49:01 +00:00
Wang Yan 073dab8a07
add build flag for harbor exporter (#21988)
As the harbor exporter is not a core component for installation, add a flag (as with trivy) to control whether to package it into the offline installer.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 13:11:55 +08:00
Prasanth Baskar ada851b49a
Fix: Helm Chart Copy Button in UI (#21969)
* fix: helm chart copy btn in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add: tests for pull command component in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-05-09 13:38:01 +08:00
Chlins Zhang 9e18bbc112
refactor: replace interface{} to any (#21973)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-05-08 11:02:49 +00:00
stonezdj(Daojun Zhang) 49df3b4362
Display gc progress information in running state (#21974)
fix #21411

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-05-08 14:25:05 +08:00
Raphael Zöllner bc8653abc7
Add manifestcache push for tag and digest to local repository (#21141)
Signed-off-by: Raphael Zöllner <raphael.zoellner@regiocom.com>
2025-05-07 09:22:26 +00:00
stonezdj(Daojun Zhang) f684c1c36e
change python ./setup.py install to pip install . because the former is deprecated (#21952)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-30 13:40:45 +08:00
Daniel Jiang 70306dca0c
Generate URI of token service via Host in request (#21898)
This commit updates the flow that generates the token service URL to
first try the Host in the request.  This helps when Harbor is configured
to serve via a hostname but some clients need to pull artifacts from
Harbor via IP due to limitations in the environment.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-04-28 16:23:09 +08:00
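The fallback described above can be sketched as a small helper; the function signature is illustrative, though `/service/token` is Harbor's well-known token endpoint path:

```python
def token_service_url(request_host, configured_host, scheme="https"):
    """Prefer the Host of the incoming request when building the token
    service URL; fall back to the configured endpoint when absent."""
    host = request_host or configured_host
    return f"{scheme}://{host}/service/token"
```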
Wang Yan b3cfe225db
unify the golang image version (#21935)
Make the golang version a unified parameter for building all harbor components

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-27 14:07:19 +08:00
stonezdj(Daojun Zhang) bef66740ec
Update the severity, fixed version and cvss_score_v3 (#21915)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-25 06:30:58 +00:00
Prasanth Baskar 972965ff5a
FIX: Display 'No SBOM' in multi-arch images in HarborUI (#21459)
fix: handle multi-arch images with SBOMs in HarborUI

* Updated the `hasChild` method to check for the presence of
`child_digest` in the `references` array.
* This ensures that SBOMs are correctly displayed for multi-arch images,
where child artifacts may contain their own SBOMs.
* Previously, No SBOM label was displayed for multi-arch images.

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-24 12:23:09 +00:00
Wang Yan 187f1a9ffb
enhance the query judgement (#21924)
The query parameter cannot contain orm.ExerSep, which is a key character sequence used by orm.
This pull request enhances the validation for query parameters.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-24 18:20:33 +08:00
stonezdj(Daojun Zhang) ff2f4b0e71
Remove the error check that can never happen (#21916)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-24 02:41:16 +00:00
Prasanth Baskar 9850f1404d
Add missing step in e2e pipeline setup (#21888)
add missing step in e2e

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-23 03:02:26 +00:00
Wang Yan ad7be0b42f
revise make file for lint api (#21906)
Decouple the lint from the api generation step in the makefile.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-22 13:36:37 +08:00
Bin Liu 6772477e8a
fix: check blob existence before copying layers smaller than chunk size (#21883)
`copyBlobByChunk()` should, like `copyBlob()`, first try to mount an
existing layer; if it is not mounted or does not exist, copy the layer
monolithically or by chunks.

Signed-off-by: Bin Liu <liubin0329@gmail.com>
Signed-off-by: Bin Liu <lb203159@antfin.com>
2025-04-21 08:08:51 +00:00
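The mount-first flow described above can be sketched with stub callbacks (all names here are illustrative, not Harbor's actual functions):

```python
def copy_blob(digest, size, chunk_size, try_mount, copy_monolithic, copy_chunked):
    """Try to mount an existing layer first; only fall back to copying
    (monolithically for small layers, by chunks otherwise) when the
    mount did not succeed."""
    if try_mount(digest):
        return "mounted"
    if size < chunk_size:
        return copy_monolithic(digest)
    return copy_chunked(digest)
```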
stonezdj(Daojun Zhang) a13a16383a
update artifact info (#21902)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-21 15:24:04 +08:00
Wang Yan b58a60e273
update GitHub Actions machine to 22.04 (#21900)
Per https://github.com/actions/runner-images/issues/11101, ubuntu 20.04 is out of support; update it to 22.04.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-21 10:13:21 +08:00
Chlins Zhang 9dcbd56e52
chore: bump golangci-lint to v2 (#21887)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-17 14:28:55 +08:00
miner f8f1994c9e
fix jobservice container loglevel consistent with job_log (#21874)
Signed-off-by: yminer <miner.yang@broadcom.com>
2025-04-15 14:07:39 +08:00
Chlins Zhang bfc29904f9
fix: support preheat cnai model artifact (#21849)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-08 19:49:46 +08:00
stonezdj(Daojun Zhang) 259c8a2053
Update robot testcase related to security hub row count to 15 by default (#21846)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-08 08:35:02 +00:00
Prasanth Baskar 7ad799c7c7
Update dependencies in Harbor UI (#21823)
* deps: update src/portal/app-swagger-ui

Signed-off-by: bupd <bupdprasanth@gmail.com>

* deps: update swagger-ui

Signed-off-by: bupd <bupdprasanth@gmail.com>

* deps: update src/portal

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-07 16:26:52 +08:00
Wang Yan d0917e3e66
bump base version for v2.14 (#21819)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-03 16:04:10 +08:00
Wang Yan 68eea5f3fd
bump up jwt and beego (#21814)
Upgrades the dependencies to resolve the upstream issues.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-02 09:50:18 +00:00
stonezdj(Daojun Zhang) b60bd1a69b
Update xpath for some UI components (#21817)
Update testcase for audit log enhancement
    Add e2e_setup for e2e testcases

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-02 09:12:22 +00:00
stonezdj(Daojun Zhang) 280ab5a027
Rule out the duplicate login event and false logout event for oidc (#21811)
* Ignore the second /c/log_out event

   Fixes the issue where the logout event was logged twice

Signed-off-by: stonezdj <stone.zhang@broadcom.com>

* Rule out the duplicate login event and false logout event for oidc

Signed-off-by: stonezdj <stonezdj@gmail.com>

---------

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
Signed-off-by: stonezdj <stonezdj@gmail.com>
2025-04-02 08:34:38 +00:00
Daniel Jiang 5b28be8252
Bump up trivy and trivy-adapter to fix CVE (#21816)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-04-02 07:36:51 +00:00
Wang Yan e216f6beb9
bump up golang version (#21813)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-02 13:58:49 +08:00
Chlins Zhang 45d73acec4
chore: format the go.mod (#21812)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-01 09:36:34 +00:00
Prasanth Baskar 5d776a8a9e
Remove top copy pull cmd button (#21810)
remove top copy pull cmd button

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-01 16:54:47 +08:00
stonezdj(Daojun Zhang) 79a24a42d9
Add operation_description when forwarding audit log (#21786)
Skip logging the error message when the log endpoint is empty

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-01 06:02:03 +00:00
Prasanth Baskar 92297189ab
Fix: Modelfs overflow in UI. (#21791)
fix modelfs overflow in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-01 09:59:49 +08:00
Wang Yan dce7d9f5cf
fix orm filterable issue (#21797)
the orm Filterable function always returns true even when the field's tag is set to false

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-31 07:11:57 +00:00
stonezdj(Daojun Zhang) 1641c799ed
Add operation description for delete tag event (#21807)
fixes #21798

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-03-31 06:04:55 +00:00
stonezdj(Daojun Zhang) 72c1b9098a
Add tips for "Other events" (#21788)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-03-31 03:15:59 +00:00
miner 33d1a24127
clean up robot account for SBOM Job (#21794)
Signed-off-by: yminer <miner.yang@broadcom.com>
2025-03-28 09:40:50 +00:00
Prasanth Baskar 9cde2c3d78
feat: Persistent Page Size UI (#21627)
* update page size to global

Signed-off-by: bupd <bupdprasanth@gmail.com>

* update page size test

Signed-off-by: bupd <bupdprasanth@gmail.com>

* increase top page size to 100

Signed-off-by: bupd <bupdprasanth@gmail.com>

* fix lint

Signed-off-by: bupd <bupdprasanth@gmail.com>

* increase all page size to 100

* update all other pages to have same size factors

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add page sizes to constants

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-27 14:58:20 +00:00
Wang Yan e9a8c05508
fix 21118 (#21792)
fix #21118
In the current robot API, querying with ?q=level=system returns both system and project-level robots.
This change addresses the issue by ensuring that specifying level=system will return only system-level robots.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-27 16:43:13 +08:00
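The filtering fix described above amounts to matching the level exactly rather than loosely; a minimal sketch over a list of robot records (the dictionary shape is illustrative):

```python
def filter_robots(robots, level):
    """Return only robots whose level exactly matches the query, so that
    level=system no longer includes project-level robots."""
    return [r for r in robots if r.get("level") == level]
```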
Kostiantyn Yevchuk 9283e762b5
Bump golang.org/x/oauth2 from v0.25.0 to v0.27.0 (#21757)
bump x/oauth2 to 0.27.0

Signed-off-by: Kostiantyn Yevchuk <kostiantyn.yevchuk@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-25 10:20:33 +00:00
Wang Yan 68fb789354
update robot log level (#21778)
fix #21762

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-25 07:14:53 +00:00
dependabot[bot] 9dcf96f8d0
chore(deps): bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 in /src (#21769)
chore(deps): bump github.com/golang-jwt/jwt/v5 in /src

Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-25 06:10:43 +00:00
Wang Yan af4c123f5f
update oidc login log level (#21775)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-25 04:01:04 +00:00
Ian Seyer 0a5ade8faa
Suppress aborthandler (#21479)
* chore(deps): bump go.opentelemetry.io/otel from 1.31.0 to 1.32.0 in /src (#21162)

Bumps [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) from 1.31.0 to 1.32.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
Signed-off-by: ianseyer <iseyer@cloudflare.com>

* Suppresses net.http/abortHandler panic

Signed-off-by: ianseyer <iseyer@cloudflare.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: ianseyer <iseyer@cloudflare.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
Co-authored-by: ianseyer <iseyer@cloudflare.com>
Co-authored-by: Daniel Jiang <jiangd@vmware.com>
2025-03-24 14:14:42 +00:00
Dee Kryvenko 87b9751d1c
Fix token service returning empty token on tls certificate issue without any error (#20081)
Signed-off-by: Dee Kryvenko <dee@selfcloud.tech>
Co-authored-by: Wang Yan <wangyan@vmware.com>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
2025-03-24 13:14:59 +00:00
dependabot[bot] ca825df27f
chore(deps): bump helm.sh/helm/v3 from 3.17.0 to 3.17.2 in /src (#21745)
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.17.0 to 3.17.2.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.17.0...v3.17.2)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-03-24 10:11:45 +00:00
dependabot[bot] 7d1726afd6
chore(deps): bump golang.org/x/net from 0.34.0 to 0.37.0 in /src (#21744)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.34.0 to 0.37.0.
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.37.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 11:33:40 +02:00
dependabot[bot] 3d21dd29f1
chore(deps): bump golang.org/x/net from 0.34.0 to 0.36.0 in /src (#21731)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.34.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 06:59:45 +00:00
Zhaoxinxin c806b7e787
fix: Remove top error message about no README or license (#21754)
fix: Remove top error message about no README or license

Signed-off-by: zhaoxinxin <1186037180@qq.com>
2025-03-24 13:56:46 +08:00
Wang Yan b6c083d734
fix logout redirect (#21765)
By default, redirect to the sign-in page.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-21 11:10:32 +00:00
dependabot[bot] bcfc1d8179
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp from 0.57.0 to 0.60.0 in /src (#21716)
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp

Bumps [go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.57.0 to 0.60.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.57.0...zpages/v0.60.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-21 08:58:37 +00:00
Wang Yan 4f56f5d278
redirect to the sign-in page (#21764)
If redirected to the root page, the Harbor UI will automatically redirect to the OIDC login page.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-21 16:20:50 +08:00
Chlins Zhang b37da544d2
fix: limit the file size of the cnai model processor (#21759)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-03-21 15:17:31 +08:00
dependabot[bot] 8081d52c09
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.180 to 1.0.185 in /src (#21717)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.180 to 1.0.185.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.180...v1.0.185)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-03-20 07:53:56 +00:00
Prasanth Baskar 747aac043d
Fix Password Validation in UI (#21697)
fix(UI): password validation

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-03-20 06:38:29 +00:00
dependabot[bot] 1102585cce
chore(deps): bump golang.org/x/time from 0.9.0 to 0.11.0 in /src (#21715)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.9.0 to 0.11.0.
- [Commits](https://github.com/golang/time/compare/v0.9.0...v0.11.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-20 05:52:40 +00:00
Daniel Jiang 1277755ca5
Bump up trivy and trivy-adapter to the latest RC tag (#21741)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-03-18 15:22:24 +00:00
Wang Yan a16caa5ab7
update golang to v1.23.7 (#21749)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-18 18:52:02 +08:00
Chris Girard c2098f2ba3
fix: fix replication of multiple projects with numeric names (#21474)
Explicitly mark project names as strings

This keeps the server from parsing all-numeric project names as integer
values, which breaks replication.

Signed-off-by: Chris Girard <cgirard@mirantis.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-18 17:06:35 +08:00
Zhaoxinxin 6b2e6ba20c
Feat: artifact adds AI Model type (#21691)
feat: artifact adds model type

Signed-off-by: zhaoxinxin <1186037180@qq.com>
2025-03-18 08:17:22 +00:00
Wang Yan 723d37e1be
fix i18n issue (#21748)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-18 15:36:13 +08:00
Wang Yan 5960bc8fb2
oidclogout (#21718)
* oidclogout

enable oidc session logout

1. Give the option of logging out the user session from the OIDC provider.
2. Make a best-effort attempt to log out the user's offline session if offline_access is in the scope.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-18 11:52:35 +08:00
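The provider-side leg of such a logout is typically an OIDC RP-Initiated Logout redirect. A generic Go sketch of building that URL (endpoint and token values are placeholders, and this is an illustration of the standard parameters, not Harbor's implementation):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildEndSessionURL assembles an RP-initiated logout URL: the client
// redirects the browser to the provider's end_session_endpoint, passing the
// id_token_hint and a post-logout redirect target as query parameters.
func buildEndSessionURL(endSessionEndpoint, idToken, postLogoutRedirect string) string {
	q := url.Values{}
	q.Set("id_token_hint", idToken)
	q.Set("post_logout_redirect_uri", postLogoutRedirect)
	return endSessionEndpoint + "?" + q.Encode()
}

func main() {
	fmt.Println(buildEndSessionURL(
		"https://idp.example.com/oidc/logout",
		"eyJx.y.z",
		"https://harbor.example.com/account/sign-in",
	))
}
```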
Prasanth Baskar 816667c794
Fix: Copy Pull Button Overlap with Tag Immutable Label (#21720)
fix: copy button overlap with tag immutable

- fix copy button overlap with tag immutable label on artifact-tag
component
- update css to fix this issue

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-17 10:15:23 +00:00
Prasanth Baskar 3407776e38
Add Lint Check for Copyright Headers in UI (#21692)
add lint check for headers in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-03-14 12:39:18 +00:00
Prasanth Baskar 393db991dc
Replace Vmware to goharbor (#21696)
replace vmware with goharbor in src/portal

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-14 19:34:57 +08:00
Prasanth Baskar b5b1d45413
Add Missing Headers in UI part 3 (#21695)
add missing headers in UI part 3

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-03-14 10:16:54 +00:00
Prasanth Baskar 4f3aa2e437
Add Missing copyright headers in src/portal part 2 (#21694)
add missing headers part 2

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-14 17:38:58 +08:00
Prasanth Baskar f0c1e8f4b3
Add Missing copyright headers in src/portal (#21693)
add missing copyright headers to files in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-14 15:30:37 +08:00
Wang Yan 6dd75c7b57
consume the downstream distribution (#21733)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-13 16:00:31 +08:00
Wang Yan e8a045ff1f
fix issue 20828 (#21726)
* fix issue 20828

fix #20828

Skip firing the event only when the current project is a proxy-cache project and the artifact already exists.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-03-13 14:16:46 +08:00
Chlins Zhang d9e71f9dfc
feat: implement the CNAI model processor (#21663)
feat: implement the AI model processor

Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-03-13 02:04:45 +00:00
stonezdj(Daojun Zhang) 20658181ad
Change audit log label (#21703)
Add more description for the update-user operation: change password or set sys admin

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-03-12 05:56:38 +00:00
Prasanth Baskar caaad52798
Update UI version in package.json to v2.13.0 (#21606)
update version to 2.13.0

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: miner <yminer@vmware.com>
2025-03-11 09:15:21 +00:00
miner 229a27ff41
add prepare migration script for 2.13.0 (#21680)
Signed-off-by: yminer <miner.yang@broadcom.com>
2025-03-11 07:48:22 +00:00
miner 3b8c18fd26
update tlsOptions for external redis (#21681)
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2025-03-10 11:45:18 +00:00
Daniel Jiang e40db21681
Add PKCE support for OIDC authentication (#21702)
Fixes #19393

By default Harbor will generate a pkce code and use it in the
authentication flow to interact with OIDC provider.
Per the OAuth spec, this should not break the flow for OIDC providers that do not support PKCE.
The code_challenge_method is hard-coded to SHA256 for security reasons,
and we may consider adding more settings in the future based on feedback.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-03-10 16:41:14 +08:00
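The verifier/challenge derivation the commit describes can be sketched in Go (a generic RFC 7636 illustration with the S256 method, not Harbor's actual code):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// generateVerifier returns a random PKCE code_verifier:
// 32 random bytes, base64url-encoded without padding.
func generateVerifier() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return base64.RawURLEncoding.EncodeToString(b)
}

// challengeS256 derives the code_challenge using the S256 method:
// BASE64URL(SHA256(code_verifier)), no padding.
func challengeS256(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	v := generateVerifier()
	fmt.Println("code_verifier: ", v)
	fmt.Println("code_challenge:", challengeS256(v))
}
```

The authorization request carries the challenge and method; the token request later proves possession by sending the plain verifier.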
miner fef95244fc
remove redis sentinel patch from builder (#21679)
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-03-04 15:19:05 +08:00
Prasanth Baskar 8419bb6beb
Revamp Copy Pull Command (#21155)
* update copy pull command in artifact tags page

* This commit moves the "Copy Pull Command" button inside the table
* and adds a separate column for better usability

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add user preferences component

* This Commit adds Preferences in navbar
* Updates the navbar

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add container runtime to preference settings

Signed-off-by: bupd <bupdprasanth@gmail.com>

* fix: lint & rebase

Signed-off-by: bupd <bupdprasanth@gmail.com>

* update pull cmd for tag

Signed-off-by: bupd <bupdprasanth@gmail.com>

* update copy pull command for digest

Signed-off-by: bupd <bupdprasanth@gmail.com>

* fix tests

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add toast message on copy pull command

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add top copy button

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add test for preference settings component

Signed-off-by: bupd <bupdprasanth@gmail.com>

* fix lint

Signed-off-by: bupd <bupdprasanth@gmail.com>

* update comments and nits

Signed-off-by: bupd <bupdprasanth@gmail.com>

* update pull cmd prefix name

* Updates title of preference settings
* Updates container runtime to pull cmd prefix

Signed-off-by: bupd <bupdprasanth@gmail.com>

* extend copy pull command with custom prefix

* This commit adds custom as dropdown option
* add custom_runtime localstorage variable for the pull prefix
* fix artifact list tab styles
* align copy icon in artifact tag list tab

Signed-off-by: bupd <bupdprasanth@gmail.com>

* minor fix

* allow only lowercase alphabets

Signed-off-by: bupd <bupdprasanth@gmail.com>

* remove unused copy pull command in i18n

* removes unused in copy_pull_command in i18n in all languages

Signed-off-by: bupd <bupdprasanth@gmail.com>

* remove commented line

Signed-off-by: Prasanth Baskar <bupdprasanth@gmail.com>

* fix es-es-lang

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
Signed-off-by: Prasanth Baskar <bupdprasanth@gmail.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
2025-03-03 13:50:05 +01:00
stonezdj(Daojun Zhang) b9528d8deb
Adjust the audit_log_ext column size to keep align with audit_log table (#21678)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-02-27 10:17:04 +00:00
miner 5c39e76ac4
prepare redis tls config (#21667)
add prepare for redis tls config

Signed-off-by: yminer <miner.yang@broadcom.com>
2025-02-27 17:38:08 +08:00
miner 351783aebe
remove version info for anonymous users (#21672)
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2025-02-26 17:17:49 +08:00
miner 9e84d03720
add redis tls support for core&jobservice (#21654)
Signed-off-by: yminer <miner.yang@broadcom.com>
2025-02-25 07:09:36 +00:00
stonezdj(Daojun Zhang) 4cd06777c0
Fix issue with user create/delete/update event (#21651)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-02-25 03:42:48 +00:00
stonezdj(Daojun Zhang) e5e131845e
Add OIDC login event (#21650)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-02-21 10:18:51 +00:00
Lichao Xue b837bbb716
support to audit logs (#21377)
Signed-off-by: Lichao Xue <lichao.xue@broadcom.com>
Co-authored-by: Lichao Xue <lichao.xue@broadcom.com>
2025-02-21 13:44:48 +08:00
stonezdj(Daojun Zhang) 45659070b7
Update purge audit to purge both audit_log_ext and audit_log (#21608)
Fix integration issue with UI

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-02-18 09:45:46 +00:00
dependabot[bot] add0b600e1
chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp from 1.31.0 to 1.34.0 in /src (#21465)
chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp

Bumps [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp](https://github.com/open-telemetry/opentelemetry-go) from 1.31.0 to 1.34.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.34.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-16 09:07:52 +00:00
dependabot[bot] 42f86f8c4e
chore(deps): bump helm.sh/helm/v3 from 3.16.2 to 3.17.0 in /src (#21468)
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.16.2 to 3.17.0.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.16.2...v3.17.0)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-02-14 17:52:39 +08:00
dependabot[bot] db017f0dae
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.177 to 1.0.180 in /src (#21613)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.177 to 1.0.180.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.177...v1.0.180)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-02-14 07:44:25 +00:00
stonezdj(Daojun Zhang) 6965cab0c5
Add user event and config event (#21455)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-02-14 06:23:14 +00:00
Prasanth Baskar cc966435a5
Replace vmware with goharbor in README (#21626)
replace vmware with goharbor in README

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-02-13 16:58:02 +00:00
dependabot[bot] b658db6ee1
chore(deps): bump github.com/aws/aws-sdk-go from 1.55.5 to 1.55.6 in /src (#21467)
chore(deps): bump github.com/aws/aws-sdk-go in /src

Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.55.5 to 1.55.6.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG_PENDING.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.55.5...v1.55.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-02-13 11:24:04 +00:00
dependabot[bot] 885a5e9fff
chore(deps): bump github.com/go-ldap/ldap/v3 from 3.4.6 to 3.4.10 in /src (#21464)
* chore(deps): bump github.com/go-ldap/ldap/v3 in /src

Bumps [github.com/go-ldap/ldap/v3](https://github.com/go-ldap/ldap) from 3.4.6 to 3.4.10.
- [Release notes](https://github.com/go-ldap/ldap/releases)
- [Commits](https://github.com/go-ldap/ldap/compare/v3.4.6...v3.4.10)

---
updated-dependencies:
- dependency-name: github.com/go-ldap/ldap/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Use DialURL for goldap

Signed-off-by: yminer <miner.yang@broadcom.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2025-02-13 18:41:15 +08:00
Wang Yan f35ed6df16
enlarge the gc workers to 10 (#21462)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-02-11 16:51:17 +08:00
Chlins Zhang 490f898aec
feat: add execution_id and task_id to the replication webhook payload (#21614)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-02-11 14:58:07 +08:00
Prasanth Baskar fee92c5189
Fix Overflow in Interrogation Services Page (#21043)
* fix overflow in interrogation services vuln page

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add space between items in vuln page

* adds padding in between scan now and results of the scan

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-02-10 05:26:13 +00:00
Prasanth Baskar c0ef35896d
Fix: Incorrect Data Display in Replications Table (#21461)
fix incorrect data display in replications table

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-02-08 17:28:17 +08:00
Sergio Méndez 28896b1bd6
Full spanish harbor ui translation (#21369)
Full spanish harbor ui translation

Signed-off-by: Sergio Méndez <sergioarm.gpl@gmail.com>
2025-02-05 07:31:13 +00:00
Prasanth Baskar 5c85f5ec43
(Doc): Add supported Node Version for Harbor UI in .nvmrc (#21153)
add .nvmrc and update doc



update nvmrc to supported version

Signed-off-by: bupd <bupdprasanth@gmail.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
2025-02-03 13:43:04 +00:00
dependabot[bot] 28c3a0ed63
chore(deps): bump go.opentelemetry.io/otel/sdk from 1.31.0 to 1.34.0 in /src (#21440)
chore(deps): bump go.opentelemetry.io/otel/sdk in /src

Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.31.0 to 1.34.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.34.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-24 06:30:37 +00:00
dependabot[bot] 16436b37fc
chore(deps): bump actions/stale from 9.0.0 to 9.1.0 (#21446)
Bumps [actions/stale](https://github.com/actions/stale) from 9.0.0 to 9.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v9.0.0...v9.1.0)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-01-23 10:54:51 +00:00
dependabot[bot] c87c9080ad
chore(deps): bump golang.org/x/time from 0.7.0 to 0.9.0 in /src (#21438)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.7.0 to 0.9.0.
- [Commits](https://github.com/golang/time/compare/v0.7.0...v0.9.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-23 09:11:20 +00:00
stonezdj(Daojun Zhang) f808f33cca
Add user login event to audit log (#21415)
Add common event handler
  Register the login event
  Redirect previous audit log events to the audit_log_ext table

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-23 15:55:22 +08:00
miner 39b2898e18
update exporter docker build para (#21448)
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-01-22 11:23:30 +00:00
dependabot[bot] cb794e7f86
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux from 0.57.0 to 0.59.0 in /src (#21439)
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux

Bumps [go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.57.0 to 0.59.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.57.0...zpages/v0.59.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-01-22 10:26:55 +00:00
dependabot[bot] 8078b9b423
chore(deps): bump k8s.io/client-go from 0.31.1 to 0.32.1 in /src (#21436)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.31.1 to 0.32.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.31.1...v0.32.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-22 08:25:45 +00:00
dependabot[bot] 91a0edc19b
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go from 1.63.80 to 1.63.84 in /src (#21437)
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go in /src

Bumps [github.com/aliyun/alibaba-cloud-sdk-go](https://github.com/aliyun/alibaba-cloud-sdk-go) from 1.63.80 to 1.63.84.
- [Release notes](https://github.com/aliyun/alibaba-cloud-sdk-go/releases)
- [Changelog](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/ChangeLog.txt)
- [Commits](https://github.com/aliyun/alibaba-cloud-sdk-go/compare/v1.63.80...v1.63.84)

---
updated-dependencies:
- dependency-name: github.com/aliyun/alibaba-cloud-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-22 07:38:03 +00:00
Daniel Jiang 045f829277
Bump up trivy to v0.58.2, trivy adapter to v0.32.3 (#21417) (#21442)
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-01-21 17:45:09 +01:00
Wang Yan 9e8e647b71
separate buildin values (#21425)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-16 14:26:24 +00:00
stonezdj(Daojun Zhang) 4d5bc19866
Implement audit log ext API (#21414)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-16 09:23:25 +00:00
dependabot[bot] 3b655213c0
chore(deps): bump golang.org/x/oauth2 from 0.23.0 to 0.25.0 in /src (#21381)
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from 0.23.0 to 0.25.0.
- [Commits](https://github.com/golang/oauth2/compare/v0.23.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/oauth2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-01-16 08:02:32 +00:00
Wang Yan 2140a283bf
remove with_signature (#21420)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-16 05:51:07 +00:00
dependabot[bot] b4c3c73391
chore(deps): bump k8s.io/apimachinery from 0.31.2 to 0.32.0 in /src (#21319)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.31.2 to 0.32.0.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.31.2...v0.32.0)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
Co-authored-by: Chlins Zhang <chlins.zhang@gmail.com>
2025-01-16 02:58:36 +00:00
dependabot[bot] 5545e5b5a8
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.164 to 1.0.177 in /src (#21404)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.164 to 1.0.177.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.164...v1.0.177)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-01-16 10:19:57 +08:00
Prasanth Baskar a6688903bb
Fix Overlay Issue in Replication Page UI (#21069)
* fix overlay in replication execution details page

Signed-off-by: bupd <bupdprasanth@gmail.com>

* fix time overflow in turkish and some languages

* minor fix in displaying time

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-01-15 09:00:36 +00:00
dependabot[bot] 9231fd2b72
chore(deps): bump golang.org/x/net from 0.30.0 to 0.33.0 in /src (#21413)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.30.0 to 0.33.0.
- [Commits](https://github.com/golang/net/compare/v0.30.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-01-15 08:14:17 +00:00
dependabot[bot] ec03ccd7cf
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go from 1.63.47 to 1.63.80 in /src (#21405)
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go in /src

Bumps [github.com/aliyun/alibaba-cloud-sdk-go](https://github.com/aliyun/alibaba-cloud-sdk-go) from 1.63.47 to 1.63.80.
- [Release notes](https://github.com/aliyun/alibaba-cloud-sdk-go/releases)
- [Changelog](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/ChangeLog.txt)
- [Commits](https://github.com/aliyun/alibaba-cloud-sdk-go/compare/v1.63.47...v1.63.80)

---
updated-dependencies:
- dependency-name: github.com/aliyun/alibaba-cloud-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-01-15 06:52:06 +00:00
Wang Yan 97391608d0
bump mockery (#21419)
* bump mockery

Signed-off-by: wang yan <wangyan@vmware.com>

* update mock testing codes

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-15 14:08:21 +08:00
Wang Yan 2364957036
update spectral image (#21410)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-15 03:41:15 +00:00
stonezdj(Daojun Zhang) 60798a49b3
Add dao and manager for audit log ext (#21379)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-14 09:49:50 +00:00
dependabot[bot] cc6ace188d
chore(deps): bump github.com/beego/beego/v2 from 2.2.1 to 2.3.4 in /src (#21321)
* chore(deps): bump github.com/beego/beego/v2 from 2.2.1 to 2.3.4 in /src

Bumps [github.com/beego/beego/v2](https://github.com/beego/beego) from 2.2.1 to 2.3.4.
- [Release notes](https://github.com/beego/beego/releases)
- [Changelog](https://github.com/beego/beego/blob/master/CHANGELOG.md)
- [Commits](https://github.com/beego/beego/compare/v2.2.1...v2.3.4)

---
updated-dependencies:
- dependency-name: github.com/beego/beego/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* add SessionReleaseIfPresent for the session Store implementation

Signed-off-by: yminer <miner.yang@broadcom.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2025-01-14 17:12:56 +08:00
stonezdj(Daojun Zhang) 7c502a8581
Remove id field from payload when update purge audit or gc schedule (#21408)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-14 06:48:50 +00:00
stonezdj(Daojun Zhang) 67654f26bf
Add middleware for audit log (#21376)
Add middleware for audit log ext

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-14 03:54:26 +00:00
Wang Yan b0545c05fd
bump up swagger (#21396)
* bump up swagger

Signed-off-by: wang yan <wangyan@vmware.com>

* fix gc driver type

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2025-01-10 17:02:57 +08:00
Samuel Gaist 15d17a3338
Remove robotV1 from code base (#20958) (#20991)
It was deprecated in 2.4.0.

Signed-off-by: Samuel Gaist <samuel.gaist@idiap.ch>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-01-10 13:36:28 +08:00
stonezdj(Daojun Zhang) 12382fa8ae
Update prepare to avoid error when max_job_duration_hours not configured (#21395)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-10 10:56:43 +08:00
stonezdj(Daojun Zhang) 8ca455eb76
Add config max_job_duration_hours for jobservice (#21390)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-08 17:15:37 +08:00
Prasanth Baskar 8bf710a405
fix: replication rule message in UI (#21299)
* updates replication rule confirm message for execution in UI
* update en-us-lang and es-es-lang with clear focus on execution
* Since different languages have varying interpretations of 'execution',
* it's better to update only the English version

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-01-06 14:07:20 +00:00
stonezdj(Daojun Zhang) 875f43b93c
Add configure item for audit_log_disable (#21368)
Add configure item audit_log_disable

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-06 08:21:52 +00:00
stonezdj(Daojun Zhang) 6001359038
Update testcase in main branch (#21375)
Update robot account e2e testcase for export-cve change

    update job service schedule testcase
    switch dockerhub to registry.goharbor.io

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-06 14:17:42 +08:00
stonezdj(Daojun Zhang) b0c74a0584
Add swagger api and audit_log_ext table model (#21360)
add auditlog-ext related api in swagger
  add audit_log_ext table

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-03 06:11:09 +00:00
stonezdj(Daojun Zhang) abaa40ab60
Skip admin and change oidc user not found message more readable (#21061)
fixes #21041

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-01-03 10:58:24 +08:00
Chlins Zhang a14a4d2468
fix: unify the auth data handle to the decode method (#21350)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2024-12-27 13:52:51 +08:00
Slava Lysunkin 462749a633
Fixed the typo in DTR adapter info (#21357)
Signed-off-by: Slava Lysunkin <lysunkin@gmail.com>
2024-12-26 14:08:05 +08:00
yuzhipeng d7ab265b10
Change commit-msg hook.sh address to right place (#21343)
Since the hook.sh address has moved from

`https://cdn.rawgit.com/tommarshall/git-good-commit/v0.6.1/hook.sh`

to

`https://cdn.jsdelivr.net/gh/tommarshall/git-good-commit@v0.6.1/hook.sh`

update the reference to the new address.

Signed-off-by: yuzhipeng <yuzp1996@gmail.com>
2024-12-24 10:40:30 +08:00
Chlins Zhang a548ab705f
feat: extend the p2p preheat policy (#21115)
Add the field extra_attrs to the p2p preheat policy for the provider to
define their specified parameters when preheating.

Signed-off-by: chlins <chlins.zhang@gmail.com>
2024-12-18 10:30:36 +08:00
Wang Yan e417875377
fix export cve permission issue (#21325)
The export CVE permission should be included in the project scope, as the API relies on project-level judgment.

Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-17 14:52:21 +08:00
dependabot[bot] af63122bb7
chore(deps): bump golang.org/x/crypto from 0.29.0 to 0.31.0 in /src (#21307)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2024-12-13 08:41:31 +00:00
Wang Yan c7cf57bdf8
fix robot account creation issue (#21310)
fixes #21251

Signed-off-by: wang yan <wangyan@vmware.com>
2024-12-13 11:11:44 +08:00
Wang Yan 29bd094732
fix robot deletion event (#21234)
* fix robot deletion event

Signed-off-by: wang yan <wangyan@vmware.com>

* resolve comments

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2024-11-26 14:32:35 +08:00
Chlins Zhang 05233b0711
fix: event-based replication deletion not working when policy has label filter (#21215)
fix: event-based replication deletion not working when policy has label filter

Fix event-based replication deletion on remote registry not triggered
when the replication policy configured the label filter.

Signed-off-by: chlins <chlins.zhang@gmail.com>
2024-11-26 02:29:34 +00:00
Hajnal Máté 4a12623459
Fix postgres script permissions (#21007)
The initdb.sh and the upgrade.sh scripts in the postgres image
were not owned by the postgres user, which made them fail
with permission-denied errors.

Signed-off-by: Mate Hajnal <hajnalmt@gmail.com>
2024-11-25 14:53:19 +02:00
stonezdj(Daojun Zhang) 969384cd63
Enable MAX_JOB_DURATION_SECONDS in the jobservice container (#21232)
Enable the jobservice container to set MAX_JOB_DURATION_SECONDS to customize the maximum job duration
  fork gocraft/work to goharbor/work

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-11-23 04:10:33 +08:00
Daniel Jiang 66c98c81f1
Update assignees (#21136)
Some developers are no longer working on Harbor.
I'm removing them from the assignees list.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2024-11-19 16:17:23 +08:00
dependabot[bot] 994a8622d5
chore(deps): bump codecov/codecov-action from 4 to 5 (#21192)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2024-11-19 14:48:17 +08:00
Wang Yan ba177ffbb5
remove asc files handling (#21214)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-11-19 09:46:45 +08:00
Wang Yan 9345fe39c9
update csrf key generation (#21154)
* update csrf key generation

Fixes #21060

Do not generate a random key if the provided key has an invalid length.

Signed-off-by: wang yan <wangyan@vmware.com>

* fix ut check

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2024-11-15 21:40:27 +08:00
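The behavior described by that fix can be sketched as follows — a simplified illustration with assumed names and an assumed 32-character key length, not Harbor's exact code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

const csrfKeyLen = 32

// resolveCSRFKey falls back to a random key only when no key is provided;
// a key of the wrong length is rejected instead of being silently replaced.
func resolveCSRFKey(provided string) (string, error) {
	if provided == "" {
		b := make([]byte, csrfKeyLen/2)
		if _, err := rand.Read(b); err != nil {
			return "", err
		}
		return hex.EncodeToString(b), nil
	}
	if len(provided) != csrfKeyLen {
		return "", fmt.Errorf("invalid CSRF key length: got %d, want %d", len(provided), csrfKeyLen)
	}
	return provided, nil
}

func main() {
	k, err := resolveCSRFKey("")
	fmt.Println(len(k), err)
}
```

Erroring out on a wrong-length key, rather than quietly generating a random one, avoids silently invalidating all existing sessions on restart.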
stonezdj(Daojun Zhang) bccfd5fb41
Change the source of trivy-db to avoid 429 error (#21183)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-11-15 03:30:04 +00:00
miner d39d979736
remove slack notification (#21185)
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2024-11-14 14:07:38 +08:00
dependabot[bot] 45ec9bbbbd
chore(deps): bump golang.org/x/crypto from 0.28.0 to 0.29.0 in /src (#21158)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.28.0 to 0.29.0.
- [Commits](https://github.com/golang/crypto/compare/v0.28.0...v0.29.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...



go mod tidy

Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-13 08:43:38 +00:00
dependabot[bot] 45c4b01c8c
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.159 to 1.0.164 in /src (#21159)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.159 to 1.0.164.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.159...v1.0.164)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: miner <yminer@vmware.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-13 08:04:10 +00:00
dependabot[bot] f61f56c544
chore(deps): bump go.opentelemetry.io/otel from 1.31.0 to 1.32.0 in /src (#21162)
Bumps [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) from 1.31.0 to 1.32.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.31.0...v1.32.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-13 08:02:59 +00:00
dependabot[bot] 800a296956
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux from 0.56.0 to 0.57.0 in /src (#21160)
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux

Bumps [go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.56.0 to 0.57.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.56.0...zpages/v0.57.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-13 07:21:55 +00:00
dependabot[bot] a7616b62c3
chore(deps): bump golang.org/x/text from 0.19.0 to 0.20.0 in /src (#21161)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.19.0 to 0.20.0.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.19.0...v0.20.0)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-13 14:44:53 +08:00
stonezdj(Daojun Zhang) a0d27d32cc
Update image tag for nightly-trivy-scan (#21165)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-11-12 13:34:41 +08:00
stonezdj(Daojun Zhang) 2b881d6a5f
Update support matrix for 2.12.x (#21150)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-11-08 17:34:50 +08:00
dependabot[bot] 75047746dc
chore(deps): bump github.com/prometheus/client_golang from 1.20.4 to 1.20.5 in /src (#21105)
chore(deps): bump github.com/prometheus/client_golang in /src

Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.20.4 to 1.20.5.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.20.4...v1.20.5)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-06 09:20:28 +00:00
dependabot[bot] 3da19ac9c7
chore(deps): bump github.com/golang-jwt/jwt/v4 from 4.4.2 to 4.5.1 in /src (#21132)
chore(deps): bump github.com/golang-jwt/jwt/v4 in /src

Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.4.2 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.4.2...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: indirect
...



go mod tidy

Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-06 08:43:59 +00:00
dependabot[bot] b1e5f9d00c
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go from 1.61.1193 to 1.63.47 in /src (#21130)
* chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go in /src

Bumps [github.com/aliyun/alibaba-cloud-sdk-go](https://github.com/aliyun/alibaba-cloud-sdk-go) from 1.61.1193 to 1.63.47.
- [Release notes](https://github.com/aliyun/alibaba-cloud-sdk-go/releases)
- [Changelog](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/ChangeLog.txt)
- [Commits](https://github.com/aliyun/alibaba-cloud-sdk-go/compare/v1.61.1193...v1.63.47)

---
updated-dependencies:
- dependency-name: github.com/aliyun/alibaba-cloud-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* update alibaba-cloud-sdk-go utils RandStringBytes to GetNonce

Signed-off-by: yminer <miner.yang@broadcom.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: yminer <miner.yang@broadcom.com>
2024-11-06 16:06:50 +08:00
dependabot[bot] 87d923ee89
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux from 0.51.0 to 0.56.0 in /src (#21131)
chore(deps): bump go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux

Bumps [go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.51.0 to 0.56.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.51.0...zpages/v0.56.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux
  dependency-type: direct:production
  update-type: version-update:semver-minor
...



go mod tidy

Signed-off-by: yminer <miner.yang@broadcom.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-06 14:16:02 +08:00
dependabot[bot] 119e37945d
chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp from 1.27.0 to 1.31.0 in /src (#21107)
chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp

Bumps [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp](https://github.com/open-telemetry/opentelemetry-go) from 1.27.0 to 1.31.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.27.0...v1.31.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-05 08:06:44 +00:00
dependabot[bot] 62d1d93167
chore(deps): bump k8s.io/api from 0.31.1 to 0.31.2 in /src (#21104)
Bumps [k8s.io/api](https://github.com/kubernetes/api) from 0.31.1 to 0.31.2.
- [Commits](https://github.com/kubernetes/api/compare/v0.31.1...v0.31.2)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-04 12:03:02 +00:00
dependabot[bot] ee8b8764df
chore(deps): bump go.opentelemetry.io/otel/sdk from 1.29.0 to 1.31.0 in /src (#21106)
chore(deps): bump go.opentelemetry.io/otel/sdk in /src

Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.29.0 to 1.31.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.29.0...v1.31.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2024-11-04 19:26:43 +08:00
stonezdj(Daojun Zhang) 9e55afbb9a
Update the testcase for v2.12.0, changes include (#21113)
pull image from registry.goharbor.io instead of dockerhub
   Update testcase to support Docker Image Can Be Pulled With Credential
   Change gitlab project name when user changed.
   Update permissions count and permission count total
   Change webhook_endpoint_ui

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2024-10-30 20:07:01 +08:00
Wang Yan 3dbfd4229b
bump base version (#21111)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-30 14:36:35 +08:00
cangqiaoyuzhuo f99e0da017
chore: fix some comments (#21109)
Signed-off-by: cangqiaoyuzhuo <850072022@qq.com>
2024-10-30 09:49:29 +08:00
Wang Yan 91082af39f
fix release script (#21100)
Since we will not ship the asc files from v2.12 on, the steps that handle signatures need to be removed.

Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-28 07:14:07 +00:00
tostt 2ed5c1eb97
Update fr-fr-lang.json (#21082)
Updated untranslated French strings

Signed-off-by: tostt <tostt@users.noreply.github.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2024-10-25 16:07:17 +08:00
stonezdj(Daojun Zhang) 21de42421f
[WIP] Remove unused files (#20791)
Remove unused files

   fix golint warnings

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2024-10-24 21:07:07 +00:00
miner a7b91b5414
bump up dependencies (#21088)
Signed-off-by: yminer <yminer@vmware.com>
2024-10-24 18:32:16 +08:00
Wang Yan 6c394232b6
fix build package issue (#21087)
Signed-off-by: wang yan <wangyan@vmware.com>
2024-10-24 16:48:58 +08:00
László Rafael 8254c02603
Fix auth config oidc scope regex (#20483)
2024-10-24 08:09:44 +00:00
1337 changed files with 25981 additions and 10426 deletions

View File

@ -10,8 +10,6 @@ assignees:
- OrlinVasilev
- stonezdj
- chlins
- zyyw
- MinerYang
- AllForNothing
numberOfAssignees: 3

View File

@ -89,9 +89,9 @@ jobs:
bash ./tests/showtime.sh ./tests/ci/ut_run.sh $IP
df -h
- name: Codecov For BackEnd
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
file: ./src/github.com/goharbor/harbor/profile.cov
files: ./src/github.com/goharbor/harbor/profile.cov
flags: unittests
APITEST_DB:
@ -331,7 +331,7 @@ jobs:
bash ./tests/showtime.sh ./tests/ci/ui_ut_run.sh
df -h
- name: Codecov For UI
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
file: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
files: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
flags: unittests

View File

@ -13,16 +13,14 @@ jobs:
env:
BUILD_PACKAGE: true
runs-on:
- ubuntu-20.04
- ubuntu-22.04
steps:
- uses: actions/checkout@v3
- uses: 'google-github-actions/auth@v2'
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
- run: gcloud info
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Set up Go 1.22
uses: actions/setup-go@v5
with:
@ -89,40 +87,20 @@ jobs:
else
build_base_params=" BUILD_BASE=true PUSHBASEIMAGE=true REGISTRYUSER=\"${{ secrets.DOCKER_HUB_USERNAME }}\" REGISTRYPASSWORD=\"${{ secrets.DOCKER_HUB_PASSWORD }}\""
fi
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
harbor_offline_build_bundle=$(basename harbor-offline-installer-*.tgz)
harbor_online_build_bundle=$(basename harbor-online-installer-*.tgz)
echo "Package name is: $harbor_offline_build_bundle"
echo "Package name is: $harbor_online_build_bundle"
# echo -en "${{ secrets.HARBOR_SIGN_KEY }}" | gpg --import
# gpg -v -ab -u ${{ secrets.HARBOR_SIGN_KEY_ID }} $harbor_offline_build_bundle
# gpg -v -ab -u ${{ secrets.HARBOR_SIGN_KEY_ID }} $harbor_online_build_bundle
source tests/ci/build_util.sh
cp ${harbor_offline_build_bundle} harbor-offline-installer-latest.tgz
# cp ${harbor_offline_build_bundle}.asc harbor-offline-installer-latest.tgz.asc
cp ${harbor_online_build_bundle} harbor-online-installer-latest.tgz
# cp ${harbor_online_build_bundle}.asc harbor-online-installer-latest.tgz.asc
uploader ${harbor_offline_build_bundle} $harbor_target_bucket
# uploader ${harbor_offline_build_bundle}.asc $harbor_target_bucket
uploader ${harbor_online_build_bundle} $harbor_target_bucket
# uploader ${harbor_online_build_bundle}.asc $harbor_target_bucket
uploader harbor-offline-installer-latest.tgz $harbor_target_bucket
# uploader harbor-offline-installer-latest.tgz.asc $harbor_target_bucket
uploader harbor-online-installer-latest.tgz $harbor_target_bucket
# uploader harbor-online-installer-latest.tgz.asc $harbor_target_bucket
echo "BUILD_BUNDLE=$harbor_offline_build_bundle" >> $GITHUB_ENV
publishImage $target_branch $Harbor_Assets_Version "${{ secrets.DOCKER_HUB_USERNAME }}" "${{ secrets.DOCKER_HUB_PASSWORD }}"
- name: Slack Notification
uses: sonots/slack-notice-action@v3
with:
status: ${{ job.status }}
title: Build Package - ${{ env.BUILD_BUNDLE }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
if: always()

View File

@ -17,14 +17,12 @@ jobs:
#- self-hosted
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- id: 'auth'
name: 'Authenticate to Google Cloud'
uses: google-github-actions/auth@v2
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
- run: gcloud info
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Set up Go 1.21
uses: actions/setup-go@v5
with:
@ -65,6 +63,5 @@ jobs:
- name: upload test result to gs
run: |
cd src/github.com/goharbor/harbor
gsutil cp ./distribution-spec/conformance/report.html gs://harbor-conformance-test/report.html
gsutil acl ch -u AllUsers:R gs://harbor-conformance-test/report.html
aws s3 cp ./distribution-spec/conformance/report.html s3://harbor-conformance-test/report.html
if: always()

View File

@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.0.0
- uses: actions/stale@v9.1.0
with:
stale-issue-message: 'This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.'
stale-pr-message: 'This PR is being marked stale due to a period of inactivity. If this PR is still relevant, please comment or remove the stale label. Otherwise, this PR will close in 30 days.'

View File

@ -12,7 +12,7 @@ jobs:
matrix:
# maintain the versions of harbor that need to be actively
# security scanned
versions: [dev, v2.11.0-dev]
versions: [dev, v2.12.0-dev]
# list of images that need to be scanned
images: [harbor-core, harbor-db, harbor-exporter, harbor-jobservice, harbor-log, harbor-portal, harbor-registryctl, prepare]
permissions:
@ -30,7 +30,11 @@ jobs:
format: 'template'
template: '@/contrib/sarif.tpl'
output: 'trivy-results.sarif'
env:
# Use AWS' ECR mirror for the trivy-db image, as GitHub's Container
# Registry is returning a TOOMANYREQUESTS error.
# Ref: https://github.com/aquasecurity/trivy-action/issues/389
TRIVY_DB_REPOSITORY: 'public.ecr.aws/aquasecurity/trivy-db:2'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v3
with:

View File

@ -9,6 +9,9 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
push:
paths:
- 'docs/**'
@ -17,6 +20,9 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
jobs:
UTTEST:

View File

@ -7,7 +7,7 @@ on:
jobs:
release:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- name: Setup env
@ -19,12 +19,12 @@ jobs:
echo "PRE_TAG=$(echo $release | jq -r '.body' | jq -r '.preTag')" >> $GITHUB_ENV
echo "BRANCH=$(echo $release | jq -r '.target_commitish')" >> $GITHUB_ENV
echo "PRERELEASE=$(echo $release | jq -r '.prerelease')" >> $GITHUB_ENV
- uses: 'google-github-actions/auth@v2'
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Prepare Assets
run: |
if [ ! ${{ env.BUILD_NO }} -o ${{ env.BUILD_NO }} = "null" ]
@ -39,10 +39,8 @@ jobs:
src_online_package=harbor-online-installer-${{ env.BASE_TAG }}-${{ env.BUILD_NO }}.tgz
dst_offline_package=harbor-offline-installer-${{ env.CUR_TAG }}.tgz
dst_online_package=harbor-online-installer-${{ env.CUR_TAG }}.tgz
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package}.asc gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}.asc
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package}.asc gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}.asc
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
assets_path=$(pwd)/assets
source tools/release/release_utils.sh && getAssets ${{ secrets.HARBOR_RELEASE_BUILD }} ${{ env.BRANCH }} $dst_offline_package $dst_online_package ${{ env.PRERELEASE }} $assets_path
@ -74,7 +72,6 @@ jobs:
body_path: ${{ env.RELEASE_NOTES_PATH }}
files: |
${{ env.OFFLINE_PACKAGE_PATH }}
${{ env.OFFLINE_PACKAGE_PATH }}.asc
${{ env.MD5SUM_PATH }}
- name: GA Release
uses: softprops/action-gh-release@v2
@ -83,7 +80,5 @@ jobs:
body_path: ${{ env.RELEASE_NOTES_PATH }}
files: |
${{ env.OFFLINE_PACKAGE_PATH }}
${{ env.OFFLINE_PACKAGE_PATH }}.asc
${{ env.ONLINE_PACKAGE_PATH }}
${{ env.ONLINE_PACKAGE_PATH }}.asc
${{ env.MD5SUM_PATH }}

.gitignore vendored
View File

@ -50,6 +50,7 @@ src/portal/cypress/screenshots
**/aot
**/dist
**/.bin
**/robotvars.py
src/core/conf/app.conf
src/server/v2.0/models/
@ -58,3 +59,4 @@ src/server/v2.0/restapi/
harborclient/
openapi-generator-cli.jar
tests/e2e_setup/robotvars.py

View File

@ -31,10 +31,10 @@ API explorer integration. End users can now explore and trigger Harbor's API v
* Support Image Retag, enables the user to tag image to different repositories and projects, this is particularly useful in cases when images need to be retagged programmatically in a CI pipeline.
* Support Image Build History, makes it easy to see the contents of a container image, refer to the [User Guide](https://github.com/goharbor/harbor/blob/release-1.7.0/docs/user_guide.md#build-history).
* Support Logger customization, enables the user to customize STDOUT / STDERR / FILE / DB logger of running jobs.
* Improve user experience of Helm Chart Repository:
- Chart searching included in the global search results
- Show chart versions total number in the chart list
- Mark labels to helm charts
* Improve the user experience of Helm Chart Repository:
- Chart searching is included in the global search results
- Show the total number of chart versions in the chart list
- Mark labels in helm charts
- The latest version can be downloaded as default one on the chart list view
- The chart can be deleted by deleting all the versions under it
@ -58,7 +58,7 @@ API explorer integration. End users can now explore and trigger Harbor's API v
- Replication policy rework to support wildcard, scheduled replication.
- Support repository level description.
- Batch operation on projects/repositories/users from UI.
- On board LDAP user when adding member to a project.
- On board LDAP user when adding a member to a project.
## v1.3.0 (2018-01-04)
@ -75,11 +75,11 @@ API explorer integration. End users can now explore and trigger Harbor's API v
## v1.1.0 (2017-04-18)
- Add in Notary support
- User can update configuration through Harbor UI
- User can update the configuration through Harbor UI
- Redesign of Harbor's UI using Clarity
- Some changes to API
- Fix some security issues in token service
- Upgrade base image of nginx for latest openssl version
- Fix some security issues in the token service
- Upgrade the base image of nginx to the latest openssl version
- Various bug fixes.
## v0.5.0 (2016-12-6)
@ -88,7 +88,7 @@ API explorer integration. End users can now explore and trigger Harbors API v
- Easier configuration for HTTPS in prepare script
- Script to collect logs of a Harbor deployment
- User can view the storage usage (default location) of Harbor.
- Add an attribute to disable normal user to create project
- Add an attribute to disable normal users from creating projects.
- Various bug fixes.
For Harbor virtual appliance:

View File

@ -14,7 +14,7 @@ Contributors are encouraged to collaborate using the following resources in addi
* Chat with us on the CNCF Slack ([get an invitation here][cncf-slack] )
* [#harbor][users-slack] for end-user discussions
* [#harbor-dev][dev-slack] for development of Harbor
* Want long-form communication instead of Slack? We have two distributions lists:
* Want long-form communication instead of Slack? We have two distribution lists:
* [harbor-users][users-dl] for end-user discussions
* [harbor-dev][dev-dl] for development of Harbor
@ -49,7 +49,7 @@ To build the project, please refer the [build](https://goharbor.io/docs/edge/bui
### Repository Structure
Here is the basic structure of the harbor code base. Some key folders / files are commented for your references.
Here is the basic structure of the Harbor code base. Some key folders / files are commented for your reference.
```
.
...
@ -167,13 +167,15 @@ Harbor backend is written in [Go](http://golang.org/). If you don't have a Harbo
| 2.10 | 1.21.8 |
| 2.11 | 1.22.3 |
| 2.12 | 1.23.2 |
| 2.13 | 1.23.8 |
| 2.14 | 1.24.5 |
Ensure your GOPATH and PATH have been configured in accordance with the Go environment instructions.
#### Web
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup web UI development environment, please make sure the [npm](https://www.npmjs.com/get-npm) tool is installed first.
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup a web UI development environment, please make sure that the [npm](https://www.npmjs.com/get-npm) tool is installed first.
| Harbor | Requires Angular | Requires Clarity |
|----------|--------------------|--------------------|
@ -203,7 +205,7 @@ PR are always welcome, even if they only contain small fixes like typos or a few
Please submit a PR broken down into small changes bit by bit. A PR consisting of a lot of features and code changes may be hard to review. It is recommended to submit PRs in an incremental fashion.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature complete.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature completed.
### Fork and clone
@ -277,7 +279,7 @@ To build the code, please refer to [build](https://goharbor.io/docs/edge/build-c
**Note**: from v2.0, Harbor uses [go-swagger](https://github.com/go-swagger/go-swagger) to generate API server from Swagger 2.0 (aka [OpenAPI 2.0](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md)). To add or change the APIs, first update the `api/v2.0/swagger.yaml` file, then run `make gen_apis` to generate the API server, finally, implement or update the API handlers in `src/server/v2.0/handler` package.
As now Harbor uses `controller/manager/dao` programming model, we suggest to use [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
As Harbor now uses `controller/manager/dao` programming model, we suggest using [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
### Keep sync with upstream
@ -312,19 +314,19 @@ The commit message should follow the convention on [How to Write a Git Commit Me
To help write conformant commit messages, it is recommended to set up the [git-good-commit](https://github.com/tommarshall/git-good-commit) commit hook. Run this command in the Harbor repo's root directory:
```sh
curl https://cdn.rawgit.com/tommarshall/git-good-commit/v0.6.1/hook.sh > .git/hooks/commit-msg && chmod +x .git/hooks/commit-msg
curl https://cdn.jsdelivr.net/gh/tommarshall/git-good-commit@v0.6.1/hook.sh > .git/hooks/commit-msg && chmod +x .git/hooks/commit-msg
```
### Automated Testing
Once your pull request has been opened, harbor will run two CI pipelines against it.
Once your pull request has been opened, Harbor will run two CI pipelines against it.
1. In the travis CI, your source code will be checked via `golint`, `go vet` and `go race`, which make sure the code is readable, safe and correct. Also, all unit tests will be triggered via `go test` against the pull request. What you need to pay attention to is the travis result and the coverage report.
* If there is any failure in travis, you need to figure out whether it was introduced by your commits.
* If the coverage dramatic decline, you need to commit unit test to coverage your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in google storage for later analysis. The pipeline is about to build and install harbor from source code, then to run four very basic E2E tests to validate the basic functionalities of harbor, like:
* Registry Basic Verification, to validate the image can be pulled and pushed successful.
* Trivy Basic Verification, to validate the image can be scanned successful.
* Notary Basic Verification, to validate the image can be signed successful.
* Ldap Basic Verification, to validate harbor can work in LDAP environment.
* If the coverage dramatically declines, then you need to commit a unit test to cover your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in Google storage for later analysis. The pipeline builds and installs Harbor from source code, then runs four very basic E2E tests to validate the basic functionalities of Harbor:
* Registry Basic Verification, to validate that the image can be pulled and pushed successfully.
* Trivy Basic Verification, to validate that the image can be scanned successfully.
* Notary Basic Verification, to validate that the image can be signed successfully.
* Ldap Basic Verification, to validate that Harbor can work in an LDAP environment.
### Push and Create PR
When ready for review, push your branch to your fork repository on `github.com`:
@ -343,7 +345,7 @@ Commit changes made in response to review comments to the same branch on your fo
Reporting an issue is a great way to contribute to Harbor. Well-written and complete bug reports are always welcome! Please open an issue on GitHub and follow the template to fill in the required information.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplication.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplicate.
If you find a match, you can "subscribe" to it to get notified on updates. If you have additional helpful information about the issue, please leave a comment.
When reporting issues, always include:

View File

@ -78,8 +78,10 @@ REGISTRYSERVER=
REGISTRYPROJECTNAME=goharbor
DEVFLAG=true
TRIVYFLAG=false
EXPORTERFLAG=false
HTTPPROXY=
BUILDBIN=true
BUILDREG=true
BUILDTRIVYADP=true
NPM_REGISTRY=https://registry.npmjs.org
BUILDTARGET=build
GEN_TLS=
@ -91,7 +93,12 @@ VERSIONTAG=dev
BUILD_BASE=true
PUSHBASEIMAGE=false
BASEIMAGETAG=dev
BUILDBASETARGET=trivy-adapter core db jobservice log nginx portal prepare redis registry registryctl exporter
# skip building the prepare and log containers when BUILD_INSTALLER=false
BUILD_INSTALLER=true
BUILDBASETARGET=trivy-adapter core db jobservice nginx portal redis registry registryctl exporter
ifeq ($(BUILD_INSTALLER), true)
BUILDBASETARGET += prepare log
endif
IMAGENAMESPACE=goharbor
BASEIMAGENAMESPACE=goharbor
# #input true/false only
@ -104,13 +111,14 @@ PREPARE_VERSION_NAME=versions
#versions
REGISTRYVERSION=v2.8.3-patch-redis
TRIVYVERSION=v0.56.1
TRIVYADAPTERVERSION=v0.32.0-rc.1
TRIVYVERSION=v0.61.0
TRIVYADAPTERVERSION=v0.33.0-rc.2
NODEBUILDIMAGE=node:16.18.0
# version of registry for pulling the source code
REGISTRY_SRC_TAG=v2.8.3
REGISTRY_SRC_TAG=release/2.8
# source of upstream distribution code
DISTRIBUTION_SRC=https://github.com/distribution/distribution.git
DISTRIBUTION_SRC=https://github.com/goharbor/distribution.git
# dependency binaries
REGISTRYURL=https://storage.googleapis.com/harbor-builds/bin/registry/release-${REGISTRYVERSION}/registry
@ -127,6 +135,7 @@ endef
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build
DOCKERNETWORK=default
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERPULL=$(DOCKERCMD) pull
DOCKERIMAGES=$(DOCKERCMD) images
@ -142,7 +151,7 @@ GOINSTALL=$(GOCMD) install
GOTEST=$(GOCMD) test
GODEP=$(GOTEST) -i
GOFMT=gofmt -w
GOBUILDIMAGE=golang:1.23.2
GOBUILDIMAGE=golang:1.24.5
GOBUILDPATHINCONTAINER=/harbor
# go build
@ -236,18 +245,27 @@ REGISTRYUSER=
REGISTRYPASSWORD=
# cmds
DOCKERSAVE_PARA=$(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
DOCKERSAVE_PARA=$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_CORE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG) \
$(DOCKERIMAGENAME_DB):$(VERSIONTAG) \
$(DOCKERIMAGENAME_JOBSERVICE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_REGCTL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) \
$(IMAGENAMESPACE)/redis-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/nginx-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/registry-photon:$(VERSIONTAG)
ifeq ($(BUILD_INSTALLER), true)
DOCKERSAVE_PARA+= $(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG)
endif
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
ifeq ($(EXPORTERFLAG), true)
DOCKERSAVE_PARA+= $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG)
endif
PACKAGE_OFFLINE_PARA=-zcvf harbor-offline-installer-$(PKGVERSIONTAG).tgz \
$(HARBORPKG)/$(DOCKERIMGFILE).$(VERSIONTAG).tar.gz \
$(HARBORPKG)/prepare \
@ -264,11 +282,6 @@ PACKAGE_ONLINE_PARA=-zcvf harbor-online-installer-$(PKGVERSIONTAG).tgz \
DOCKERCOMPOSE_FILE_OPT=-f $(DOCKERCOMPOSEFILEPATH)/$(DOCKERCOMPOSEFILENAME)
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
RUNCONTAINER=$(DOCKERCMD) run --rm -u $(shell id -u):$(shell id -g) -v $(BUILDPATH):$(BUILDPATH) -w $(BUILDPATH)
# $1 the name of the docker image
@ -282,8 +295,8 @@ endef
# lint swagger doc
SPECTRAL_IMAGENAME=$(IMAGENAMESPACE)/spectral
SPECTRAL_VERSION=v6.11.1
SPECTRAL_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/spectral/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg SPECTRAL_VERSION=${SPECTRAL_VERSION} -t ${SPECTRAL_IMAGENAME}:$(SPECTRAL_VERSION) .
SPECTRAL_VERSION=v6.14.2
SPECTRAL_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/spectral/Dockerfile --build-arg NODE=${NODEBUILDIMAGE} --build-arg SPECTRAL_VERSION=${SPECTRAL_VERSION} -t ${SPECTRAL_IMAGENAME}:$(SPECTRAL_VERSION) .
SPECTRAL=$(RUNCONTAINER) $(SPECTRAL_IMAGENAME):$(SPECTRAL_VERSION)
lint_apis:
@ -291,7 +304,7 @@ lint_apis:
$(SPECTRAL) lint ./api/v2.0/swagger.yaml
SWAGGER_IMAGENAME=$(IMAGENAMESPACE)/swagger
SWAGGER_VERSION=v0.25.0
SWAGGER_VERSION=v0.31.0
SWAGGER=$(RUNCONTAINER) ${SWAGGER_IMAGENAME}:${SWAGGER_VERSION}
SWAGGER_GENERATE_SERVER=${SWAGGER} generate server --template-dir=$(TOOLSPATH)/swagger/templates --exclude-main --additional-initialism=CVE --additional-initialism=GC --additional-initialism=OIDC
SWAGGER_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/swagger/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg SWAGGER_VERSION=${SWAGGER_VERSION} -t ${SWAGGER_IMAGENAME}:$(SWAGGER_VERSION) .
@ -306,13 +319,13 @@ define swagger_generate_server
@$(SWAGGER_GENERATE_SERVER) -f $(1) -A $(3) --target $(2)
endef
gen_apis: lint_apis
gen_apis:
$(call prepare_docker_image,${SWAGGER_IMAGENAME},${SWAGGER_VERSION},${SWAGGER_IMAGE_BUILD_CMD})
$(call swagger_generate_server,api/v2.0/swagger.yaml,src/server/v2.0,harbor)
MOCKERY_IMAGENAME=$(IMAGENAMESPACE)/mockery
MOCKERY_VERSION=v2.46.2
MOCKERY_VERSION=v2.53.3
MOCKERY=$(RUNCONTAINER)/src ${MOCKERY_IMAGENAME}:${MOCKERY_VERSION}
MOCKERY_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/mockery/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg MOCKERY_VERSION=${MOCKERY_VERSION} -t ${MOCKERY_IMAGENAME}:$(MOCKERY_VERSION) .
@ -336,7 +349,7 @@ versions_prepare:
check_environment:
@$(MAKEPATH)/$(CHECKENVCMD)
compile_core: gen_apis
compile_core: lint_apis gen_apis
@echo "compiling binary for core (golang image)..."
@echo $(GOBUILDPATHINCONTAINER)
@$(DOCKERCMD) run --rm -v $(BUILDPATH):$(GOBUILDPATHINCONTAINER) -w $(GOBUILDPATH_CORE) $(GOBUILDIMAGE) $(GOIMAGEBUILD_CORE) -o $(GOBUILDPATHINCONTAINER)/$(GOBUILDMAKEPATH_CORE)/$(CORE_BINARYNAME)
@ -387,17 +400,19 @@ build:
echo Should pull base images from registry in docker configuration since no base images built. ; \
exit 1; \
fi
make -f $(MAKEFILEPATH_PHOTON)/Makefile $(BUILDTARGET) -e DEVFLAG=$(DEVFLAG) -e GOBUILDIMAGE=$(GOBUILDIMAGE) \
make -f $(MAKEFILEPATH_PHOTON)/Makefile $(BUILDTARGET) -e DEVFLAG=$(DEVFLAG) -e GOBUILDIMAGE=$(GOBUILDIMAGE) -e NODEBUILDIMAGE=$(NODEBUILDIMAGE) \
-e REGISTRYVERSION=$(REGISTRYVERSION) -e REGISTRY_SRC_TAG=$(REGISTRY_SRC_TAG) -e DISTRIBUTION_SRC=$(DISTRIBUTION_SRC)\
-e TRIVYVERSION=$(TRIVYVERSION) -e TRIVYADAPTERVERSION=$(TRIVYADAPTERVERSION) \
-e VERSIONTAG=$(VERSIONTAG) \
-e BUILDBIN=$(BUILDBIN) \
-e DOCKERNETWORK=$(DOCKERNETWORK) \
-e BUILDREG=$(BUILDREG) -e BUILDTRIVYADP=$(BUILDTRIVYADP) \
-e BUILD_INSTALLER=$(BUILD_INSTALLER) \
-e NPM_REGISTRY=$(NPM_REGISTRY) -e BASEIMAGETAG=$(BASEIMAGETAG) -e IMAGENAMESPACE=$(IMAGENAMESPACE) -e BASEIMAGENAMESPACE=$(BASEIMAGENAMESPACE) \
-e REGISTRYURL=$(REGISTRYURL) \
-e TRIVY_DOWNLOAD_URL=$(TRIVY_DOWNLOAD_URL) -e TRIVY_ADAPTER_DOWNLOAD_URL=$(TRIVY_ADAPTER_DOWNLOAD_URL) \
-e PULL_BASE_FROM_DOCKERHUB=$(PULL_BASE_FROM_DOCKERHUB) -e BUILD_BASE=$(BUILD_BASE) \
-e REGISTRYUSER=$(REGISTRYUSER) -e REGISTRYPASSWORD=$(REGISTRYPASSWORD) \
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE)
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE) -e GOBUILDIMAGE=$(GOBUILDIMAGE)
build_standalone_db_migrator: compile_standalone_db_migrator
make -f $(MAKEFILEPATH_PHOTON)/Makefile _build_standalone_db_migrator -e BASEIMAGETAG=$(BASEIMAGETAG) -e VERSIONTAG=$(VERSIONTAG)
@ -438,7 +453,14 @@ package_online: update_prepare_version
@rm -rf $(HARBORPKG)
@echo "Done."
package_offline: update_prepare_version compile build
.PHONY: check_buildinstaller
check_buildinstaller:
@if [ "$(BUILD_INSTALLER)" != "true" ]; then \
echo "BUILD_INSTALLER must be set to true when triggering a package_offline build" ; \
exit 1; \
fi
package_offline: check_buildinstaller update_prepare_version compile build
@echo "packing offline package ..."
@cp -r make $(HARBORPKG)
@ -468,8 +490,8 @@ misspell:
@echo checking misspell...
@find . -type d \( -path ./tests \) -prune -o -name '*.go' -print | xargs misspell -error
# golangci-lint binary installation or refer to https://golangci-lint.run/usage/install/#local-installation
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.55.2
# golangci-lint binary installation or refer to https://golangci-lint.run/usage/install/#local-installation
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.1.2
GOLANGCI_LINT := $(shell go env GOPATH)/bin/golangci-lint
lint:
@echo checking lint
@ -537,7 +559,7 @@ swagger_client:
rm -rf harborclient
mkdir -p harborclient/harbor_v2_swagger_client
java -jar openapi-generator-cli.jar generate -i api/v2.0/swagger.yaml -g python -o harborclient/harbor_v2_swagger_client --package-name v2_swagger_client
cd harborclient/harbor_v2_swagger_client; python ./setup.py install
cd harborclient/harbor_v2_swagger_client; pip install .
pip install docker -q
pip freeze

View File

@ -9,6 +9,7 @@
[![Nightly Status](https://us-central1-eminent-nation-87317.cloudfunctions.net/harbor-nightly-result)](https://www.googleapis.com/storage/v1/b/harbor-nightly/o)
![CONFORMANCE_TEST](https://github.com/goharbor/harbor/workflows/CONFORMANCE_TEST/badge.svg)
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor?ref=badge_shield)
[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/harbor)](https://artifacthub.io/packages/helm/harbor/harbor)
</br>
|![notification](https://raw.githubusercontent.com/goharbor/website/master/docs/img/readme/bell-outline-badged.svg)Community Meeting|
@ -18,7 +19,7 @@
</br> </br>
**Note**: The `main` branch may be in an *unstable or even broken state* during development.
Please use [releases](https://github.com/vmware/harbor/releases) instead of the `main` branch in order to get a stable set of binaries.
Please use [releases](https://github.com/goharbor/harbor/releases) instead of the `main` branch in order to get a stable set of binaries.
<img alt="Harbor" src="https://raw.githubusercontent.com/goharbor/website/master/docs/img/readme/harbor_logo.png">
@ -57,7 +58,7 @@ For learning the architecture design of Harbor, check the document [Architecture
**On a Linux host:** docker 20.10.10-ce+ and docker-compose 1.18.0+ .
Download binaries of **[Harbor release ](https://github.com/vmware/harbor/releases)** and follow **[Installation & Configuration Guide](https://goharbor.io/docs/latest/install-config/)** to install Harbor.
Download binaries of **[Harbor release ](https://github.com/goharbor/harbor/releases)** and follow **[Installation & Configuration Guide](https://goharbor.io/docs/latest/install-config/)** to install Harbor.
If you want to deploy Harbor on Kubernetes, please use the **[Harbor chart](https://github.com/goharbor/harbor-helm)**.

View File

@ -1,27 +1,27 @@
# Versioning and Release
This document describes the versioning and release process of Harbor. This document is a living document, contents will be updated according to each release.
This document describes the versioning and release process of Harbor. This document is a living document; its contents will be updated with each release.
## Releases
Harbor releases will be versioned using dotted triples, similar to [Semantic Version](http://semver.org/). For this specific document, we will refer to the respective components of this triple as `<major>.<minor>.<patch>`. The version number may have additional information, such as "-rc1,-rc2,-rc3" to mark release candidate builds for earlier access. Such releases will be considered as "pre-releases".
### Major and Minor Releases
Major and minor releases of Harbor will be branched from `main` when the release reaches to `RC(release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches to RC, a branch will be created with the format `release-1.0.0`. When the release reaches to `GA(General Available)` state, The tag with format `v<major>.<minor>.<patch>` and should be made with command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months, might be adjusted based on open source event, but will communicate it clearly.
Major and minor releases of Harbor will be branched from `main` when the release reaches the `RC (release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches RC, a branch will be created with the format `release-1.0.0`. When the release reaches the `GA (General Availability)` state, a tag with the format `v<major>.<minor>.<patch>` should be made with the command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months; it might be adjusted based on open source events, but any changes will be communicated clearly.
### Patch releases
Patch releases are based on the major/minor release branch. The release cadence for patch releases of the most recent minor release is one month, to solve critical community and security issues. Patch releases for the two minor releases before that are done on demand, based on the severity of the issue to be fixed.
### Pre-releases
`Pre-releases:mainly the different RC builds` will be compiled from their corresponding branches. Please note they are done to assist in the stabilization process, no guarantees are provided.
Pre-releases (mainly the different RC builds) will be compiled from their corresponding branches. Please note that they are done to assist in the stabilization process; no guarantees are provided.
### Minor Release Support Matrix
| Version | Supported |
|----------------| ------------------ |
| Harbor v2.13.x | :white_check_mark: |
| Harbor v2.12.x | :white_check_mark: |
| Harbor v2.11.x | :white_check_mark: |
| Harbor v2.10.x | :white_check_mark: |
| Harbor v2.9.x | :white_check_mark: |
### Upgrade path and support policy
The upgrade path for Harbor is (1) 2.2.x patch releases are always compatible with its major and minor version. For example, previous released 2.2.x can be upgraded to most recent 2.2.3 release. (2) Harbor only supports two previous minor releases to upgrade to current minor release. For example, 2.3.0 will only support 2.1.0 and 2.2.0 to upgrade from, 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The upgrade path for Harbor is: (1) 2.2.x patch releases are always compatible with their major and minor version. For example, a previously released 2.2.x can be upgraded to the most recent 2.2.3 release. (2) Harbor only supports upgrading from the two previous minor releases to the current minor release. For example, 2.3.0 will only support upgrading from 2.1.0 and 2.2.0; 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The Harbor project maintains release branches for the three most recent minor releases, each minor release will be maintained for approximately 9 months.
### Next Release
@ -32,12 +32,12 @@ The activity for next release will be tracked in the [up-to-date project board](
The following steps outline what to do when it's time to plan for and publish a release. Depending on the release (major/minor/patch), not all the following items are needed.
1. Prepare information about what's new in the release.
* For every release, update documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, update the documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, write release notes. See [previous releases](https://github.com/goharbor/harbor/releases) for examples of what to include in release notes.
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (all blogs dont have to be published all at once).
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this on the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, and feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (not all blogs have to be published at once).
1. Release a new version. Make the new version, docs updates, and blog posts available.
1. Announce the release and thank contributors. We should be doing the following for all releases.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community member contribution included in the release.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community members' contributions included in the release.
* Send an email to the community via the [mailing list](https://lists.cncf.io/g/harbor-users)
* Post a message in the Harbor [slack channel](https://cloud-native.slack.com/archives/CC1E09J6S)
* Post to social media. Maintainers are encouraged to also post or repost from the Harbor account to help spread the word.

View File

@ -9,11 +9,11 @@ This document provides a link to the [Harbor Project board](https://github.com/o
Discussion on the roadmap can take place in threads under [Issues](https://github.com/goharbor/harbor/issues) or in [community meetings](https://goharbor.io/community/). Please open and comment on an issue if you want to provide suggestions and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts to improve Harbor.
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts on improving Harbor.
### Current Roadmap
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors who will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
`Last Updated: June 2022`
@ -49,4 +49,4 @@ The following table includes the current roadmap for Harbor. If you have any que
|I&AM and RBAC|Improved Multi-tenancy through granular access and ability to manage teams of users and robot accounts through workspaces|Dec 2020|
|Observability|Expose Harbor metrics through Prometheus Integration|Mar 2021|
|Tracing|Leverage OpenTelemetry for enhanced tracing capabilities and identify bottlenecks and improve performance |Mar 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persisting image signatures across image replications|Apr 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persistent image signatures across image replications|Apr 2021|

View File

@ -1 +1 @@
v2.12.0
v2.14.0

View File

@ -336,6 +336,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'404':
$ref: '#/responses/404'
'500':
@ -997,12 +999,6 @@ paths:
type: boolean
required: false
default: false
- name: with_signature
in: query
description: Specify whether the signature is included inside the tags of the returning artifacts. Only works when setting "with_tag=true"
type: boolean
required: false
default: false
- name: with_immutable_status
in: query
description: Specify whether the immutable status is included inside the tags of the returning artifacts. Only works when setting "with_immutable_status=true"
@ -1193,7 +1189,7 @@ paths:
'404':
$ref: '#/responses/404'
'422':
$ref: '#/responses/422'
$ref: '#/responses/422'
'500':
$ref: '#/responses/500'
/projects/{project_name}/repositories/{repository_name}/artifacts/{reference}/scan/stop:
@ -1226,7 +1222,7 @@ paths:
'404':
$ref: '#/responses/404'
'422':
$ref: '#/responses/422'
$ref: '#/responses/422'
'500':
$ref: '#/responses/500'
/projects/{project_name}/repositories/{repository_name}/artifacts/{reference}/scan/{report_id}/log:
@ -1313,12 +1309,6 @@ paths:
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
- name: with_signature
in: query
description: Specify whether the signature is included inside the returning tags
type: boolean
required: false
default: false
- name: with_immutable_status
in: query
description: Specify whether the immutable status is included inside the returning tags
@ -1461,7 +1451,14 @@ paths:
in: path
description: The type of addition.
type: string
enum: [build_history, values.yaml, readme.md, dependencies, sbom]
enum:
- build_history
- values.yaml
- readme.md
- dependencies
- sbom
- license
- files
required: true
responses:
'200':
@ -1723,9 +1720,9 @@ paths:
$ref: '#/responses/500'
/audit-logs:
get:
summary: Get recent logs of the projects which the user is a member of
summary: Get recent logs of the projects of which the user is a member with the project admin role, or all audit logs for the system admin user (deprecated)
description: |
This endpoint let user see the recent operation logs of the projects which he is member of
This endpoint lets the user see the recent operation logs of the projects of which the user is a member with the project admin role, or returns all audit logs for the system admin user. It only queries audit logs from the previous version.
tags:
- auditlog
operationId: listAuditLogs
@ -1755,10 +1752,63 @@ paths:
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/auditlog-exts:
get:
summary: Get recent logs of the projects of which the user is a member with the project_admin role, or all audit logs for the system admin user
description: |
This endpoint lets the user see the recent operation logs of the projects of which the user is a member with the project_admin role, or returns all audit logs for the system admin user.
tags:
- auditlog
operationId: listAuditLogExts
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of auditlogs
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/AuditLogExt'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/auditlog-exts/events:
get:
summary: Get all event types of audit log
description: |
Get all event types of audit log
tags:
- auditlog
operationId: listAuditLogEventTypes
parameters:
- $ref: '#/parameters/requestId'
responses:
'200':
description: Success
schema:
type: array
items:
$ref: '#/definitions/AuditLogEventType'
'401':
$ref: '#/responses/401'
/projects/{project_name}/logs:
get:
summary: Get recent logs of the projects
description: Get recent logs of the projects
summary: Get recent logs of the projects (deprecated)
description: Get recent logs of the projects; it only queries the previous version's audit logs
tags:
- project
operationId: getLogs
@ -1789,6 +1839,40 @@ paths:
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/projects/{project_name}/auditlog-exts:
get:
summary: Get recent logs of the projects
description: Get recent logs of the projects
tags:
- project
operationId: getLogExts
parameters:
- $ref: '#/parameters/projectName'
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of auditlogs
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/AuditLogExt'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'500':
$ref: '#/responses/500'
/p2p/preheat/providers:
get:
summary: List P2P providers
@ -2351,160 +2435,6 @@ paths:
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
/projects/{project_name_or_id}/robots:
get:
summary: Get all robot accounts of specified project
description: Get all robot accounts of specified project
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/page'
- $ref: '#/parameters/pageSize'
- $ref: '#/parameters/query'
- $ref: '#/parameters/sort'
tags:
- robotv1
operationId: ListRobotV1
responses:
'200':
description: Success
headers:
X-Total-Count:
description: The total count of robot accounts
type: integer
Link:
description: Link refers to the previous page and next page
type: string
schema:
type: array
items:
$ref: '#/definitions/Robot'
'400':
$ref: '#/responses/400'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
post:
summary: Create a robot account
description: Create a robot account
tags:
- robotv1
operationId: CreateRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- name: robot
in: body
description: The JSON object of a robot account.
required: true
schema:
$ref: '#/definitions/RobotCreateV1'
responses:
'201':
description: Created
headers:
X-Request-Id:
description: The ID of the corresponding request for the response
type: string
Location:
description: The location of the resource
type: string
schema:
$ref: '#/definitions/RobotCreated'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
/projects/{project_name_or_id}/robots/{robot_id}:
get:
summary: Get a robot account
description: This endpoint returns specific robot account information by robot ID.
tags:
- robotv1
operationId: GetRobotByIDV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
responses:
'200':
description: Return matched robot information.
schema:
$ref: '#/definitions/Robot'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
put:
summary: Update status of robot account.
description: Used to disable/enable a specified robot account.
tags:
- robotv1
operationId: UpdateRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
- name: robot
in: body
description: The JSON object of a robot account.
required: true
schema:
$ref: '#/definitions/Robot'
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'409':
$ref: '#/responses/409'
'500':
$ref: '#/responses/500'
delete:
summary: Delete a robot account
description: This endpoint deletes specific robot account information by robot ID.
tags:
- robotv1
operationId: DeleteRobotV1
parameters:
- $ref: '#/parameters/requestId'
- $ref: '#/parameters/isResourceName'
- $ref: '#/parameters/projectNameOrId'
- $ref: '#/parameters/robotId'
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
$ref: '#/responses/403'
'404':
$ref: '#/responses/404'
'500':
$ref: '#/responses/500'
'/projects/{project_name_or_id}/immutabletagrules':
get:
summary: List all immutable tag rules of current project
@ -3101,6 +3031,8 @@ paths:
type: string
'401':
$ref: '#/responses/401'
'409':
$ref: '#/responses/409'
'500':
$ref: '#/responses/500'
'/usergroups/{group_id}':
@ -3632,6 +3564,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -4070,6 +4004,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -4565,7 +4501,7 @@ paths:
description: |
The purge job's schedule; it is a JSON object. |
The sample format is |
{"parameters":{"audit_retention_hour":168,"dry_run":true, "include_operations":"create,delete,pull"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
{"parameters":{"audit_retention_hour":168,"dry_run":true,"include_event_types":"create_artifact,delete_artifact,pull_artifact"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
the include_event_types should be a comma-separated string, e.g. create_artifact,delete_artifact,pull_artifact; if it is empty, no operations will be purged.
tags:
- purge
@ -4595,7 +4531,7 @@ paths:
description: |
The purge job's schedule; it is a JSON object. |
The sample format is |
{"parameters":{"audit_retention_hour":168,"dry_run":true, "include_operations":"create,delete,pull"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
{"parameters":{"audit_retention_hour":168,"dry_run":true,"include_event_types":"create_artifact,delete_artifact,pull_artifact"},"schedule":{"type":"Hourly","cron":"0 0 * * * *"}} |
the include_event_types should be a comma-separated string, e.g. create_artifact,delete_artifact,pull_artifact; if it is empty, no operations will be purged.
tags:
- purge
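The schedule payload documented above can be assembled programmatically before PUT/POST-ing it; a sketch in the documented format (field values are illustrative):

```python
import json

# Build the purge job schedule payload in the format shown in the description.
payload = {
    "parameters": {
        "audit_retention_hour": 168,   # keep 7 days of audit logs
        "dry_run": True,               # only report what would be purged
        # comma-separated event types; an empty string purges nothing
        "include_event_types": "create_artifact,delete_artifact,pull_artifact",
    },
    "schedule": {"type": "Hourly", "cron": "0 0 * * * *"},
}

body = json.dumps(payload)
print(body)
```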
@ -6210,6 +6146,7 @@ paths:
cve_id(exact match)
cvss_score_v3(range condition)
severity(exact match)
status(exact match)
repository_name(exact match)
project_id(exact match)
package(exact match)
@ -6553,7 +6490,7 @@ responses:
description: The ID of the corresponding request for the response
type: string
schema:
$ref: '#/definitions/Errors'
$ref: '#/definitions/Errors'
'500':
description: Internal server error
headers:
@ -6996,6 +6933,43 @@ definitions:
format: date-time
example: '2006-01-02T15:04:05Z'
description: The time when this operation is triggered.
AuditLogExt:
type: object
properties:
id:
type: integer
description: The ID of the audit log entry.
username:
type: string
description: The username of the operator in this log entry.
resource:
type: string
description: Name of the resource in this log entry.
resource_type:
type: string
description: Type of the resource in this log entry.
operation:
type: string
description: The operation against the resource in this log entry.
operation_description:
type: string
description: The operation's detailed description
operation_result:
type: boolean
x-omitempty: false
description: the operation's result; true for success, false for failure
op_time:
type: string
format: date-time
example: '2006-01-02T15:04:05Z'
description: The time when this operation is triggered.
AuditLogEventType:
type: object
properties:
event_type:
type: string
description: the event type, such as create_user.
example: create_user
Metadata:
type: object
properties:
@ -7095,9 +7069,9 @@ definitions:
type: boolean
description: Whether the preheat policy enabled
x-omitempty: false
scope:
extra_attrs:
type: string
description: The scope of preheat policy
description: The extra attributes of preheat policy
creation_time:
type: string
format: date-time
@ -7937,7 +7911,7 @@ definitions:
properties:
resource:
type: string
description: The resource of the access. Possible resources are listed here for system and project level https://github.com/goharbor/harbor/blob/main/src/common/rbac/const.go
description: The resource of the access. Possible resources are listed here for system and project level https://github.com/goharbor/harbor/blob/main/src/common/rbac/const.go
action:
type: string
description: The action of the access. Possible actions are *, pull, push, create, read, update, delete, list, operate, scanner-pull and stop.
@ -9100,6 +9074,9 @@ definitions:
oidc_extra_redirect_parms:
$ref: '#/definitions/StringConfigItem'
description: Extra parameters to add when redirect request to OIDC provider
oidc_logout:
$ref: '#/definitions/BoolConfigItem'
description: Extra parameters to logout user session from the OIDC provider
robot_token_duration:
$ref: '#/definitions/IntegerConfigItem'
description: The robot account token duration in days
@ -9143,6 +9120,9 @@ definitions:
banner_message:
$ref: '#/definitions/StringConfigItem'
description: The banner message for the UI. It is the stringified result of the banner message object
disabled_audit_log_event_types:
$ref: '#/definitions/StringConfigItem'
description: The audit log event types that will not be logged in the database
Configurations:
type: object
properties:
@ -9371,6 +9351,11 @@ definitions:
description: Extra parameters to add when redirect request to OIDC provider
x-omitempty: true
x-isnullable: true
oidc_logout:
type: boolean
description: Logout OIDC user session
x-omitempty: true
x-isnullable: true
robot_token_duration:
type: integer
description: The robot account token duration in days
@ -9421,6 +9406,11 @@ definitions:
description: The banner message for the UI. It is the stringified result of the banner message object
x-omitempty: true
x-isnullable: true
disabled_audit_log_event_types:
type: string
description: the comma-separated list of audit log event types to disable
x-omitempty: true
x-isnullable: true
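The two new `Configurations` fields above can be combined into a single update body; a sketch (the exact set of supported event-type names is an assumption based on the purge examples elsewhere in this spec):

```python
import json

# Sketch of a Configurations update body using the newly added fields.
config_update = {
    "oidc_logout": True,  # also log the user out of the OIDC provider
    # event types to skip when writing audit logs, comma-separated
    "disabled_audit_log_event_types": "pull_artifact,create_artifact",
}

body = json.dumps(config_update)
print(body)
```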
StringConfigItem:
type: object
properties:
@ -10085,6 +10075,9 @@ definitions:
severity:
type: string
description: the severity of the vulnerability
status:
type: string
description: the status of the vulnerability, e.g. "fixed", "won't fix"
cvss_v3_score:
type: number
format: float
@ -10112,4 +10105,4 @@ definitions:
scan_type:
type: string
description: 'The scan type for the scan request. Two options are currently supported, vulnerability and sbom'
enum: [ vulnerability, sbom ]
enum: [ vulnerability, sbom ]

View File

@ -1,30 +0,0 @@
# Configuring Harbor as a local registry mirror
Harbor runs as a local registry by default. It can also be configured as a registry mirror,
which caches downloaded images for subsequent use. Note that under this setup, the Harbor registry only acts as a mirror server and
no longer accepts image pushing requests. Edit `Deploy/templates/registry/config.yml` before executing `./prepare`, and append a `proxy` section as follows:
```
proxy:
remoteurl: https://registry-1.docker.io
```
In order to access private images on the Docker Hub, a username and a password can be supplied:
```
proxy:
remoteurl: https://registry-1.docker.io
username: [username]
password: [password]
```
You will need to pass the `--registry-mirror` option to your Docker daemon on startup:
```
docker --registry-mirror=https://<my-docker-mirror-host> daemon
```
For example, if your mirror is serving on `https://reg.yourdomain.com`, you would run:
```
docker --registry-mirror=https://reg.yourdomain.com daemon
```
Refer to the [Registry as a pull through cache](https://docs.docker.com/registry/recipes/mirror/) for detailed information.

View File

@ -1,29 +0,0 @@
# registryapi
api for docker registry by token authorization
+ a simple API class, located in registryapi.py, which simulates the interactions
between the Docker registry and a vendor authorization platform such as Harbor.
```
usage:
from registryapi import RegistryApi
api = RegistryApi('username', 'password', 'http://www.your_registry_url.com/')
repos = api.getRepositoryList()
tags = api.getTagList('public/ubuntu')
manifest = api.getManifest('public/ubuntu', 'latest')
res = api.deleteManifest('public/ubuntu', '23424545**4343')
```
+ a simple client tool based on the API class, which provides basic read and delete
operations for repos, tags, and manifests
```
usage:
./cli.py --username username --password password --registry_endpoint http://www.your_registry_url.com/ target action params
target can be: repo, tag, manifest
action can be: list, get, delete
params can be: --repo --ref --tag
more see: ./cli.py -h
```

View File

@ -1,135 +0,0 @@
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# bug-report: feilengcui008@gmail.com
""" cli tool """
import argparse
import sys
import json
from registry import RegistryApi
class ApiProxy(object):
""" user RegistryApi """
def __init__(self, registry, args):
self.registry = registry
self.args = args
self.callbacks = dict()
self.register_callback("repo", "list", self.list_repo)
self.register_callback("tag", "list", self.list_tag)
self.register_callback("tag", "delete", self.delete_tag)
self.register_callback("manifest", "list", self.list_manifest)
self.register_callback("manifest", "delete", self.delete_manifest)
self.register_callback("manifest", "get", self.get_manifest)
def register_callback(self, target, action, func):
""" register real actions """
if not target in self.callbacks.keys():
self.callbacks[target] = {action: func}
return
self.callbacks[target][action] = func
def execute(self, target, action):
""" execute """
print json.dumps(self.callbacks[target][action](), indent=4, sort_keys=True)
def list_repo(self):
""" list repo """
return self.registry.getRepositoryList(self.args.num)
def list_tag(self):
""" list tag """
return self.registry.getTagList(self.args.repo)
def delete_tag(self):
""" delete tag """
(_, ref) = self.registry.existManifest(self.args.repo, self.args.tag)
if ref is not None:
return self.registry.deleteManifest(self.args.repo, ref)
return False
def list_manifest(self):
""" list manifest """
tags = self.registry.getTagList(self.args.repo)["tags"]
manifests = list()
if tags is None:
return None
for i in tags:
content = self.registry.getManifestWithConf(self.args.repo, i)
manifests.append({i: content})
return manifests
def delete_manifest(self):
""" delete manifest """
return self.registry.deleteManifest(self.args.repo, self.args.ref)
def get_manifest(self):
""" get manifest """
return self.registry.getManifestWithConf(self.args.repo, self.args.tag)
# since just a script tool, we do not construct whole target->action->args
# structure with oo abstractions which has more flexibility, just register
# parser directly
def get_parser():
""" return a parser """
parser = argparse.ArgumentParser("cli")
parser.add_argument('--username', action='store', required=True, help='username')
parser.add_argument('--password', action='store', required=True, help='password')
parser.add_argument('--registry_endpoint', action='store', required=True,
help='registry endpoint')
subparsers = parser.add_subparsers(dest='target', help='target to operate on')
# repo target
repo_target_parser = subparsers.add_parser('repo', help='target repository')
repo_target_subparsers = repo_target_parser.add_subparsers(dest='action',
help='repository subcommand')
repo_cmd_parser = repo_target_subparsers.add_parser('list', help='list repositories')
repo_cmd_parser.add_argument('--num', action='store', required=False, default=None,
help='the number of data to return')
# tag target
tag_target_parser = subparsers.add_parser('tag', help='target tag')
tag_target_subparsers = tag_target_parser.add_subparsers(dest='action',
help='tag subcommand')
tag_list_parser = tag_target_subparsers.add_parser('list', help='list tags')
tag_list_parser.add_argument('--repo', action='store', required=True, help='repository name')
tag_delete_parser = tag_target_subparsers.add_parser('delete', help='delete tag')
tag_delete_parser.add_argument('--repo', action='store', required=True, help='repository name')
tag_delete_parser.add_argument('--tag', action='store', required=True,
help='tag reference')
# manifest target
manifest_target_parser = subparsers.add_parser('manifest', help='target manifest')
manifest_target_subparsers = manifest_target_parser.add_subparsers(dest='action',
help='manifest subcommand')
manifest_list_parser = manifest_target_subparsers.add_parser('list', help='list manifests')
manifest_list_parser.add_argument('--repo', action='store', required=True,
help='list manifests')
manifest_delete_parser = manifest_target_subparsers.add_parser('delete', help='delete manifest')
manifest_delete_parser.add_argument('--repo', action='store', required=True,
help='delete manifest')
manifest_delete_parser.add_argument('--ref', action='store', required=True,
help='manifest reference')
manifest_get_parser = manifest_target_subparsers.add_parser('get', help='get manifest content')
manifest_get_parser.add_argument('--repo', action='store', required=True, help='repository name')
manifest_get_parser.add_argument('--tag', action='store', required=True,
help='manifest reference')
return parser
def main():
""" main entrance """
parser = get_parser()
options = parser.parse_args(sys.argv[1:])
registry = RegistryApi(options.username, options.password, options.registry_endpoint)
proxy = ApiProxy(registry, options)
proxy.execute(options.target, options.action)
if __name__ == '__main__':
main()

View File

@ -1,165 +0,0 @@
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# bug-report: feilengcui008@gmail.com
""" api for docker registry """
import urllib2
import urllib
import json
import base64
class RegistryException(Exception):
""" registry api related exception """
pass
class RegistryApi(object):
""" interact with docker registry and harbor """
def __init__(self, username, password, registry_endpoint):
self.username = username
self.password = password
self.basic_token = base64.encodestring("%s:%s" % (str(username), str(password)))[0:-1]
self.registry_endpoint = registry_endpoint.rstrip('/')
auth = self.pingRegistry("%s/v2/_catalog" % (self.registry_endpoint,))
if auth is None:
raise RegistryException("get token realm and service failed")
self.token_endpoint = auth[0]
self.service = auth[1]
def pingRegistry(self, registry_endpoint):
""" ping v2 registry and get realm and service """
headers = dict()
try:
res = urllib2.urlopen(registry_endpoint)
except urllib2.HTTPError as e:
headers = e.hdrs.dict
try:
(realm, service, _) = headers['www-authenticate'].split(',')
return (realm[14:-1:], service[9:-1])
except Exception as e:
return None
def getBearerTokenForScope(self, scope):
""" get bearer token from harbor """
payload = urllib.urlencode({'service': self.service, 'scope': scope})
url = "%s?%s" % (self.token_endpoint, payload)
req = urllib2.Request(url)
req.add_header('Authorization', 'Basic %s' % (self.basic_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())["token"]
except Exception as e:
return None
def getRepositoryList(self, n=None):
""" get repository list """
scope = "registry:catalog:*"
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/_catalog" % (self.registry_endpoint,)
if n is not None:
url = "%s?n=%s" % (url, str(n))
req = urllib2.Request(url)
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def getTagList(self, repository):
""" get tag list for repository """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/tags/list" % (self.registry_endpoint, repository)
req = urllib2.Request(url)
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def getManifest(self, repository, reference="latest", v1=False):
""" get manifest for tag or digest """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, reference)
req = urllib2.Request(url)
req.get_method = lambda: 'GET'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
if v1:
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v1+json')
try:
response = urllib2.urlopen(req)
return json.loads(response.read())
except Exception as e:
return None
def existManifest(self, repository, reference, v1=False):
""" check to see it manifest exist """
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
raise RegistryException("manifestExist failed due to token error")
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, reference)
req = urllib2.Request(url)
req.get_method = lambda: 'HEAD'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
if v1:
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v1+json')
try:
response = urllib2.urlopen(req)
return (True, response.headers.dict["docker-content-digest"])
except Exception as e:
return (False, None)
def deleteManifest(self, repository, reference):
""" delete manifest by tag """
(is_exist, digest) = self.existManifest(repository, reference)
if not is_exist:
raise RegistryException("manifest not exist")
scope = "repository:%s:pull,push" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
raise RegistryException("delete manifest failed due to token error")
url = "%s/v2/%s/manifests/%s" % (self.registry_endpoint, repository, digest)
req = urllib2.Request(url)
req.get_method = lambda: 'DELETE'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
try:
urllib2.urlopen(req)
except Exception as e:
return False
return True
def getManifestWithConf(self, repository, reference="latest"):
""" get manifest for tag or digest """
manifest = self.getManifest(repository, reference)
if manifest is None:
raise RegistryException("manifest for %s %s not exist" % (repository, reference))
config_digest = manifest["config"]["digest"]
scope = "repository:%s:pull" % (repository,)
bear_token = self.getBearerTokenForScope(scope)
if bear_token is None:
return None
url = "%s/v2/%s/blobs/%s" % (self.registry_endpoint, repository, config_digest)
req = urllib2.Request(url)
req.get_method = lambda: 'GET'
req.add_header('Authorization', r'Bearer %s' % (bear_token,))
req.add_header('Accept', 'application/vnd.docker.distribution.manifest.v2+json')
try:
response = urllib2.urlopen(req)
manifest["configContent"] = json.loads(response.read())
return manifest
except Exception as e:
return None

BIN
icons/cnai.png Normal file (binary file not shown; 7.1 KiB)

View File

@ -135,6 +135,8 @@ trivy:
jobservice:
# Maximum number of job workers in job service
max_job_workers: 10
# Maximum hours of task duration in job service, default 24
max_job_duration_hours: 24
# The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
job_loggers:
- STD_OUTPUT
@ -174,7 +176,7 @@ log:
# port: 5140
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.12.0
_version: 2.13.0
# Uncomment external_database if using external database.
# external_database:
@ -213,6 +215,14 @@ _version: 2.12.0
# # username:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection will be disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2

View File

@ -0,0 +1,23 @@
ALTER TABLE p2p_preheat_policy DROP COLUMN IF EXISTS scope;
ALTER TABLE p2p_preheat_policy ADD COLUMN IF NOT EXISTS extra_attrs text;
CREATE TABLE IF NOT EXISTS audit_log_ext
(
id BIGSERIAL PRIMARY KEY NOT NULL,
project_id BIGINT,
operation VARCHAR(50) NULL,
resource_type VARCHAR(255) NULL,
resource VARCHAR(1024) NULL,
username VARCHAR(255) NULL,
op_desc VARCHAR(1024) NULL,
op_result BOOLEAN DEFAULT true,
payload TEXT NULL,
source_ip VARCHAR(50) NULL,
op_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- add index to the audit_log_ext table
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_op_time ON audit_log_ext (op_time);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_optime ON audit_log_ext (project_id, op_time);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_resource_type ON audit_log_ext (project_id, resource_type);
CREATE INDEX IF NOT EXISTS idx_audit_log_ext_project_id_operation ON audit_log_ext (project_id, operation);

View File

@ -0,0 +1,9 @@
ALTER TABLE role_permission ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE role_permission_id_seq AS BIGINT;
ALTER TABLE permission_policy ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE permission_policy_id_seq AS BIGINT;
ALTER TABLE role_permission ALTER COLUMN permission_policy_id TYPE BIGINT;
ALTER TABLE vulnerability_record ADD COLUMN IF NOT EXISTS status text;

View File

@ -18,7 +18,7 @@ TIMESTAMP=$(shell date +"%Y%m%d")
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build --no-cache
DOCKERBUILD=$(DOCKERCMD) build --no-cache --network=$(DOCKERNETWORK)
DOCKERBUILD_WITH_PULL_PARA=$(DOCKERBUILD) --pull=$(PULL_BASE_FROM_DOCKERHUB)
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERIMAGES=$(DOCKERCMD) images
@ -122,7 +122,7 @@ _build_db:
_build_portal:
@$(call _build_base,$(PORTAL),$(DOCKERFILEPATH_PORTAL))
@echo "building portal container for photon..."
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg npm_registry=$(NPM_REGISTRY) -f $(DOCKERFILEPATH_PORTAL)/$(DOCKERFILENAME_PORTAL) -t $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) .
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg NODE=${NODEBUILDIMAGE} --build-arg npm_registry=$(NPM_REGISTRY) -f $(DOCKERFILEPATH_PORTAL)/$(DOCKERFILENAME_PORTAL) -t $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) .
@echo "Done."
_build_core:
@ -149,12 +149,12 @@ _build_trivy_adapter:
rm -rf $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary && mkdir -p $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary ; \
echo "Downloading Trivy scanner $(TRIVYVERSION)..." ; \
$(call _extract_archive, $(TRIVY_DOWNLOAD_URL), $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary/) ; \
if [ "$(BUILDBIN)" != "true" ] ; then \
if [ "$(BUILDTRIVYADP)" != "true" ] ; then \
echo "Downloading Trivy adapter $(TRIVYADAPTERVERSION)..." ; \
$(call _extract_archive, $(TRIVY_ADAPTER_DOWNLOAD_URL), $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary/) ; \
else \
echo "Building Trivy adapter $(TRIVYADAPTERVERSION) from sources..." ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) && cd - ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
fi ; \
echo "Building Trivy adapter container for photon..." ; \
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) \
@ -174,11 +174,11 @@ _build_nginx:
_build_registry:
@$(call _build_base,$(REGISTRY),$(DOCKERFILEPATH_REG))
@if [ "$(BUILDBIN)" != "true" ] ; then \
@if [ "$(BUILDREG)" != "true" ] ; then \
rm -rf $(DOCKERFILEPATH_REG)/binary && mkdir -p $(DOCKERFILEPATH_REG)/binary && \
$(call _get_binary, $(REGISTRYURL), $(DOCKERFILEPATH_REG)/binary/registry); \
else \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) && cd - ; \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
fi
@echo "building registry container for photon..."
@chmod 655 $(DOCKERFILEPATH_REG)/binary/registry && $(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) -f $(DOCKERFILEPATH_REG)/$(DOCKERFILENAME_REG) -t $(DOCKERIMAGENAME_REG):$(VERSIONTAG) .
@ -205,7 +205,7 @@ _build_standalone_db_migrator:
_compile_and_build_exporter:
@$(call _build_base,$(EXPORTER),$(DOCKERFILEPATH_EXPORTER))
@echo "compiling and building image for exporter..."
@$(DOCKERCMD) build --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg build_image=$(GOBUILDIMAGE) -f ${DOCKERFILEPATH_EXPORTER}/${DOCKERFILENAME_EXPORTER} -t $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) .
@$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) --build-arg build_image=$(GOBUILDIMAGE) -f ${DOCKERFILEPATH_EXPORTER}/${DOCKERFILENAME_EXPORTER} -t $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) .
@echo "Done."
define _extract_archive
@ -233,10 +233,17 @@ define _build_base
fi
endef
build: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
ifeq ($(BUILD_INSTALLER), true)
buildcompt: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
else
buildcompt: _build_db _build_portal _build_core _build_jobservice _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
endif
build: buildcompt
@if [ -n "$(REGISTRYUSER)" ] && [ -n "$(REGISTRYPASSWORD)" ] ; then \
docker logout ; \
fi
cleanimage:
@echo "cleaning image for photon..."
- $(DOCKERRMIMAGE) -f $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG)
@ -246,4 +253,3 @@ cleanimage:
.PHONY: clean
clean: cleanimage

View File

@ -9,8 +9,9 @@ COPY ./make/photon/db/initdb.sh /initdb.sh
COPY ./make/photon/db/upgrade.sh /upgrade.sh
COPY ./make/photon/db/docker-healthcheck.sh /docker-healthcheck.sh
COPY ./make/photon/db/initial-registry.sql /docker-entrypoint-initdb.d/
RUN chown -R postgres:postgres /docker-entrypoint.sh /docker-healthcheck.sh /docker-entrypoint-initdb.d \
&& chmod u+x /docker-entrypoint.sh /docker-healthcheck.sh
RUN chown -R postgres:postgres /docker-entrypoint.sh /initdb.sh /upgrade.sh \
/docker-healthcheck.sh /docker-entrypoint-initdb.d \
&& chmod u+x /initdb.sh /upgrade.sh /docker-entrypoint.sh /docker-healthcheck.sh
ENTRYPOINT ["/docker-entrypoint.sh", "14", "15"]
HEALTHCHECK CMD ["/docker-healthcheck.sh"]

View File

@ -1,6 +1,7 @@
ARG harbor_base_image_version
ARG harbor_base_namespace
FROM node:16.18.0 as nodeportal
ARG NODE
FROM ${NODE} as nodeportal
WORKDIR /build_dir

View File

@ -1,7 +1,7 @@
FROM photon:5.0
RUN tdnf install -y python3 python3-pip python3-PyYAML python3-jinja2 && tdnf clean all
RUN pip3 install pipenv==2022.1.8
RUN pip3 install pipenv==2025.0.3
#To install only htpasswd binary from photon package httpd
RUN tdnf install -y rpm cpio apr-util

View File

@ -12,4 +12,4 @@ pylint = "*"
pytest = "*"
[requires]
python_version = "3.9.1"
python_version = "3.13"

View File

@ -1,11 +1,11 @@
{
"_meta": {
"hash": {
"sha256": "0c84f574a48755d88f78a64d754b3f834a72f2a86808370dd5f3bf3e650bfa13"
"sha256": "d3a89b8575c29b9f822b892ffd31fd4a997effb1ebf3e3ed061a41e2d04b4490"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.9.1"
"python_version": "3.13"
},
"sources": [
{
@ -18,157 +18,122 @@
"default": {
"click": {
"hashes": [
"sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a",
"sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"
"sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202",
"sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"
],
"index": "pypi",
"version": "==8.0.1"
"markers": "python_version >= '3.10'",
"version": "==8.2.1"
},
"packaging": {
"hashes": [
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
],
"index": "pypi",
"version": "==20.9"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
"markers": "python_version >= '3.8'",
"version": "==25.0"
}
},
"develop": {
"astroid": {
"hashes": [
"sha256:4db03ab5fc3340cf619dbc25e42c2cc3755154ce6009469766d7143d1fc2ee4e",
"sha256:8a398dfce302c13f14bab13e2b14fe385d32b73f4e4853b9bdfb64598baa1975"
"sha256:104fb9cb9b27ea95e847a94c003be03a9e039334a8ebca5ee27dafaf5c5711eb",
"sha256:c332157953060c6deb9caa57303ae0d20b0fbdb2e59b4a4f2a6ba49d0a7961ce"
],
"markers": "python_version ~= '3.6'",
"version": "==2.5.6"
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.10"
},
"attrs": {
"dill": {
"hashes": [
"sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1",
"sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"
"sha256:0633f1d2df477324f53a895b02c901fb961bdbf65a17122586ea7019292cbcf0",
"sha256:44f54bf6412c2c8464c14e8243eb163690a9800dbe2c367330883b19c7561049"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==21.2.0"
"markers": "python_version >= '3.8'",
"version": "==0.4.0"
},
"iniconfig": {
"hashes": [
"sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3",
"sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"
"sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7",
"sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"
],
"version": "==1.1.1"
"markers": "python_version >= '3.8'",
"version": "==2.1.0"
},
"isort": {
"hashes": [
"sha256:0a943902919f65c5684ac4e0154b1ad4fac6dcaa5d9f3426b732f1c8b5419be6",
"sha256:2bb1680aad211e3c9944dbce1d4ba09a989f04e238296c87fe2139faa26d655d"
"sha256:1cb5df28dfbc742e490c5e41bad6da41b805b0a8be7bc93cd0fb2a8a890ac450",
"sha256:2dc5d7f65c9678d94c88dfc29161a320eec67328bc97aad576874cb4be1e9615"
],
"markers": "python_version >= '3.6' and python_version < '4.0'",
"version": "==5.8.0"
},
"lazy-object-proxy": {
"hashes": [
"sha256:17e0967ba374fc24141738c69736da90e94419338fd4c7c7bef01ee26b339653",
"sha256:1fee665d2638491f4d6e55bd483e15ef21f6c8c2095f235fef72601021e64f61",
"sha256:22ddd618cefe54305df49e4c069fa65715be4ad0e78e8d252a33debf00f6ede2",
"sha256:24a5045889cc2729033b3e604d496c2b6f588c754f7a62027ad4437a7ecc4837",
"sha256:410283732af311b51b837894fa2f24f2c0039aa7f220135192b38fcc42bd43d3",
"sha256:4732c765372bd78a2d6b2150a6e99d00a78ec963375f236979c0626b97ed8e43",
"sha256:489000d368377571c6f982fba6497f2aa13c6d1facc40660963da62f5c379726",
"sha256:4f60460e9f1eb632584c9685bccea152f4ac2130e299784dbaf9fae9f49891b3",
"sha256:5743a5ab42ae40caa8421b320ebf3a998f89c85cdc8376d6b2e00bd12bd1b587",
"sha256:85fb7608121fd5621cc4377a8961d0b32ccf84a7285b4f1d21988b2eae2868e8",
"sha256:9698110e36e2df951c7c36b6729e96429c9c32b3331989ef19976592c5f3c77a",
"sha256:9d397bf41caad3f489e10774667310d73cb9c4258e9aed94b9ec734b34b495fd",
"sha256:b579f8acbf2bdd9ea200b1d5dea36abd93cabf56cf626ab9c744a432e15c815f",
"sha256:b865b01a2e7f96db0c5d12cfea590f98d8c5ba64ad222300d93ce6ff9138bcad",
"sha256:bf34e368e8dd976423396555078def5cfc3039ebc6fc06d1ae2c5a65eebbcde4",
"sha256:c6938967f8528b3668622a9ed3b31d145fab161a32f5891ea7b84f6b790be05b",
"sha256:d1c2676e3d840852a2de7c7d5d76407c772927addff8d742b9808fe0afccebdf",
"sha256:d7124f52f3bd259f510651450e18e0fd081ed82f3c08541dffc7b94b883aa981",
"sha256:d900d949b707778696fdf01036f58c9876a0d8bfe116e8d220cfd4b15f14e741",
"sha256:ebfd274dcd5133e0afae738e6d9da4323c3eb021b3e13052d8cbd0e457b1256e",
"sha256:ed361bb83436f117f9917d282a456f9e5009ea12fd6de8742d1a4752c3017e93",
"sha256:f5144c75445ae3ca2057faac03fda5a902eff196702b0a24daf1d6ce0650514b"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'",
"version": "==1.6.0"
"markers": "python_full_version >= '3.9.0'",
"version": "==6.0.1"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
"sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325",
"sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"
],
"version": "==0.6.1"
"markers": "python_version >= '3.6'",
"version": "==0.7.0"
},
"packaging": {
"hashes": [
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
],
"index": "pypi",
"version": "==20.9"
"markers": "python_version >= '3.8'",
"version": "==25.0"
},
"platformdirs": {
"hashes": [
"sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc",
"sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4"
],
"markers": "python_version >= '3.9'",
"version": "==4.3.8"
},
"pluggy": {
"hashes": [
"sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0",
"sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"
"sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3",
"sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.13.1"
"markers": "python_version >= '3.9'",
"version": "==1.6.0"
},
"py": {
"pygments": {
"hashes": [
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3",
"sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"
"sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887",
"sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.10.0"
"markers": "python_version >= '3.8'",
"version": "==2.19.2"
},
"pylint": {
"hashes": [
"sha256:586d8fa9b1891f4b725f587ef267abe2a1bad89d6b184520c7f07a253dd6e217",
"sha256:f7e2072654a6b6afdf5e2fb38147d3e2d2d43c89f648637baab63e026481279b"
"sha256:2b11de8bde49f9c5059452e0c310c079c746a0a8eeaa789e5aa966ecc23e4559",
"sha256:43860aafefce92fca4cf6b61fe199cdc5ae54ea28f9bf4cd49de267b5195803d"
],
"index": "pypi",
"version": "==2.8.2"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.7"
},
"pytest": {
"hashes": [
"sha256:50bcad0a0b9c5a72c8e4e7c9855a3ad496ca6a881a3641b4260605450772c54b",
"sha256:91ef2131a9bd6be8f76f1f08eac5c5317221d6ad1e143ae03894b862e8976890"
"sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7",
"sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"
],
"index": "pypi",
"version": "==6.2.4"
"markers": "python_version >= '3.9'",
"version": "==8.4.1"
},
"toml": {
"tomlkit": {
"hashes": [
"sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b",
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"
"sha256:430cf247ee57df2b94ee3fbe588e71d362a941ebb545dec29b53961d61add2a1",
"sha256:c89c649d79ee40629a9fda55f8ace8c6a1b42deb912b2a8fd8d942ddadb606b0"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"wrapt": {
"hashes": [
"sha256:b62ffa81fb85f4332a4f609cab4ac40709470da05643a082ec1eb88e6d9b97d7"
],
"version": "==1.12.1"
"markers": "python_version >= '3.8'",
"version": "==0.13.3"
}
}
}
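The `sha256:` entries in the lockfile above pin each artifact to an exact digest, which pip recomputes at install time. A minimal illustrative sketch of how such a digest string is produced (stdlib only; the helper name is hypothetical):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    # Pipfile.lock records each wheel/sdist as "sha256:<64 hex chars>";
    # an installer recomputes the digest of the downloaded bytes and
    # rejects the artifact on any mismatch.
    return "sha256:" + hashlib.sha256(data).hexdigest()

print(artifact_digest(b"abc"))
# → sha256:ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```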


@@ -10,7 +10,7 @@ from migrations import accept_versions
@click.command()
@click.option('-i', '--input', 'input_', required=True, help="The path of original config file")
@click.option('-o', '--output', default='', help="the path of output config file")
@click.option('-t', '--target', default='2.12.0', help="target version of input path")
@click.option('-t', '--target', default='2.13.0', help="target version of input path")
def migrate(input_, output, target):
    """
    migrate command will migrate config file style to specific version


@@ -27,6 +27,7 @@ internal_tls_dir = secret_dir.joinpath('tls')
storage_ca_bundle_filename = 'storage_ca_bundle.crt'
internal_ca_filename = 'harbor_internal_ca.crt'
redis_tls_ca_filename = 'redis_tls_ca.crt'
old_private_key_pem_path = Path('/config/core/private_key.pem')
old_crt_path = Path('/config/registry/root.crt')


@@ -2,4 +2,4 @@ import os
MIGRATION_BASE_DIR = os.path.dirname(__file__)
accept_versions = {'1.9.0', '1.10.0', '2.0.0', '2.1.0', '2.2.0', '2.3.0', '2.4.0', '2.5.0', '2.6.0', '2.7.0', '2.8.0', '2.9.0','2.10.0', '2.11.0', '2.12.0'}
accept_versions = {'1.9.0', '1.10.0', '2.0.0', '2.1.0', '2.2.0', '2.3.0', '2.4.0', '2.5.0', '2.6.0', '2.7.0', '2.8.0', '2.9.0','2.10.0', '2.11.0', '2.12.0', '2.13.0'}
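The updated set above appends '2.13.0' to the accepted versions. A small hypothetical guard (the `validate_target` helper is illustrative, not part of the migrator) shows how a requested target can be checked against this set before any rendering happens:

```python
# Illustrative sketch: validate a requested migration target against the
# accepted-version set kept in migrations/__init__.py (values copied above).
accept_versions = {
    '1.9.0', '1.10.0', '2.0.0', '2.1.0', '2.2.0', '2.3.0', '2.4.0',
    '2.5.0', '2.6.0', '2.7.0', '2.8.0', '2.9.0', '2.10.0', '2.11.0',
    '2.12.0', '2.13.0',
}

def validate_target(target: str) -> str:
    # Hypothetical guard: fail fast before any template rendering happens.
    if target not in accept_versions:
        raise ValueError(f"unsupported target version: {target}")
    return target

print(validate_target('2.13.0'))
# → 2.13.0
```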


@@ -0,0 +1,21 @@
import os
from jinja2 import Environment, FileSystemLoader, StrictUndefined, select_autoescape
from utils.migration import read_conf

revision = '2.13.0'
down_revisions = ['2.12.0']

def migrate(input_cfg, output_cfg):
    current_dir = os.path.dirname(__file__)
    tpl = Environment(
        loader=FileSystemLoader(current_dir),
        undefined=StrictUndefined,
        trim_blocks=True,
        lstrip_blocks=True,
        autoescape=select_autoescape()
    ).get_template('harbor.yml.jinja')
    config_dict = read_conf(input_cfg)
    with open(output_cfg, 'w') as f:
        f.write(tpl.render(**config_dict))
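Each migration module, like the 2.13.0 one above, declares its `revision` and the `down_revisions` it upgrades from. A hypothetical sketch (the `MIGRATIONS` table and `migration_path` helper are illustrative stand-ins, not Harbor code) of how such metadata can be chained to walk from an input version to the target:

```python
# Hypothetical sketch of how revision/down_revisions metadata can chain
# migrations: each module names its revision and the revision(s) it
# upgrades from, so a path back to the input version can be walked.
MIGRATIONS = {
    '2.13.0': ['2.12.0'],   # revision -> down_revisions (as in the module above)
    '2.12.0': ['2.11.0'],
    '2.11.0': ['2.10.0'],
}

def migration_path(current: str, target: str) -> list:
    # Walk from target back toward current, then reverse into apply order.
    path, version = [], target
    while version != current:
        path.append(version)
        downs = MIGRATIONS.get(version)
        if not downs:
            raise ValueError(f"no migration chain from {current} to {target}")
        version = downs[0]
    return list(reversed(path))

print(migration_path('2.11.0', '2.13.0'))
# → ['2.12.0', '2.13.0']
```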


@@ -0,0 +1,775 @@
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: {{ hostname }}
# http related config
{% if http is defined %}
http:
# port for http, default is 80. If https enabled, this port will redirect to https port
port: {{ http.port }}
{% else %}
# http:
# # port for http, default is 80. If https enabled, this port will redirect to https port
# port: 80
{% endif %}
{% if https is defined %}
# https related config
https:
# https port for harbor, default is 443
port: {{ https.port }}
# The path of cert and key files for nginx
certificate: {{ https.certificate }}
private_key: {{ https.private_key }}
# enable strong ssl ciphers (default: false)
{% if strong_ssl_ciphers is defined %}
strong_ssl_ciphers: {{ strong_ssl_ciphers | lower }}
{% else %}
strong_ssl_ciphers: false
{% endif %}
{% else %}
# https related config
# https:
# # https port for harbor, default is 443
# port: 443
# # The path of cert and key files for nginx
# certificate: /your/certificate/path
# private_key: /your/private/key/path
# enable strong ssl ciphers (default: false)
# strong_ssl_ciphers: false
{% endif %}
# # Harbor will set ipv4 enabled only by default if this block is not configured
# # Otherwise, please uncomment this block to configure your own ip_family stacks
{% if ip_family is defined %}
ip_family:
# ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affects the nginx-related components
{% if ip_family.ipv6 is defined %}
ipv6:
enabled: {{ ip_family.ipv6.enabled | lower }}
{% else %}
ipv6:
enabled: false
{% endif %}
# ipv4Enabled set to true by default, currently it affects the nginx-related components
{% if ip_family.ipv4 is defined %}
ipv4:
enabled: {{ ip_family.ipv4.enabled | lower }}
{% else %}
ipv4:
enabled: true
{% endif %}
{% else %}
# ip_family:
# # ipv6Enabled set to true if ipv6 is enabled in docker network, currently it affects the nginx-related components
# ipv6:
# enabled: false
# # ipv4Enabled set to true by default, currently it affects the nginx-related components
# ipv4:
# enabled: true
{% endif %}
{% if internal_tls is defined %}
# Uncommenting the following enables TLS communication between all Harbor components
internal_tls:
# set enabled to true means internal tls is enabled
enabled: {{ internal_tls.enabled | lower }}
{% if internal_tls.dir is defined %}
# put your cert and key files on dir
dir: {{ internal_tls.dir }}
{% endif %}
{% else %}
# internal_tls:
# # set enabled to true means internal tls is enabled
# enabled: true
# # put your cert and key files on dir
# dir: /etc/harbor/tls/internal
{% endif %}
# Uncomment external_url if you want to enable external proxy
# When it is enabled, the hostname will no longer be used
{% if external_url is defined %}
external_url: {{ external_url }}
{% else %}
# external_url: https://reg.mydomain.com:8433
{% endif %}
# The initial password of Harbor admin
# It only takes effect on the first installation of Harbor.
# Remember to change the admin password from the UI after launching Harbor.
{% if harbor_admin_password is defined %}
harbor_admin_password: {{ harbor_admin_password }}
{% else %}
harbor_admin_password: Harbor12345
{% endif %}
# Harbor DB configuration
database:
{% if database is defined %}
# The password for the root user of Harbor DB. Change this before any production use.
password: {{ database.password}}
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: {{ database.max_idle_conns }}
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 1024 for postgres of harbor.
max_open_conns: {{ database.max_open_conns }}
# The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
{% if database.conn_max_lifetime is defined %}
conn_max_lifetime: {{ database.conn_max_lifetime }}
{% else %}
conn_max_lifetime: 5m
{% endif %}
# The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
{% if database.conn_max_idle_time is defined %}
conn_max_idle_time: {{ database.conn_max_idle_time }}
{% else %}
conn_max_idle_time: 0
{% endif %}
{% else %}
# The password for the root user of Harbor DB. Change this before any production use.
password: root123
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: 100
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 1024 for postgres of harbor.
max_open_conns: 900
# The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
conn_max_lifetime: 5m
# The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
# The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
conn_max_idle_time: 0
{% endif %}
{% if data_volume is defined %}
# The default data volume
data_volume: {{ data_volume }}
{% else %}
# The default data volume
data_volume: /data
{% endif %}
# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment the storage_service setting if you want to use external storage
{% if storage_service is defined %}
storage_service:
{% for key, value in storage_service.items() %}
{% if key == 'ca_bundle' %}
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's and chart repository's containers. This is usually needed when the user hosts an internal storage service with a self-signed certificate.
ca_bundle: {{ value if value is not none else '' }}
{% elif key == 'redirect' %}
# # set disable to true when you want to disable registry redirect
redirect:
{% if storage_service.redirect.disabled is defined %}
disable: {{ storage_service.redirect.disabled | lower}}
{% else %}
disable: {{ storage_service.redirect.disable | lower}}
{% endif %}
{% else %}
# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
# # and https://distribution.github.io/distribution/storage-drivers/
{{ key }}:
{% for k, v in value.items() %}
{{ k }}: {{ v if v is not none else '' }}
{% endfor %}
{% endif %}
{% endfor %}
{% else %}
# storage_service:
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's and chart repository's containers. This is usually needed when the user hosts an internal storage service with a self-signed certificate.
# ca_bundle:
# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://distribution.github.io/distribution/about/configuration/
# # and https://distribution.github.io/distribution/storage-drivers/
# filesystem:
# maxthreads: 100
# # set disable to true when you want to disable registry redirect
# redirect:
# disable: false
{% endif %}
# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
{% if trivy is defined %}
trivy:
# ignoreUnfixed The flag to display only fixed vulnerabilities
{% if trivy.ignore_unfixed is defined %}
ignore_unfixed: {{ trivy.ignore_unfixed | lower }}
{% else %}
ignore_unfixed: false
{% endif %}
# skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
#
# You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
# If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
# `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
{% if trivy.skip_update is defined %}
skip_update: {{ trivy.skip_update | lower }}
{% else %}
skip_update: false
{% endif %}
{% if trivy.skip_java_db_update is defined %}
# skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
# `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
skip_java_db_update: {{ trivy.skip_java_db_update | lower }}
{% else %}
skip_java_db_update: false
{% endif %}
#
{% if trivy.offline_scan is defined %}
offline_scan: {{ trivy.offline_scan | lower }}
{% else %}
offline_scan: false
{% endif %}
#
# Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
{% if trivy.security_check is defined %}
security_check: {{ trivy.security_check }}
{% else %}
security_check: vuln
{% endif %}
#
# insecure The flag to skip verifying registry certificate
{% if trivy.insecure is defined %}
insecure: {{ trivy.insecure | lower }}
{% else %}
insecure: false
{% endif %}
#
{% if trivy.timeout is defined %}
# timeout The duration to wait for scan completion.
# There is an upper bound of 30 minutes defined in the scan job. So if this `timeout` is larger than 30m0s, it will also time out at 30m0s.
timeout: {{ trivy.timeout}}
{% else %}
timeout: 5m0s
{% endif %}
#
# github_token The GitHub access token to download Trivy DB
#
# Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
# for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
# requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
# https://developer.github.com/v3/#rate-limiting
#
# You can create a GitHub token by following the instructions in
# https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
#
{% if trivy.github_token is defined %}
github_token: {{ trivy.github_token }}
{% else %}
# github_token: xxx
{% endif %}
{% else %}
# trivy:
# # ignoreUnfixed The flag to display only fixed vulnerabilities
# ignore_unfixed: false
# # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
# #
# # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
# # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
# # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
# skip_update: false
# #
# # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
# # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
# skip_java_db_update: false
# #
# #The offline_scan option prevents Trivy from sending API requests to identify dependencies.
# # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
# # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
# # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
# # It would work if all the dependencies are in local.
# # This option doesn't affect DB download. You need to specify "skip-update" as well as "offline-scan" in an air-gapped environment.
# offline_scan: false
# #
# # insecure The flag to skip verifying registry certificate
# insecure: false
# # github_token The GitHub access token to download Trivy DB
# #
# # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
# # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
# # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
# # https://developer.github.com/v3/#rate-limiting
# #
# # timeout The duration to wait for scan completion.
# # There is an upper bound of 30 minutes defined in the scan job. So if this `timeout` is larger than 30m0s, it will also time out at 30m0s.
# timeout: 5m0s
# #
# # You can create a GitHub token by following the instructions in
# # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
# #
# # github_token: xxx
{% endif %}
jobservice:
# Maximum number of job workers in job service
{% if jobservice is defined %}
max_job_workers: {{ jobservice.max_job_workers }}
# Maximum hours of task duration in job service, default 24
{% if jobservice.max_job_duration_hours is defined %}
max_job_duration_hours: {{ jobservice.max_job_duration_hours }}
{% else %}
max_job_duration_hours: 24
{% endif %}
# The jobLoggers backend name, only supports "STD_OUTPUT", "FILE" and/or "DB"
{% if jobservice.job_loggers is defined %}
job_loggers:
{% for job_logger in jobservice.job_loggers %}
- {{job_logger}}
{% endfor %}
{% else %}
job_loggers:
- STD_OUTPUT
- FILE
# - DB
{% endif %}
# The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
{% if jobservice.logger_sweeper_duration is defined %}
logger_sweeper_duration: {{ jobservice.logger_sweeper_duration }}
{% else %}
logger_sweeper_duration: 1
{% endif %}
{% else %}
max_job_workers: 10
max_job_duration_hours: 24
# The jobLoggers backend name, only supports "STD_OUTPUT", "FILE" and/or "DB"
job_loggers:
- STD_OUTPUT
- FILE
# - DB
# The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
logger_sweeper_duration: 1
{% endif %}
notification:
# Maximum retry count for webhook job
{% if notification is defined %}
webhook_job_max_retry: {{ notification.webhook_job_max_retry}}
# HTTP client timeout for webhook job
{% if notification.webhook_job_http_client_timeout is defined %}
webhook_job_http_client_timeout: {{ notification.webhook_job_http_client_timeout }}
{% else %}
webhook_job_http_client_timeout: 3 #seconds
{% endif %}
{% else %}
webhook_job_max_retry: 3
# HTTP client timeout for webhook job
webhook_job_http_client_timeout: 3 #seconds
{% endif %}
# Log configurations
log:
# options are debug, info, warning, error, fatal
{% if log is defined %}
level: {{ log.level }}
# configs for logs in local storage
local:
# Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
rotate_count: {{ log.local.rotate_count }}
# Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
# If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
# are all valid.
rotate_size: {{ log.local.rotate_size }}
# The directory on your host that stores the logs
location: {{ log.local.location }}
{% if log.external_endpoint is defined %}
external_endpoint:
# protocol used to transmit logs to the external endpoint; options are tcp or udp
protocol: {{ log.external_endpoint.protocol }}
# The host of external endpoint
host: {{ log.external_endpoint.host }}
# Port of external endpoint
port: {{ log.external_endpoint.port }}
{% else %}
# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit logs to the external endpoint; options are tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140
{% endif %}
{% else %}
level: info
# configs for logs in local storage
local:
# Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
rotate_count: 50
# Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
# If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
# are all valid.
rotate_size: 200M
# The directory on your host that stores the logs
location: /var/log/harbor
# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit logs to the external endpoint; options are tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140
{% endif %}
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.13.0
{% if external_database is defined %}
# Uncomment external_database if using external database.
external_database:
harbor:
host: {{ external_database.harbor.host }}
port: {{ external_database.harbor.port }}
db_name: {{ external_database.harbor.db_name }}
username: {{ external_database.harbor.username }}
password: {{ external_database.harbor.password }}
ssl_mode: {{ external_database.harbor.ssl_mode }}
max_idle_conns: {{ external_database.harbor.max_idle_conns}}
max_open_conns: {{ external_database.harbor.max_open_conns}}
{% else %}
# Uncomment external_database if using external database.
# external_database:
# harbor:
# host: harbor_db_host
# port: harbor_db_port
# db_name: harbor_db_name
# username: harbor_db_username
# password: harbor_db_password
# ssl_mode: disable
# max_idle_conns: 2
# max_open_conns: 0
{% endif %}
{% if redis is defined %}
redis:
# # db_index 0 is for core, it's unchangeable
{% if redis.registry_db_index is defined %}
registry_db_index: {{ redis.registry_db_index }}
{% else %}
# # registry_db_index: 1
{% endif %}
{% if redis.jobservice_db_index is defined %}
jobservice_db_index: {{ redis.jobservice_db_index }}
{% else %}
# # jobservice_db_index: 2
{% endif %}
{% if redis.trivy_db_index is defined %}
trivy_db_index: {{ redis.trivy_db_index }}
{% else %}
# # trivy_db_index: 5
{% endif %}
{% if redis.harbor_db_index is defined %}
harbor_db_index: {{ redis.harbor_db_index }}
{% else %}
# # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
# # harbor_db_index: 6
{% endif %}
{% if redis.cache_layer_db_index is defined %}
cache_layer_db_index: {{ redis.cache_layer_db_index }}
{% else %}
# # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% else %}
# Uncomment redis if need to customize redis db
# redis:
# # db_index 0 is for core, it's unchangeable
# # registry_db_index: 1
# # jobservice_db_index: 2
# # trivy_db_index: 5
# # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
# # harbor_db_index: 6
# # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% if external_redis is defined %}
external_redis:
# support redis, redis+sentinel
# host for redis: <host_redis>:<port_redis>
# host for redis+sentinel:
# <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
host: {{ external_redis.host }}
password: {{ external_redis.password }}
# Redis AUTH command was extended in Redis 6, it is possible to use it in the two-arguments AUTH <username> <password> form.
{% if external_redis.username is defined %}
username: {{ external_redis.username }}
{% else %}
# username:
{% endif %}
# sentinel_master_set must be set to support redis+sentinel
#sentinel_master_set:
{% if external_redis.tlsOptions is defined %}
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection is disabled by default
tlsOptions:
enable: {{ external_redis.tlsOptions.enable }}
# if it is a self-signed ca, please set the ca path specifically.
{% if external_redis.tlsOptions.rootCA is defined %}
rootCA: {{ external_redis.tlsOptions.rootCA }}
{% else %}
# rootCA:
{% endif %}
{% else %}
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection is disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
{% endif %}
# db_index 0 is for core, it's unchangeable
registry_db_index: {{ external_redis.registry_db_index }}
jobservice_db_index: {{ external_redis.jobservice_db_index }}
trivy_db_index: 5
idle_timeout_seconds: 30
{% if external_redis.harbor_db_index is defined %}
harbor_db_index: {{ external_redis.harbor_db_index }}
{% else %}
# # it's optional, the db for harbor business misc, by default is 0, uncomment it if you want to change it.
# # harbor_db_index: 6
{% endif %}
{% if external_redis.cache_layer_db_index is defined %}
cache_layer_db_index: {{ external_redis.cache_layer_db_index }}
{% else %}
# # it's optional, the db for harbor cache layer, by default is 0, uncomment it if you want to change it.
# # cache_layer_db_index: 7
{% endif %}
{% else %}
# Uncomment external_redis if using an external Redis server
# external_redis:
# # support redis, redis+sentinel
# # host for redis: <host_redis>:<port_redis>
# # host for redis+sentinel:
# # <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
# host: redis:6379
# password:
# # Redis AUTH command was extended in Redis 6, it is possible to use it in the two-arguments AUTH <username> <password> form.
# # username:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # tls configuration for redis connection
# # only server-authentication is supported
# # mtls for redis connection is not supported
# # tls connection will be disabled by default
# tlsOptions:
# enable: false
# # if it is a self-signed ca, please set the ca path specifically.
# rootCA:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2
# trivy_db_index: 5
# idle_timeout_seconds: 30
# # Optional: the db for harbor business misc; the default is 0. Uncomment to change it.
# # harbor_db_index: 6
# # Optional: the db for the harbor cache layer; the default is 0. Uncomment to change it.
# # cache_layer_db_index: 7
{% endif %}
{% if uaa is defined %}
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
uaa:
ca_file: {{ uaa.ca_file }}
{% else %}
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
# ca_file: /path/to/ca
{% endif %}
# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components don't need to connect to each other via the http proxy.
# Remove a component from the `components` array to disable the proxy
# for it. If you want to use the proxy for replication, you MUST enable
# the proxy for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add a domain to the `no_proxy` field when you want to disable the proxy
# for a particular registry.
{% if proxy is defined %}
proxy:
http_proxy: {{ proxy.http_proxy or ''}}
https_proxy: {{ proxy.https_proxy or ''}}
no_proxy: {{ proxy.no_proxy or ''}}
{% if proxy.components is defined %}
components:
{% for component in proxy.components %}
{% if component != 'clair' %}
- {{component}}
{% endif %}
{% endfor %}
{% endif %}
{% else %}
proxy:
http_proxy:
https_proxy:
no_proxy:
components:
- core
- jobservice
- trivy
{% endif %}
{% if metric is defined %}
metric:
enabled: {{ metric.enabled }}
port: {{ metric.port }}
path: {{ metric.path }}
{% else %}
# metric:
# enabled: false
# port: 9090
# path: /metrics
{% endif %}
# Trace related config
# Only one trace provider (jaeger or otel) can be enabled at a time,
# and when using jaeger as the provider, it can only be enabled in agent mode or collector mode.
# If using jaeger collector mode, uncomment endpoint, and uncomment username and password if needed.
# If using jaeger agent mode, uncomment agent_host and agent_port.
{% if trace is defined %}
trace:
enabled: {{ trace.enabled | lower}}
sample_rate: {{ trace.sample_rate }}
# # namespace used to differentiate different harbor services
{% if trace.namespace is defined %}
namespace: {{ trace.namespace }}
{% else %}
# namespace:
{% endif %}
# # attributes is a key/value dict containing user-defined attributes used to initialize the trace provider
{% if trace.attributes is defined %}
attributes:
{% for name, value in trace.attributes.items() %}
{{name}}: {{value}}
{% endfor %}
{% else %}
# attributes:
# application: harbor
{% endif %}
{% if trace.jaeger is defined%}
jaeger:
endpoint: {{trace.jaeger.endpoint or '' }}
username: {{trace.jaeger.username or ''}}
password: {{trace.jaeger.password or ''}}
agent_host: {{trace.jaeger.agent_host or ''}}
agent_port: {{trace.jaeger.agent_port or ''}}
{% else %}
# jaeger:
# endpoint:
# username:
# password:
# agent_host:
# agent_port:
{% endif %}
{% if trace.otel is defined %}
otel:
endpoint: {{trace.otel.endpoint or '' }}
url_path: {{trace.otel.url_path or '' }}
compression: {{trace.otel.compression | lower }}
insecure: {{trace.otel.insecure | lower }}
timeout: {{trace.otel.timeout or '' }}
{% else %}
# otel:
# endpoint: hostname:4318
# url_path: /v1/traces
# compression: false
# insecure: true
# # timeout is in seconds
# timeout: 10
{% endif %}
{% else %}
# trace:
# enabled: true
# # set sample_rate to 1 if you want to sample 100% of trace data; set 0.5 to sample 50% of trace data, and so forth
# sample_rate: 1
# # # namespace used to differentiate different harbor services
# # namespace:
# # # attributes is a key/value dict containing user-defined attributes used to initialize the trace provider
# # attributes:
# # application: harbor
# # jaeger:
# # endpoint: http://hostname:14268/api/traces
# # username:
# # password:
# # agent_host: hostname
# # agent_port: 6831
# # otel:
# # endpoint: hostname:4318
# # url_path: /v1/traces
# # compression: false
# # insecure: true
# # # timeout is in seconds
# # timeout: 10
{% endif %}
# Enable purging of _upload directories
{% if upload_purging is defined %}
upload_purging:
enabled: {{ upload_purging.enabled | lower}}
age: {{ upload_purging.age }}
interval: {{ upload_purging.interval }}
dryrun: {{ upload_purging.dryrun | lower}}
{% else %}
upload_purging:
enabled: true
# remove files in _upload directories that have existed for a period of time; the default is one week.
age: 168h
# the interval of the purge operations
interval: 24h
dryrun: false
{% endif %}
# Cache layer related config
{% if cache is defined %}
cache:
enabled: {{ cache.enabled | lower}}
expire_hours: {{ cache.expire_hours }}
{% else %}
cache:
enabled: false
expire_hours: 24
{% endif %}
# Harbor core configurations
# Uncomment to enable the following harbor core related configuration items.
{% if core is defined %}
core:
# The provider for updating project quota (usage). There are two options: redis or db.
# By default it is implemented by db, but you can switch the update to redis, which
# can improve the performance of highly concurrent pushes to the same project and
# reduce database connection spikes and occupancy.
# Using redis introduces some delay in the quota usage shown for display, so only
# switch the provider to redis if you run into database connection spikes under
# highly concurrent pushes to the same project; there is no improvement for other scenarios.
quota_update_provider: {{ core.quota_update_provider }}
{% else %}
# core:
# # The provider for updating project quota (usage). There are two options: redis or db.
# # By default it is implemented by db, but you can switch the update to redis, which
# # can improve the performance of highly concurrent pushes to the same project and
# # reduce database connection spikes and occupancy.
# # Using redis introduces some delay in the quota usage shown for display, so only
# # switch the provider to redis if you run into database connection spikes under
# # highly concurrent pushes to the same project; there is no improvement for other scenarios.
# quota_update_provider: redis # Or db
{% endif %}

View File

@ -41,6 +41,7 @@ REGISTRY_CREDENTIAL_PASSWORD={{registry_password}}
CSRF_KEY={{csrf_key}}
ROBOT_SCANNER_NAME_PREFIX={{scan_robot_prefix}}
PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE=docker-hub,harbor,azure-acr,ali-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory
REPLICATION_ADAPTER_WHITELIST=ali-acr,aws-ecr,azure-acr,docker-hub,docker-registry,github-ghcr,google-gcr,harbor,huawei-SWR,jfrog-artifactory,tencent-tcr,volcengine-cr
HTTP_PROXY={{core_http_proxy}}
HTTPS_PROXY={{core_https_proxy}}

View File

@ -67,7 +67,7 @@ metric:
reaper:
# the max time to wait for a task to finish; if unfinished after max_update_hours, the task will be marked as error, but it will continue to run. The default value is 24.
max_update_hours: 24
max_update_hours: {{ max_job_duration_hours }}
# the max time for execution in running state without new task created
max_dangling_hours: 168

View File

@ -6,6 +6,8 @@ REGISTRY_CONTROLLER_URL={{registry_controller_url}}
JOBSERVICE_WEBHOOK_JOB_MAX_RETRY={{notification_webhook_job_max_retry}}
JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT={{notification_webhook_job_http_client_timeout}}
LOG_LEVEL={{log_level}}
{%if internal_tls.enabled %}
INTERNAL_TLS_ENABLED=true
INTERNAL_TLS_TRUST_CA_PATH=/harbor_cust_cert/harbor_internal_ca.crt

View File

@ -40,6 +40,7 @@ redis:
dialtimeout: 10s
password: {{redis_password}}
db: {{redis_db_index_reg}}
enableTLS: {{redis_enableTLS}}
pool:
maxidle: 100
maxactive: 500

View File

@ -4,7 +4,7 @@ from pathlib import Path
from subprocess import DEVNULL
import logging
from g import DEFAULT_GID, DEFAULT_UID, shared_cert_dir, storage_ca_bundle_filename, internal_tls_dir, internal_ca_filename
from g import DEFAULT_GID, DEFAULT_UID, shared_cert_dir, storage_ca_bundle_filename, internal_tls_dir, internal_ca_filename, redis_tls_ca_filename
from .misc import (
mark_file,
generate_random_string,
@ -120,18 +120,23 @@ def prepare_trust_ca(config_dict):
internal_ca_src = internal_tls_dir.joinpath(internal_ca_filename)
ca_bundle_src = config_dict.get('registry_custom_ca_bundle_path')
redis_tls_ca_src = config_dict.get('redis_custom_tls_ca_path')
for src_path, dst_filename in (
(internal_ca_src, internal_ca_filename),
(ca_bundle_src, storage_ca_bundle_filename)):
(ca_bundle_src, storage_ca_bundle_filename),
(redis_tls_ca_src, redis_tls_ca_filename)):
print('copy {} to shared trust ca dir as name {} ...'.format(src_path, dst_filename))
logging.info('copy {} to shared trust ca dir as name {} ...'.format(src_path, dst_filename))
# check if the source file is valid
if not src_path:
continue
real_src_path = get_realpath(str(src_path))
if not real_src_path.exists():
print('ca file {} does not exist'.format(real_src_path))
logging.info('ca file {} does not exist'.format(real_src_path))
continue
if not real_src_path.is_file():
print('{} is not a file'.format(real_src_path))
logging.info('{} is not a file'.format(real_src_path))
continue

View File

@ -1,3 +1,4 @@
from distutils.command.config import config
import logging
import os
import yaml
@ -222,6 +223,10 @@ def parse_yaml_config(config_file_path, with_trivy):
# jobservice config
js_config = configs.get('jobservice') or {}
config_dict['max_job_workers'] = js_config["max_job_workers"]
config_dict['max_job_duration_hours'] = js_config.get("max_job_duration_hours") or 24
value = config_dict["max_job_duration_hours"]
if not isinstance(value, int) or value < 24:
config_dict["max_job_duration_hours"] = 24
config_dict['job_loggers'] = js_config["job_loggers"]
config_dict['logger_sweeper_duration'] = js_config["logger_sweeper_duration"]
config_dict['jobservice_secret'] = generate_random_string(16)
@ -349,6 +354,11 @@ def parse_yaml_config(config_file_path, with_trivy):
return config_dict
def get_redis_schema(redis=None):
if 'tlsOptions' in redis and redis['tlsOptions'].get('enable'):
return redis.get('sentinel_master_set', None) and 'rediss+sentinel' or 'rediss'
else:
return redis.get('sentinel_master_set', None) and 'redis+sentinel' or 'redis'
def get_redis_url(db, redis=None):
"""Returns redis url with format `redis://[arbitrary_username:password@]ipaddress:port/database_index?idle_timeout_seconds=30`
@ -368,7 +378,7 @@ def get_redis_url(db, redis=None):
'password': '',
}
kwargs.update(redis or {})
kwargs['scheme'] = kwargs.get('sentinel_master_set', None) and 'redis+sentinel' or 'redis'
kwargs['scheme'] = get_redis_schema(kwargs)
kwargs['db_part'] = db and ("/%s" % db) or ""
kwargs['sentinel_part'] = kwargs.get('sentinel_master_set', None) and ("/" + kwargs['sentinel_master_set']) or ''
kwargs['password_part'] = quote(str(kwargs.get('password', None)), safe='') and (':%s@' % quote(str(kwargs['password']), safe='')) or ''
@ -453,5 +463,8 @@ def get_redis_configs(internal_redis=None, external_redis=None, with_trivy=True)
if with_trivy:
configs['trivy_redis_url'] = get_redis_url(redis['trivy_db_index'], redis)
if 'tlsOptions' in redis and redis['tlsOptions'].get('enable'):
configs['redis_custom_tls_ca_path'] = redis['tlsOptions']['rootCA']
return configs
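
The scheme selection introduced above can be sketched as a standalone snippet. The function name and dict keys mirror the diff; the rest (the sample dicts, the surrounding `assert`s) is purely illustrative:

```python
# Minimal sketch of the new scheme selection: tlsOptions.enable switches
# redis -> rediss, and a configured sentinel_master_set switches to the
# "+sentinel" variants. Self-contained illustration, not the shipped code.
def get_redis_schema(redis):
    tls = 'tlsOptions' in redis and redis['tlsOptions'].get('enable')
    sentinel = bool(redis.get('sentinel_master_set'))
    if tls:
        return 'rediss+sentinel' if sentinel else 'rediss'
    return 'redis+sentinel' if sentinel else 'redis'

assert get_redis_schema({}) == 'redis'
assert get_redis_schema({'tlsOptions': {'enable': True}}) == 'rediss'
assert get_redis_schema({'sentinel_master_set': 'mymaster'}) == 'redis+sentinel'
assert get_redis_schema({'sentinel_master_set': 'mymaster',
                         'tlsOptions': {'enable': True}}) == 'rediss+sentinel'
```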

View File

@ -33,6 +33,7 @@ def prepare_job_service(config_dict):
gid=DEFAULT_GID,
internal_tls=config_dict['internal_tls'],
max_job_workers=config_dict['max_job_workers'],
max_job_duration_hours=config_dict['max_job_duration_hours'],
job_loggers=config_dict['job_loggers'],
logger_sweeper_duration=config_dict['logger_sweeper_duration'],
redis_url=config_dict['redis_url_js'],

View File

@ -48,6 +48,14 @@ def parse_redis(redis_url):
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': u.path and int(u.path[1:]) or 0,
'redis_enableTLS': 'false',
}
elif u.scheme == 'rediss':
return {
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': u.path and int(u.path[1:]) or 0,
'redis_enableTLS': 'true',
}
elif u.scheme == 'redis+sentinel':
return {
@ -55,6 +63,15 @@ def parse_redis(redis_url):
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': len(u.path.split('/')) == 3 and int(u.path.split('/')[2]) or 0,
'redis_enableTLS': 'false',
}
elif u.scheme == 'rediss+sentinel':
return {
'sentinel_master_set': u.path.split('/')[1],
'redis_host': u.netloc.split('@')[-1],
'redis_password': '' if u.password is None else unquote(u.password),
'redis_db_index_reg': len(u.path.split('/')) == 3 and int(u.path.split('/')[2]) or 0,
'redis_enableTLS': 'true',
}
else:
raise Exception('bad redis url for registry:' + redis_url)
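
For context, here is how `urlsplit` decomposes one of the new `rediss` URLs into the fields `parse_redis` returns above. The host, password, and db index in this URL are made-up examples:

```python
# Illustrative decomposition of a TLS redis URL into the same field names
# parse_redis uses; the example URL is hypothetical.
from urllib.parse import urlsplit, unquote

u = urlsplit('rediss://:s3cret@redis.example.com:6379/2')
info = {
    'redis_host': u.netloc.split('@')[-1],        # host:port without credentials
    'redis_password': '' if u.password is None else unquote(u.password),
    'redis_db_index_reg': u.path and int(u.path[1:]) or 0,
    'redis_enableTLS': 'true',                    # rediss scheme implies TLS
}
print(info['redis_host'])          # redis.example.com:6379
print(info['redis_db_index_reg'])  # 2
```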

View File

@ -1,4 +1,5 @@
FROM golang:1.23.2
ARG golang_image
FROM ${golang_image}
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV BUILDTAGS include_oss include_gcs

View File

@ -14,6 +14,8 @@ fi
VERSION="$1"
DISTRIBUTION_SRC="$2"
GOBUILDIMAGE="$3"
DOCKERNETWORK="$4"
set -e
@ -28,14 +30,11 @@ cur=$PWD
TEMP=`mktemp -d ${TMPDIR-/tmp}/distribution.XXXXXX`
git clone -b $VERSION $DISTRIBUTION_SRC $TEMP
# apply the redis patch
cd $TEMP
git apply $cur/redis.patch
cd $cur
echo 'build the registry binary ...'
cp Dockerfile.binary $TEMP
docker build -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
echo 'copy the registry binary to local...'
ID=$(docker create registry-golang)

View File

@ -1,883 +0,0 @@
diff --git a/configuration/configuration.go b/configuration/configuration.go
index 7076df85d4..3e74330321 100644
--- a/configuration/configuration.go
+++ b/configuration/configuration.go
@@ -168,6 +168,9 @@ type Configuration struct {
// Addr specifies the the redis instance available to the application.
Addr string `yaml:"addr,omitempty"`
+ // SentinelMasterSet specifies the the redis sentinel master set name.
+ SentinelMasterSet string `yaml:"sentinelMasterSet,omitempty"`
+
// Password string to use when making a connection.
Password string `yaml:"password,omitempty"`
diff --git a/registry/handlers/app.go b/registry/handlers/app.go
index bf56cea22a..4a7cee9a2e 100644
--- a/registry/handlers/app.go
+++ b/registry/handlers/app.go
@@ -3,6 +3,7 @@ package handlers
import (
"context"
"crypto/rand"
+ "errors"
"expvar"
"fmt"
"math"
@@ -16,6 +17,7 @@ import (
"strings"
"time"
+ "github.com/FZambia/sentinel"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/distribution/configuration"
@@ -499,6 +501,45 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
return
}
+ var getRedisAddr func() (string, error)
+ var testOnBorrow func(c redis.Conn, t time.Time) error
+ if configuration.Redis.SentinelMasterSet != "" {
+ sntnl := &sentinel.Sentinel{
+ Addrs: strings.Split(configuration.Redis.Addr, ","),
+ MasterName: configuration.Redis.SentinelMasterSet,
+ Dial: func(addr string) (redis.Conn, error) {
+ c, err := redis.DialTimeout("tcp", addr,
+ configuration.Redis.DialTimeout,
+ configuration.Redis.ReadTimeout,
+ configuration.Redis.WriteTimeout)
+ if err != nil {
+ return nil, err
+ }
+ return c, nil
+ },
+ }
+ getRedisAddr = func() (string, error) {
+ return sntnl.MasterAddr()
+ }
+ testOnBorrow = func(c redis.Conn, t time.Time) error {
+ if !sentinel.TestRole(c, "master") {
+ return errors.New("role check failed")
+ }
+ return nil
+ }
+
+ } else {
+ getRedisAddr = func() (string, error) {
+ return configuration.Redis.Addr, nil
+ }
+ testOnBorrow = func(c redis.Conn, t time.Time) error {
+ // TODO(stevvooe): We can probably do something more interesting
+ // here with the health package.
+ _, err := c.Do("PING")
+ return err
+ }
+ }
+
pool := &redis.Pool{
Dial: func() (redis.Conn, error) {
// TODO(stevvooe): Yet another use case for contextual timing.
@@ -514,8 +555,11 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
}
}
- conn, err := redis.DialTimeout("tcp",
- configuration.Redis.Addr,
+ redisAddr, err := getRedisAddr()
+ if err != nil {
+ return nil, err
+ }
+ conn, err := redis.DialTimeout("tcp", redisAddr,
configuration.Redis.DialTimeout,
configuration.Redis.ReadTimeout,
configuration.Redis.WriteTimeout)
@@ -547,16 +591,11 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
done(nil)
return conn, nil
},
- MaxIdle: configuration.Redis.Pool.MaxIdle,
- MaxActive: configuration.Redis.Pool.MaxActive,
- IdleTimeout: configuration.Redis.Pool.IdleTimeout,
- TestOnBorrow: func(c redis.Conn, t time.Time) error {
- // TODO(stevvooe): We can probably do something more interesting
- // here with the health package.
- _, err := c.Do("PING")
- return err
- },
- Wait: false, // if a connection is not available, proceed without cache.
+ MaxIdle: configuration.Redis.Pool.MaxIdle,
+ MaxActive: configuration.Redis.Pool.MaxActive,
+ IdleTimeout: configuration.Redis.Pool.IdleTimeout,
+ TestOnBorrow: testOnBorrow,
+ Wait: false, // if a connection is not available, proceed without cache.
}
app.redis = pool
diff --git a/registry/handlers/app_test.go b/registry/handlers/app_test.go
index 60a57e6c15..8a644d83d8 100644
--- a/registry/handlers/app_test.go
+++ b/registry/handlers/app_test.go
@@ -140,7 +140,29 @@ func TestAppDispatcher(t *testing.T) {
// TestNewApp covers the creation of an application via NewApp with a
// configuration.
func TestNewApp(t *testing.T) {
- ctx := context.Background()
+
+ config := configuration.Configuration{
+ Storage: configuration.Storage{
+ "testdriver": nil,
+ "maintenance": configuration.Parameters{"uploadpurging": map[interface{}]interface{}{
+ "enabled": false,
+ }},
+ },
+ Auth: configuration.Auth{
+ // For now, we simply test that new auth results in a viable
+ // application.
+ "silly": {
+ "realm": "realm-test",
+ "service": "service-test",
+ },
+ },
+ }
+ runAppWithConfig(t, config)
+}
+
+// TestNewApp covers the creation of an application via NewApp with a
+// configuration(with redis).
+func TestNewAppWithRedis(t *testing.T) {
config := configuration.Configuration{
Storage: configuration.Storage{
"testdriver": nil,
@@ -157,7 +179,38 @@ func TestNewApp(t *testing.T) {
},
},
}
+ config.Redis.Addr = "127.0.0.1:6379"
+ config.Redis.DB = 0
+ runAppWithConfig(t, config)
+}
+// TestNewApp covers the creation of an application via NewApp with a
+// configuration(with redis sentinel cluster).
+func TestNewAppWithRedisSentinelCluster(t *testing.T) {
+ config := configuration.Configuration{
+ Storage: configuration.Storage{
+ "testdriver": nil,
+ "maintenance": configuration.Parameters{"uploadpurging": map[interface{}]interface{}{
+ "enabled": false,
+ }},
+ },
+ Auth: configuration.Auth{
+ // For now, we simply test that new auth results in a viable
+ // application.
+ "silly": {
+ "realm": "realm-test",
+ "service": "service-test",
+ },
+ },
+ }
+ config.Redis.Addr = "192.168.0.11:26379,192.168.0.12:26379"
+ config.Redis.DB = 0
+ config.Redis.SentinelMasterSet = "mymaster"
+ runAppWithConfig(t, config)
+}
+
+func runAppWithConfig(t *testing.T, config configuration.Configuration) {
+ ctx := context.Background()
// Mostly, with this test, given a sane configuration, we are simply
// ensuring that NewApp doesn't panic. We might want to tweak this
// behavior.
diff --git a/vendor.conf b/vendor.conf
index 33fe616b76..a8d8f58bc6 100644
--- a/vendor.conf
+++ b/vendor.conf
@@ -51,3 +51,4 @@ gopkg.in/yaml.v2 v2.2.1
rsc.io/letsencrypt e770c10b0f1a64775ae91d240407ce00d1a5bdeb https://github.com/dmcgowan/letsencrypt.git
github.com/opencontainers/go-digest ea51bea511f75cfa3ef6098cc253c5c3609b037a # v1.0.0
github.com/opencontainers/image-spec 67d2d5658fe0476ab9bf414cec164077ebff3920 # v1.0.2
+github.com/FZambia/sentinel 5585739eb4b6478aa30161866ccf9ce0ef5847c7 https://github.com/jeremyxu2010/sentinel.git
diff --git a/vendor/github.com/FZambia/sentinel/LICENSE b/vendor/github.com/FZambia/sentinel/LICENSE
new file mode 100644
index 0000000000..8dada3edaf
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/FZambia/sentinel/README.md b/vendor/github.com/FZambia/sentinel/README.md
new file mode 100644
index 0000000000..f544c54ef6
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/README.md
@@ -0,0 +1,39 @@
+go-sentinel
+===========
+
+Redis Sentinel support for [redigo](https://github.com/gomodule/redigo) library.
+
+Documentation
+-------------
+
+- [API Reference](http://godoc.org/github.com/FZambia/sentinel)
+
+Alternative solution
+--------------------
+
+You can alternatively configure Haproxy between your application and Redis to proxy requests to Redis master instance if you only need HA:
+
+```
+listen redis
+ server redis-01 127.0.0.1:6380 check port 6380 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2
+ server redis-02 127.0.0.1:6381 check port 6381 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 backup
+ bind *:6379
+ mode tcp
+ option tcpka
+ option tcplog
+ option tcp-check
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
+ balance roundrobin
+```
+
+This way you don't need to use this library.
+
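The tcp-check sequence above accepts a node only when `role:master` appears in its `INFO replication` reply. For illustration, the same check can be expressed as a small Go helper (hypothetical, not part of this library or of HAProxy):

```go
package main

import (
	"fmt"
	"strings"
)

// roleFromInfo extracts the "role:" field from an INFO replication reply,
// mirroring what the tcp-check above looks for.
func roleFromInfo(info string) string {
	for _, line := range strings.Split(info, "\n") {
		line = strings.TrimSpace(line) // drops the trailing \r of RESP lines
		if strings.HasPrefix(line, "role:") {
			return strings.TrimPrefix(line, "role:")
		}
	}
	return ""
}

func main() {
	reply := "# Replication\r\nrole:master\r\nconnected_slaves:2\r\n"
	fmt.Println(roleFromInfo(reply)) // master
}
```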
+License
+-------
+
+Library is available under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html).
diff --git a/vendor/github.com/FZambia/sentinel/sentinel.go b/vendor/github.com/FZambia/sentinel/sentinel.go
new file mode 100644
index 0000000000..79209e9f0d
--- /dev/null
+++ b/vendor/github.com/FZambia/sentinel/sentinel.go
@@ -0,0 +1,426 @@
+package sentinel
+
+import (
+ "errors"
+ "fmt"
+ "net"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/garyburd/redigo/redis"
+)
+
+// Sentinel provides a way to add high availability (HA) to a Redis Pool using
+// preconfigured addresses of Sentinel servers and the name of the master the
+// Sentinels monitor. It works with Redis >= 2.8.12 (mostly because of the ROLE
+// command introduced in that version; older versions could be supported via
+// the INFO command).
+//
+// Example of the simplest usage to contact master "mymaster":
+//
+// func newSentinelPool() *redis.Pool {
+// sntnl := &sentinel.Sentinel{
+// Addrs: []string{":26379", ":26380", ":26381"},
+// MasterName: "mymaster",
+// Dial: func(addr string) (redis.Conn, error) {
+// timeout := 500 * time.Millisecond
+// c, err := redis.DialTimeout("tcp", addr, timeout, timeout, timeout)
+// if err != nil {
+// return nil, err
+// }
+// return c, nil
+// },
+// }
+// return &redis.Pool{
+// MaxIdle: 3,
+// MaxActive: 64,
+// Wait: true,
+// IdleTimeout: 240 * time.Second,
+// Dial: func() (redis.Conn, error) {
+// masterAddr, err := sntnl.MasterAddr()
+// if err != nil {
+// return nil, err
+// }
+// c, err := redis.Dial("tcp", masterAddr)
+// if err != nil {
+// return nil, err
+// }
+// return c, nil
+// },
+// TestOnBorrow: func(c redis.Conn, t time.Time) error {
+// if !sentinel.TestRole(c, "master") {
+// return errors.New("Role check failed")
+// } else {
+// return nil
+// }
+// },
+// }
+// }
+type Sentinel struct {
+ // Addrs is a slice with known Sentinel addresses.
+ Addrs []string
+
+ // MasterName is the name of the Redis master that the Sentinel servers monitor.
+ MasterName string
+
+ // Dial is a user supplied function to connect to Sentinel on given address. This
+ // address will be chosen from Addrs slice.
+ // Note that as per the redis-sentinel client guidelines, a timeout is mandatory
+ // while connecting to Sentinels, and should not be set to 0.
+ Dial func(addr string) (redis.Conn, error)
+
+ // Pool is a user supplied function returning custom connection pool to Sentinel.
+ // This can be useful to tune options if you are not satisfied with what default
+ // Sentinel pool offers. See defaultPool() method for default pool implementation.
+ // In most cases you only need to provide Dial function and let this be nil.
+ Pool func(addr string) *redis.Pool
+
+ mu sync.RWMutex
+ pools map[string]*redis.Pool
+ addr string
+}
+
+// NoSentinelsAvailable is returned when all sentinels in the list are exhausted
+// (or none configured), and contains the last error returned by Dial (which
+// may be nil).
+type NoSentinelsAvailable struct {
+ lastError error
+}
+
+func (ns NoSentinelsAvailable) Error() string {
+ if ns.lastError != nil {
+ return fmt.Sprintf("redigo: no sentinels available; last error: %s", ns.lastError.Error())
+ }
+ return "redigo: no sentinels available"
+}
+
+// putToTop puts a Sentinel address at the top of the address list - this means
+// that all subsequent requests will try the Sentinel at this address first.
+//
+// From Sentinel guidelines:
+//
+// The first Sentinel replying to the client request should be put at the
+// start of the list, so that at the next reconnection, we'll try first
+// the Sentinel that was reachable in the previous connection attempt,
+// minimizing latency.
+//
+// Lock must be held by caller.
+func (s *Sentinel) putToTop(addr string) {
+ addrs := s.Addrs
+ if addrs[0] == addr {
+ // Already on top.
+ return
+ }
+ newAddrs := []string{addr}
+ for _, a := range addrs {
+ if a == addr {
+ continue
+ }
+ newAddrs = append(newAddrs, a)
+ }
+ s.Addrs = newAddrs
+}
+
+// putToBottom puts a Sentinel address at the bottom of the address list.
+// We call this method internally when we see that a Sentinel failed to answer
+// an application request, so next time we start with another one.
+//
+// Lock must be held by caller.
+func (s *Sentinel) putToBottom(addr string) {
+ addrs := s.Addrs
+ if addrs[len(addrs)-1] == addr {
+ // Already on bottom.
+ return
+ }
+ newAddrs := []string{}
+ for _, a := range addrs {
+ if a == addr {
+ continue
+ }
+ newAddrs = append(newAddrs, a)
+ }
+ newAddrs = append(newAddrs, addr)
+ s.Addrs = newAddrs
+}
+
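The two reordering helpers above never touch pools; they only rotate the address list. The same logic can be sketched standalone with concrete inputs (hypothetical function names, not part of this library):

```go
package main

import "fmt"

// moveToTop returns addrs with addr moved to the front, mirroring putToTop.
func moveToTop(addrs []string, addr string) []string {
	out := []string{addr}
	for _, a := range addrs {
		if a != addr {
			out = append(out, a)
		}
	}
	return out
}

// moveToBottom returns addrs with addr moved to the end, mirroring putToBottom.
func moveToBottom(addrs []string, addr string) []string {
	out := make([]string, 0, len(addrs))
	for _, a := range addrs {
		if a != addr {
			out = append(out, a)
		}
	}
	return append(out, addr)
}

func main() {
	addrs := []string{":26379", ":26380", ":26381"}
	fmt.Println(moveToTop(addrs, ":26380"))    // [:26380 :26379 :26381]
	fmt.Println(moveToBottom(addrs, ":26379")) // [:26380 :26381 :26379]
}
```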
+// defaultPool returns a connection pool to one Sentinel. This allows
+// us to issue concurrent requests to Sentinel using the connection's Do method.
+func (s *Sentinel) defaultPool(addr string) *redis.Pool {
+ return &redis.Pool{
+ MaxIdle: 3,
+ MaxActive: 10,
+ Wait: true,
+ IdleTimeout: 240 * time.Second,
+ Dial: func() (redis.Conn, error) {
+ return s.Dial(addr)
+ },
+ TestOnBorrow: func(c redis.Conn, t time.Time) error {
+ _, err := c.Do("PING")
+ return err
+ },
+ }
+}
+
+func (s *Sentinel) get(addr string) redis.Conn {
+ pool := s.poolForAddr(addr)
+ return pool.Get()
+}
+
+func (s *Sentinel) poolForAddr(addr string) *redis.Pool {
+ s.mu.Lock()
+ if s.pools == nil {
+ s.pools = make(map[string]*redis.Pool)
+ }
+ pool, ok := s.pools[addr]
+ if ok {
+ s.mu.Unlock()
+ return pool
+ }
+ s.mu.Unlock()
+ newPool := s.newPool(addr)
+ s.mu.Lock()
+ p, ok := s.pools[addr]
+ if ok {
+ s.mu.Unlock()
+ return p
+ }
+ s.pools[addr] = newPool
+ s.mu.Unlock()
+ return newPool
+}
+
+func (s *Sentinel) newPool(addr string) *redis.Pool {
+ if s.Pool != nil {
+ return s.Pool(addr)
+ }
+ return s.defaultPool(addr)
+}
+
+// close closes the connection pools to Sentinels.
+// Lock must be held by caller.
+func (s *Sentinel) close() {
+ if s.pools != nil {
+ for _, pool := range s.pools {
+ pool.Close()
+ }
+ }
+ s.pools = nil
+}
+
+func (s *Sentinel) doUntilSuccess(f func(redis.Conn) (interface{}, error)) (interface{}, error) {
+ s.mu.RLock()
+ addrs := s.Addrs
+ s.mu.RUnlock()
+
+ var lastErr error
+
+ for _, addr := range addrs {
+ conn := s.get(addr)
+ reply, err := f(conn)
+ conn.Close()
+ if err != nil {
+ lastErr = err
+ s.mu.Lock()
+ pool, ok := s.pools[addr]
+ if ok {
+ pool.Close()
+ delete(s.pools, addr)
+ }
+ s.putToBottom(addr)
+ s.mu.Unlock()
+ continue
+ }
+ s.putToTop(addr)
+ return reply, nil
+ }
+
+ return nil, NoSentinelsAvailable{lastError: lastErr}
+}
+
+// MasterAddr returns an address of current Redis master instance.
+func (s *Sentinel) MasterAddr() (string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForMaster(c, s.MasterName)
+ })
+ if err != nil {
+ return "", err
+ }
+ return res.(string), nil
+}
+
+// SlaveAddrs returns a slice with known slave addresses of current master instance.
+func (s *Sentinel) SlaveAddrs() ([]string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSlaveAddrs(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]string), nil
+}
+
+// Slave represents a Redis slave instance which is known by Sentinel.
+type Slave struct {
+ ip string
+ port string
+ flags string
+}
+
+// Addr returns an address of slave.
+func (s *Slave) Addr() string {
+ return net.JoinHostPort(s.ip, s.port)
+}
+
+// Available reports whether the slave is currently in a working state, based on the slave's flags.
+func (s *Slave) Available() bool {
+ return !strings.Contains(s.flags, "disconnected") && !strings.Contains(s.flags, "s_down")
+}
+
+// Slaves returns a slice with known slaves of master instance.
+func (s *Sentinel) Slaves() ([]*Slave, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSlaves(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]*Slave), nil
+}
+
+// SentinelAddrs returns a slice of known Sentinel addresses that the Sentinel servers are aware of.
+func (s *Sentinel) SentinelAddrs() ([]string, error) {
+ res, err := s.doUntilSuccess(func(c redis.Conn) (interface{}, error) {
+ return queryForSentinels(c, s.MasterName)
+ })
+ if err != nil {
+ return nil, err
+ }
+ return res.([]string), nil
+}
+
+// Discover updates the list of known Sentinel addresses. From docs:
+//
+// A client may update its internal list of Sentinel nodes following this procedure:
+// 1) Obtain a list of other Sentinels for this master using the command SENTINEL sentinels <master-name>.
+// 2) Add every ip:port pair not already existing in our list at the end of the list.
+func (s *Sentinel) Discover() error {
+ addrs, err := s.SentinelAddrs()
+ if err != nil {
+ return err
+ }
+ s.mu.Lock()
+ for _, addr := range addrs {
+ if !stringInSlice(addr, s.Addrs) {
+ s.Addrs = append(s.Addrs, addr)
+ }
+ }
+ s.mu.Unlock()
+ return nil
+}
+
+// Close closes the connection pools to Sentinels.
+func (s *Sentinel) Close() error {
+ s.mu.Lock()
+ s.close()
+ s.mu.Unlock()
+ return nil
+}
+
+// TestRole wraps GetRole in a test to verify if the role matches an expected
+// role string. If there was any error in querying the supplied connection,
+// the function returns false. Works with Redis >= 2.8.12.
+// It's not goroutine safe, but if you call this method on pooled connections
+// then you are OK.
+func TestRole(c redis.Conn, expectedRole string) bool {
+ role, err := getRole(c)
+ if err != nil || role != expectedRole {
+ return false
+ }
+ return true
+}
+
+// getRole is a convenience function supplied to query an instance (master or
+// slave) for its role. It attempts to use the ROLE command introduced in
+// redis 2.8.12.
+func getRole(c redis.Conn) (string, error) {
+ res, err := c.Do("ROLE")
+ if err != nil {
+ return "", err
+ }
+ rres, ok := res.([]interface{})
+ if ok {
+ return redis.String(rres[0], nil)
+ }
+ return "", errors.New("redigo: can not transform ROLE reply to string")
+}
+
+func queryForMaster(conn redis.Conn, masterName string) (string, error) {
+ res, err := redis.Strings(conn.Do("SENTINEL", "get-master-addr-by-name", masterName))
+ if err != nil {
+ return "", err
+ }
+ if len(res) < 2 {
+ return "", errors.New("redigo: malformed get-master-addr-by-name reply")
+ }
+ masterAddr := net.JoinHostPort(res[0], res[1])
+ return masterAddr, nil
+}
+
+func queryForSlaveAddrs(conn redis.Conn, masterName string) ([]string, error) {
+ slaves, err := queryForSlaves(conn, masterName)
+ if err != nil {
+ return nil, err
+ }
+ slaveAddrs := make([]string, 0)
+ for _, slave := range slaves {
+ slaveAddrs = append(slaveAddrs, slave.Addr())
+ }
+ return slaveAddrs, nil
+}
+
+func queryForSlaves(conn redis.Conn, masterName string) ([]*Slave, error) {
+ res, err := redis.Values(conn.Do("SENTINEL", "slaves", masterName))
+ if err != nil {
+ return nil, err
+ }
+ slaves := make([]*Slave, 0)
+ for _, a := range res {
+ sm, err := redis.StringMap(a, err)
+ if err != nil {
+ return slaves, err
+ }
+ slave := &Slave{
+ ip: sm["ip"],
+ port: sm["port"],
+ flags: sm["flags"],
+ }
+ slaves = append(slaves, slave)
+ }
+ return slaves, nil
+}
+
+func queryForSentinels(conn redis.Conn, masterName string) ([]string, error) {
+ res, err := redis.Values(conn.Do("SENTINEL", "sentinels", masterName))
+ if err != nil {
+ return nil, err
+ }
+ sentinels := make([]string, 0)
+ for _, a := range res {
+ sm, err := redis.StringMap(a, err)
+ if err != nil {
+ return sentinels, err
+ }
+ sentinels = append(sentinels, fmt.Sprintf("%s:%s", sm["ip"], sm["port"]))
+ }
+ return sentinels, nil
+}
+
+func stringInSlice(str string, slice []string) bool {
+ for _, s := range slice {
+ if s == str {
+ return true
+ }
+ }
+ return false
+}

View File

@@ -1,4 +1,5 @@
FROM golang:1.23.2
ARG golang_image
FROM ${golang_image}
ADD . /go/src/github.com/goharbor/harbor-scanner-trivy/
WORKDIR /go/src/github.com/goharbor/harbor-scanner-trivy/

View File

@@ -8,6 +8,8 @@ if [ -z $1 ]; then
fi
VERSION="$1"
GOBUILDIMAGE="$2"
DOCKERNETWORK="$3"
set -e
@@ -19,9 +21,9 @@ TEMP=$(mktemp -d ${TMPDIR-/tmp}/trivy-adapter.XXXXXX)
git clone https://github.com/goharbor/harbor-scanner-trivy.git $TEMP
cd $TEMP; git checkout $VERSION; cd -
echo "Building Trivy adapter binary based on golang:1.23.2..."
echo "Building Trivy adapter binary ..."
cp Dockerfile.binary $TEMP
docker build -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
echo "Copying Trivy adapter binary from the container to the local directory..."
ID=$(docker create trivy-adapter-golang)

View File

@@ -1,76 +1,56 @@
linters-settings:
gofmt:
# Simplify code: gofmt with `-s` option.
# Default: true
simplify: false
misspell:
locale: US,UK
goimports:
local-prefixes: github.com/goharbor/harbor
stylecheck:
checks: [
"ST1019", # Importing the same package multiple times.
]
goheader:
template-path: copyright.tmpl
version: "2"
linters:
disable-all: true
default: none
enable:
- gofmt
- goheader
- misspell
- typecheck
# - dogsled
# - dupl
# - depguard
# - funlen
# - goconst
# - gocritic
# - gocyclo
# - goimports
# - goprintffuncname
- ineffassign
# - nakedret
# - nolintlint
- revive
- whitespace
- bodyclose
- errcheck
# - gosec
- gosimple
- goimports
- goheader
- govet
# - noctx
# - rowserrcheck
- ineffassign
- misspell
- revive
- staticcheck
- stylecheck
# - unconvert
# - unparam
# - unused // disabled due to too many false positive check and limited support golang 1.19 https://github.com/dominikh/go-tools/issues/1282
run:
skip-files:
- ".*_test.go"
- ".*test.go"
skip-dirs:
- "testing"
timeout: 20m
issue:
max-same-issues: 0
max-per-linter: 0
issues:
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- goimports
- path: src/testing/*.go
linters:
- goimports
- path: src/jobservice/mgt/mock_manager.go
linters:
- goimports
- whitespace
settings:
goheader:
template-path: copyright.tmpl
misspell:
locale: US,UK
staticcheck:
checks:
- ST1019
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go
formatters:
enable:
- gofmt
- goimports
settings:
gofmt:
simplify: false
goimports:
local-prefixes:
- github.com/goharbor/harbor
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go

View File

@@ -462,6 +462,16 @@ packages:
DAO:
config:
dir: testing/pkg/audit/dao
github.com/goharbor/harbor/src/pkg/auditext:
interfaces:
Manager:
config:
dir: testing/pkg/auditext
github.com/goharbor/harbor/src/pkg/auditext/dao:
interfaces:
DAO:
config:
dir: testing/pkg/auditext/dao
github.com/goharbor/harbor/src/pkg/systemartifact:
interfaces:
Manager:

View File

@@ -78,7 +78,7 @@ func (b *BaseAPI) RenderError(code int, text string) {
}
// DecodeJSONReq decodes a json request
func (b *BaseAPI) DecodeJSONReq(v interface{}) error {
func (b *BaseAPI) DecodeJSONReq(v any) error {
err := json.Unmarshal(b.Ctx.Input.CopyBody(1<<35), v)
if err != nil {
log.Errorf("Error while decoding the json request, error: %v, %v",
@@ -89,7 +89,7 @@ func (b *BaseAPI) DecodeJSONReq(v interface{}) error {
}
// Validate validates v if it implements interface validation.ValidFormer
func (b *BaseAPI) Validate(v interface{}) (bool, error) {
func (b *BaseAPI) Validate(v any) (bool, error) {
validator := validation.Validation{}
isValid, err := validator.Valid(v)
if err != nil {
@@ -108,7 +108,7 @@ func (b *BaseAPI) Validate(v interface{}) (bool, error) {
}
// DecodeJSONReqAndValidate does both decoding and validation
func (b *BaseAPI) DecodeJSONReqAndValidate(v interface{}) (bool, error) {
func (b *BaseAPI) DecodeJSONReqAndValidate(v any) (bool, error) {
if err := b.DecodeJSONReq(v); err != nil {
return false, err
}

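The `interface{}` → `any` rewrites in these hunks are purely mechanical: `any` has been a predeclared alias for `interface{}` since Go 1.18, so the two types are identical and no behavior changes. A minimal illustration:

```go
package main

import "fmt"

// describe accepts any (alias of interface{}); a type switch works identically
// with either spelling of the parameter type.
func describe(v any) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int %d", x)
	case string:
		return fmt.Sprintf("string %q", x)
	default:
		return "other"
	}
}

func main() {
	var i interface{} = 42
	var a any = i // assignable in both directions: same type
	fmt.Println(describe(a)) // int 42
}
```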
View File

@@ -119,6 +119,7 @@ const (
OIDCExtraRedirectParms = "oidc_extra_redirect_parms"
OIDCScope = "oidc_scope"
OIDCUserClaim = "oidc_user_claim"
OIDCLogout = "oidc_logout"
CfgDriverDB = "db"
NewHarborAdminName = "admin@harbor.local"
@@ -151,6 +152,7 @@ const (
OIDCCallbackPath = "/c/oidc/callback"
OIDCLoginPath = "/c/oidc/login"
OIDCLoginoutPath = "/c/oidc/logout"
AuthProxyRedirectPath = "/c/authproxy/redirect"
@@ -208,7 +210,7 @@ const (
// 24h.
DefaultCacheExpireHours = 24
PurgeAuditIncludeOperations = "include_operations"
PurgeAuditIncludeEventTypes = "include_event_types"
PurgeAuditDryRun = "dry_run"
PurgeAuditRetentionHour = "audit_retention_hour"
// AuditLogForwardEndpoint indicate to forward the audit log to an endpoint
@@ -220,6 +222,9 @@ const (
// ScannerSkipUpdatePullTime
ScannerSkipUpdatePullTime = "scanner_skip_update_pulltime"
// AuditLogEventsDisabled ...
AuditLogEventsDisabled = "disabled_audit_log_event_types"
// SessionTimeout defines the web session timeout
SessionTimeout = "session_timeout"
@@ -247,4 +252,7 @@ const (
// Global Leeway used for token validation
JwtLeeway = 60 * time.Second
// The replication adapter whitelist
ReplicationAdapterWhiteList = "REPLICATION_ADAPTER_WHITELIST"
)

View File

@@ -144,6 +144,6 @@ func (l *mLogger) Verbose() bool {
}
// Printf ...
func (l *mLogger) Printf(format string, v ...interface{}) {
func (l *mLogger) Printf(format string, v ...any) {
l.logger.Infof(format, v...)
}

View File

@@ -29,7 +29,7 @@ import (
var testCtx context.Context
func execUpdate(o orm.TxOrmer, sql string, params ...interface{}) error {
func execUpdate(o orm.TxOrmer, sql string, params ...any) error {
p, err := o.Raw(sql).Prepare()
if err != nil {
return err

View File

@@ -27,7 +27,7 @@ func TestMaxOpenConns(t *testing.T) {
queryNum := 200
results := make([]bool, queryNum)
for i := 0; i < queryNum; i++ {
for i := range queryNum {
wg.Add(1)
go func(i int) {
defer wg.Done()

View File

@@ -142,7 +142,7 @@ func ArrayEqual(arrayA, arrayB []int) bool {
return false
}
size := len(arrayA)
for i := 0; i < size; i++ {
for i := range size {
if arrayA[i] != arrayB[i] {
return false
}

View File

@@ -69,7 +69,7 @@ func (c *Client) Do(req *http.Request) (*http.Response, error) {
}
// Get ...
func (c *Client) Get(url string, v ...interface{}) error {
func (c *Client) Get(url string, v ...any) error {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return err
@@ -98,7 +98,7 @@ func (c *Client) Head(url string) error {
}
// Post ...
func (c *Client) Post(url string, v ...interface{}) error {
func (c *Client) Post(url string, v ...any) error {
var reader io.Reader
if len(v) > 0 {
if r, ok := v[0].(io.Reader); ok {
@@ -123,7 +123,7 @@ func (c *Client) Post(url string, v ...interface{}) error {
}
// Put ...
func (c *Client) Put(url string, v ...interface{}) error {
func (c *Client) Put(url string, v ...any) error {
var reader io.Reader
if len(v) > 0 {
data, err := json.Marshal(v[0])
@@ -176,7 +176,7 @@ func (c *Client) do(req *http.Request) ([]byte, error) {
// GetAndIteratePagination iterates the pagination header and returns all resources
// The parameter "v" must be a pointer to a slice
func (c *Client) GetAndIteratePagination(endpoint string, v interface{}) error {
func (c *Client) GetAndIteratePagination(endpoint string, v any) error {
url, err := url.Parse(endpoint)
if err != nil {
return err

View File

@@ -15,7 +15,7 @@
package models
// Parameters for job execution.
type Parameters map[string]interface{}
type Parameters map[string]any
// JobRequest is the request of launching a job.
type JobRequest struct {
@@ -96,5 +96,5 @@ type JobStatusChange struct {
// Message is designed for sub/pub messages
type Message struct {
Event string
Data interface{} // generic format
Data any // generic format
}

View File

@@ -23,7 +23,7 @@ type OIDCUser struct {
ID int64 `orm:"pk;auto;column(id)" json:"id"`
UserID int `orm:"column(user_id)" json:"user_id"`
// encrypted secret
Secret string `orm:"column(secret)" json:"-"`
Secret string `orm:"column(secret)" filter:"false" json:"-"`
// secret in plain text
PlainSecret string `orm:"-" json:"secret"`
SubIss string `orm:"column(subiss)" json:"subiss"`

View File

@@ -133,9 +133,6 @@ func (n *NolimitProvider) GetPermissions(s scope) []*types.Policy {
&types.Policy{Resource: ResourceLdapUser, Action: ActionCreate},
&types.Policy{Resource: ResourceLdapUser, Action: ActionList},
&types.Policy{Resource: ResourceExportCVE, Action: ActionCreate},
&types.Policy{Resource: ResourceExportCVE, Action: ActionRead},
&types.Policy{Resource: ResourceQuota, Action: ActionUpdate},
&types.Policy{Resource: ResourceUserGroup, Action: ActionCreate},
@@ -151,6 +148,9 @@ func (n *NolimitProvider) GetPermissions(s scope) []*types.Policy {
&types.Policy{Resource: ResourceRobot, Action: ActionList},
&types.Policy{Resource: ResourceRobot, Action: ActionDelete},
&types.Policy{Resource: ResourceExportCVE, Action: ActionCreate},
&types.Policy{Resource: ResourceExportCVE, Action: ActionRead},
&types.Policy{Resource: ResourceMember, Action: ActionCreate},
&types.Policy{Resource: ResourceMember, Action: ActionRead},
&types.Policy{Resource: ResourceMember, Action: ActionUpdate},

View File

@@ -119,7 +119,7 @@ func BenchmarkProjectEvaluator(b *testing.B) {
resource := NewNamespace(public.ProjectID).Resource(rbac.ResourceRepository)
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
evaluator.HasPermission(context.TODO(), resource, rbac.ActionPull)
}
}

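`b.Loop()` is the Go 1.24 replacement for the `for i := 0; i < b.N; i++` benchmark idiom; it manages the iteration count and keeps the benchmarked call from being optimized away. A minimal sketch, runnable outside `go test` via `testing.Benchmark`:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// testing.Benchmark runs the function with increasing b.N until the
	// timing is stable; b.Loop (Go 1.24) drives the iterations.
	r := testing.Benchmark(func(b *testing.B) {
		for b.Loop() {
			_ = len("harbor")
		}
	})
	fmt.Println(r.N > 0) // true
}
```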
View File

@@ -43,7 +43,7 @@ func (ns *projectNamespace) Resource(subresources ...types.Resource) types.Resou
return types.Resource(fmt.Sprintf("/project/%d", ns.projectID)).Subresource(subresources...)
}
func (ns *projectNamespace) Identity() interface{} {
func (ns *projectNamespace) Identity() any {
return ns.projectID
}

View File

@@ -127,8 +127,6 @@ var (
{Resource: rbac.ResourceMetadata, Action: rbac.ActionRead},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceQuota, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionCreate},
@@ -164,6 +162,7 @@ var (
{Resource: rbac.ResourceRobot, Action: rbac.ActionRead},
{Resource: rbac.ResourceRobot, Action: rbac.ActionList},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionRead},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionList},
{Resource: rbac.ResourceScan, Action: rbac.ActionCreate},
@@ -199,8 +198,6 @@ var (
{Resource: rbac.ResourceMember, Action: rbac.ActionRead},
{Resource: rbac.ResourceMember, Action: rbac.ActionList},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceLabel, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionList},
@@ -254,8 +251,6 @@ var (
{Resource: rbac.ResourceMember, Action: rbac.ActionRead},
{Resource: rbac.ResourceMember, Action: rbac.ActionList},
{Resource: rbac.ResourceLog, Action: rbac.ActionList},
{Resource: rbac.ResourceLabel, Action: rbac.ActionRead},
{Resource: rbac.ResourceLabel, Action: rbac.ActionList},

View File

@@ -38,7 +38,7 @@ func (ns *systemNamespace) Resource(subresources ...types.Resource) types.Resour
return types.Resource("/system/").Subresource(subresources...)
}
func (ns *systemNamespace) Identity() interface{} {
func (ns *systemNamespace) Identity() any {
return nil
}

View File

@@ -63,7 +63,7 @@ func (t *tokenSecurityCtx) GetMyProjects() ([]*models.Project, error) {
return []*models.Project{}, nil
}
func (t *tokenSecurityCtx) GetProjectRoles(_ interface{}) []int {
func (t *tokenSecurityCtx) GetProjectRoles(_ any) []int {
return []int{}
}

View File

@@ -18,7 +18,7 @@ import (
"github.com/goharbor/harbor/src/common"
)
var defaultConfig = map[string]interface{}{
var defaultConfig = map[string]any{
common.ExtEndpoint: "https://host01.com",
common.AUTHMode: common.DBAuth,
common.DatabaseType: "postgresql",
@@ -66,6 +66,6 @@ var defaultConfig = map[string]interface{}{
}
// GetDefaultConfigMap returns the default config map for easier modification.
func GetDefaultConfigMap() map[string]interface{} {
func GetDefaultConfigMap() map[string]any {
return defaultConfig
}

View File

@@ -30,7 +30,7 @@ type GCResult struct {
}
// NewRegistryCtl returns a mock registry server
func NewRegistryCtl(_ map[string]interface{}) (*httptest.Server, error) {
func NewRegistryCtl(_ map[string]any) (*httptest.Server, error) {
m := []*RequestHandlerMapping{}
gcr := GCResult{true, "hello-world", time.Now(), time.Now()}

View File

@@ -94,9 +94,9 @@ func NewServer(mappings ...*RequestHandlerMapping) *httptest.Server {
}
// GetUnitTestConfig ...
func GetUnitTestConfig() map[string]interface{} {
func GetUnitTestConfig() map[string]any {
ipAddress := os.Getenv("IP")
return map[string]interface{}{
return map[string]any{
common.ExtEndpoint: fmt.Sprintf("https://%s", ipAddress),
common.AUTHMode: "db_auth",
common.DatabaseType: "postgresql",
@@ -130,7 +130,7 @@ func GetUnitTestConfig() map[string]interface{} {
}
// TraceCfgMap ...
func TraceCfgMap(cfgs map[string]interface{}) {
func TraceCfgMap(cfgs map[string]any) {
var keys []string
for k := range cfgs {
keys = append(keys, k)

View File

@@ -89,7 +89,7 @@ type SearchUserEntry struct {
ExtID string `json:"externalId"`
UserName string `json:"userName"`
Emails []SearchUserEmailEntry `json:"emails"`
Groups []interface{}
Groups []any
}
// SearchUserRes is the struct to parse the result of search user API of UAA

View File

@@ -75,7 +75,7 @@ func GenerateRandomStringWithLen(length int) string {
if err != nil {
log.Warningf("Error reading random bytes: %v", err)
}
for i := 0; i < length; i++ {
for i := range length {
result[i] = chars[int(result[i])%l]
}
return string(result)
@@ -140,7 +140,7 @@ func ParseTimeStamp(timestamp string) (*time.Time, error) {
}
// ConvertMapToStruct is used to fill the specified struct with map.
func ConvertMapToStruct(object interface{}, values interface{}) error {
func ConvertMapToStruct(object any, values any) error {
if object == nil {
return errors.New("nil struct is not supported")
}
@@ -158,7 +158,7 @@ func ConvertMapToStruct(object interface{}, values interface{}) error {
}
// ParseProjectIDOrName parses value to ID(int64) or name(string)
func ParseProjectIDOrName(value interface{}) (int64, string, error) {
func ParseProjectIDOrName(value any) (int64, string, error) {
if value == nil {
return 0, "", errors.New("harborIDOrName is nil")
}
@@ -177,7 +177,7 @@ func ParseProjectIDOrName(value interface{}) (int64, string, error) {
}
// SafeCastString -- cast an object to string safely
func SafeCastString(value interface{}) string {
func SafeCastString(value any) string {
if result, ok := value.(string); ok {
return result
}
@@ -185,7 +185,7 @@ func SafeCastString(value interface{}) string {
}
// SafeCastInt --
func SafeCastInt(value interface{}) int {
func SafeCastInt(value any) int {
if result, ok := value.(int); ok {
return result
}
@@ -193,7 +193,7 @@ func SafeCastInt(value interface{}) int {
}
// SafeCastBool --
func SafeCastBool(value interface{}) bool {
func SafeCastBool(value any) bool {
if result, ok := value.(bool); ok {
return result
}
@@ -201,7 +201,7 @@ func SafeCastBool(value interface{}) bool {
}
// SafeCastFloat64 --
func SafeCastFloat64(value interface{}) float64 {
func SafeCastFloat64(value any) float64 {
if result, ok := value.(float64); ok {
return result
}
@@ -214,9 +214,9 @@ func TrimLower(str string) string {
}
// GetStrValueOfAnyType return string format of any value, for map, need to convert to json
func GetStrValueOfAnyType(value interface{}) string {
func GetStrValueOfAnyType(value any) string {
var strVal string
if _, ok := value.(map[string]interface{}); ok {
if _, ok := value.(map[string]any); ok {
b, err := json.Marshal(value)
if err != nil {
log.Errorf("can not marshal json object, error %v", err)
@@ -237,18 +237,18 @@ func GetStrValueOfAnyType(value interface{}) string {
}
// IsIllegalLength ...
func IsIllegalLength(s string, min int, max int) bool {
if min == -1 {
return (len(s) > max)
func IsIllegalLength(s string, minVal int, maxVal int) bool {
if minVal == -1 {
return (len(s) > maxVal)
}
if max == -1 {
return (len(s) <= min)
if maxVal == -1 {
return (len(s) <= minVal)
}
return (len(s) < min || len(s) > max)
return (len(s) < minVal || len(s) > maxVal)
}
// ParseJSONInt ...
func ParseJSONInt(value interface{}) (int, bool) {
func ParseJSONInt(value any) (int, bool) {
switch v := value.(type) {
case float64:
return int(v), true

View File

@@ -216,7 +216,7 @@ type testingStruct struct {
}
func TestConvertMapToStruct(t *testing.T) {
dataMap := make(map[string]interface{})
dataMap := make(map[string]any)
dataMap["Name"] = "testing"
dataMap["Count"] = 100
@@ -232,7 +232,7 @@ func TestConvertMapToStruct(t *testing.T) {
func TestSafeCastString(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@@ -254,7 +254,7 @@ func TestSafeCastString(t *testing.T) {
func TestSafeCastBool(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@@ -276,7 +276,7 @@ func TestSafeCastBool(t *testing.T) {
func TestSafeCastInt(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@@ -298,7 +298,7 @@ func TestSafeCastInt(t *testing.T) {
func TestSafeCastFloat64(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@@ -342,7 +342,7 @@ func TestTrimLower(t *testing.T) {
func TestGetStrValueOfAnyType(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@@ -357,7 +357,7 @@ func TestGetStrValueOfAnyType(t *testing.T) {
{"string", args{"hello world"}, "hello world"},
{"bool", args{true}, "true"},
{"bool", args{false}, "false"},
{"map", args{map[string]interface{}{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
{"map", args{map[string]any{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -83,7 +83,7 @@ func (a *abstractor) AbstractMetadata(ctx context.Context, artifact *artifact.Ar
default:
return fmt.Errorf("unsupported manifest media type: %s", artifact.ManifestMediaType)
}
return processor.Get(artifact.MediaType).AbstractMetadata(ctx, artifact, content)
return processor.Get(artifact.ResolveArtifactType()).AbstractMetadata(ctx, artifact, content)
}
// the artifact is enveloped by docker manifest v1

View File

@@ -66,8 +66,7 @@ func parseV1alpha1SkipList(artifact *artifact.Artifact, manifest *v1.Manifest) {
skipListAnnotationKey := fmt.Sprintf("%s.%s.%s", AnnotationPrefix, V1alpha1, SkipList)
skipList, ok := manifest.Config.Annotations[skipListAnnotationKey]
if ok {
skipKeyList := strings.Split(skipList, ",")
for _, skipKey := range skipKeyList {
for skipKey := range strings.SplitSeq(skipList, ",") {
delete(metadata, skipKey)
}
artifact.ExtraAttrs = metadata

View File

@@ -231,7 +231,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err := manifest.Payload()
p.Require().Nil(err)
metadata := map[string]interface{}{}
metadata := map[string]any{}
configBlob := io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@@ -244,7 +244,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutSkipList
@ -255,7 +255,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]interface{}{}
metadata = map[string]any{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -268,7 +268,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 13)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutIcon
@ -279,7 +279,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]interface{}{}
metadata = map[string]any{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -290,7 +290,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("", art.Icon)
}

View File

@ -25,13 +25,16 @@ import (
"github.com/opencontainers/go-digest"
"github.com/goharbor/harbor/src/common/utils"
"github.com/goharbor/harbor/src/controller/artifact/processor"
"github.com/goharbor/harbor/src/controller/artifact/processor/chart"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnab"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnai"
"github.com/goharbor/harbor/src/controller/artifact/processor/image"
"github.com/goharbor/harbor/src/controller/artifact/processor/sbom"
"github.com/goharbor/harbor/src/controller/artifact/processor/wasm"
"github.com/goharbor/harbor/src/controller/event/metadata"
"github.com/goharbor/harbor/src/controller/project"
"github.com/goharbor/harbor/src/controller/tag"
"github.com/goharbor/harbor/src/lib"
"github.com/goharbor/harbor/src/lib/errors"
@ -44,7 +47,7 @@ import (
accessorymodel "github.com/goharbor/harbor/src/pkg/accessory/model"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/artifactrash"
"github.com/goharbor/harbor/src/pkg/artifactrash/model"
trashmodel "github.com/goharbor/harbor/src/pkg/artifactrash/model"
"github.com/goharbor/harbor/src/pkg/blob"
"github.com/goharbor/harbor/src/pkg/immutable/match"
"github.com/goharbor/harbor/src/pkg/immutable/match/rule"
@ -78,6 +81,7 @@ var (
cnab.ArtifactTypeCNAB: icon.DigestOfIconCNAB,
wasm.ArtifactTypeWASM: icon.DigestOfIconWASM,
sbom.ArtifactTypeSBOM: icon.DigestOfIconAccSBOM,
cnai.ArtifactTypeCNAI: icon.DigestOfIconCNAI,
}
)
@ -135,6 +139,7 @@ func NewController() Controller {
regCli: registry.Cli,
abstractor: NewAbstractor(),
accessoryMgr: accessory.Mgr,
proCtl: project.Ctl,
}
}
@ -149,6 +154,7 @@ type controller struct {
regCli registry.Client
abstractor Abstractor
accessoryMgr accessory.Manager
proCtl project.Controller
}
type ArtOption struct {
@ -173,18 +179,28 @@ func (c *controller) Ensure(ctx context.Context, repository, digest string, opti
}
}
}
if created {
// fire event for create
e := &metadata.PushArtifactEventMetadata{
Ctx: ctx,
Artifact: artifact,
}
if option != nil && len(option.Tags) > 0 {
e.Tag = option.Tags[0]
}
notification.AddEvent(ctx, e)
projectName, _ := utils.ParseRepository(repository)
p, err := c.proCtl.GetByName(ctx, projectName)
if err != nil {
return false, 0, err
}
// Skip firing the event only when the current project is a proxy-cache project and the artifact already exists.
if p.IsProxy() && !created {
return created, artifact.ID, nil
}
// fire event for create
e := &metadata.PushArtifactEventMetadata{
Ctx: ctx,
Artifact: artifact,
}
if option != nil && len(option.Tags) > 0 {
e.Tag = option.Tags[0]
}
notification.AddEvent(ctx, e)
return created, artifact.ID, nil
}
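The new branch in `Ensure` suppresses the push event for pre-existing artifacts in proxy-cache projects and fires it in every other case. The decision can be sketched as a pure function (a hypothetical helper; the real code inlines the check):

```go
package main

import "fmt"

// shouldFireEvent mirrors the control flow in Ensure: the push event is
// skipped only when the project is a proxy cache AND the artifact
// already existed (created == false).
func shouldFireEvent(isProxy, created bool) bool {
	return !(isProxy && !created)
}

func main() {
	fmt.Println(shouldFireEvent(true, false))  // proxy cache, artifact existed: false (skip)
	fmt.Println(shouldFireEvent(true, true))   // proxy cache, newly created: true (fire)
	fmt.Println(shouldFireEvent(false, false)) // regular project: true (always fire)
}
```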
@ -219,7 +235,7 @@ func (c *controller) ensureArtifact(ctx context.Context, repository, digest stri
}
// populate the artifact type
artifact.Type = processor.Get(artifact.MediaType).GetArtifactType(ctx, artifact)
artifact.Type = processor.Get(artifact.ResolveArtifactType()).GetArtifactType(ctx, artifact)
// create it
// use orm.WithTransaction here to avoid the issue:
@ -297,7 +313,7 @@ func (c *controller) getByTag(ctx context.Context, repository, tag string, optio
return nil, err
}
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"RepositoryID": repo.RepositoryID,
"Name": tag,
},
@ -326,7 +342,7 @@ func (c *controller) Delete(ctx context.Context, id int64) error {
// the error handling logic for the root parent artifact and others is different
// "isAccessory" is used to specify whether the artifact is an accessory.
func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAccessory bool) error {
art, err := c.Get(ctx, id, &Option{WithTag: true, WithAccessory: true})
art, err := c.Get(ctx, id, &Option{WithTag: true, WithAccessory: true, WithLabel: true})
if err != nil {
// return nil if the nonexistent artifact isn't the root parent
if !isRoot && errors.IsErr(err, errors.NotFoundCode) {
@ -340,7 +356,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
return nil
}
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ChildID": id,
},
})
@ -369,7 +385,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
if acc.IsHard() {
// if this acc artifact has parent(is child), set isRoot to false
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ChildID": acc.GetData().ArtifactID,
},
})
@ -437,7 +453,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
// use orm.WithTransaction here to avoid the issue:
// https://www.postgresql.org/message-id/002e01c04da9%24a8f95c20%2425efe6c1%40lasting.ro
if err = orm.WithTransaction(func(ctx context.Context) error {
_, err = c.artrashMgr.Create(ctx, &model.ArtifactTrash{
_, err = c.artrashMgr.Create(ctx, &trashmodel.ArtifactTrash{
MediaType: art.MediaType,
ManifestMediaType: art.ManifestMediaType,
RepositoryName: art.RepositoryName,
@ -450,14 +466,20 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
// only fire event for the root parent artifact
if isRoot {
var tags []string
var tags, labels []string
for _, tag := range art.Tags {
tags = append(tags, tag.Name)
}
for _, label := range art.Labels {
labels = append(labels, label.Name)
}
notification.AddEvent(ctx, &metadata.DeleteArtifactEventMetadata{
Ctx: ctx,
Artifact: &art.Artifact,
Tags: tags,
Labels: labels,
})
}
@ -593,7 +615,7 @@ func (c *controller) GetAddition(ctx context.Context, artifactID int64, addition
if err != nil {
return nil, err
}
return processor.Get(artifact.MediaType).AbstractAddition(ctx, artifact, addition)
return processor.Get(artifact.ResolveArtifactType()).AbstractAddition(ctx, artifact, addition)
}
func (c *controller) AddLabel(ctx context.Context, artifactID int64, labelID int64) (err error) {
@ -730,7 +752,7 @@ func (c *controller) populateIcon(art *Artifact) {
func (c *controller) populateTags(ctx context.Context, art *Artifact, option *tag.Option) {
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"artifact_id": art.ID,
},
}, option)
@ -751,7 +773,7 @@ func (c *controller) populateLabels(ctx context.Context, art *Artifact) {
}
func (c *controller) populateAdditionLinks(ctx context.Context, artifact *Artifact) {
types := processor.Get(artifact.MediaType).ListAdditionTypes(ctx, &artifact.Artifact)
types := processor.Get(artifact.ResolveArtifactType()).ListAdditionTypes(ctx, &artifact.Artifact)
if len(types) > 0 {
version := lib.GetAPIVersion(ctx)
for _, t := range types {

View File

@ -37,8 +37,10 @@ import (
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/blob/models"
"github.com/goharbor/harbor/src/pkg/label/model"
projectModel "github.com/goharbor/harbor/src/pkg/project/models"
repomodel "github.com/goharbor/harbor/src/pkg/repository/model"
model_tag "github.com/goharbor/harbor/src/pkg/tag/model/tag"
projecttesting "github.com/goharbor/harbor/src/testing/controller/project"
tagtesting "github.com/goharbor/harbor/src/testing/controller/tag"
ormtesting "github.com/goharbor/harbor/src/testing/lib/orm"
accessorytesting "github.com/goharbor/harbor/src/testing/pkg/accessory"
@ -75,6 +77,7 @@ type controllerTestSuite struct {
immutableMtr *immutable.FakeMatcher
regCli *registry.Client
accMgr *accessorytesting.Manager
proCtl *projecttesting.Controller
}
func (c *controllerTestSuite) SetupTest() {
@ -88,6 +91,7 @@ func (c *controllerTestSuite) SetupTest() {
c.immutableMtr = &immutable.FakeMatcher{}
c.accMgr = &accessorytesting.Manager{}
c.regCli = &registry.Client{}
c.proCtl = &projecttesting.Controller{}
c.ctl = &controller{
repoMgr: c.repoMgr,
artMgr: c.artMgr,
@ -99,6 +103,7 @@ func (c *controllerTestSuite) SetupTest() {
immutableMtr: c.immutableMtr,
regCli: c.regCli,
accessoryMgr: c.accMgr,
proCtl: c.proCtl,
}
}
@ -267,6 +272,7 @@ func (c *controllerTestSuite) TestEnsure() {
c.abstractor.On("AbstractMetadata").Return(nil)
c.tagCtl.On("Ensure").Return(int64(1), nil)
c.accMgr.On("Ensure").Return(nil)
c.proCtl.On("GetByName", mock.Anything, mock.Anything).Return(&projectModel.Project{ProjectID: 1, Name: "library", RegistryID: 0}, nil)
_, id, err := c.ctl.Ensure(orm.NewContext(nil, &ormtesting.FakeOrmer{}), "library/hello-world", digest, &ArtOption{
Tags: []string{"latest"},
})
@ -487,6 +493,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
// root artifact and doesn't exist
c.artMgr.On("Get", mock.Anything, mock.Anything).Return(nil, errors.NotFoundError(nil))
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err := c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, false)
c.Require().NotNil(err)
c.Assert().True(errors.IsErr(err, errors.NotFoundCode))
@ -497,6 +504,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
// child artifact and doesn't exist
c.artMgr.On("Get", mock.Anything, mock.Anything).Return(nil, errors.NotFoundError(nil))
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, false, false)
c.Require().Nil(err)
@ -516,6 +524,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
c.repoMgr.On("Get", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{}, nil)
c.artrashMgr.On("Create", mock.Anything, mock.Anything).Return(int64(0), nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, false, false)
c.Require().Nil(err)
@ -532,6 +541,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
},
}, nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, false)
c.Require().NotNil(err)
@ -548,6 +558,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
},
}, nil)
c.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(nil, 1, false, false)
c.Require().Nil(err)
@ -573,6 +584,7 @@ func (c *controllerTestSuite) TestDeleteDeeply() {
c.blobMgr.On("CleanupAssociationsForProject", mock.Anything, mock.Anything, mock.Anything).Return(nil)
c.repoMgr.On("Get", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{}, nil)
c.artrashMgr.On("Create", mock.Anything, mock.Anything).Return(int64(0), nil)
c.labelMgr.On("ListByArtifact", mock.Anything, mock.Anything).Return([]*model.Label{}, nil)
err = c.ctl.deleteDeeply(orm.NewContext(nil, &ormtesting.FakeOrmer{}), 1, true, true)
c.Require().Nil(err)
@ -583,6 +595,7 @@ func (c *controllerTestSuite) TestCopy() {
ID: 1,
Digest: "sha256:418fb88ec412e340cdbef913b8ca1bbe8f9e8dc705f9617414c1f2c8db980180",
}, nil)
c.proCtl.On("GetByName", mock.Anything, mock.Anything).Return(&projectModel.Project{ProjectID: 1, Name: "library", RegistryID: 0}, nil)
c.repoMgr.On("GetByName", mock.Anything, mock.Anything).Return(&repomodel.RepoRecord{
RepositoryID: 1,
Name: "library/hello-world",

View File

@ -56,7 +56,7 @@ func (suite *IteratorTestSuite) TeardownSuite() {
func (suite *IteratorTestSuite) TestIterator() {
suite.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]interface{}{}}
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]any{}}
suite.artMgr.On("List", mock.Anything, q1).Return([]*artifact.Artifact{
{ID: 1},
{ID: 2},
@ -65,7 +65,7 @@ func (suite *IteratorTestSuite) TestIterator() {
{ID: 5},
}, nil)
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]interface{}{}}
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]any{}}
suite.artMgr.On("List", mock.Anything, q2).Return([]*artifact.Artifact{
{ID: 6},
{ID: 7},

View File

@ -40,7 +40,7 @@ func (artifact *Artifact) UnmarshalJSON(data []byte) error {
type Alias Artifact
ali := &struct {
*Alias
AccessoryItems []interface{} `json:"accessories,omitempty"`
AccessoryItems []any `json:"accessories,omitempty"`
}{
Alias: (*Alias)(artifact),
}
@ -94,6 +94,16 @@ func (artifact *Artifact) SetSBOMAdditionLink(sbomDgst string, version string) {
artifact.AdditionLinks[addition] = &AdditionLink{HREF: href, Absolute: false}
}
// AbstractLabelNames abstracts the label names from the artifact.
func (artifact *Artifact) AbstractLabelNames() []string {
var names []string
for _, label := range artifact.Labels {
names = append(names, label.Name)
}
return names
}
// AdditionLink is a link via which the addition can be fetched
type AdditionLink struct {
HREF string `json:"href"`

View File
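The `AbstractLabelNames` helper added above simply projects `Labels` to their names. A self-contained version of the same method (with a stand-in `Label` type, since the Harbor label model isn't reproduced here):

```go
package main

import "fmt"

// Label stands in for Harbor's label model; only Name is needed here.
type Label struct{ Name string }

// Artifact carries the labels attached to an artifact.
type Artifact struct{ Labels []*Label }

// AbstractLabelNames collects the label names, as in the diff above.
func (a *Artifact) AbstractLabelNames() []string {
	var names []string
	for _, label := range a.Labels {
		names = append(names, label.Name)
	}
	return names
}

func main() {
	art := Artifact{Labels: []*Label{{Name: "prod"}, {Name: "signed"}}}
	fmt.Println(art.AbstractLabelNames()) // prints [prod signed]
}
```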

@ -7,6 +7,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/accessory/model/cosign"
"github.com/goharbor/harbor/src/pkg/label/model"
)
func TestUnmarshalJSONWithACC(t *testing.T) {
@ -104,3 +105,58 @@ func TestUnmarshalJSONWithPartial(t *testing.T) {
assert.Equal(t, "", artifact.Type)
assert.Equal(t, "application/vnd.docker.container.image.v1+json", artifact.MediaType)
}
func TestAbstractLabelNames(t *testing.T) {
tests := []struct {
name string
artifact Artifact
want []string
}{
{
name: "Nil labels",
artifact: Artifact{
Labels: nil,
},
want: []string{},
},
{
name: "Single label",
artifact: Artifact{
Labels: []*model.Label{
{Name: "label1"},
},
},
want: []string{"label1"},
},
{
name: "Multiple labels",
artifact: Artifact{
Labels: []*model.Label{
{Name: "label1"},
{Name: "label2"},
{Name: "label3"},
},
},
want: []string{"label1", "label2", "label3"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.artifact.AbstractLabelNames()
// Check if lengths match
if len(got) != len(tt.want) {
t.Errorf("AbstractLabelNames() got length = %v, want length = %v", len(got), len(tt.want))
return
}
// Check if elements match
for i := range got {
if got[i] != tt.want[i] {
t.Errorf("AbstractLabelNames() got[%d] = %v, want[%d] = %v", i, got[i], i, tt.want[i])
}
}
})
}
}

View File

@ -44,7 +44,7 @@ type ManifestProcessor struct {
// AbstractMetadata abstracts metadata of artifact
func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *artifact.Artifact, content []byte) error {
// parse metadata from config layer
metadata := map[string]interface{}{}
metadata := map[string]any{}
if err := m.UnmarshalConfig(ctx, artifact.RepositoryName, content, &metadata); err != nil {
return err
}
@ -55,7 +55,7 @@ func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *arti
}
if artifact.ExtraAttrs == nil {
artifact.ExtraAttrs = map[string]interface{}{}
artifact.ExtraAttrs = map[string]any{}
}
for _, property := range m.properties {
artifact.ExtraAttrs[property] = metadata[property]
@ -80,7 +80,7 @@ func (m *ManifestProcessor) ListAdditionTypes(_ context.Context, _ *artifact.Art
}
// UnmarshalConfig unmarshal the config blob of the artifact into the specified object "v"
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v interface{}) error {
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v any) error {
// unmarshal manifest
mani := &v1.Manifest{}
if err := json.Unmarshal(manifest, mani); err != nil {

View File

@ -89,7 +89,7 @@ func (p *processorTestSuite) TestAbstractAddition() {
Repository: "github.com/goharbor",
},
},
Values: map[string]interface{}{
Values: map[string]any{
"cluster.enable": true,
"cluster.slaveCount": 1,
"image.pullPolicy": "Always",

View File

@ -0,0 +1,106 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cnai
import (
"context"
"encoding/json"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
ps "github.com/goharbor/harbor/src/controller/artifact/processor"
"github.com/goharbor/harbor/src/controller/artifact/processor/base"
"github.com/goharbor/harbor/src/controller/artifact/processor/cnai/parser"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/lib/log"
"github.com/goharbor/harbor/src/pkg/artifact"
)
// const definitions
const (
// ArtifactTypeCNAI defines the artifact type for CNAI model.
ArtifactTypeCNAI = "CNAI"
// AdditionTypeReadme defines the addition type readme for the API.
AdditionTypeReadme = "README.MD"
// AdditionTypeLicense defines the addition type license for the API.
AdditionTypeLicense = "LICENSE"
// AdditionTypeFiles defines the addition type files for the API.
AdditionTypeFiles = "FILES"
)
func init() {
pc := &processor{
ManifestProcessor: base.NewManifestProcessor(),
}
if err := ps.Register(pc, modelspec.ArtifactTypeModelManifest); err != nil {
log.Errorf("failed to register processor for artifact type %s: %v", modelspec.ArtifactTypeModelManifest, err)
return
}
}
type processor struct {
*base.ManifestProcessor
}
func (p *processor) AbstractAddition(ctx context.Context, artifact *artifact.Artifact, addition string) (*ps.Addition, error) {
var additionParser parser.Parser
switch addition {
case AdditionTypeReadme:
additionParser = parser.NewReadme(p.RegCli)
case AdditionTypeLicense:
additionParser = parser.NewLicense(p.RegCli)
case AdditionTypeFiles:
additionParser = parser.NewFiles(p.RegCli)
default:
return nil, errors.New(nil).WithCode(errors.BadRequestCode).
WithMessagef("addition %s isn't supported for %s", addition, ArtifactTypeCNAI)
}
mf, _, err := p.RegCli.PullManifest(artifact.RepositoryName, artifact.Digest)
if err != nil {
return nil, err
}
_, payload, err := mf.Payload()
if err != nil {
return nil, err
}
manifest := &ocispec.Manifest{}
if err := json.Unmarshal(payload, manifest); err != nil {
return nil, err
}
contentType, content, err := additionParser.Parse(ctx, artifact, manifest)
if err != nil {
return nil, err
}
return &ps.Addition{
ContentType: contentType,
Content: content,
}, nil
}
func (p *processor) GetArtifactType(_ context.Context, _ *artifact.Artifact) string {
return ArtifactTypeCNAI
}
func (p *processor) ListAdditionTypes(_ context.Context, _ *artifact.Artifact) []string {
return []string{AdditionTypeReadme, AdditionTypeLicense, AdditionTypeFiles}
}

View File

@ -0,0 +1,265 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cnai
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/suite"
"github.com/goharbor/harbor/src/controller/artifact/processor/base"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/distribution"
"github.com/goharbor/harbor/src/testing/mock"
"github.com/goharbor/harbor/src/testing/pkg/registry"
)
type ProcessorTestSuite struct {
suite.Suite
processor *processor
regCli *registry.Client
}
func (p *ProcessorTestSuite) SetupTest() {
p.regCli = &registry.Client{}
p.processor = &processor{}
p.processor.ManifestProcessor = &base.ManifestProcessor{
RegCli: p.regCli,
}
}
func createTarContent(filename, content string) ([]byte, error) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
hdr := &tar.Header{
Name: filename,
Mode: 0600,
Size: int64(len(content)),
}
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tw.Write([]byte(content)); err != nil {
return nil, err
}
if err := tw.Close(); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (p *ProcessorTestSuite) TestAbstractAddition() {
cases := []struct {
name string
addition string
manifest *ocispec.Manifest
setupMockReg func(*registry.Client, *ocispec.Manifest)
expectErr string
expectContent string
expectType string
}{
{
name: "invalid addition type",
addition: "invalid",
manifest: &ocispec.Manifest{},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectErr: "addition invalid isn't supported for CNAI",
},
{
name: "readme not found",
addition: AdditionTypeReadme,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectErr: "readme layer not found",
},
{
name: "valid readme",
addition: AdditionTypeReadme,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:abc",
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
content := "# Test Model"
tarContent, err := createTarContent("README.md", content)
p.Require().NoError(err)
r.On("PullBlob", mock.Anything, "sha256:abc").Return(
int64(len(tarContent)),
io.NopCloser(bytes.NewReader(tarContent)),
nil,
)
},
expectContent: "# Test Model",
expectType: "text/markdown; charset=utf-8",
},
{
name: "valid license",
addition: AdditionTypeLicense,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:def",
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
content := "MIT License"
tarContent, err := createTarContent("LICENSE", content)
p.Require().NoError(err)
r.On("PullBlob", mock.Anything, "sha256:def").Return(
int64(len(tarContent)),
io.NopCloser(bytes.NewReader(tarContent)),
nil,
)
},
expectContent: "MIT License",
expectType: "text/plain; charset=utf-8",
},
{
name: "valid files list",
addition: AdditionTypeFiles,
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "model/weights.bin",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 50,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "config.json",
},
},
},
},
setupMockReg: func(r *registry.Client, m *ocispec.Manifest) {
manifestJSON, err := json.Marshal(m)
p.Require().NoError(err)
manifest, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageManifest, manifestJSON)
p.Require().NoError(err)
r.On("PullManifest", mock.Anything, mock.Anything).Return(manifest, "", nil)
},
expectContent: `[{"name":"model","type":"directory","children":[{"name":"weights.bin","type":"file","size":100}]},{"name":"config.json","type":"file","size":50}]`,
expectType: "application/json; charset=utf-8",
},
}
for _, tc := range cases {
p.Run(tc.name, func() {
// Reset mock
p.SetupTest()
if tc.setupMockReg != nil {
tc.setupMockReg(p.regCli, tc.manifest)
}
addition, err := p.processor.AbstractAddition(
context.Background(),
&artifact.Artifact{},
tc.addition,
)
if tc.expectErr != "" {
p.Error(err)
p.Contains(err.Error(), tc.expectErr)
return
}
p.NoError(err)
if tc.expectContent != "" {
p.Equal(tc.expectContent, string(addition.Content))
}
if tc.expectType != "" {
p.Equal(tc.expectType, addition.ContentType)
}
})
}
}
func (p *ProcessorTestSuite) TestGetArtifactType() {
p.Equal(ArtifactTypeCNAI, p.processor.GetArtifactType(nil, nil))
}
func (p *ProcessorTestSuite) TestListAdditionTypes() {
additions := p.processor.ListAdditionTypes(nil, nil)
p.ElementsMatch(
[]string{
AdditionTypeReadme,
AdditionTypeLicense,
AdditionTypeFiles,
},
additions,
)
}
func TestProcessorTestSuite(t *testing.T) {
suite.Run(t, &ProcessorTestSuite{})
}

View File

@ -0,0 +1,99 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"io"
"path/filepath"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
var (
// errFileTooLarge is returned when the file is too large to be processed.
errFileTooLarge = errors.New("the file is too large to be processed")
)
const (
// contentTypeTextPlain is the content type of text/plain.
contentTypeTextPlain = "text/plain; charset=utf-8"
// contentTypeMarkdown is the content type of text/markdown.
contentTypeMarkdown = "text/markdown; charset=utf-8"
// contentTypeJSON is the content type of application/json.
contentTypeJSON = "application/json; charset=utf-8"
// defaultFileSizeLimit is the default file size limit.
defaultFileSizeLimit = 1024 * 1024 * 4 // 4MB
// formatTar is the format of tar file.
formatTar = ".tar"
// formatRaw is the format of raw file.
formatRaw = ".raw"
)
// newBase creates a new base parser.
func newBase(cli registry.Client) *base {
return &base{
regCli: cli,
}
}
// base provides a default implementation for other parsers to build upon.
type base struct {
regCli registry.Client
}
// Parse is the common implementation for parsing a layer.
func (b *base) Parse(_ context.Context, artifact *artifact.Artifact, layer *ocispec.Descriptor) (string, []byte, error) {
if artifact == nil || layer == nil {
return "", nil, fmt.Errorf("artifact or manifest cannot be nil")
}
if layer.Size > defaultFileSizeLimit {
return "", nil, errors.RequestEntityTooLargeError(errFileTooLarge)
}
_, stream, err := b.regCli.PullBlob(artifact.RepositoryName, layer.Digest.String())
if err != nil {
return "", nil, fmt.Errorf("failed to pull blob from registry: %w", err)
}
defer stream.Close()
content, err := decodeContent(layer.MediaType, stream)
if err != nil {
return "", nil, fmt.Errorf("failed to decode content: %w", err)
}
return contentTypeTextPlain, content, nil
}
func decodeContent(mediaType string, reader io.Reader) ([]byte, error) {
format := filepath.Ext(mediaType)
switch format {
case formatTar:
return untar(reader)
case formatRaw:
return io.ReadAll(reader)
default:
return nil, fmt.Errorf("unsupported format: %s", format)
}
}

View File

@ -0,0 +1,142 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mock "github.com/goharbor/harbor/src/testing/pkg/registry"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
)
func TestBaseParse(t *testing.T) {
tests := []struct {
name string
artifact *artifact.Artifact
layer *v1.Descriptor
mockSetup func(*mock.Client)
expectedType string
expectedError string
}{
{
name: "nil artifact",
artifact: nil,
layer: &v1.Descriptor{},
expectedError: "artifact or manifest cannot be nil",
},
{
name: "nil layer",
artifact: &artifact.Artifact{},
layer: nil,
expectedError: "artifact or manifest cannot be nil",
},
{
name: "registry client error",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "successful parse (tar format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.tar",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
tw.WriteHeader(&tar.Header{
Name: "test.txt",
Size: 12,
})
tw.Write([]byte("test content"))
tw.Close()
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
},
{
name: "successful parse (raw format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.raw",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
},
{
name: "error parse (unsupported format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.unknown",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedError: "failed to decode content: unsupported format: .unknown",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockClient := &mock.Client{}
if tt.mockSetup != nil {
tt.mockSetup(mockClient)
}
b := &base{regCli: mockClient}
contentType, _, err := b.Parse(context.Background(), tt.artifact, tt.layer)
if tt.expectedError != "" {
assert.EqualError(t, err, tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
}
mockClient.AssertExpectations(t)
})
}
}
func TestNewBase(t *testing.T) {
b := newBase(registry.Cli)
assert.NotNil(t, b)
assert.Equal(t, registry.Cli, b.regCli)
}


@ -0,0 +1,113 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"encoding/json"
"fmt"
"sort"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewFiles creates a new files parser.
func NewFiles(cli registry.Client) Parser {
return &files{
base: newBase(cli),
}
}
// files is the parser for listing files in the model artifact.
type files struct {
*base
}
// FileList represents a file or directory entry in the model artifact file tree.
type FileList struct {
Name string `json:"name"`
Type string `json:"type"`
Size int64 `json:"size,omitempty"`
Children []FileList `json:"children,omitempty"`
}
// Parse parses the files list.
func (f *files) Parse(_ context.Context, _ *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, fmt.Errorf("manifest cannot be nil")
}
rootNode, err := walkManifest(*manifest)
if err != nil {
return "", nil, fmt.Errorf("failed to walk manifest: %w", err)
}
fileLists := traverseFileNode(rootNode)
data, err := json.Marshal(fileLists)
if err != nil {
return "", nil, err
}
return contentTypeJSON, data, nil
}
// walkManifest walks the manifest and returns the root file node.
func walkManifest(manifest ocispec.Manifest) (*FileNode, error) {
root := NewDirectory("/")
for _, layer := range manifest.Layers {
if layer.Annotations != nil && layer.Annotations[modelspec.AnnotationFilepath] != "" {
filepath := layer.Annotations[modelspec.AnnotationFilepath]
// mark it to directory if the file path ends with "/".
isDir := filepath[len(filepath)-1] == '/'
_, err := root.AddNode(filepath, layer.Size, isDir)
if err != nil {
return nil, err
}
}
}
return root, nil
}
// traverseFileNode traverses the file node and returns the file list.
func traverseFileNode(node *FileNode) []FileList {
if node == nil {
return nil
}
var children []FileList
for _, child := range node.Children {
children = append(children, FileList{
Name: child.Name,
Type: child.Type,
Size: child.Size,
Children: traverseFileNode(child),
})
}
// sort the children by type (directories first) and then by name.
sort.Slice(children, func(i, j int) bool {
if children[i].Type != children[j].Type {
return children[i].Type == TypeDirectory
}
return children[i].Name < children[j].Name
})
return children
}
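The ordering rule in `traverseFileNode` (directories before files, then lexical by name) can be sketched in isolation; the `Entry` type below is a local stand-in for `FileList`, not Harbor's type:

```go
package main

import (
	"fmt"
	"sort"
)

// Entry is a minimal stand-in for FileList (name and type only).
type Entry struct {
	Name string
	Type string // "file" or "directory"
}

// sortEntries applies the same ordering as the files parser:
// directories first, then lexical order by name.
func sortEntries(entries []Entry) {
	sort.Slice(entries, func(i, j int) bool {
		if entries[i].Type != entries[j].Type {
			return entries[i].Type == "directory"
		}
		return entries[i].Name < entries[j].Name
	})
}

func main() {
	es := []Entry{
		{"b.txt", "file"},
		{"models", "directory"},
		{"a.txt", "file"},
	}
	sortEntries(es)
	for _, e := range es {
		fmt.Println(e.Name)
	}
	// models, a.txt, b.txt
}
```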


@ -0,0 +1,229 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"encoding/json"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestFilesParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
expectedType string
expectedOutput []FileList
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedType: contentTypeJSON,
expectedOutput: nil,
},
{
name: "single file",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 100,
},
},
},
{
name: "file in directory",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 200,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v1/model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "models",
Type: TypeDirectory,
Children: []FileList{
{
Name: "v1",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 200,
},
},
},
},
},
},
},
{
name: "multiple files and directories",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 200,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v1/model.bin",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 300,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v2/",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 150,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "models/v2/model.bin",
},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: []FileList{
{
Name: "models",
Type: TypeDirectory,
Children: []FileList{
{
Name: "v1",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 200,
},
},
},
{
Name: "v2",
Type: TypeDirectory,
Children: []FileList{
{
Name: "model.bin",
Type: TypeFile,
Size: 150,
},
},
},
},
},
{
Name: "README.md",
Type: TypeFile,
Size: 100,
},
},
},
{
name: "layer without filepath annotation",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Size: 100,
Annotations: map[string]string{},
},
},
},
expectedType: contentTypeJSON,
expectedOutput: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
parser := &files{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
var fileList []FileList
err = json.Unmarshal(content, &fileList)
assert.NoError(t, err)
assert.Equal(t, tt.expectedOutput, fileList)
}
})
}
}
func TestNewFiles(t *testing.T) {
parser := NewFiles(registry.Cli)
assert.NotNil(t, parser)
filesParser, ok := parser.(*files)
assert.True(t, ok, "Parser should be of type *files")
assert.Equal(t, registry.Cli, filesParser.base.regCli)
}


@ -0,0 +1,70 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewLicense creates a new license parser.
func NewLicense(cli registry.Client) Parser {
return &license{
base: newBase(cli),
}
}
// license is the parser for License file.
type license struct {
*base
}
// Parse parses the License file.
func (l *license) Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, errors.New("manifest cannot be nil")
}
// lookup the license file layer
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "LICENSE" || filepath == "LICENSE.txt" {
layer = &desc
break
}
}
}
}
if layer == nil {
return "", nil, errors.NotFoundError(fmt.Errorf("license layer not found"))
}
return l.base.Parse(ctx, artifact, layer)
}
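The license lookup above scans layers by media type plus a filepath annotation. A self-contained sketch of that scan, with local stand-in types (the `desc` struct and annotation key are assumptions mirroring `ocispec.Descriptor` and `modelspec.AnnotationFilepath`, not the real definitions):

```go
package main

import "fmt"

// desc is a stand-in for ocispec.Descriptor (only the fields the lookup uses).
type desc struct {
	MediaType   string
	Annotations map[string]string
}

// findLicense mirrors the parser's scan: return the first layer whose media
// type is an accepted doc type and whose filepath annotation names a license.
func findLicense(layers []desc, docTypes map[string]bool) (desc, bool) {
	for _, d := range layers {
		if !docTypes[d.MediaType] {
			continue
		}
		fp := d.Annotations["org.cnai.model.filepath"] // assumed annotation key
		if fp == "LICENSE" || fp == "LICENSE.txt" {
			return d, true
		}
	}
	return desc{}, false
}

func main() {
	docTypes := map[string]bool{"application/vnd.example.model.doc.tar": true}
	layers := []desc{
		{MediaType: "application/vnd.example.model.weight.tar"},
		{
			MediaType:   "application/vnd.example.model.doc.tar",
			Annotations: map[string]string{"org.cnai.model.filepath": "LICENSE"},
		},
	}
	if l, ok := findLicense(layers, docTypes); ok {
		fmt.Println(l.Annotations["org.cnai.model.filepath"]) // LICENSE
	}
}
```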


@ -0,0 +1,260 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
"github.com/goharbor/harbor/src/testing/mock"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestLicenseParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
setupMockReg func(*mockregistry.Client)
expectedType string
expectedOutput []byte
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedError: "license layer not found",
},
{
name: "LICENSE parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("MIT License")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("MIT License"))
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE.txt parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE.txt",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("Apache License 2.0")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE.txt",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("Apache License 2.0"),
},
{
name: "registry error",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:ghi789",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
mc.On("PullBlob", mock.Anything, "sha256:ghi789").
Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "multiple layers with license",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:jkl012",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("BSD License")
_ = tw.WriteHeader(&tar.Header{
Name: "LICENSE",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:jkl012").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("BSD License"),
},
{
name: "wrong media type",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: "wrong/type",
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
},
},
},
expectedError: "license layer not found",
},
{
name: "no matching license file",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "NOT_LICENSE",
},
},
},
},
expectedError: "license layer not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
if tt.setupMockReg != nil {
tt.setupMockReg(mockRegClient)
}
parser := &license{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
assert.Equal(t, tt.expectedOutput, content)
}
mockRegClient.AssertExpectations(t)
})
}
}
func TestNewLicense(t *testing.T) {
parser := NewLicense(registry.Cli)
assert.NotNil(t, parser)
licenseParser, ok := parser.(*license)
assert.True(t, ok, "Parser should be of type *license")
assert.Equal(t, registry.Cli, licenseParser.base.regCli)
}


@ -0,0 +1,29 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/pkg/artifact"
)
// Parser is the interface for parsing the content by different addition type.
type Parser interface {
// Parse returns the parsed content type and content.
Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (contentType string, content []byte, err error)
}


@ -0,0 +1,75 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/goharbor/harbor/src/lib/errors"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
)
// NewReadme creates a new readme parser.
func NewReadme(cli registry.Client) Parser {
return &readme{
base: newBase(cli),
}
}
// readme is the parser for README.md file.
type readme struct {
*base
}
// Parse parses the README.md file.
func (r *readme) Parse(ctx context.Context, artifact *artifact.Artifact, manifest *ocispec.Manifest) (string, []byte, error) {
if manifest == nil {
return "", nil, errors.New("manifest cannot be nil")
}
// lookup the readme file layer.
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "README" || filepath == "README.md" {
layer = &desc
break
}
}
}
}
if layer == nil {
return "", nil, errors.NotFoundError(fmt.Errorf("readme layer not found"))
}
_, content, err := r.base.Parse(ctx, artifact, layer)
if err != nil {
return "", nil, err
}
return contentTypeMarkdown, content, nil
}


@ -0,0 +1,232 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"context"
"fmt"
"io"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/goharbor/harbor/src/testing/mock"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/goharbor/harbor/src/pkg/artifact"
"github.com/goharbor/harbor/src/pkg/registry"
mockregistry "github.com/goharbor/harbor/src/testing/pkg/registry"
)
func TestReadmeParser(t *testing.T) {
tests := []struct {
name string
manifest *ocispec.Manifest
setupMockReg func(*mockregistry.Client)
expectedType string
expectedOutput []byte
expectedError string
}{
{
name: "nil manifest",
manifest: nil,
expectedError: "manifest cannot be nil",
},
{
name: "empty manifest layers",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{},
},
expectedError: "readme layer not found",
},
{
name: "README.md parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Test README")
_ = tw.WriteHeader(&tar.Header{
Name: "README.md",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "README parse success",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Test README")
_ = tw.WriteHeader(&tar.Header{
Name: "README",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "README parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("# Test README"))
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "registry error",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:ghi789",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
mc.On("PullBlob", mock.Anything, "sha256:ghi789").
Return(int64(0), nil, fmt.Errorf("registry error"))
},
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "multiple layers with README",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "other.txt",
},
},
{
MediaType: modelspec.MediaTypeModelDoc,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README.md",
},
Digest: "sha256:jkl012",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
content := []byte("# Second README")
_ = tw.WriteHeader(&tar.Header{
Name: "README.md",
Size: int64(len(content)),
})
_, _ = tw.Write(content)
tw.Close()
mc.On("PullBlob", mock.Anything, "sha256:jkl012").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Second README"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockRegClient := &mockregistry.Client{}
if tt.setupMockReg != nil {
tt.setupMockReg(mockRegClient)
}
parser := &readme{
base: &base{
regCli: mockRegClient,
},
}
contentType, content, err := parser.Parse(context.Background(), &artifact.Artifact{}, tt.manifest)
if tt.expectedError != "" {
assert.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedType, contentType)
assert.Equal(t, tt.expectedOutput, content)
}
mockRegClient.AssertExpectations(t)
})
}
}
func TestNewReadme(t *testing.T) {
parser := NewReadme(registry.Cli)
assert.NotNil(t, parser)
readmeParser, ok := parser.(*readme)
assert.True(t, ok, "Parser should be of type *readme")
assert.Equal(t, registry.Cli, readmeParser.base.regCli)
}


@ -0,0 +1,150 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"fmt"
"io"
"path/filepath"
"strings"
"sync"
)
// untar reads a tar stream and concatenates the contents of its regular
// files into a single byte slice, skipping directory entries.
func untar(reader io.Reader) ([]byte, error) {
tr := tar.NewReader(reader)
var buf bytes.Buffer
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, fmt.Errorf("failed to read tar header: %w", err)
}
// skip the directory.
if header.Typeflag == tar.TypeDir {
continue
}
if _, err := io.Copy(&buf, tr); err != nil {
return nil, fmt.Errorf("failed to copy content to buffer: %w", err)
}
}
return buf.Bytes(), nil
}
// FileType represents the type of a file.
type FileType = string
const (
TypeFile FileType = "file"
TypeDirectory FileType = "directory"
)
// FileNode is a node in the in-memory file tree built from the layers'
// filepath annotations.
type FileNode struct {
Name string
Type FileType
Size int64
Children map[string]*FileNode
mu sync.RWMutex
}
// NewFile creates a file node with the given name and size.
func NewFile(name string, size int64) *FileNode {
return &FileNode{
Name: name,
Type: TypeFile,
Size: size,
}
}
// NewDirectory creates an empty directory node with the given name.
func NewDirectory(name string) *FileNode {
return &FileNode{
Name: name,
Type: TypeDirectory,
Children: make(map[string]*FileNode),
}
}
// AddChild adds a child node to a directory node.
func (root *FileNode) AddChild(child *FileNode) error {
root.mu.Lock()
defer root.mu.Unlock()
if root.Type != TypeDirectory {
return fmt.Errorf("cannot add child to non-directory node")
}
root.Children[child.Name] = child
return nil
}
// GetChild returns the child node with the given name and whether it exists.
func (root *FileNode) GetChild(name string) (*FileNode, bool) {
root.mu.RLock()
defer root.mu.RUnlock()
child, ok := root.Children[name]
return child, ok
}
// AddNode inserts the given path into the tree, creating intermediate
// directories as needed, and returns the final node.
func (root *FileNode) AddNode(path string, size int64, isDir bool) (*FileNode, error) {
path = filepath.Clean(path)
parts := strings.Split(path, string(filepath.Separator))
current := root
for i, part := range parts {
if part == "" {
continue
}
isLastPart := i == len(parts)-1
child, exists := current.GetChild(part)
if !exists {
var newNode *FileNode
if isLastPart {
if isDir {
newNode = NewDirectory(part)
} else {
newNode = NewFile(part, size)
}
} else {
newNode = NewDirectory(part)
}
if err := current.AddChild(newNode); err != nil {
return nil, err
}
current = newNode
} else {
child.mu.RLock()
nodeType := child.Type
child.mu.RUnlock()
if isLastPart {
if (isDir && nodeType != TypeDirectory) || (!isDir && nodeType != TypeFile) {
return nil, fmt.Errorf("path conflicts: %s exists with different type", part)
}
}
current = child
}
}
return current, nil
}
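`AddNode` walks a layer's filepath annotation by cleaning the path, splitting on the separator, and skipping empty parts. That path-walking step can be sketched on its own (the helper name `splitPath` is illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitPath mirrors how AddNode derives tree segments from a path:
// clean it, split on the OS separator, and drop empty parts.
func splitPath(p string) []string {
	var parts []string
	for _, part := range strings.Split(filepath.Clean(p), string(filepath.Separator)) {
		if part != "" {
			parts = append(parts, part)
		}
	}
	return parts
}

func main() {
	// A trailing "/" marks a directory in the annotation; Clean drops it.
	fmt.Println(splitPath("models/v2/"))
	fmt.Println(splitPath("./models/model"))
}
```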


@ -0,0 +1,173 @@
// Copyright Project Harbor Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package parser
import (
"archive/tar"
"bytes"
"path/filepath"
"strings"
"testing"
)
func TestUntar(t *testing.T) {
tests := []struct {
name string
content string
wantErr bool
expected string
}{
{
name: "valid tar file with single file",
content: "test content",
wantErr: false,
expected: "test content",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
hdr := &tar.Header{
Name: "test.txt",
Mode: 0600,
Size: int64(len(tt.content)),
}
if err := tw.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err := tw.Write([]byte(tt.content)); err != nil {
t.Fatal(err)
}
tw.Close()
result, err := untar(&buf)
if (err != nil) != tt.wantErr {
t.Errorf("untar() error = %v, wantErr %v", err, tt.wantErr)
return
}
if string(result) != tt.expected {
t.Errorf("untar() = %v, want %v", string(result), tt.expected)
}
})
}
}
func TestFileNode(t *testing.T) {
t.Run("test file node operations", func(t *testing.T) {
// Test creating root directory.
root := NewDirectory("root")
if root.Type != TypeDirectory {
t.Errorf("Expected directory type, got %s", root.Type)
}
// Test creating file.
file := NewFile("test.txt", 100)
if file.Type != TypeFile {
t.Errorf("Expected file type, got %s", file.Type)
}
// Test adding child to directory.
err := root.AddChild(file)
if err != nil {
t.Errorf("Failed to add child: %v", err)
}
// Test getting child.
child, exists := root.GetChild("test.txt")
if !exists {
t.Error("Expected child to exist")
}
if child.Name != "test.txt" {
t.Errorf("Expected name test.txt, got %s", child.Name)
}
// Test adding child to file (should fail).
err = file.AddChild(NewFile("invalid.txt", 50))
if err == nil {
t.Error("Expected error when adding child to file")
}
})
}
func TestAddNode(t *testing.T) {
tests := []struct {
name string
path string
size int64
isDir bool
wantErr bool
setupFn func(*FileNode)
}{
{
name: "add file",
path: "dir1/dir2/file.txt",
size: 100,
isDir: false,
wantErr: false,
},
{
name: "add directory",
path: "dir1/dir2/dir3",
size: 0,
isDir: true,
wantErr: false,
},
{
name: "add file with conflicting directory",
path: "dir1/dir2",
size: 100,
isDir: false,
wantErr: true,
setupFn: func(node *FileNode) {
// Pre-create the directory; the error is intentionally ignored in setup.
_, _ = node.AddNode("dir1/dir2", 0, true)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
root := NewDirectory("root")
if tt.setupFn != nil {
tt.setupFn(root)
}
_, err := root.AddNode(tt.path, tt.size, tt.isDir)
if (err != nil) != tt.wantErr {
t.Errorf("AddNode() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !tt.wantErr {
// Verify the path exists.
current := root
cleaned := filepath.Clean(tt.path)
for part := range strings.SplitSeq(cleaned, string(filepath.Separator)) {
if part == "" {
continue
}
child, exists := current.GetChild(part)
if !exists {
t.Errorf("Expected path part %s to exist", part)
return
}
current = child
}
}
})
}
}

View File

@@ -110,7 +110,7 @@ func (d *defaultProcessor) AbstractMetadata(ctx context.Context, artifact *artif
 	}
 	defer blob.Close()
 	// parse metadata from config layer
-	metadata := map[string]interface{}{}
+	metadata := map[string]any{}
 	if err = json.NewDecoder(blob).Decode(&metadata); err != nil {
 		return err
 	}

View File

@@ -268,7 +268,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadata() {
 	manifestMediaType, content, err := manifest.Payload()
 	d.Require().Nil(err)
-	metadata := map[string]interface{}{}
+	metadata := map[string]any{}
 	configBlob := io.NopCloser(strings.NewReader(ormbConfig))
 	err = json.NewDecoder(configBlob).Decode(&metadata)
 	d.Require().Nil(err)
@@ -289,7 +289,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadataOfOCIManifesttWithUnknow
 	d.Require().Nil(err)
 	configBlob := io.NopCloser(strings.NewReader(UnknownJsonConfig))
-	metadata := map[string]interface{}{}
+	metadata := map[string]any{}
 	err = json.NewDecoder(configBlob).Decode(&metadata)
 	d.Require().Nil(err)

View File

@@ -44,7 +44,7 @@ func (m *manifestV1Processor) AbstractMetadata(_ context.Context, artifact *arti
 		return err
 	}
 	if artifact.ExtraAttrs == nil {
-		artifact.ExtraAttrs = map[string]interface{}{}
+		artifact.ExtraAttrs = map[string]any{}
 	}
 	artifact.ExtraAttrs["architecture"] = mani.Architecture
 	return nil

View File

@@ -59,7 +59,7 @@ func (m *manifestV2Processor) AbstractMetadata(ctx context.Context, artifact *ar
 		return err
 	}
 	if artifact.ExtraAttrs == nil {
-		artifact.ExtraAttrs = map[string]interface{}{}
+		artifact.ExtraAttrs = map[string]any{}
 	}
 	artifact.ExtraAttrs["created"] = config.Created
 	artifact.ExtraAttrs["architecture"] = config.Architecture

View File

@@ -62,14 +62,14 @@ type Processor struct {
 }
 func (m *Processor) AbstractMetadata(ctx context.Context, art *artifact.Artifact, manifestBody []byte) error {
-	art.ExtraAttrs = map[string]interface{}{}
+	art.ExtraAttrs = map[string]any{}
 	manifest := &v1.Manifest{}
 	if err := json.Unmarshal(manifestBody, manifest); err != nil {
 		return err
 	}
 	if art.ExtraAttrs == nil {
-		art.ExtraAttrs = map[string]interface{}{}
+		art.ExtraAttrs = map[string]any{}
 	}
 	if manifest.Annotations[AnnotationVariantKey] == AnnotationVariantValue || manifest.Annotations[AnnotationHandlerKey] == AnnotationHandlerValue {
 		// for annotation way

Some files were not shown because too many files have changed in this diff.