Compare commits

...

75 Commits

Author SHA1 Message Date
stonezdj(Daojun Zhang) 6a1abab687
The tag retention job failed with 403 error message (#22159)
fixes #22141

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-31 14:43:00 +08:00
Chlins Zhang 70b03c9483
feat: support raw format for CNAI model (#22040)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-29 12:41:29 +00:00
stonezdj(Daojun Zhang) 171d9b4c0e
Add HTTP 409 error when creating robot account (#22201)
fixes #22107

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-29 10:03:00 +00:00
Wang Yan 257afebd5f
bump golang version (#22205)
from v1.24.3 to the latest golang version, v1.24.5

Signed-off-by: wy65701436 <wangyan@vmware.com>
2025-07-29 07:21:57 +00:00
Wang Yan f15638c5f3
update the orm filter func (#22208)
to extend the enhancement from https://github.com/goharbor/harbor/pull/21924 to fuzzy and range matches. After this enhancement, orm.ExerSep is not supported in any kind of query keyword.

Signed-off-by: wy65701436 <wangyan@vmware.com>
2025-07-29 13:22:35 +08:00
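As context for the query-validation commits above (#21924 and #22208), a minimal, hypothetical sketch of the kind of check involved: reject any query keyword value that embeds the ORM's reserved separator, whether the value is an exact, fuzzy, or range expression. The constant value and function below are illustrative only and are not Harbor's actual orm package.

```go
package main

import (
	"fmt"
	"strings"
)

// exerSep stands in for orm.ExerSep, the reserved separator used internally
// by the ORM layer. The value here is an illustrative placeholder, not the
// real constant from Harbor's src/lib/orm package.
const exerSep = "__exer__"

// validateQueryValue rejects any query keyword value that embeds the
// reserved separator, regardless of whether it arrives as an exact,
// fuzzy (~value), or range ([min~max]) expression.
func validateQueryValue(key, value string) error {
	if strings.Contains(value, exerSep) {
		return fmt.Errorf("invalid query parameter %q: value must not contain %q", key, exerSep)
	}
	return nil
}

func main() {
	for _, v := range []string{"nginx", "~nginx", "[1~10]", "bad" + exerSep + "value"} {
		if err := validateQueryValue("name", v); err != nil {
			fmt.Println("rejected:", err)
			continue
		}
		fmt.Println("accepted:", v)
	}
}
```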
Chlins Zhang ebc340a8f7
fix: correct the permission of project maintainer role for webhook policy (#22135)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-25 08:40:40 +00:00
Wang Yan de657686b3
add the replication adapter whitelist (#22198)
fixes #21925

According to https://github.com/goharbor/harbor/wiki/Harbor-Replicaiton-Adapter-Owner, some replication adapters are no longer actively maintained by the Harbor community. To address this, a whitelist environment variable is introduced to define the list of actively supported adapters, which the Harbor portal and API will use to decide which adapters to display and allow.

If you still wish to view and use the unsupported or inactive adapters, you must manually update the whitelist and include the desired adapter names. For the list of adapter names, refer to https://github.com/goharbor/harbor/blob/main/src/pkg/reg/model/registry.go#L22

Signed-off-by: wang yan <wangyan@vmware.com>
2025-07-23 10:25:21 +00:00
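A hedged sketch of how such a whitelist environment variable could be consumed: read a comma-separated list of adapter names and filter the registered adapters against it. The variable name REPLICATION_ADAPTER_WHITELIST and the helpers below are hypothetical; the real adapter names live in src/pkg/reg/model/registry.go as referenced above.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// loadAdapterWhitelist reads a comma-separated list of adapter names from
// the environment. An empty value means "use the default supported set".
// The variable name below is hypothetical, chosen only for illustration.
func loadAdapterWhitelist() map[string]bool {
	raw := os.Getenv("REPLICATION_ADAPTER_WHITELIST")
	whitelist := map[string]bool{}
	for _, name := range strings.Split(raw, ",") {
		if name = strings.TrimSpace(name); name != "" {
			whitelist[name] = true
		}
	}
	return whitelist
}

// filterAdapters keeps only the adapters present in the whitelist; with an
// empty whitelist it returns the input unchanged.
func filterAdapters(registered []string, whitelist map[string]bool) []string {
	if len(whitelist) == 0 {
		return registered
	}
	var out []string
	for _, name := range registered {
		if whitelist[name] {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	// Example adapter names only; see registry.go for the real list.
	registered := []string{"harbor", "docker-hub", "aws-ecr", "jfrog-artifactory"}
	fmt.Println(filterAdapters(registered, loadAdapterWhitelist()))
}
```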
stonezdj(Daojun Zhang) ea4110c30a
Display download url for BUILD_PACKAGE action (#22197)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-23 08:21:10 +00:00
Daniel Jiang bb7162f5e6
Don't always skip vuln check when artifact is not scannable (#22187)
fixes #22143

This commit updates the vulnerable policy middleware so that
it skips the check only when the artifact is not scannable AND it
does not have a scan report.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-22 09:20:22 +00:00
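The decision described in the commit above reduces to a single condition: skip the vulnerability check only when the artifact is not scannable and it has no existing scan report. A minimal sketch of that logic, with hypothetical types:

```go
package main

import "fmt"

// artifactInfo is a hypothetical stand-in for the data the vulnerable policy
// middleware inspects; the real middleware lives in Harbor's server-side
// middleware packages.
type artifactInfo struct {
	Scannable     bool
	HasScanReport bool
}

// shouldSkipVulnCheck skips the policy check only when the artifact is not
// scannable AND it does not already have a scan report. An unscannable
// artifact that nevertheless has a report is still subject to the policy.
func shouldSkipVulnCheck(a artifactInfo) bool {
	return !a.Scannable && !a.HasScanReport
}

func main() {
	fmt.Println(shouldSkipVulnCheck(artifactInfo{Scannable: false, HasScanReport: false})) // true: skip
	fmt.Println(shouldSkipVulnCheck(artifactInfo{Scannable: false, HasScanReport: true}))  // false: enforce policy
	fmt.Println(shouldSkipVulnCheck(artifactInfo{Scannable: true, HasScanReport: false}))  // false: enforce policy
}
```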
stonezdj(Daojun Zhang) e8c2e478b6
Remove the "Open Image Scanners doc page" testcase (#22180)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-22 02:55:11 +00:00
dependabot[bot] 71f2ea84bd
chore(deps): bump helm.sh/helm/v3 from 3.18.3 to 3.18.4 in /src (#22188)
---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-21 07:41:33 +00:00
dependabot[bot] 8007c2e02e
chore(deps): bump helm.sh/helm/v3 from 3.18.2 to 3.18.3 in /src (#22113)
---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-07-21 05:55:08 +00:00
miner 0f67947c87
clean up project metadata for tag retention policy after deletion (#22174)
Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-18 10:52:33 +00:00
stonezdj(Daojun Zhang) ebdfb547ba
Set MAX_JOB_DURATION_SECONDS from jobservice config.yml (#22116)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-07-18 10:01:18 +00:00
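A hedged sketch of reading such a maximum-job-duration setting with a fallback default; the struct field and the 24-hour default used here are assumptions made for illustration, not the jobservice's actual config schema.

```go
package main

import (
	"fmt"
	"time"
)

// jobServiceConfig is a hypothetical subset of the jobservice config.yml;
// the real schema lives in Harbor's jobservice config package.
type jobServiceConfig struct {
	MaxJobDurationSeconds int `yaml:"max_job_duration_seconds"`
}

// maxJobDuration returns the configured limit, falling back to a default
// (24h here, chosen only for illustration) when the value is unset or invalid.
func maxJobDuration(cfg jobServiceConfig) time.Duration {
	if cfg.MaxJobDurationSeconds <= 0 {
		return 24 * time.Hour
	}
	return time.Duration(cfg.MaxJobDurationSeconds) * time.Second
}

func main() {
	fmt.Println(maxJobDuration(jobServiceConfig{}))                           // default
	fmt.Println(maxJobDuration(jobServiceConfig{MaxJobDurationSeconds: 300})) // 5m0s
}
```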
Daniel Jiang 440f53ebbc
Add status field to the securityHub API (#22182)
This commit changes the API GET /api/v2.0/vul so that it includes the
"status" of CVEs in the response.

It also makes update in the UI to add the "Status" column to the data
grids in security hub and artifact details page.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-18 16:02:29 +08:00
Moon c83f2d114f
chore: Updated RELEASE.md by updating Minor Release Support Map (#22145)
Updated the Minor Release Support Matrix to include v2.13

Signed-off-by: Mooneeb Hussain <mooneeb.hussain@gmail.com>
2025-07-17 07:39:01 +00:00
Roger 01dba8ad57
Improve portal README.md formatting and clarity (#22173)
improving the portal readme file

Signed-off-by: rgcr <roger.dev@pm.me>
2025-07-17 05:55:11 +00:00
Daniel Jiang 19f4958ec3
Add "status" of CVEs to artifact scan report (#22177)
This commit adds the field "status" to the vulnerability struct and adds a
"status" column to the vulnerability record table. It makes sure the statuses
of CVEs returned by the Trivy scanner are persisted and can be returned via
the vulnerabilities addition API of an artifact.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-07-16 11:11:40 +08:00
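As an illustration of the shape of this change, a simplified sketch of a vulnerability record carrying the new status field reported by the scanner; the field names are simplified stand-ins, not Harbor's exact model.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vulnerabilityRecord is a simplified, illustrative version of the record
// that gains a Status field ("fixed", "won't fix", ...) persisted from the
// Trivy report and surfaced by the vulnerabilities addition API.
type vulnerabilityRecord struct {
	CVEID       string  `json:"cve_id"`
	Package     string  `json:"package"`
	Severity    string  `json:"severity"`
	Status      string  `json:"status"`
	CVSSScoreV3 float64 `json:"cvss_v3_score"`
}

func main() {
	rec := vulnerabilityRecord{
		CVEID:       "CVE-2025-0001",
		Package:     "openssl",
		Severity:    "High",
		Status:      "fixed",
		CVSSScoreV3: 7.5,
	}
	out, _ := json.MarshalIndent(rec, "", "  ")
	fmt.Println(string(out))
}
```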
Spyros Trigazis 6c620dc20c
Update FixVersion and ScoreV3 (#22007)
Set Fix and CVE3Score in VulnerabilityRecord from VulnerabilityItem.

Follow-up of #21915
Fixes #21463

Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
2025-07-15 14:17:41 +08:00
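A hedged sketch of the mapping this commit describes: copying the fixed version and CVSS v3 score from a scanner-reported vulnerability item onto the stored record. The struct and field names are simplified stand-ins for Harbor's actual types.

```go
package main

import "fmt"

// vulnerabilityItem is an illustrative stand-in for a scanner-reported item.
type vulnerabilityItem struct {
	ID         string
	FixVersion string
	CVSS3Score float64
}

// vulnerabilityRecord is an illustrative stand-in for the persisted record.
type vulnerabilityRecord struct {
	CVEID     string
	Fix       string
	CVE3Score float64
}

// toRecord carries FixVersion and the CVSS v3 score over from the item,
// which is the gist of the update described above.
func toRecord(item vulnerabilityItem) vulnerabilityRecord {
	return vulnerabilityRecord{
		CVEID:     item.ID,
		Fix:       item.FixVersion,
		CVE3Score: item.CVSS3Score,
	}
}

func main() {
	fmt.Printf("%+v\n", toRecord(vulnerabilityItem{ID: "CVE-2025-0002", FixVersion: "1.2.3", CVSS3Score: 9.1}))
}
```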
yuzhipeng c93da7ff4b
Add 400 code response in swagger.yaml for updateRegistry updateReplicationPolicy and headProject (#22165)
Signed-off-by: yuzhipeng <yuzp1996@gmail.com>
2025-07-14 14:26:12 +08:00
Prasanth Baskar 0cf2d7545d
Fix: Audit Log Eventtype antipattern in System Settings UI (#22147)
fix: Audit Log Eventtype antipattern in System Settings

* update logic from disabled to enabled
* update i18n to reflect the change

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-07-07 09:00:09 +00:00
miner c0a859d538
lazy load v2_swagger_client module (#22154)
Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-07 14:51:09 +08:00
miner 2565491758
add BUILD_INSTALLER parameter to optionally build the prepare and log containers (#22148)
add the BUILD_INSTALLER parameter so that the prepare and log containers are built only when the offline installer needs to be built

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-03 18:23:17 +08:00
miner 0a3c06d89c
add dockernetwork parameter for build process (#22138)
add dockernetwork parameter for makefile

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-03 15:33:25 +08:00
Sergey 6be2971941
Add Russian language support (#21083)
* Add Russian language support

Signed-off-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>

* Update ru-ru-lang.json

Made edits to the translation based on comments

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

* Update ru-ru-lang.json

Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>

---------

Signed-off-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>
Signed-off-by: Sergey <81344204+sergey-akhmineev@users.noreply.github.com>
Co-authored-by: Sergey Akhmineev <ssakhmineev@rt-dc.ru>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-07-02 10:57:27 +02:00
dependabot[bot] 229ef88684
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.1.17 to 1.1.19 in /src (#22133)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-07-01 08:19:14 +00:00
Chlins Zhang 0f8913bb27
feat: support customizing the job execution retention count by env (#22129)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-01 07:33:05 +00:00
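A hedged sketch of honoring such an env override with a default; the variable name JOB_EXECUTION_RETENTION_COUNT and the default of 50 are assumptions made for illustration — check the Harbor docs for the actual variable introduced by this change.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// executionRetentionCount returns how many job executions to retain,
// read from an environment variable with a fallback default. Both the
// variable name and the default of 50 are assumptions for this sketch.
func executionRetentionCount() int {
	const defaultCount = 50
	raw := os.Getenv("JOB_EXECUTION_RETENTION_COUNT")
	if raw == "" {
		return defaultCount
	}
	n, err := strconv.Atoi(raw)
	if err != nil || n < 0 {
		return defaultCount
	}
	return n
}

func main() {
	fmt.Println("retention count:", executionRetentionCount())
}
```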
miner 0c5d82e9d4
Update pipenv for prepare (#22124)
* update pipenv and lock

Signed-off-by: my036811 <miner.yang@broadcom.com>

* update pipenv

Signed-off-by: my036811 <miner.yang@broadcom.com>

---------

Signed-off-by: my036811 <miner.yang@broadcom.com>
2025-07-01 14:23:05 +08:00
stonezdj(Daojun Zhang) b8e3dd8fa0
change the pass-CI rules to exclude the resources and robot-cases folder (#22121)
Change the pass-CI rules to exclude the resources and robot-cases folder
   Pass HARBOR_ADMIN env to robot testcases

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-26 14:05:02 +08:00
Chethan e1e807072c
Update CHANGELOG.md, RELEASES.md and ROADMAP.md (#22095)
Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-23 07:21:44 +00:00
miner 937e5920a2
update robot case for getting the harbor version (#22104)
2025-06-23 15:10:51 +08:00
dependabot[bot] 918aac61a6
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.1.11 to 1.1.17 in /src (#22089)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.1.11 to 1.1.17.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.1.11...v1.1.17)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: miner <yminer@vmware.com>
2025-06-20 12:17:48 +00:00
stonezdj(Daojun Zhang) c0b22d8e24
Add network_type environment variable (#22097)
Use test_network_type to adapt to various network conditions in the test environment.

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-19 12:47:04 +00:00
Chlins Zhang 59c3de10a6
refactor: simplify some implementations using modern Go features (#21998)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-06-18 10:22:22 +00:00
Chethan b647032747
Update Swagger's readme.md (#22087)
Enhance the readability of the swagger Readme.md file by fixing minor errors

Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-17 10:45:28 +00:00
Prasanth Baskar ec9d13d107
fix: CVE Allowlist Validation (#22077)
fix: empty cve allowlist validation

- fixes empty entries and CVEs containing only spaces

fix: cve allowlist validation

add: tests for cve allowlist validation

fix: types for projectCVEAllowlist

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-06-17 15:53:00 +08:00
Chethan f46ef3b38d
Update contributing.md (#22082)
Fix minor errors in CONTRIBUTING.md

Signed-off-by: chethanm99 <chethanm1399@gmail.com>
2025-06-16 09:02:45 +00:00
dependabot[bot] 907c6c0900
chore(deps): bump helm.sh/helm/v3 from 3.17.2 to 3.18.2 in /src (#22060)
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.17.2 to 3.18.2.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.17.2...v3.18.2)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 03:25:35 +00:00
stonezdj(Daojun Zhang) 780a217122
Remove document link from Image Scanner (#22064)
fixes #22001

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-06-12 15:35:56 +08:00
Prasanth Baskar 145a10a8b9
Refactor: Simplify SearchAndOnBoardGroup Logic (#22058)
refactor: simplify SearchAndOnBoardGroup logic

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-06-10 08:36:06 +02:00
Chlins Zhang f46295aadb
chore: fix the arguments for codecov v5 (#22050)
2025-06-07 02:56:47 +00:00
Mohamed Awnallah e049fcd985
README.md: add artifact hub badge (#20736)
In this commit, we add the Artifact Hub
badge for the Harbor project to improve
its discoverability and its best practices
index on [clomonitor.io](https://clomonitor.io/projects/cncf/harbor)

Signed-off-by: Mohamed Awnallah <mohamedmohey2352@gmail.com>
Co-authored-by: Vadim Bauer <vb@container-registry.com>
2025-06-05 14:13:34 +03:00
dependabot[bot] 7dcdec94e2
chore(deps): bump github.com/volcengine/volcengine-go-sdk from 1.0.185 to 1.1.10 in /src (#22035)
chore(deps): bump github.com/volcengine/volcengine-go-sdk in /src

Bumps [github.com/volcengine/volcengine-go-sdk](https://github.com/volcengine/volcengine-go-sdk) from 1.0.185 to 1.1.10.
- [Release notes](https://github.com/volcengine/volcengine-go-sdk/releases)
- [Commits](https://github.com/volcengine/volcengine-go-sdk/compare/v1.0.185...v1.1.10)

---
updated-dependencies:
- dependency-name: github.com/volcengine/volcengine-go-sdk
  dependency-version: 1.1.10
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-29 16:56:48 +08:00
dependabot[bot] 3dee318a2e
chore(deps): bump k8s.io/client-go from 0.32.2 to 0.33.1 in /src (#22011)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.2 to 0.33.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.2...v0.33.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 10:54:24 +00:00
Jim Chen a546f99974
feat: add rate limiter for alibaba cloud acr adapter (#21953)
Signed-off-by: njucjc <njucjc@gmail.com>
2025-05-27 08:18:58 +00:00
dependabot[bot] 111fc1c03e
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go from 1.63.84 to 1.63.107 in /src (#21943)
chore(deps): bump github.com/aliyun/alibaba-cloud-sdk-go in /src

Bumps [github.com/aliyun/alibaba-cloud-sdk-go](https://github.com/aliyun/alibaba-cloud-sdk-go) from 1.63.84 to 1.63.107.
- [Release notes](https://github.com/aliyun/alibaba-cloud-sdk-go/releases)
- [Changelog](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/ChangeLog.txt)
- [Commits](https://github.com/aliyun/alibaba-cloud-sdk-go/compare/v1.63.84...v1.63.107)

---
updated-dependencies:
- dependency-name: github.com/aliyun/alibaba-cloud-sdk-go
  dependency-version: 1.63.107
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Orlix <7236111+OrlinVasilev@users.noreply.github.com>
2025-05-27 06:39:52 +00:00
dependabot[bot] 6f856cd6b1
chore(deps): bump aws-actions/configure-aws-credentials from 4.1.0 to 4.2.1 (#22003)
chore(deps): bump aws-actions/configure-aws-credentials

Bumps [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) from 4.1.0 to 4.2.1.
- [Release notes](https://github.com/aws-actions/configure-aws-credentials/releases)
- [Changelog](https://github.com/aws-actions/configure-aws-credentials/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws-actions/configure-aws-credentials/compare/v4.1.0...v4.2.1)

---
updated-dependencies:
- dependency-name: aws-actions/configure-aws-credentials
  dependency-version: 4.2.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 13:56:35 +08:00
Wang Yan 2faff8e6af
update storage to s3 (#21999)
move the build storage from google storage to the CNCF S3 storage

Currently, we use the internal GCR to store all dev builds for nightly testing, development, and as candidates for RC and GA releases. However, this internal Google storage will no longer be available, so this pull request moves the builds to the CNCF-hosted S3 storage.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-16 10:23:32 +00:00
Mile Druzijanic 424cdd8828
Test update for adding nil value to list (#21990)
test list update

Signed-off-by: miledxz <zedsprogramms@gmail.com>
Co-authored-by: Wang Yan <wangyan@vmware.com>
2025-05-15 06:28:49 +00:00
miner 3df34c5735
increase docker client timeout for robot case (#21994)
* increase docker client timeout for robot case

Signed-off-by: my036811 <miner.yang@broadcom.com>
Signed-off-by: miner <miner.yang@broadcom.com>
2025-05-13 11:45:09 +00:00
Wang Yan b4ba918118
bump up golang version to v1.24.3 (#21993)
* bump up golang version to v1.24.3

Signed-off-by: wang yan <wangyan@vmware.com>

* bump mockery version to support golang v2.14

Signed-off-by: wang yan <wangyan@vmware.com>

---------

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 17:09:35 +08:00
Wang Yan 85f3f792e4
update robot permission table (#21989)
fixes #21947

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 05:49:01 +00:00
Wang Yan 073dab8a07
add build flag for harbor exporter (#21988)
As the Harbor exporter is not a core component of the installation, add a flag, as with Trivy, to control whether it is packaged into the offline installer.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-05-13 13:11:55 +08:00
Prasanth Baskar ada851b49a
Fix: Helm Chart Copy Button in UI (#21969)
* fix: helm chart copy btn in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>

* add: tests for pull command component in UI

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-05-09 13:38:01 +08:00
Chlins Zhang 9e18bbc112
refactor: replace interface{} with any (#21973)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-05-08 11:02:49 +00:00
stonezdj(Daojun Zhang) 49df3b4362
Display gc progress information in running state (#21974)
fix #21411

Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-05-08 14:25:05 +08:00
Raphael Zöllner bc8653abc7
Add manifestcache push for tag and digest to local repository (#21141)
Signed-off-by: Raphael Zöllner <raphael.zoellner@regiocom.com>
2025-05-07 09:22:26 +00:00
stonezdj(Daojun Zhang) f684c1c36e
change python ./setup.py install to pip install . because setup.py install is deprecated (#21952)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-30 13:40:45 +08:00
Daniel Jiang 70306dca0c
Generate URI of token service via Host in request (#21898)
This commit updates the flow that generates the URL of the token service so that it
first tries to use the Host in the request. This helps when
Harbor is configured to serve via a hostname but some clients need to
pull artifacts from Harbor via IP due to limitations in their environment.

Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
2025-04-28 16:23:09 +08:00
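A hedged sketch of the fallback described in the commit above: prefer the Host of the incoming request when building the token service URL, and fall back to the configured external endpoint otherwise. The function name is illustrative, not Harbor's actual implementation, and the /service/token path is assumed for the example.

```go
package main

import (
	"fmt"
	"net/http"
)

// tokenServiceURL prefers the Host (and scheme) of the incoming request, so
// clients that reach Harbor by IP instead of the configured hostname still
// receive a token endpoint they can resolve. The configured external URL
// remains the fallback when no Host is present.
func tokenServiceURL(r *http.Request, configuredExternalURL string) string {
	if r != nil && r.Host != "" {
		scheme := "https"
		if r.TLS == nil {
			scheme = "http"
		}
		return fmt.Sprintf("%s://%s/service/token", scheme, r.Host)
	}
	return configuredExternalURL + "/service/token"
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "http://10.0.0.5/v2/", nil)
	fmt.Println(tokenServiceURL(req, "https://harbor.example.com")) // uses the request Host
	fmt.Println(tokenServiceURL(nil, "https://harbor.example.com")) // falls back to config
}
```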
Wang Yan b3cfe225db
unify the golang image version (#21935)
Make the golang version a unified parameter for building all Harbor components

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-27 14:07:19 +08:00
stonezdj(Daojun Zhang) bef66740ec
Update the severity, fixed version and cvss_score_v3 (#21915)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-25 06:30:58 +00:00
Prasanth Baskar 972965ff5a
FIX: Display 'No SBOM' in multi-arch images in HarborUI (#21459)
fix: handle multi-arch images with SBOMs in HarborUI

* Updated the `hasChild` method to check for the presence of
`child_digest` in the `references` array.
* This ensures that SBOMs are correctly displayed for multi-arch images,
where child artifacts may contain their own SBOMs.
* Previously, the "No SBOM" label was displayed for multi-arch images.

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-24 12:23:09 +00:00
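The actual fix lives in the Angular portal (TypeScript); purely to illustrate the check described above, here is the same idea sketched in Go: an artifact has children (and may carry per-architecture SBOMs) when any entry of its references array carries a child digest. The field and type names are assumptions.

```go
package main

import "fmt"

// reference is an illustrative stand-in for an entry in an artifact's
// `references` array; ChildDigest mirrors the `child_digest` field the UI
// now checks for.
type reference struct {
	ChildDigest string
}

type artifact struct {
	References []reference
}

// hasChild reports whether the artifact is an index whose references point
// at child artifacts, which is the condition the portal's hasChild method
// now uses before deciding how to display SBOM information.
func hasChild(a artifact) bool {
	for _, ref := range a.References {
		if ref.ChildDigest != "" {
			return true
		}
	}
	return false
}

func main() {
	multiArch := artifact{References: []reference{{ChildDigest: "sha256:abc"}, {ChildDigest: "sha256:def"}}}
	single := artifact{}
	fmt.Println(hasChild(multiArch), hasChild(single)) // true false
}
```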
Wang Yan 187f1a9ffb
enhance the query judgement (#21924)
The query parameter cannot contain orm.ExerSep, which is a reserved character sequence used by the ORM.
This pull request enhances the validation of query parameters.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-24 18:20:33 +08:00
stonezdj(Daojun Zhang) ff2f4b0e71
Remove the error check that can never happen (#21916)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-24 02:41:16 +00:00
Prasanth Baskar 9850f1404d
Add missing step in e2e pipeline setup (#21888)
add missing step in e2e

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-23 03:02:26 +00:00
Wang Yan ad7be0b42f
revise makefile for lint api (#21906)
Decouple the lint step from the API generation step in the Makefile.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-22 13:36:37 +08:00
Bin Liu 6772477e8a
fix: check blob existence before copying layers smaller than chunk size (#21883)
`copyBlobByChunk()` should, like `copyBlob()`, first try to mount an
existing layer; only if the layer is neither mounted nor already present
should it copy the layer monolithically or by chunks.

Signed-off-by: Bin Liu <liubin0329@gmail.com>
Signed-off-by: Bin Liu <lb203159@antfin.com>
2025-04-21 08:08:51 +00:00
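A hedged sketch of the control flow being fixed: check whether the blob already exists and try a cross-repository mount before falling back to the chunked copy, mirroring what `copyBlob()` already does. The interfaces and names below are illustrative, not Harbor's replication code.

```go
package main

import "fmt"

// blobStore is a hypothetical abstraction over the destination registry;
// Harbor's real replication code uses different interfaces.
type blobStore interface {
	BlobExists(digest string) (bool, error)
	MountBlob(digest, fromRepo string) (bool, error) // true when the mount succeeded
	CopyByChunk(digest string) error
}

// copyBlobByChunk mirrors the fix described above: like copyBlob, it first
// checks whether the layer already exists and tries to mount it; only when
// neither succeeds does it fall back to the chunked copy.
func copyBlobByChunk(dst blobStore, digest, fromRepo string) error {
	if exists, err := dst.BlobExists(digest); err == nil && exists {
		return nil // already present, nothing to transfer
	}
	if mounted, err := dst.MountBlob(digest, fromRepo); err == nil && mounted {
		return nil // mounted from another repository, no transfer needed
	}
	return dst.CopyByChunk(digest)
}

// fakeStore is a tiny in-memory stand-in used only to exercise the flow.
type fakeStore struct{ existing map[string]bool }

func (f *fakeStore) BlobExists(d string) (bool, error)      { return f.existing[d], nil }
func (f *fakeStore) MountBlob(d, from string) (bool, error) { return false, nil }
func (f *fakeStore) CopyByChunk(d string) error {
	f.existing[d] = true
	fmt.Println("chunked copy of", d)
	return nil
}

func main() {
	dst := &fakeStore{existing: map[string]bool{"sha256:aaa": true}}
	_ = copyBlobByChunk(dst, "sha256:aaa", "library/base") // exists: no copy performed
	_ = copyBlobByChunk(dst, "sha256:bbb", "library/base") // falls back to chunked copy
}
```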
stonezdj(Daojun Zhang) a13a16383a
update artifact info (#21902)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-21 15:24:04 +08:00
Wang Yan b58a60e273
update GitHub Actions runner image to 22.04 (#21900)
Per https://github.com/actions/runner-images/issues/11101, Ubuntu 20.04 is out of support. Update it to 22.04.

Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-21 10:13:21 +08:00
Chlins Zhang 9dcbd56e52
chore: bump golangci-lint to v2 (#21887)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-17 14:28:55 +08:00
miner f8f1994c9e
make jobservice container loglevel consistent with job_log (#21874)
Signed-off-by: yminer <miner.yang@broadcom.com>
2025-04-15 14:07:39 +08:00
Chlins Zhang bfc29904f9
fix: support preheat cnai model artifact (#21849)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-08 19:49:46 +08:00
stonezdj(Daojun Zhang) 259c8a2053
Update robot testcase related to security hub row count to 15 by default (#21846)
Signed-off-by: stonezdj <stone.zhang@broadcom.com>
2025-04-08 08:35:02 +00:00
Prasanth Baskar 7ad799c7c7
Update dependencies in Harbor UI (#21823)
* deps: update src/portal/app-swagger-ui

Signed-off-by: bupd <bupdprasanth@gmail.com>

* deps: update swagger-ui

Signed-off-by: bupd <bupdprasanth@gmail.com>

* deps: update src/portal

Signed-off-by: bupd <bupdprasanth@gmail.com>

---------

Signed-off-by: bupd <bupdprasanth@gmail.com>
2025-04-07 16:26:52 +08:00
Wang Yan d0917e3e66
bump base version for v2.14 (#21819)
Signed-off-by: wang yan <wangyan@vmware.com>
2025-04-03 16:04:10 +08:00
688 changed files with 6423 additions and 3970 deletions

View File

@ -91,7 +91,7 @@ jobs:
- name: Codecov For BackEnd
uses: codecov/codecov-action@v5
with:
file: ./src/github.com/goharbor/harbor/profile.cov
files: ./src/github.com/goharbor/harbor/profile.cov
flags: unittests
APITEST_DB:
@ -333,5 +333,5 @@ jobs:
- name: Codecov For UI
uses: codecov/codecov-action@v5
with:
file: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
files: ./src/github.com/goharbor/harbor/src/portal/coverage/lcov.info
flags: unittests

View File

@ -13,16 +13,14 @@ jobs:
env:
BUILD_PACKAGE: true
runs-on:
- ubuntu-20.04
- ubuntu-22.04
steps:
- uses: actions/checkout@v3
- uses: 'google-github-actions/auth@v2'
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
- run: gcloud info
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Set up Go 1.22
uses: actions/setup-go@v5
with:
@ -89,8 +87,8 @@ jobs:
else
build_base_params=" BUILD_BASE=true PUSHBASEIMAGE=true REGISTRYUSER=\"${{ secrets.DOCKER_HUB_USERNAME }}\" REGISTRYPASSWORD=\"${{ secrets.DOCKER_HUB_PASSWORD }}\""
fi
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_offline GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
sudo make package_online GOBUILDTAGS="include_oss include_gcs" BASEIMAGETAG=${Harbor_Build_Base_Tag} VERSIONTAG=${Harbor_Assets_Version} PKGVERSIONTAG=${Harbor_Package_Version} TRIVYFLAG=true EXPORTERFLAG=true HTTPPROXY= ${build_base_params}
harbor_offline_build_bundle=$(basename harbor-offline-installer-*.tgz)
harbor_online_build_bundle=$(basename harbor-online-installer-*.tgz)
echo "Package name is: $harbor_offline_build_bundle"

View File

@ -17,14 +17,12 @@ jobs:
#- self-hosted
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- id: 'auth'
name: 'Authenticate to Google Cloud'
uses: google-github-actions/auth@v2
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
- run: gcloud info
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Set up Go 1.21
uses: actions/setup-go@v5
with:
@ -65,6 +63,5 @@ jobs:
- name: upload test result to gs
run: |
cd src/github.com/goharbor/harbor
gsutil cp ./distribution-spec/conformance/report.html gs://harbor-conformance-test/report.html
gsutil acl ch -u AllUsers:R gs://harbor-conformance-test/report.html
aws s3 cp ./distribution-spec/conformance/report.html s3://harbor-conformance-test/report.html
if: always()

View File

@ -9,6 +9,9 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
push:
paths:
- 'docs/**'
@ -17,6 +20,9 @@ on:
- '!tests/**.sh'
- '!tests/apitests/**'
- '!tests/ci/**'
- '!tests/resources/**'
- '!tests/robot-cases/**'
- '!tests/robot-cases/Group1-Nightly/**'
jobs:
UTTEST:

View File

@ -7,7 +7,7 @@ on:
jobs:
release:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- name: Setup env
@ -19,12 +19,12 @@ jobs:
echo "PRE_TAG=$(echo $release | jq -r '.body' | jq -r '.preTag')" >> $GITHUB_ENV
echo "BRANCH=$(echo $release | jq -r '.target_commitish')" >> $GITHUB_ENV
echo "PRERELEASE=$(echo $release | jq -r '.prerelease')" >> $GITHUB_ENV
- uses: 'google-github-actions/auth@v2'
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4.2.1
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
- uses: google-github-actions/setup-gcloud@v2
with:
version: '430.0.0'
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Prepare Assets
run: |
if [ ! ${{ env.BUILD_NO }} -o ${{ env.BUILD_NO }} = "null" ]
@ -39,8 +39,8 @@ jobs:
src_online_package=harbor-online-installer-${{ env.BASE_TAG }}-${{ env.BUILD_NO }}.tgz
dst_offline_package=harbor-offline-installer-${{ env.CUR_TAG }}.tgz
dst_online_package=harbor-online-installer-${{ env.CUR_TAG }}.tgz
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
gsutil cp gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} gs://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_offline_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_offline_package}
aws s3 cp s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${src_online_package} s3://${{ secrets.HARBOR_RELEASE_BUILD }}/${{ env.BRANCH }}/${dst_online_package}
assets_path=$(pwd)/assets
source tools/release/release_utils.sh && getAssets ${{ secrets.HARBOR_RELEASE_BUILD }} ${{ env.BRANCH }} $dst_offline_package $dst_online_package ${{ env.PRERELEASE }} $assets_path

View File

@ -31,10 +31,10 @@ API explorer integration. End users can now explore and trigger Harbors API v
* Support Image Retag, enables the user to tag image to different repositories and projects, this is particularly useful in cases when images need to be retagged programmatically in a CI pipeline.
* Support Image Build History, makes it easy to see the contents of a container image, refer to the [User Guide](https://github.com/goharbor/harbor/blob/release-1.7.0/docs/user_guide.md#build-history).
* Support Logger customization, enables the user to customize STDOUT / STDERR / FILE / DB logger of running jobs.
* Improve user experience of Helm Chart Repository:
- Chart searching included in the global search results
- Show chart versions total number in the chart list
- Mark labels to helm charts
* Improve the user experience of Helm Chart Repository:
- Chart searching is included in the global search results
- Show the total number of chart versions in the chart list
- Mark labels in helm charts
- The latest version can be downloaded as default one on the chart list view
- The chart can be deleted by deleting all the versions under it
@ -58,7 +58,7 @@ API explorer integration. End users can now explore and trigger Harbors API v
- Replication policy rework to support wildcard, scheduled replication.
- Support repository level description.
- Batch operation on projects/repositories/users from UI.
- On board LDAP user when adding member to a project.
- On board LDAP user when adding a member to a project.
## v1.3.0 (2018-01-04)
@ -75,11 +75,11 @@ API explorer integration. End users can now explore and trigger Harbors API v
## v1.1.0 (2017-04-18)
- Add in Notary support
- User can update configuration through Harbor UI
- User can update the configuration through Harbor UI
- Redesign of Harbor's UI using Clarity
- Some changes to API
- Fix some security issues in token service
- Upgrade base image of nginx for latest openssl version
- Fix some security issues in the token service
- Upgrade the base image of nginx to the latest openssl version
- Various bug fixes.
## v0.5.0 (2016-12-6)
@ -88,7 +88,7 @@ API explorer integration. End users can now explore and trigger Harbors API v
- Easier configuration for HTTPS in prepare script
- Script to collect logs of a Harbor deployment
- User can view the storage usage (default location) of Harbor.
- Add an attribute to disable normal user to create project
- Add an attribute to disable normal users from creating projects.
- Various bug fixes.
For Harbor virtual appliance:

View File

@ -14,7 +14,7 @@ Contributors are encouraged to collaborate using the following resources in addi
* Chat with us on the CNCF Slack ([get an invitation here][cncf-slack] )
* [#harbor][users-slack] for end-user discussions
* [#harbor-dev][dev-slack] for development of Harbor
* Want long-form communication instead of Slack? We have two distributions lists:
* Want long-form communication instead of Slack? We have two distribution lists:
* [harbor-users][users-dl] for end-user discussions
* [harbor-dev][dev-dl] for development of Harbor
@ -49,7 +49,7 @@ To build the project, please refer the [build](https://goharbor.io/docs/edge/bui
### Repository Structure
Here is the basic structure of the harbor code base. Some key folders / files are commented for your references.
Here is the basic structure of the Harbor code base. Some key folders / files are commented for your reference.
```
.
...
@ -168,13 +168,14 @@ Harbor backend is written in [Go](http://golang.org/). If you don't have a Harbo
| 2.11 | 1.22.3 |
| 2.12 | 1.23.2 |
| 2.13 | 1.23.8 |
| 2.14 | 1.24.5 |
Ensure your GOPATH and PATH have been configured in accordance with the Go environment instructions.
#### Web
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup web UI development environment, please make sure the [npm](https://www.npmjs.com/get-npm) tool is installed first.
Harbor web UI is built based on [Clarity](https://vmware.github.io/clarity/) and [Angular](https://angular.io/) web framework. To setup a web UI development environment, please make sure that the [npm](https://www.npmjs.com/get-npm) tool is installed first.
| Harbor | Requires Angular | Requires Clarity |
|----------|--------------------|--------------------|
@ -204,7 +205,7 @@ PR are always welcome, even if they only contain small fixes like typos or a few
Please submit a PR broken down into small changes bit by bit. A PR consisting of a lot of features and code changes may be hard to review. It is recommended to submit PRs in an incremental fashion.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature complete.
Note: If you split your pull request to small changes, please make sure any of the changes goes to `main` will not break anything. Otherwise, it can not be merged until this feature completed.
### Fork and clone
@ -278,7 +279,7 @@ To build the code, please refer to [build](https://goharbor.io/docs/edge/build-c
**Note**: from v2.0, Harbor uses [go-swagger](https://github.com/go-swagger/go-swagger) to generate API server from Swagger 2.0 (aka [OpenAPI 2.0](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md)). To add or change the APIs, first update the `api/v2.0/swagger.yaml` file, then run `make gen_apis` to generate the API server, finally, implement or update the API handlers in `src/server/v2.0/handler` package.
As now Harbor uses `controller/manager/dao` programming model, we suggest to use [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
As Harbor now uses `controller/manager/dao` programming model, we suggest using [testify mock](https://github.com/stretchr/testify/blob/master/mock/doc.go) to test `controller` and `manager`. Harbor integrates [mockery](https://github.com/vektra/mockery) to generate mocks for golang interfaces using the testify mock package. To generate mocks for the interface, first add mock config in the `src/.mockery.yaml`, then run `make gen_mocks` to generate mocks.
### Keep sync with upstream
@ -317,15 +318,15 @@ curl https://cdn.jsdelivr.net/gh/tommarshall/git-good-commit@v0.6.1/hook.sh > .g
```
### Automated Testing
Once your pull request has been opened, harbor will run two CI pipelines against it.
Once your pull request has been opened, Harbor will run two CI pipelines against it.
1. In the travis CI, your source code will be checked via `golint`, `go vet` and `go race` that makes sure the code is readable, safe and correct. Also, all of unit tests will be triggered via `go test` against the pull request. What you need to pay attention to is the travis result and the coverage report.
* If any failure in travis, you need to figure out whether it is introduced by your commits.
* If the coverage dramatic decline, you need to commit unit test to coverage your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in google storage for later analysis. The pipeline is about to build and install harbor from source code, then to run four very basic E2E tests to validate the basic functionalities of harbor, like:
* Registry Basic Verification, to validate the image can be pulled and pushed successful.
* Trivy Basic Verification, to validate the image can be scanned successful.
* Notary Basic Verification, to validate the image can be signed successful.
* Ldap Basic Verification, to validate harbor can work in LDAP environment.
* If the coverage dramatically declines, then you need to commit a unit test to cover your code.
2. In the drone CI, the E2E test will be triggered against the pull request. Also, the source code will be checked via `gosec`, and the result is stored in google storage for later analysis. The pipeline is about to build and install harbor from source code, then to run four very basic E2E tests to validate the basic functionalities of Harbor, like:
* Registry Basic Verification, to validate that the image can be pulled and pushed successfully.
* Trivy Basic Verification, to validate that the image can be scanned successfully.
* Notary Basic Verification, to validate that the image can be signed successfully.
* Ldap Basic Verification, to validate that Harbor can work in LDAP environment.
### Push and Create PR
When ready for review, push your branch to your fork repository on `github.com`:
@ -344,7 +345,7 @@ Commit changes made in response to review comments to the same branch on your fo
It is a great way to contribute to Harbor by reporting an issue. Well-written and complete bug reports are always welcome! Please open an issue on GitHub and follow the template to fill in required information.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplication.
Before opening any issue, please look up the existing [issues](https://github.com/goharbor/harbor/issues) to avoid submitting a duplicate.
If you find a match, you can "subscribe" to it to get notified on updates. If you have additional helpful information about the issue, please leave a comment.
When reporting issues, always include:

View File

@ -78,6 +78,7 @@ REGISTRYSERVER=
REGISTRYPROJECTNAME=goharbor
DEVFLAG=true
TRIVYFLAG=false
EXPORTERFLAG=false
HTTPPROXY=
BUILDREG=true
BUILDTRIVYADP=true
@ -92,7 +93,12 @@ VERSIONTAG=dev
BUILD_BASE=true
PUSHBASEIMAGE=false
BASEIMAGETAG=dev
BUILDBASETARGET=trivy-adapter core db jobservice log nginx portal prepare redis registry registryctl exporter
# for skip build prepare and log container while BUILD_INSTALLER=false
BUILD_INSTALLER=true
BUILDBASETARGET=trivy-adapter core db jobservice nginx portal redis registry registryctl exporter
ifeq ($(BUILD_INSTALLER), true)
BUILDBASETARGET += prepare log
endif
IMAGENAMESPACE=goharbor
BASEIMAGENAMESPACE=goharbor
# #input true/false only
@ -129,6 +135,7 @@ endef
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build
DOCKERNETWORK=default
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERPULL=$(DOCKERCMD) pull
DOCKERIMAGES=$(DOCKERCMD) images
@ -144,7 +151,7 @@ GOINSTALL=$(GOCMD) install
GOTEST=$(GOCMD) test
GODEP=$(GOTEST) -i
GOFMT=gofmt -w
GOBUILDIMAGE=golang:1.23.8
GOBUILDIMAGE=golang:1.24.5
GOBUILDPATHINCONTAINER=/harbor
# go build
@ -238,18 +245,27 @@ REGISTRYUSER=
REGISTRYPASSWORD=
# cmds
DOCKERSAVE_PARA=$(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
DOCKERSAVE_PARA=$(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_CORE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG) \
$(DOCKERIMAGENAME_DB):$(VERSIONTAG) \
$(DOCKERIMAGENAME_JOBSERVICE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_REGCTL):$(VERSIONTAG) \
$(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG) \
$(IMAGENAMESPACE)/redis-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/nginx-photon:$(VERSIONTAG) \
$(IMAGENAMESPACE)/registry-photon:$(VERSIONTAG)
ifeq ($(BUILD_INSTALLER), true)
DOCKERSAVE_PARA+= $(DOCKER_IMAGE_NAME_PREPARE):$(VERSIONTAG) \
$(DOCKERIMAGENAME_LOG):$(VERSIONTAG)
endif
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
ifeq ($(EXPORTERFLAG), true)
DOCKERSAVE_PARA+= $(DOCKERIMAGENAME_EXPORTER):$(VERSIONTAG)
endif
PACKAGE_OFFLINE_PARA=-zcvf harbor-offline-installer-$(PKGVERSIONTAG).tgz \
$(HARBORPKG)/$(DOCKERIMGFILE).$(VERSIONTAG).tar.gz \
$(HARBORPKG)/prepare \
@ -266,11 +282,6 @@ PACKAGE_ONLINE_PARA=-zcvf harbor-online-installer-$(PKGVERSIONTAG).tgz \
DOCKERCOMPOSE_FILE_OPT=-f $(DOCKERCOMPOSEFILEPATH)/$(DOCKERCOMPOSEFILENAME)
ifeq ($(TRIVYFLAG), true)
DOCKERSAVE_PARA+= $(IMAGENAMESPACE)/trivy-adapter-photon:$(VERSIONTAG)
endif
RUNCONTAINER=$(DOCKERCMD) run --rm -u $(shell id -u):$(shell id -g) -v $(BUILDPATH):$(BUILDPATH) -w $(BUILDPATH)
# $1 the name of the docker image
@ -308,13 +319,13 @@ define swagger_generate_server
@$(SWAGGER_GENERATE_SERVER) -f $(1) -A $(3) --target $(2)
endef
gen_apis: lint_apis
gen_apis:
$(call prepare_docker_image,${SWAGGER_IMAGENAME},${SWAGGER_VERSION},${SWAGGER_IMAGE_BUILD_CMD})
$(call swagger_generate_server,api/v2.0/swagger.yaml,src/server/v2.0,harbor)
MOCKERY_IMAGENAME=$(IMAGENAMESPACE)/mockery
MOCKERY_VERSION=v2.51.0
MOCKERY_VERSION=v2.53.3
MOCKERY=$(RUNCONTAINER)/src ${MOCKERY_IMAGENAME}:${MOCKERY_VERSION}
MOCKERY_IMAGE_BUILD_CMD=${DOCKERBUILD} -f ${TOOLSPATH}/mockery/Dockerfile --build-arg GOLANG=${GOBUILDIMAGE} --build-arg MOCKERY_VERSION=${MOCKERY_VERSION} -t ${MOCKERY_IMAGENAME}:$(MOCKERY_VERSION) .
@ -338,7 +349,7 @@ versions_prepare:
check_environment:
@$(MAKEPATH)/$(CHECKENVCMD)
compile_core: gen_apis
compile_core: lint_apis gen_apis
@echo "compiling binary for core (golang image)..."
@echo $(GOBUILDPATHINCONTAINER)
@$(DOCKERCMD) run --rm -v $(BUILDPATH):$(GOBUILDPATHINCONTAINER) -w $(GOBUILDPATH_CORE) $(GOBUILDIMAGE) $(GOIMAGEBUILD_CORE) -o $(GOBUILDPATHINCONTAINER)/$(GOBUILDMAKEPATH_CORE)/$(CORE_BINARYNAME)
@ -393,13 +404,15 @@ build:
-e REGISTRYVERSION=$(REGISTRYVERSION) -e REGISTRY_SRC_TAG=$(REGISTRY_SRC_TAG) -e DISTRIBUTION_SRC=$(DISTRIBUTION_SRC)\
-e TRIVYVERSION=$(TRIVYVERSION) -e TRIVYADAPTERVERSION=$(TRIVYADAPTERVERSION) \
-e VERSIONTAG=$(VERSIONTAG) \
-e DOCKERNETWORK=$(DOCKERNETWORK) \
-e BUILDREG=$(BUILDREG) -e BUILDTRIVYADP=$(BUILDTRIVYADP) \
-e BUILD_INSTALLER=$(BUILD_INSTALLER) \
-e NPM_REGISTRY=$(NPM_REGISTRY) -e BASEIMAGETAG=$(BASEIMAGETAG) -e IMAGENAMESPACE=$(IMAGENAMESPACE) -e BASEIMAGENAMESPACE=$(BASEIMAGENAMESPACE) \
-e REGISTRYURL=$(REGISTRYURL) \
-e TRIVY_DOWNLOAD_URL=$(TRIVY_DOWNLOAD_URL) -e TRIVY_ADAPTER_DOWNLOAD_URL=$(TRIVY_ADAPTER_DOWNLOAD_URL) \
-e PULL_BASE_FROM_DOCKERHUB=$(PULL_BASE_FROM_DOCKERHUB) -e BUILD_BASE=$(BUILD_BASE) \
-e REGISTRYUSER=$(REGISTRYUSER) -e REGISTRYPASSWORD=$(REGISTRYPASSWORD) \
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE)
-e PUSHBASEIMAGE=$(PUSHBASEIMAGE) -e GOBUILDIMAGE=$(GOBUILDIMAGE)
build_standalone_db_migrator: compile_standalone_db_migrator
make -f $(MAKEFILEPATH_PHOTON)/Makefile _build_standalone_db_migrator -e BASEIMAGETAG=$(BASEIMAGETAG) -e VERSIONTAG=$(VERSIONTAG)
@ -440,7 +453,14 @@ package_online: update_prepare_version
@rm -rf $(HARBORPKG)
@echo "Done."
package_offline: update_prepare_version compile build
.PHONY: check_buildinstaller
check_buildinstaller:
@if [ "$(BUILD_INSTALLER)" != "true" ]; then \
echo "Must set BUILD_INSTALLER as true while triggering package_offline build" ; \
exit 1; \
fi
package_offline: check_buildinstaller update_prepare_version compile build
@echo "packing offline package ..."
@cp -r make $(HARBORPKG)
@ -471,7 +491,7 @@ misspell:
@find . -type d \( -path ./tests \) -prune -o -name '*.go' -print | xargs misspell -error
# golangci-lint binary installation or refer to https://golangci-lint.run/usage/install/#local-installation
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.55.2
# curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.1.2
GOLANGCI_LINT := $(shell go env GOPATH)/bin/golangci-lint
lint:
@echo checking lint
@ -539,7 +559,7 @@ swagger_client:
rm -rf harborclient
mkdir -p harborclient/harbor_v2_swagger_client
java -jar openapi-generator-cli.jar generate -i api/v2.0/swagger.yaml -g python -o harborclient/harbor_v2_swagger_client --package-name v2_swagger_client
cd harborclient/harbor_v2_swagger_client; python ./setup.py install
cd harborclient/harbor_v2_swagger_client; pip install .
pip install docker -q
pip freeze

View File

@ -9,6 +9,7 @@
[![Nightly Status](https://us-central1-eminent-nation-87317.cloudfunctions.net/harbor-nightly-result)](https://www.googleapis.com/storage/v1/b/harbor-nightly/o)
![CONFORMANCE_TEST](https://github.com/goharbor/harbor/workflows/CONFORMANCE_TEST/badge.svg)
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fgoharbor%2Fharbor?ref=badge_shield)
[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/harbor)](https://artifacthub.io/packages/helm/harbor/harbor)
</br>
|![notification](https://raw.githubusercontent.com/goharbor/website/master/docs/img/readme/bell-outline-badged.svg)Community Meeting|

View File

@ -1,28 +1,27 @@
# Versioning and Release
This document describes the versioning and release process of Harbor. This document is a living document, contents will be updated according to each release.
This document describes the versioning and release process of Harbor. This document is a living document, it's contents will be updated according to each release.
## Releases
Harbor releases will be versioned using dotted triples, similar to [Semantic Version](http://semver.org/). For this specific document, we will refer to the respective components of this triple as `<major>.<minor>.<patch>`. The version number may have additional information, such as "-rc1,-rc2,-rc3" to mark release candidate builds for earlier access. Such releases will be considered as "pre-releases".
### Major and Minor Releases
Major and minor releases of Harbor will be branched from `main` when the release reaches to `RC(release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches to RC, a branch will be created with the format `release-1.0.0`. When the release reaches to `GA(General Available)` state, The tag with format `v<major>.<minor>.<patch>` and should be made with command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months, might be adjusted based on open source event, but will communicate it clearly.
Major and minor releases of Harbor will be branched from `main` when the release reaches to `RC(release candidate)` state. The branch format should follow `release-<major>.<minor>.0`. For example, once the release `v1.0.0` reaches to RC, a branch will be created with the format `release-1.0.0`. When the release reaches to `GA(General Available)` state, the tag with format `v<major>.<minor>.<patch>` and should be made with the command `git tag -s v<major>.<minor>.<patch>`. The release cadence is around 3 months, might be adjusted based on open source events, but will communicate it clearly.
### Patch releases
Patch releases are based on the major/minor release branch, the release cadence for patch release of recent minor release is one month to solve critical community and security issues. The cadence for patch release of recent minus two minor releases are on-demand driven based on the severity of the issue to be fixed.
### Pre-releases
`Pre-releases:mainly the different RC builds` will be compiled from their corresponding branches. Please note they are done to assist in the stabilization process, no guarantees are provided.
`Pre-releases:mainly the different RC builds` will be compiled from their corresponding branches. Please note that they are done to assist in the stabilization process, no guarantees are provided.
### Minor Release Support Matrix
| Version | Supported |
|----------------| ------------------ |
| Harbor v2.13.x | :white_check_mark: |
| Harbor v2.12.x | :white_check_mark: |
| Harbor v2.11.x | :white_check_mark: |
| Harbor v2.10.x | :white_check_mark: |
### Upgrade path and support policy
The upgrade path for Harbor is (1) 2.2.x patch releases are always compatible with its major and minor version. For example, previous released 2.2.x can be upgraded to most recent 2.2.3 release. (2) Harbor only supports two previous minor releases to upgrade to current minor release. For example, 2.3.0 will only support 2.1.0 and 2.2.0 to upgrade from, 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The upgrade path for Harbor is (1) 2.2.x patch releases are always compatible with its major and minor versions. For example, previous released 2.2.x can be upgraded to most recent 2.2.3 release. (2) Harbor only supports two previous minor releases to upgrade to current minor release. For example, 2.3.0 will only support 2.1.0 and 2.2.0 to upgrade from, 2.0.0 to 2.3.0 is not supported. One should upgrade to 2.2.0 first, then to 2.3.0.
The Harbor project maintains release branches for the three most recent minor releases, each minor release will be maintained for approximately 9 months.
### Next Release
@ -33,12 +32,12 @@ The activity for next release will be tracked in the [up-to-date project board](
The following steps outline what to do when it's time to plan for and publish a release. Depending on the release (major/minor/patch), not all the following items are needed.
1. Prepare information about what's new in the release.
* For every release, update documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, update the documentation for changes that have happened in the release. See the [goharbor/website](https://github.com/goharbor/website) repo for more details on how to create documentation for a release. All documentation for a release should be published by the time the release is out.
* For every release, write release notes. See [previous releases](https://github.com/goharbor/harbor/releases) for examples of what to include in release notes.
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (all blogs don't have to be published all at once).
* For a major/minor release, write a blog post that highlights new features in the release. Plan to publish this on the same day as the release. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. If there are any new features or workflows introduced in a release, consider writing additional blog posts to help users learn about the new features. Plan to publish these after the release date (all blogs don't have to be published all at once).
1. Release a new version. Make the new version, docs updates, and blog posts available.
1. Announce the release and thank contributors. We should be doing the following for all releases.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community member contribution included in the release.
* In all messages to the community include a brief list of highlights and links to the new release blog, release notes, or download location. Also include shoutouts to community members' contributions included in the release.
* Send an email to the community via the [mailing list](https://lists.cncf.io/g/harbor-users)
* Post a message in the Harbor [slack channel](https://cloud-native.slack.com/archives/CC1E09J6S)
* Post to social media. Maintainers are encouraged to also post or repost from the Harbor account to help spread the word.

View File

@ -9,11 +9,11 @@ This document provides a link to the [Harbor Project board](https://github.com/o
Discussion on the roadmap can take place in threads under [Issues](https://github.com/goharbor/harbor/issues) or in [community meetings](https://goharbor.io/community/). Please open and comment on an issue if you want to provide suggestions and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts to improve Harbor.
Please open an issue to track any initiative on the roadmap of Harbor (Usually driven by new feature requests). We will work with and rely on our community to focus our efforts on improving Harbor.
### Current Roadmap
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
The following table includes the current roadmap for Harbor. If you have any questions or would like to contribute to Harbor, please attend a [community meeting](https://goharbor.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors who will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Harbor.
`Last Updated: June 2022`
@ -49,4 +49,4 @@ The following table includes the current roadmap for Harbor. If you have any que
|I&AM and RBAC|Improved Multi-tenancy through granular access and ability to manage teams of users and robot accounts through workspaces|Dec 2020|
|Observability|Expose Harbor metrics through Prometheus Integration|Mar 2021|
|Tracing|Leverage OpenTelemetry for enhanced tracing capabilities and identify bottlenecks and improve performance |Mar 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persisting image signatures across image replications|Apr 2021|
|Image Signing|Leverage Sigstore Cosign to deliver persistent image signatures across image replications|Apr 2021|

View File

@ -1 +1 @@
v2.13.0
v2.14.0

View File

@ -336,6 +336,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'404':
$ref: '#/responses/404'
'500':
@ -3029,6 +3031,8 @@ paths:
type: string
'401':
$ref: '#/responses/401'
'409':
$ref: '#/responses/409'
'500':
$ref: '#/responses/500'
'/usergroups/{group_id}':
@ -3560,6 +3564,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -3998,6 +4004,8 @@ paths:
responses:
'200':
$ref: '#/responses/200'
'400':
$ref: '#/responses/400'
'401':
$ref: '#/responses/401'
'403':
@ -6138,6 +6146,7 @@ paths:
cve_id(exact match)
cvss_score_v3(range condition)
severity(exact match)
status(exact match)
repository_name(exact match)
project_id(exact match)
package(exact match)
@ -10066,6 +10075,9 @@ definitions:
severity:
type: string
description: the severity of the vulnerability
status:
type: string
description: the status of the vulnerability, example "fixed", "won't fix"
cvss_v3_score:
type: number
format: float

View File

@ -0,0 +1,9 @@
ALTER TABLE role_permission ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE role_permission_id_seq AS BIGINT;
ALTER TABLE permission_policy ALTER COLUMN id TYPE BIGINT;
ALTER SEQUENCE permission_policy_id_seq AS BIGINT;
ALTER TABLE role_permission ALTER COLUMN permission_policy_id TYPE BIGINT;
ALTER TABLE vulnerability_record ADD COLUMN IF NOT EXISTS status text;

View File

@ -18,7 +18,7 @@ TIMESTAMP=$(shell date +"%Y%m%d")
# docker parameters
DOCKERCMD=$(shell which docker)
DOCKERBUILD=$(DOCKERCMD) build --no-cache
DOCKERBUILD=$(DOCKERCMD) build --no-cache --network=$(DOCKERNETWORK)
DOCKERBUILD_WITH_PULL_PARA=$(DOCKERBUILD) --pull=$(PULL_BASE_FROM_DOCKERHUB)
DOCKERRMIMAGE=$(DOCKERCMD) rmi
DOCKERIMAGES=$(DOCKERCMD) images
@ -154,7 +154,7 @@ _build_trivy_adapter:
$(call _extract_archive, $(TRIVY_ADAPTER_DOWNLOAD_URL), $(DOCKERFILEPATH_TRIVY_ADAPTER)/binary/) ; \
else \
echo "Building Trivy adapter $(TRIVYADAPTERVERSION) from sources..." ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) && cd - ; \
cd $(DOCKERFILEPATH_TRIVY_ADAPTER) && $(DOCKERFILEPATH_TRIVY_ADAPTER)/builder.sh $(TRIVYADAPTERVERSION) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
fi ; \
echo "Building Trivy adapter container for photon..." ; \
$(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) \
@ -178,7 +178,7 @@ _build_registry:
rm -rf $(DOCKERFILEPATH_REG)/binary && mkdir -p $(DOCKERFILEPATH_REG)/binary && \
$(call _get_binary, $(REGISTRYURL), $(DOCKERFILEPATH_REG)/binary/registry); \
else \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) && cd - ; \
cd $(DOCKERFILEPATH_REG) && $(DOCKERFILEPATH_REG)/builder $(REGISTRY_SRC_TAG) $(DISTRIBUTION_SRC) $(GOBUILDIMAGE) $(DOCKERNETWORK) && cd - ; \
fi
@echo "building registry container for photon..."
@chmod 655 $(DOCKERFILEPATH_REG)/binary/registry && $(DOCKERBUILD_WITH_PULL_PARA) --build-arg harbor_base_image_version=$(BASEIMAGETAG) --build-arg harbor_base_namespace=$(BASEIMAGENAMESPACE) -f $(DOCKERFILEPATH_REG)/$(DOCKERFILENAME_REG) -t $(DOCKERIMAGENAME_REG):$(VERSIONTAG) .
@ -233,10 +233,17 @@ define _build_base
fi
endef
build: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
ifeq ($(BUILD_INSTALLER), true)
buildcompt: _build_prepare _build_db _build_portal _build_core _build_jobservice _build_log _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
else
buildcompt: _build_db _build_portal _build_core _build_jobservice _build_nginx _build_registry _build_registryctl _build_trivy_adapter _build_redis _compile_and_build_exporter
endif
build: buildcompt
@if [ -n "$(REGISTRYUSER)" ] && [ -n "$(REGISTRYPASSWORD)" ] ; then \
docker logout ; \
fi
cleanimage:
@echo "cleaning image for photon..."
- $(DOCKERRMIMAGE) -f $(DOCKERIMAGENAME_PORTAL):$(VERSIONTAG)

View File

@ -1,7 +1,7 @@
FROM photon:5.0
RUN tdnf install -y python3 python3-pip python3-PyYAML python3-jinja2 && tdnf clean all
RUN pip3 install pipenv==2022.1.8
RUN pip3 install pipenv==2025.0.3
#To install only htpasswd binary from photon package httpd
RUN tdnf install -y rpm cpio apr-util

View File

@ -12,4 +12,4 @@ pylint = "*"
pytest = "*"
[requires]
python_version = "3.9.1"
python_version = "3.13"

View File

@ -1,11 +1,11 @@
{
"_meta": {
"hash": {
"sha256": "0c84f574a48755d88f78a64d754b3f834a72f2a86808370dd5f3bf3e650bfa13"
"sha256": "d3a89b8575c29b9f822b892ffd31fd4a997effb1ebf3e3ed061a41e2d04b4490"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.9.1"
"python_version": "3.13"
},
"sources": [
{
@ -18,157 +18,122 @@
"default": {
"click": {
"hashes": [
"sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a",
"sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"
"sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202",
"sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"
],
"index": "pypi",
"version": "==8.0.1"
"markers": "python_version >= '3.10'",
"version": "==8.2.1"
},
"packaging": {
"hashes": [
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
],
"index": "pypi",
"version": "==20.9"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
"markers": "python_version >= '3.8'",
"version": "==25.0"
}
},
"develop": {
"astroid": {
"hashes": [
"sha256:4db03ab5fc3340cf619dbc25e42c2cc3755154ce6009469766d7143d1fc2ee4e",
"sha256:8a398dfce302c13f14bab13e2b14fe385d32b73f4e4853b9bdfb64598baa1975"
"sha256:104fb9cb9b27ea95e847a94c003be03a9e039334a8ebca5ee27dafaf5c5711eb",
"sha256:c332157953060c6deb9caa57303ae0d20b0fbdb2e59b4a4f2a6ba49d0a7961ce"
],
"markers": "python_version ~= '3.6'",
"version": "==2.5.6"
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.10"
},
"attrs": {
"dill": {
"hashes": [
"sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1",
"sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"
"sha256:0633f1d2df477324f53a895b02c901fb961bdbf65a17122586ea7019292cbcf0",
"sha256:44f54bf6412c2c8464c14e8243eb163690a9800dbe2c367330883b19c7561049"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==21.2.0"
"markers": "python_version >= '3.8'",
"version": "==0.4.0"
},
"iniconfig": {
"hashes": [
"sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3",
"sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"
"sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7",
"sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760"
],
"version": "==1.1.1"
"markers": "python_version >= '3.8'",
"version": "==2.1.0"
},
"isort": {
"hashes": [
"sha256:0a943902919f65c5684ac4e0154b1ad4fac6dcaa5d9f3426b732f1c8b5419be6",
"sha256:2bb1680aad211e3c9944dbce1d4ba09a989f04e238296c87fe2139faa26d655d"
"sha256:1cb5df28dfbc742e490c5e41bad6da41b805b0a8be7bc93cd0fb2a8a890ac450",
"sha256:2dc5d7f65c9678d94c88dfc29161a320eec67328bc97aad576874cb4be1e9615"
],
"markers": "python_version >= '3.6' and python_version < '4.0'",
"version": "==5.8.0"
},
"lazy-object-proxy": {
"hashes": [
"sha256:17e0967ba374fc24141738c69736da90e94419338fd4c7c7bef01ee26b339653",
"sha256:1fee665d2638491f4d6e55bd483e15ef21f6c8c2095f235fef72601021e64f61",
"sha256:22ddd618cefe54305df49e4c069fa65715be4ad0e78e8d252a33debf00f6ede2",
"sha256:24a5045889cc2729033b3e604d496c2b6f588c754f7a62027ad4437a7ecc4837",
"sha256:410283732af311b51b837894fa2f24f2c0039aa7f220135192b38fcc42bd43d3",
"sha256:4732c765372bd78a2d6b2150a6e99d00a78ec963375f236979c0626b97ed8e43",
"sha256:489000d368377571c6f982fba6497f2aa13c6d1facc40660963da62f5c379726",
"sha256:4f60460e9f1eb632584c9685bccea152f4ac2130e299784dbaf9fae9f49891b3",
"sha256:5743a5ab42ae40caa8421b320ebf3a998f89c85cdc8376d6b2e00bd12bd1b587",
"sha256:85fb7608121fd5621cc4377a8961d0b32ccf84a7285b4f1d21988b2eae2868e8",
"sha256:9698110e36e2df951c7c36b6729e96429c9c32b3331989ef19976592c5f3c77a",
"sha256:9d397bf41caad3f489e10774667310d73cb9c4258e9aed94b9ec734b34b495fd",
"sha256:b579f8acbf2bdd9ea200b1d5dea36abd93cabf56cf626ab9c744a432e15c815f",
"sha256:b865b01a2e7f96db0c5d12cfea590f98d8c5ba64ad222300d93ce6ff9138bcad",
"sha256:bf34e368e8dd976423396555078def5cfc3039ebc6fc06d1ae2c5a65eebbcde4",
"sha256:c6938967f8528b3668622a9ed3b31d145fab161a32f5891ea7b84f6b790be05b",
"sha256:d1c2676e3d840852a2de7c7d5d76407c772927addff8d742b9808fe0afccebdf",
"sha256:d7124f52f3bd259f510651450e18e0fd081ed82f3c08541dffc7b94b883aa981",
"sha256:d900d949b707778696fdf01036f58c9876a0d8bfe116e8d220cfd4b15f14e741",
"sha256:ebfd274dcd5133e0afae738e6d9da4323c3eb021b3e13052d8cbd0e457b1256e",
"sha256:ed361bb83436f117f9917d282a456f9e5009ea12fd6de8742d1a4752c3017e93",
"sha256:f5144c75445ae3ca2057faac03fda5a902eff196702b0a24daf1d6ce0650514b"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'",
"version": "==1.6.0"
"markers": "python_full_version >= '3.9.0'",
"version": "==6.0.1"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
"sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325",
"sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"
],
"version": "==0.6.1"
"markers": "python_version >= '3.6'",
"version": "==0.7.0"
},
"packaging": {
"hashes": [
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
"sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484",
"sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"
],
"index": "pypi",
"version": "==20.9"
"markers": "python_version >= '3.8'",
"version": "==25.0"
},
"platformdirs": {
"hashes": [
"sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc",
"sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4"
],
"markers": "python_version >= '3.9'",
"version": "==4.3.8"
},
"pluggy": {
"hashes": [
"sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0",
"sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"
"sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3",
"sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.13.1"
"markers": "python_version >= '3.9'",
"version": "==1.6.0"
},
"py": {
"pygments": {
"hashes": [
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3",
"sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"
"sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887",
"sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.10.0"
"markers": "python_version >= '3.8'",
"version": "==2.19.2"
},
"pylint": {
"hashes": [
"sha256:586d8fa9b1891f4b725f587ef267abe2a1bad89d6b184520c7f07a253dd6e217",
"sha256:f7e2072654a6b6afdf5e2fb38147d3e2d2d43c89f648637baab63e026481279b"
"sha256:2b11de8bde49f9c5059452e0c310c079c746a0a8eeaa789e5aa966ecc23e4559",
"sha256:43860aafefce92fca4cf6b61fe199cdc5ae54ea28f9bf4cd49de267b5195803d"
],
"index": "pypi",
"version": "==2.8.2"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
"markers": "python_full_version >= '3.9.0'",
"version": "==3.3.7"
},
"pytest": {
"hashes": [
"sha256:50bcad0a0b9c5a72c8e4e7c9855a3ad496ca6a881a3641b4260605450772c54b",
"sha256:91ef2131a9bd6be8f76f1f08eac5c5317221d6ad1e143ae03894b862e8976890"
"sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7",
"sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c"
],
"index": "pypi",
"version": "==6.2.4"
"markers": "python_version >= '3.9'",
"version": "==8.4.1"
},
"toml": {
"tomlkit": {
"hashes": [
"sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b",
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"
"sha256:430cf247ee57df2b94ee3fbe588e71d362a941ebb545dec29b53961d61add2a1",
"sha256:c89c649d79ee40629a9fda55f8ace8c6a1b42deb912b2a8fd8d942ddadb606b0"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"wrapt": {
"hashes": [
"sha256:b62ffa81fb85f4332a4f609cab4ac40709470da05643a082ec1eb88e6d9b97d7"
],
"version": "==1.12.1"
"markers": "python_version >= '3.8'",
"version": "==0.13.3"
}
}
}

View File

@ -41,6 +41,7 @@ REGISTRY_CREDENTIAL_PASSWORD={{registry_password}}
CSRF_KEY={{csrf_key}}
ROBOT_SCANNER_NAME_PREFIX={{scan_robot_prefix}}
PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE=docker-hub,harbor,azure-acr,ali-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory
REPLICATION_ADAPTER_WHITELIST=ali-acr,aws-ecr,azure-acr,docker-hub,docker-registry,github-ghcr,google-gcr,harbor,huawei-SWR,jfrog-artifactory,tencent-tcr,volcengine-cr
HTTP_PROXY={{core_http_proxy}}
HTTPS_PROXY={{core_https_proxy}}

View File

@ -6,6 +6,8 @@ REGISTRY_CONTROLLER_URL={{registry_controller_url}}
JOBSERVICE_WEBHOOK_JOB_MAX_RETRY={{notification_webhook_job_max_retry}}
JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT={{notification_webhook_job_http_client_timeout}}
LOG_LEVEL={{log_level}}
{%if internal_tls.enabled %}
INTERNAL_TLS_ENABLED=true
INTERNAL_TLS_TRUST_CA_PATH=/harbor_cust_cert/harbor_internal_ca.crt
@ -21,7 +23,6 @@ HTTPS_PROXY={{jobservice_https_proxy}}
NO_PROXY={{jobservice_no_proxy}}
REGISTRY_CREDENTIAL_USERNAME={{registry_username}}
REGISTRY_CREDENTIAL_PASSWORD={{registry_password}}
MAX_JOB_DURATION_SECONDS={{max_job_duration_seconds}}
{% if metric.enabled %}
METRIC_NAMESPACE=harbor

View File

@ -227,7 +227,6 @@ def parse_yaml_config(config_file_path, with_trivy):
value = config_dict["max_job_duration_hours"]
if not isinstance(value, int) or value < 24:
config_dict["max_job_duration_hours"] = 24
config_dict['max_job_duration_seconds'] = config_dict['max_job_duration_hours'] * 3600
config_dict['job_loggers'] = js_config["job_loggers"]
config_dict['logger_sweeper_duration'] = js_config["logger_sweeper_duration"]
config_dict['jobservice_secret'] = generate_random_string(16)

View File

@ -34,7 +34,6 @@ def prepare_job_service(config_dict):
internal_tls=config_dict['internal_tls'],
max_job_workers=config_dict['max_job_workers'],
max_job_duration_hours=config_dict['max_job_duration_hours'],
max_job_duration_seconds=config_dict['max_job_duration_seconds'],
job_loggers=config_dict['job_loggers'],
logger_sweeper_duration=config_dict['logger_sweeper_duration'],
redis_url=config_dict['redis_url_js'],

View File

@ -1,4 +1,5 @@
FROM golang:1.23.8
ARG golang_image
FROM ${golang_image}
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV BUILDTAGS include_oss include_gcs

View File

@ -14,6 +14,8 @@ fi
VERSION="$1"
DISTRIBUTION_SRC="$2"
GOBUILDIMAGE="$3"
DOCKERNETWORK="$4"
set -e
@ -32,7 +34,7 @@ cd $cur
echo 'build the registry binary ...'
cp Dockerfile.binary $TEMP
docker build -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t registry-golang $TEMP
echo 'copy the registry binary to local...'
ID=$(docker create registry-golang)

View File

@ -1,4 +1,5 @@
FROM golang:1.23.8
ARG golang_image
FROM ${golang_image}
ADD . /go/src/github.com/goharbor/harbor-scanner-trivy/
WORKDIR /go/src/github.com/goharbor/harbor-scanner-trivy/

View File

@ -8,6 +8,8 @@ if [ -z $1 ]; then
fi
VERSION="$1"
GOBUILDIMAGE="$2"
DOCKERNETWORK="$3"
set -e
@ -19,9 +21,9 @@ TEMP=$(mktemp -d ${TMPDIR-/tmp}/trivy-adapter.XXXXXX)
git clone https://github.com/goharbor/harbor-scanner-trivy.git $TEMP
cd $TEMP; git checkout $VERSION; cd -
echo "Building Trivy adapter binary based on golang:1.23.8..."
echo "Building Trivy adapter binary ..."
cp Dockerfile.binary $TEMP
docker build -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
docker build --network=$DOCKERNETWORK --build-arg golang_image=$GOBUILDIMAGE -f $TEMP/Dockerfile.binary -t trivy-adapter-golang $TEMP
echo "Copying Trivy adapter binary from the container to the local directory..."
ID=$(docker create trivy-adapter-golang)

View File

@ -1,76 +1,56 @@
linters-settings:
gofmt:
# Simplify code: gofmt with `-s` option.
# Default: true
simplify: false
misspell:
locale: US,UK
goimports:
local-prefixes: github.com/goharbor/harbor
stylecheck:
checks: [
"ST1019", # Importing the same package multiple times.
]
goheader:
template-path: copyright.tmpl
version: "2"
linters:
disable-all: true
default: none
enable:
- gofmt
- goheader
- misspell
- typecheck
# - dogsled
# - dupl
# - depguard
# - funlen
# - goconst
# - gocritic
# - gocyclo
# - goimports
# - goprintffuncname
- ineffassign
# - nakedret
# - nolintlint
- revive
- whitespace
- bodyclose
- errcheck
# - gosec
- gosimple
- goimports
- goheader
- govet
# - noctx
# - rowserrcheck
- ineffassign
- misspell
- revive
- staticcheck
- stylecheck
# - unconvert
# - unparam
# - unused // disabled due to too many false positive check and limited support golang 1.19 https://github.com/dominikh/go-tools/issues/1282
run:
skip-files:
- ".*_test.go"
- ".*test.go"
skip-dirs:
- "testing"
timeout: 20m
issue:
max-same-issues: 0
max-per-linter: 0
issues:
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- goimports
- path: src/testing/*.go
linters:
- goimports
- path: src/jobservice/mgt/mock_manager.go
linters:
- goimports
- whitespace
settings:
goheader:
template-path: copyright.tmpl
misspell:
locale: US,UK
staticcheck:
checks:
- ST1019
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go
formatters:
enable:
- gofmt
- goimports
settings:
gofmt:
simplify: false
goimports:
local-prefixes:
- github.com/goharbor/harbor
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- .*_test\.go
- .*test\.go
- testing
- src/jobservice/mgt/mock_manager.go

View File

@ -78,7 +78,7 @@ func (b *BaseAPI) RenderError(code int, text string) {
}
// DecodeJSONReq decodes a json request
func (b *BaseAPI) DecodeJSONReq(v interface{}) error {
func (b *BaseAPI) DecodeJSONReq(v any) error {
err := json.Unmarshal(b.Ctx.Input.CopyBody(1<<35), v)
if err != nil {
log.Errorf("Error while decoding the json request, error: %v, %v",
@ -89,7 +89,7 @@ func (b *BaseAPI) DecodeJSONReq(v interface{}) error {
}
// Validate validates v if it implements interface validation.ValidFormer
func (b *BaseAPI) Validate(v interface{}) (bool, error) {
func (b *BaseAPI) Validate(v any) (bool, error) {
validator := validation.Validation{}
isValid, err := validator.Valid(v)
if err != nil {
@ -108,7 +108,7 @@ func (b *BaseAPI) Validate(v interface{}) (bool, error) {
}
// DecodeJSONReqAndValidate does both decoding and validation
func (b *BaseAPI) DecodeJSONReqAndValidate(v interface{}) (bool, error) {
func (b *BaseAPI) DecodeJSONReqAndValidate(v any) (bool, error) {
if err := b.DecodeJSONReq(v); err != nil {
return false, err
}
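
The `interface{}` to `any` rewrites in this and the following files are purely cosmetic: since Go 1.18, `any` is a built-in alias for `interface{}`, so the two spellings are interchangeable and no behavior changes. A minimal illustration:

```go
package main

import "fmt"

func main() {
	// any and interface{} are the same type; either can hold a value of any type.
	var a any = 42
	var b interface{} = 42
	fmt.Println(a == b) // true: same dynamic type and value
}
```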

View File

@ -252,4 +252,7 @@ const (
// Global Leeway used for token validation
JwtLeeway = 60 * time.Second
// The replication adapter whitelist
ReplicationAdapterWhiteList = "REPLICATION_ADAPTER_WHITELIST"
)
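
The new `ReplicationAdapterWhiteList` constant names the `REPLICATION_ADAPTER_WHITELIST` environment variable added to the core env template earlier in this change set. A minimal sketch of how such a comma-separated whitelist might be read and consulted; the parsing and lookup below are illustrative assumptions, not the code from this diff:

```go
// Hedged sketch: reading a comma-separated adapter whitelist and checking
// whether a given adapter type is allowed.
package main

import (
	"fmt"
	"os"
	"slices"
	"strings"
)

func main() {
	os.Setenv("REPLICATION_ADAPTER_WHITELIST", "docker-hub,harbor,aws-ecr")
	allowed := strings.Split(os.Getenv("REPLICATION_ADAPTER_WHITELIST"), ",")
	fmt.Println(slices.Contains(allowed, "harbor")) // true
	fmt.Println(slices.Contains(allowed, "gitlab")) // false: not whitelisted
}
```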

View File

@ -144,6 +144,6 @@ func (l *mLogger) Verbose() bool {
}
// Printf ...
func (l *mLogger) Printf(format string, v ...interface{}) {
func (l *mLogger) Printf(format string, v ...any) {
l.logger.Infof(format, v...)
}

View File

@ -29,7 +29,7 @@ import (
var testCtx context.Context
func execUpdate(o orm.TxOrmer, sql string, params ...interface{}) error {
func execUpdate(o orm.TxOrmer, sql string, params ...any) error {
p, err := o.Raw(sql).Prepare()
if err != nil {
return err

View File

@ -27,7 +27,7 @@ func TestMaxOpenConns(t *testing.T) {
queryNum := 200
results := make([]bool, queryNum)
for i := 0; i < queryNum; i++ {
for i := range queryNum {
wg.Add(1)
go func(i int) {
defer wg.Done()
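
Loops of the form `for i := 0; i < n; i++` are rewritten here (and in several files below) as `for i := range n`, the range-over-integer form added in Go 1.22; both iterate i from 0 to n-1. A minimal illustration:

```go
package main

import "fmt"

func main() {
	// Equivalent to: for i := 0; i < 3; i++
	for i := range 3 {
		fmt.Println(i) // prints 0, 1, 2
	}
}
```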

View File

@ -142,7 +142,7 @@ func ArrayEqual(arrayA, arrayB []int) bool {
return false
}
size := len(arrayA)
for i := 0; i < size; i++ {
for i := range size {
if arrayA[i] != arrayB[i] {
return false
}

View File

@ -69,7 +69,7 @@ func (c *Client) Do(req *http.Request) (*http.Response, error) {
}
// Get ...
func (c *Client) Get(url string, v ...interface{}) error {
func (c *Client) Get(url string, v ...any) error {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return err
@ -98,7 +98,7 @@ func (c *Client) Head(url string) error {
}
// Post ...
func (c *Client) Post(url string, v ...interface{}) error {
func (c *Client) Post(url string, v ...any) error {
var reader io.Reader
if len(v) > 0 {
if r, ok := v[0].(io.Reader); ok {
@ -123,7 +123,7 @@ func (c *Client) Post(url string, v ...interface{}) error {
}
// Put ...
func (c *Client) Put(url string, v ...interface{}) error {
func (c *Client) Put(url string, v ...any) error {
var reader io.Reader
if len(v) > 0 {
data, err := json.Marshal(v[0])
@ -176,7 +176,7 @@ func (c *Client) do(req *http.Request) ([]byte, error) {
// GetAndIteratePagination iterates the pagination header and returns all resources
// The parameter "v" must be a pointer to a slice
func (c *Client) GetAndIteratePagination(endpoint string, v interface{}) error {
func (c *Client) GetAndIteratePagination(endpoint string, v any) error {
url, err := url.Parse(endpoint)
if err != nil {
return err

View File

@ -15,7 +15,7 @@
package models
// Parameters for job execution.
type Parameters map[string]interface{}
type Parameters map[string]any
// JobRequest is the request of launching a job.
type JobRequest struct {
@ -96,5 +96,5 @@ type JobStatusChange struct {
// Message is designed for sub/pub messages
type Message struct {
Event string
Data interface{} // generic format
Data any // generic format
}

View File

@ -119,7 +119,7 @@ func BenchmarkProjectEvaluator(b *testing.B) {
resource := NewNamespace(public.ProjectID).Resource(rbac.ResourceRepository)
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
evaluator.HasPermission(context.TODO(), resource, rbac.ActionPull)
}
}
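
This benchmark drops the classic `for i := 0; i < b.N; i++` loop in favor of `for b.Loop()`, added in Go 1.24; `b.Loop` manages the iteration count itself and keeps the loop body from being optimized away. A minimal self-contained example:

```go
package bench_test

import (
	"strings"
	"testing"
)

func BenchmarkJoin(b *testing.B) {
	parts := []string{"a", "b", "c"}
	// b.Loop() reports true until the benchmark has run long enough,
	// replacing the explicit b.N counter.
	for b.Loop() {
		_ = strings.Join(parts, ",")
	}
}
```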

View File

@ -43,7 +43,7 @@ func (ns *projectNamespace) Resource(subresources ...types.Resource) types.Resou
return types.Resource(fmt.Sprintf("/project/%d", ns.projectID)).Subresource(subresources...)
}
func (ns *projectNamespace) Identity() interface{} {
func (ns *projectNamespace) Identity() any {
return ns.projectID
}

View File

@ -162,6 +162,7 @@ var (
{Resource: rbac.ResourceRobot, Action: rbac.ActionRead},
{Resource: rbac.ResourceRobot, Action: rbac.ActionList},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionRead},
{Resource: rbac.ResourceNotificationPolicy, Action: rbac.ActionList},
{Resource: rbac.ResourceScan, Action: rbac.ActionCreate},

View File

@ -38,7 +38,7 @@ func (ns *systemNamespace) Resource(subresources ...types.Resource) types.Resour
return types.Resource("/system/").Subresource(subresources...)
}
func (ns *systemNamespace) Identity() interface{} {
func (ns *systemNamespace) Identity() any {
return nil
}

View File

@ -63,7 +63,7 @@ func (t *tokenSecurityCtx) GetMyProjects() ([]*models.Project, error) {
return []*models.Project{}, nil
}
func (t *tokenSecurityCtx) GetProjectRoles(_ interface{}) []int {
func (t *tokenSecurityCtx) GetProjectRoles(_ any) []int {
return []int{}
}

View File

@ -18,7 +18,7 @@ import (
"github.com/goharbor/harbor/src/common"
)
var defaultConfig = map[string]interface{}{
var defaultConfig = map[string]any{
common.ExtEndpoint: "https://host01.com",
common.AUTHMode: common.DBAuth,
common.DatabaseType: "postgresql",
@ -66,6 +66,6 @@ var defaultConfig = map[string]interface{}{
}
// GetDefaultConfigMap returns the default config map for easier modification.
func GetDefaultConfigMap() map[string]interface{} {
func GetDefaultConfigMap() map[string]any {
return defaultConfig
}

View File

@ -30,7 +30,7 @@ type GCResult struct {
}
// NewRegistryCtl returns a mock registry server
func NewRegistryCtl(_ map[string]interface{}) (*httptest.Server, error) {
func NewRegistryCtl(_ map[string]any) (*httptest.Server, error) {
m := []*RequestHandlerMapping{}
gcr := GCResult{true, "hello-world", time.Now(), time.Now()}

View File

@ -94,9 +94,9 @@ func NewServer(mappings ...*RequestHandlerMapping) *httptest.Server {
}
// GetUnitTestConfig ...
func GetUnitTestConfig() map[string]interface{} {
func GetUnitTestConfig() map[string]any {
ipAddress := os.Getenv("IP")
return map[string]interface{}{
return map[string]any{
common.ExtEndpoint: fmt.Sprintf("https://%s", ipAddress),
common.AUTHMode: "db_auth",
common.DatabaseType: "postgresql",
@ -130,7 +130,7 @@ func GetUnitTestConfig() map[string]interface{} {
}
// TraceCfgMap ...
func TraceCfgMap(cfgs map[string]interface{}) {
func TraceCfgMap(cfgs map[string]any) {
var keys []string
for k := range cfgs {
keys = append(keys, k)

View File

@ -89,7 +89,7 @@ type SearchUserEntry struct {
ExtID string `json:"externalId"`
UserName string `json:"userName"`
Emails []SearchUserEmailEntry `json:"emails"`
Groups []interface{}
Groups []any
}
// SearchUserRes is the struct to parse the result of search user API of UAA

View File

@ -75,7 +75,7 @@ func GenerateRandomStringWithLen(length int) string {
if err != nil {
log.Warningf("Error reading random bytes: %v", err)
}
for i := 0; i < length; i++ {
for i := range length {
result[i] = chars[int(result[i])%l]
}
return string(result)
@ -140,7 +140,7 @@ func ParseTimeStamp(timestamp string) (*time.Time, error) {
}
// ConvertMapToStruct is used to fill the specified struct with map.
func ConvertMapToStruct(object interface{}, values interface{}) error {
func ConvertMapToStruct(object any, values any) error {
if object == nil {
return errors.New("nil struct is not supported")
}
@ -158,7 +158,7 @@ func ConvertMapToStruct(object interface{}, values interface{}) error {
}
// ParseProjectIDOrName parses value to ID(int64) or name(string)
func ParseProjectIDOrName(value interface{}) (int64, string, error) {
func ParseProjectIDOrName(value any) (int64, string, error) {
if value == nil {
return 0, "", errors.New("harborIDOrName is nil")
}
@ -177,7 +177,7 @@ func ParseProjectIDOrName(value interface{}) (int64, string, error) {
}
// SafeCastString -- cast an object to string safely
func SafeCastString(value interface{}) string {
func SafeCastString(value any) string {
if result, ok := value.(string); ok {
return result
}
@ -185,7 +185,7 @@ func SafeCastString(value interface{}) string {
}
// SafeCastInt --
func SafeCastInt(value interface{}) int {
func SafeCastInt(value any) int {
if result, ok := value.(int); ok {
return result
}
@ -193,7 +193,7 @@ func SafeCastInt(value interface{}) int {
}
// SafeCastBool --
func SafeCastBool(value interface{}) bool {
func SafeCastBool(value any) bool {
if result, ok := value.(bool); ok {
return result
}
@ -201,7 +201,7 @@ func SafeCastBool(value interface{}) bool {
}
// SafeCastFloat64 --
func SafeCastFloat64(value interface{}) float64 {
func SafeCastFloat64(value any) float64 {
if result, ok := value.(float64); ok {
return result
}
@ -214,9 +214,9 @@ func TrimLower(str string) string {
}
// GetStrValueOfAnyType return string format of any value, for map, need to convert to json
func GetStrValueOfAnyType(value interface{}) string {
func GetStrValueOfAnyType(value any) string {
var strVal string
if _, ok := value.(map[string]interface{}); ok {
if _, ok := value.(map[string]any); ok {
b, err := json.Marshal(value)
if err != nil {
log.Errorf("can not marshal json object, error %v", err)
@ -237,18 +237,18 @@ func GetStrValueOfAnyType(value interface{}) string {
}
// IsIllegalLength ...
func IsIllegalLength(s string, min int, max int) bool {
if min == -1 {
return (len(s) > max)
func IsIllegalLength(s string, minVal int, maxVal int) bool {
if minVal == -1 {
return (len(s) > maxVal)
}
if max == -1 {
return (len(s) <= min)
if maxVal == -1 {
return (len(s) <= minVal)
}
return (len(s) < min || len(s) > max)
return (len(s) < minVal || len(s) > maxVal)
}
// ParseJSONInt ...
func ParseJSONInt(value interface{}) (int, bool) {
func ParseJSONInt(value any) (int, bool) {
switch v := value.(type) {
case float64:
return int(v), true
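
The `IsIllegalLength` parameters a few lines up are renamed from `min`/`max` to `minVal`/`maxVal`; behavior is unchanged, and the rename avoids shadowing the `min` and `max` built-in functions introduced in Go 1.21, as in this small example:

```go
package main

import "fmt"

func main() {
	// Built-in generic min/max since Go 1.21; parameters or locals named
	// min or max would shadow them inside the function body.
	fmt.Println(min(3, 7), max(3, 7)) // 3 7
}
```
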
@ -337,13 +337,3 @@ func MostMatchSorter(a, b string, matchWord string) bool {
func IsLocalPath(path string) bool {
return len(path) == 0 || (strings.HasPrefix(path, "/") && !strings.HasPrefix(path, "//"))
}
// StringInSlice check if the string is in the slice
func StringInSlice(str string, slice []string) bool {
for _, s := range slice {
if s == str {
return true
}
}
return false
}
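
The hand-rolled `StringInSlice` helper is deleted; since Go 1.21 the standard library's `slices.Contains` covers the same need. Callers can be updated along these lines (a sketch, not the exact call sites from this change; note the swapped argument order):

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	adapters := []string{"docker-hub", "harbor", "aws-ecr"}
	// slices.Contains(slice, str) replaces utils.StringInSlice(str, slice).
	fmt.Println(slices.Contains(adapters, "harbor")) // true
}
```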

View File

@ -216,7 +216,7 @@ type testingStruct struct {
}
func TestConvertMapToStruct(t *testing.T) {
dataMap := make(map[string]interface{})
dataMap := make(map[string]any)
dataMap["Name"] = "testing"
dataMap["Count"] = 100
@ -232,7 +232,7 @@ func TestConvertMapToStruct(t *testing.T) {
func TestSafeCastString(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@ -254,7 +254,7 @@ func TestSafeCastString(t *testing.T) {
func TestSafeCastBool(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@ -276,7 +276,7 @@ func TestSafeCastBool(t *testing.T) {
func TestSafeCastInt(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@ -298,7 +298,7 @@ func TestSafeCastInt(t *testing.T) {
func TestSafeCastFloat64(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@ -342,7 +342,7 @@ func TestTrimLower(t *testing.T) {
func TestGetStrValueOfAnyType(t *testing.T) {
type args struct {
value interface{}
value any
}
tests := []struct {
name string
@ -357,7 +357,7 @@ func TestGetStrValueOfAnyType(t *testing.T) {
{"string", args{"hello world"}, "hello world"},
{"bool", args{true}, "true"},
{"bool", args{false}, "false"},
{"map", args{map[string]interface{}{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
{"map", args{map[string]any{"key1": "value1"}}, "{\"key1\":\"value1\"}"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@ -66,8 +66,7 @@ func parseV1alpha1SkipList(artifact *artifact.Artifact, manifest *v1.Manifest) {
skipListAnnotationKey := fmt.Sprintf("%s.%s.%s", AnnotationPrefix, V1alpha1, SkipList)
skipList, ok := manifest.Config.Annotations[skipListAnnotationKey]
if ok {
skipKeyList := strings.Split(skipList, ",")
for _, skipKey := range skipKeyList {
for skipKey := range strings.SplitSeq(skipList, ",") {
delete(metadata, skipKey)
}
artifact.ExtraAttrs = metadata
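
Here `strings.Split` followed by a slice loop becomes `strings.SplitSeq`, the iterator variant added in Go 1.24, which yields substrings one at a time without allocating the intermediate slice. A minimal illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// SplitSeq returns an iterator (iter.Seq[string]) rather than a []string.
	for part := range strings.SplitSeq("a,b,c", ",") {
		fmt.Println(part)
	}
}
```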

View File

@ -231,7 +231,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err := manifest.Payload()
p.Require().Nil(err)
metadata := map[string]interface{}{}
metadata := map[string]any{}
configBlob := io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -244,7 +244,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutSkipList
@ -255,7 +255,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]interface{}{}
metadata = map[string]any{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -268,7 +268,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 13)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("sha256:d923b93eadde0af5c639a972710a4d919066aba5d0dfbf4b9385099f70272da0", art.Icon)
// ormbManifestWithoutIcon
@ -279,7 +279,7 @@ func (p *v1alpha1TestSuite) TestParse() {
manifestMediaType, content, err = manifest.Payload()
p.Require().Nil(err)
metadata = map[string]interface{}{}
metadata = map[string]any{}
configBlob = io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
p.Require().Nil(err)
@ -290,7 +290,7 @@ func (p *v1alpha1TestSuite) TestParse() {
p.Len(art.ExtraAttrs, 12)
p.Equal("CNN Model", art.ExtraAttrs["description"])
p.Equal("TensorFlow", art.ExtraAttrs["framework"])
p.Equal([]interface{}{map[string]interface{}{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal([]any{map[string]any{"name": "batch_size", "value": "32"}}, art.ExtraAttrs["hyperparameters"])
p.Equal("", art.Icon)
}

View File

@ -313,7 +313,7 @@ func (c *controller) getByTag(ctx context.Context, repository, tag string, optio
return nil, err
}
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"RepositoryID": repo.RepositoryID,
"Name": tag,
},
@ -356,7 +356,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
return nil
}
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ChildID": id,
},
})
@ -385,7 +385,7 @@ func (c *controller) deleteDeeply(ctx context.Context, id int64, isRoot, isAcces
if acc.IsHard() {
// if this acc artifact has parent(is child), set isRoot to false
parents, err := c.artMgr.ListReferences(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ChildID": acc.GetData().ArtifactID,
},
})
@ -752,7 +752,7 @@ func (c *controller) populateIcon(art *Artifact) {
func (c *controller) populateTags(ctx context.Context, art *Artifact, option *tag.Option) {
tags, err := c.tagCtl.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"artifact_id": art.ID,
},
}, option)

View File

@ -56,7 +56,7 @@ func (suite *IteratorTestSuite) TeardownSuite() {
func (suite *IteratorTestSuite) TestIterator() {
suite.accMgr.On("List", mock.Anything, mock.Anything).Return([]accessorymodel.Accessory{}, nil)
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]interface{}{}}
q1 := &q.Query{PageNumber: 1, PageSize: 5, Keywords: map[string]any{}}
suite.artMgr.On("List", mock.Anything, q1).Return([]*artifact.Artifact{
{ID: 1},
{ID: 2},
@ -65,7 +65,7 @@ func (suite *IteratorTestSuite) TestIterator() {
{ID: 5},
}, nil)
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]interface{}{}}
q2 := &q.Query{PageNumber: 2, PageSize: 5, Keywords: map[string]any{}}
suite.artMgr.On("List", mock.Anything, q2).Return([]*artifact.Artifact{
{ID: 6},
{ID: 7},

View File

@ -40,7 +40,7 @@ func (artifact *Artifact) UnmarshalJSON(data []byte) error {
type Alias Artifact
ali := &struct {
*Alias
AccessoryItems []interface{} `json:"accessories,omitempty"`
AccessoryItems []any `json:"accessories,omitempty"`
}{
Alias: (*Alias)(artifact),
}

View File

@ -44,7 +44,7 @@ type ManifestProcessor struct {
// AbstractMetadata abstracts metadata of artifact
func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *artifact.Artifact, content []byte) error {
// parse metadata from config layer
metadata := map[string]interface{}{}
metadata := map[string]any{}
if err := m.UnmarshalConfig(ctx, artifact.RepositoryName, content, &metadata); err != nil {
return err
}
@ -55,7 +55,7 @@ func (m *ManifestProcessor) AbstractMetadata(ctx context.Context, artifact *arti
}
if artifact.ExtraAttrs == nil {
artifact.ExtraAttrs = map[string]interface{}{}
artifact.ExtraAttrs = map[string]any{}
}
for _, property := range m.properties {
artifact.ExtraAttrs[property] = metadata[property]
@ -80,7 +80,7 @@ func (m *ManifestProcessor) ListAdditionTypes(_ context.Context, _ *artifact.Art
}
// UnmarshalConfig unmarshal the config blob of the artifact into the specified object "v"
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v interface{}) error {
func (m *ManifestProcessor) UnmarshalConfig(_ context.Context, repository string, manifest []byte, v any) error {
// unmarshal manifest
mani := &v1.Manifest{}
if err := json.Unmarshal(manifest, mani); err != nil {

View File

@ -89,7 +89,7 @@ func (p *processorTestSuite) TestAbstractAddition() {
Repository: "github.com/goharbor",
},
},
Values: map[string]interface{}{
Values: map[string]any{
"cluster.enable": true,
"cluster.slaveCount": 1,
"image.pullPolicy": "Always",

View File

@ -17,6 +17,8 @@ package parser
import (
"context"
"fmt"
"io"
"path/filepath"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@ -40,6 +42,11 @@ const (
// defaultFileSizeLimit is the default file size limit.
defaultFileSizeLimit = 1024 * 1024 * 4 // 4MB
// formatTar is the format of tar file.
formatTar = ".tar"
// formatRaw is the format of raw file.
formatRaw = ".raw"
)
// newBase creates a new base parser.
@ -70,10 +77,23 @@ func (b *base) Parse(_ context.Context, artifact *artifact.Artifact, layer *ocis
}
defer stream.Close()
content, err := untar(stream)
content, err := decodeContent(layer.MediaType, stream)
if err != nil {
return "", nil, fmt.Errorf("failed to untar the content: %w", err)
return "", nil, fmt.Errorf("failed to decode content: %w", err)
}
return contentTypeTextPlain, content, nil
}
func decodeContent(mediaType string, reader io.Reader) ([]byte, error) {
format := filepath.Ext(mediaType)
switch format {
case formatTar:
return untar(reader)
case formatRaw:
return io.ReadAll(reader)
default:
return nil, fmt.Errorf("unsupported format: %s", format)
}
}
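
The new `decodeContent` helper picks a decoder based on the extension of the layer media type: `.tar` layers are unpacked via `untar`, `.raw` layers are read as-is, and anything else is rejected. The dispatch key is simply `filepath.Ext` applied to the media type string, for example (the media types below are illustrative, not constants from this diff):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// filepath.Ext returns the suffix starting at the final dot, so a media
	// type ending in ".raw" or ".tar" selects the matching branch.
	fmt.Println(filepath.Ext("application/vnd.example.model.doc.v1.raw")) // .raw
	fmt.Println(filepath.Ext("application/vnd.example.model.doc.v1.tar")) // .tar
}
```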

View File

@ -63,10 +63,11 @@ func TestBaseParse(t *testing.T) {
expectedError: "failed to pull blob from registry: registry error",
},
{
name: "successful parse",
name: "successful parse (tar format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
Digest: "sha256:1234",
MediaType: "vnd.foo.bar.tar",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
@ -82,6 +83,34 @@ func TestBaseParse(t *testing.T) {
},
expectedType: contentTypeTextPlain,
},
{
name: "successful parse (raw format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.raw",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
},
{
name: "error parse (unsupported format)",
artifact: &artifact.Artifact{RepositoryName: "test/repo"},
layer: &v1.Descriptor{
MediaType: "vnd.foo.bar.unknown",
Digest: "sha256:1234",
},
mockSetup: func(m *mock.Client) {
var buf bytes.Buffer
buf.Write([]byte("test content"))
m.On("PullBlob", "test/repo", "sha256:1234").Return(int64(0), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedError: "failed to decode content: unsupported format: .unknown",
},
}
for _, tt := range tests {

View File

@ -17,6 +17,7 @@ package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@ -47,7 +48,10 @@ func (l *license) Parse(ctx context.Context, artifact *artifact.Artifact, manife
// lookup the license file layer
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if desc.MediaType == modelspec.MediaTypeModelDoc {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "LICENSE" || filepath == "LICENSE.txt" {

View File

@ -83,6 +83,29 @@ func TestLicenseParser(t *testing.T) {
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "LICENSE",
},
Digest: "sha256:abc123",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("MIT License"))
mc.On("PullBlob", mock.Anything, "sha256:abc123").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeTextPlain,
expectedOutput: []byte("MIT License"),
},
{
name: "LICENSE.txt parse success",
manifest: &ocispec.Manifest{

View File

@ -17,6 +17,7 @@ package parser
import (
"context"
"fmt"
"slices"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@ -47,7 +48,10 @@ func (r *readme) Parse(ctx context.Context, artifact *artifact.Artifact, manifes
// lookup the readme file layer.
var layer *ocispec.Descriptor
for _, desc := range manifest.Layers {
if desc.MediaType == modelspec.MediaTypeModelDoc {
if slices.Contains([]string{
modelspec.MediaTypeModelDoc,
modelspec.MediaTypeModelDocRaw,
}, desc.MediaType) {
if desc.Annotations != nil {
filepath := desc.Annotations[modelspec.AnnotationFilepath]
if filepath == "README" || filepath == "README.md" {

View File

@ -113,6 +113,29 @@ func TestReadmeParser(t *testing.T) {
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "README parse success (raw)",
manifest: &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: modelspec.MediaTypeModelDocRaw,
Annotations: map[string]string{
modelspec.AnnotationFilepath: "README",
},
Digest: "sha256:def456",
},
},
},
setupMockReg: func(mc *mockregistry.Client) {
var buf bytes.Buffer
buf.Write([]byte("# Test README"))
mc.On("PullBlob", mock.Anything, "sha256:def456").
Return(int64(buf.Len()), io.NopCloser(bytes.NewReader(buf.Bytes())), nil)
},
expectedType: contentTypeMarkdown,
expectedOutput: []byte("# Test README"),
},
{
name: "registry error",
manifest: &ocispec.Manifest{

View File

@ -156,7 +156,7 @@ func TestAddNode(t *testing.T) {
// Verify the path exists.
current := root
parts := filepath.Clean(tt.path)
for _, part := range strings.Split(parts, string(filepath.Separator)) {
for part := range strings.SplitSeq(parts, string(filepath.Separator)) {
if part == "" {
continue
}

View File

@ -110,7 +110,7 @@ func (d *defaultProcessor) AbstractMetadata(ctx context.Context, artifact *artif
}
defer blob.Close()
// parse metadata from config layer
metadata := map[string]interface{}{}
metadata := map[string]any{}
if err = json.NewDecoder(blob).Decode(&metadata); err != nil {
return err
}

View File

@ -268,7 +268,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadata() {
manifestMediaType, content, err := manifest.Payload()
d.Require().Nil(err)
metadata := map[string]interface{}{}
metadata := map[string]any{}
configBlob := io.NopCloser(strings.NewReader(ormbConfig))
err = json.NewDecoder(configBlob).Decode(&metadata)
d.Require().Nil(err)
@ -289,7 +289,7 @@ func (d *defaultProcessorTestSuite) TestAbstractMetadataOfOCIManifesttWithUnknow
d.Require().Nil(err)
configBlob := io.NopCloser(strings.NewReader(UnknownJsonConfig))
metadata := map[string]interface{}{}
metadata := map[string]any{}
err = json.NewDecoder(configBlob).Decode(&metadata)
d.Require().Nil(err)

View File

@ -44,7 +44,7 @@ func (m *manifestV1Processor) AbstractMetadata(_ context.Context, artifact *arti
return err
}
if artifact.ExtraAttrs == nil {
artifact.ExtraAttrs = map[string]interface{}{}
artifact.ExtraAttrs = map[string]any{}
}
artifact.ExtraAttrs["architecture"] = mani.Architecture
return nil

View File

@ -59,7 +59,7 @@ func (m *manifestV2Processor) AbstractMetadata(ctx context.Context, artifact *ar
return err
}
if artifact.ExtraAttrs == nil {
artifact.ExtraAttrs = map[string]interface{}{}
artifact.ExtraAttrs = map[string]any{}
}
artifact.ExtraAttrs["created"] = config.Created
artifact.ExtraAttrs["architecture"] = config.Architecture

View File

@ -62,14 +62,14 @@ type Processor struct {
}
func (m *Processor) AbstractMetadata(ctx context.Context, art *artifact.Artifact, manifestBody []byte) error {
art.ExtraAttrs = map[string]interface{}{}
art.ExtraAttrs = map[string]any{}
manifest := &v1.Manifest{}
if err := json.Unmarshal(manifestBody, manifest); err != nil {
return err
}
if art.ExtraAttrs == nil {
art.ExtraAttrs = map[string]interface{}{}
art.ExtraAttrs = map[string]any{}
}
if manifest.Annotations[AnnotationVariantKey] == AnnotationVariantValue || manifest.Annotations[AnnotationHandlerKey] == AnnotationHandlerValue {
// for annotation way

View File

@ -225,10 +225,10 @@ func (c *controller) Get(ctx context.Context, digest string, options ...Option)
opts := newOptions(options...)
keywords := make(map[string]interface{})
keywords := make(map[string]any)
if digest != "" {
ol := q.OrList{
Values: []interface{}{
Values: []any{
digest,
},
}

View File

@ -232,7 +232,7 @@ func (suite *ControllerTestSuite) TestGet() {
func (suite *ControllerTestSuite) TestSync() {
var references []distribution.Descriptor
for i := 0; i < 5; i++ {
for i := range 5 {
references = append(references, distribution.Descriptor{
MediaType: fmt.Sprintf("media type %d", i),
Digest: suite.Digest(),

View File

@ -46,11 +46,11 @@ type Controller interface {
// UserConfigs get the user scope configurations
UserConfigs(ctx context.Context) (map[string]*models.Value, error)
// UpdateUserConfigs update the user scope configurations
UpdateUserConfigs(ctx context.Context, conf map[string]interface{}) error
UpdateUserConfigs(ctx context.Context, conf map[string]any) error
// AllConfigs get all configurations, used by internal, should include the system config items
AllConfigs(ctx context.Context) (map[string]interface{}, error)
AllConfigs(ctx context.Context) (map[string]any, error)
// ConvertForGet - delete sensitive attrs and add editable field to every attr
ConvertForGet(ctx context.Context, cfg map[string]interface{}, internal bool) (map[string]*models.Value, error)
ConvertForGet(ctx context.Context, cfg map[string]any, internal bool) (map[string]*models.Value, error)
// OverwriteConfig overwrite config in the database and set all configure read only when CONFIG_OVERWRITE_JSON is provided
OverwriteConfig(ctx context.Context) error
}
@ -70,13 +70,13 @@ func (c *controller) UserConfigs(ctx context.Context) (map[string]*models.Value,
return c.ConvertForGet(ctx, configs, false)
}
func (c *controller) AllConfigs(ctx context.Context) (map[string]interface{}, error) {
func (c *controller) AllConfigs(ctx context.Context) (map[string]any, error) {
mgr := config.GetCfgManager(ctx)
configs := mgr.GetAll(ctx)
return configs, nil
}
func (c *controller) UpdateUserConfigs(ctx context.Context, conf map[string]interface{}) error {
func (c *controller) UpdateUserConfigs(ctx context.Context, conf map[string]any) error {
if readOnlyForAll {
return errors.ForbiddenError(nil).WithMessage("current config is init by env variable: CONFIG_OVERWRITE_JSON, it cannot be updated")
}
@ -97,7 +97,7 @@ func (c *controller) UpdateUserConfigs(ctx context.Context, conf map[string]inte
return c.updateLogEndpoint(ctx, conf)
}
func (c *controller) updateLogEndpoint(ctx context.Context, cfgs map[string]interface{}) error {
func (c *controller) updateLogEndpoint(ctx context.Context, cfgs map[string]any) error {
// check if the audit log forward endpoint updated
if _, ok := cfgs[common.AuditLogForwardEndpoint]; ok {
auditEP := config.AuditLogForwardEndpoint(ctx)
@ -112,7 +112,7 @@ func (c *controller) updateLogEndpoint(ctx context.Context, cfgs map[string]inte
return nil
}
func (c *controller) validateCfg(ctx context.Context, cfgs map[string]interface{}) error {
func (c *controller) validateCfg(ctx context.Context, cfgs map[string]any) error {
mgr := config.GetCfgManager(ctx)
// check if auth can be modified
@ -146,7 +146,7 @@ func (c *controller) validateCfg(ctx context.Context, cfgs map[string]interface{
return nil
}
func verifySkipAuditLogCfg(ctx context.Context, cfgs map[string]interface{}, mgr config.Manager) error {
func verifySkipAuditLogCfg(ctx context.Context, cfgs map[string]any, mgr config.Manager) error {
updated := false
endPoint := mgr.Get(ctx, common.AuditLogForwardEndpoint).GetString()
skipAuditDB := mgr.Get(ctx, common.SkipAuditLogDatabase).GetBool()
@ -169,7 +169,7 @@ func verifySkipAuditLogCfg(ctx context.Context, cfgs map[string]interface{}, mgr
}
// verifyValueLengthCfg verifies the cfgs which need to check the value max length to align with frontend.
func verifyValueLengthCfg(_ context.Context, cfgs map[string]interface{}) error {
func verifyValueLengthCfg(_ context.Context, cfgs map[string]any) error {
maxValue := maxValueLimitedByLength(common.UIMaxLengthLimitedOfNumber)
validateCfgs := []string{
common.TokenExpiration,
@ -206,7 +206,7 @@ func maxValueLimitedByLength(length int) int64 {
var value int64
// the times for multiple, should *10 for every time
times := 1
for i := 0; i < length; i++ {
for range length {
value = value + int64(9*times)
times = times * 10
}
@ -217,11 +217,11 @@ func maxValueLimitedByLength(length int) int64 {
// ScanAllPolicy is represent the json request and object for scan all policy
// Only for migrating from the legacy schedule.
type ScanAllPolicy struct {
Type string `json:"type"`
Param map[string]interface{} `json:"parameter,omitempty"`
Type string `json:"type"`
Param map[string]any `json:"parameter,omitempty"`
}
func (c *controller) ConvertForGet(ctx context.Context, cfg map[string]interface{}, internal bool) (map[string]*models.Value, error) {
func (c *controller) ConvertForGet(ctx context.Context, cfg map[string]any, internal bool) (map[string]*models.Value, error) {
result := map[string]*models.Value{}
mList := metadata.Instance().GetAll()
@ -270,7 +270,7 @@ func (c *controller) ConvertForGet(ctx context.Context, cfg map[string]interface
}
func (c *controller) OverwriteConfig(ctx context.Context) error {
cfgMap := map[string]interface{}{}
cfgMap := map[string]any{}
if v, ok := os.LookupEnv(configOverwriteJSON); ok {
err := json.Unmarshal([]byte(v), &cfgMap)
if err != nil {

View File

@ -33,7 +33,7 @@ func Test_verifySkipAuditLogCfg(t *testing.T) {
Return(&metadata.ConfigureValue{Name: common.SkipAuditLogDatabase, Value: "true"})
type args struct {
ctx context.Context
cfgs map[string]interface{}
cfgs map[string]any
mgr config.Manager
}
tests := []struct {
@ -42,17 +42,17 @@ func Test_verifySkipAuditLogCfg(t *testing.T) {
wantErr bool
}{
{name: "both configured", args: args{ctx: context.TODO(),
cfgs: map[string]interface{}{common.AuditLogForwardEndpoint: "harbor-log:15041",
cfgs: map[string]any{common.AuditLogForwardEndpoint: "harbor-log:15041",
common.SkipAuditLogDatabase: true},
mgr: cfgManager}, wantErr: false},
{name: "no forward endpoint config", args: args{ctx: context.TODO(),
cfgs: map[string]interface{}{common.SkipAuditLogDatabase: true},
cfgs: map[string]any{common.SkipAuditLogDatabase: true},
mgr: cfgManager}, wantErr: true},
{name: "none configured", args: args{ctx: context.TODO(),
cfgs: map[string]interface{}{},
cfgs: map[string]any{},
mgr: cfgManager}, wantErr: false},
{name: "enabled skip audit log database, but change log forward endpoint to empty", args: args{ctx: context.TODO(),
cfgs: map[string]interface{}{common.AuditLogForwardEndpoint: ""},
cfgs: map[string]any{common.AuditLogForwardEndpoint: ""},
mgr: cfgManager}, wantErr: true},
}
for _, tt := range tests {
@ -89,24 +89,24 @@ func Test_maxValueLimitedByLength(t *testing.T) {
func Test_verifyValueLengthCfg(t *testing.T) {
type args struct {
ctx context.Context
cfgs map[string]interface{}
cfgs map[string]any
}
tests := []struct {
name string
args args
wantErr bool
}{
{name: "valid config", args: args{context.TODO(), map[string]interface{}{
{name: "valid config", args: args{context.TODO(), map[string]any{
common.TokenExpiration: float64(100),
common.RobotTokenDuration: float64(100),
common.SessionTimeout: float64(100),
}}, wantErr: false},
{name: "invalid config with negative value", args: args{context.TODO(), map[string]interface{}{
{name: "invalid config with negative value", args: args{context.TODO(), map[string]any{
common.TokenExpiration: float64(-1),
common.RobotTokenDuration: float64(100),
common.SessionTimeout: float64(100),
}}, wantErr: true},
{name: "invalid config with value over length limit", args: args{context.TODO(), map[string]interface{}{
{name: "invalid config with value over length limit", args: args{context.TODO(), map[string]any{
common.TokenExpiration: float64(100),
common.RobotTokenDuration: float64(100000000000000000),
common.SessionTimeout: float64(100),

View File

@ -28,12 +28,12 @@ import (
htesting "github.com/goharbor/harbor/src/testing"
)
var TestDBConfig = map[string]interface{}{
var TestDBConfig = map[string]any{
common.LDAPBaseDN: "dc=example,dc=com",
common.LDAPURL: "ldap.example.com",
}
var TestConfigWithScanAll = map[string]interface{}{
var TestConfigWithScanAll = map[string]any{
"postgresql_host": "localhost",
"postgresql_database": "registry",
"postgresql_password": "root123",
@ -67,7 +67,7 @@ func (c *controllerTestSuite) TestGetUserCfg() {
}
func (c *controllerTestSuite) TestConvertForGet() {
conf := map[string]interface{}{
conf := map[string]any{
"ldap_url": "ldaps.myexample,com",
"ldap_base_dn": "dc=myexample,dc=com",
"auth_mode": "ldap_auth",
@ -83,7 +83,7 @@ func (c *controllerTestSuite) TestConvertForGet() {
c.False(exist)
// password type should be sent to internal api call
conf2 := map[string]interface{}{
conf2 := map[string]any{
"ldap_url": "ldaps.myexample,com",
"ldap_base_dn": "dc=myexample,dc=com",
"auth_mode": "ldap_auth",
@ -109,7 +109,7 @@ func (c *controllerTestSuite) TestGetAll() {
func (c *controllerTestSuite) TestUpdateUserCfg() {
userConf := map[string]interface{}{
userConf := map[string]any{
common.LDAPURL: "ldaps.myexample,com",
common.LDAPBaseDN: "dc=myexample,dc=com",
}
@ -121,7 +121,7 @@ func (c *controllerTestSuite) TestUpdateUserCfg() {
}
c.Equal("dc=myexample,dc=com", cfgResp["ldap_base_dn"].Val)
c.Equal("ldaps.myexample,com", cfgResp["ldap_url"].Val)
badCfg := map[string]interface{}{
badCfg := map[string]any{
common.LDAPScope: 5,
}
err2 := c.controller.UpdateUserConfigs(ctx, badCfg)
@ -130,7 +130,7 @@ func (c *controllerTestSuite) TestUpdateUserCfg() {
}
/*func (c *controllerTestSuite) TestCheckUnmodifiable() {
conf := map[string]interface{}{
conf := map[string]any{
"ldap_url": "ldaps.myexample,com",
"ldap_base_dn": "dc=myexample,dc=com",
"auth_mode": "ldap_auth",

View File

@ -41,7 +41,7 @@ func (h *Handler) Name() string {
}
// Handle ...
func (h *Handler) Handle(ctx context.Context, value interface{}) error {
func (h *Handler) Handle(ctx context.Context, value any) error {
var addAuditLog bool
switch v := value.(type) {
case *event.PushArtifactEvent, *event.DeleteArtifactEvent,

View File

@ -99,7 +99,7 @@ func (a *ArtifactEventHandler) Name() string {
}
// Handle ...
func (a *ArtifactEventHandler) Handle(ctx context.Context, value interface{}) error {
func (a *ArtifactEventHandler) Handle(ctx context.Context, value any) error {
switch v := value.(type) {
case *event.PullArtifactEvent:
return a.onPull(ctx, v.ArtifactEvent)
@ -190,7 +190,7 @@ func (a *ArtifactEventHandler) syncFlushPullTime(ctx context.Context, artifactID
if tagName != "" {
tags, err := tag.Ctl.List(ctx, q.New(
map[string]interface{}{
map[string]any{
"ArtifactID": artifactID,
"Name": tagName,
}), nil)

View File

@ -53,7 +53,7 @@ func (a *ProjectEventHandler) onProjectDelete(ctx context.Context, event *event.
}
// Handle handle project event
func (a *ProjectEventHandler) Handle(ctx context.Context, value interface{}) error {
func (a *ProjectEventHandler) Handle(ctx context.Context, value any) error {
switch v := value.(type) {
case *event.DeleteProjectEvent:
return a.onProjectDelete(ctx, v)

View File

@ -36,7 +36,7 @@ func (p *Handler) Name() string {
}
// Handle ...
func (p *Handler) Handle(ctx context.Context, value interface{}) error {
func (p *Handler) Handle(ctx context.Context, value any) error {
switch v := value.(type) {
case *event.PushArtifactEvent:
return p.handlePushArtifact(ctx, v)

View File

@ -82,7 +82,7 @@ func (suite *PreheatTestSuite) TestName() {
// TestHandle ...
func (suite *PreheatTestSuite) TestHandle() {
type args struct {
data interface{}
data any
}
tests := []struct {
name string

View File

@ -36,7 +36,7 @@ func (r *Handler) Name() string {
}
// Handle ...
func (r *Handler) Handle(ctx context.Context, value interface{}) error {
func (r *Handler) Handle(ctx context.Context, value any) error {
pushArtEvent, ok := value.(*event.PushArtifactEvent)
if ok {
return r.handlePushArtifact(ctx, pushArtEvent)
@ -78,7 +78,7 @@ func (r *Handler) handlePushArtifact(ctx context.Context, event *event.PushArtif
Metadata: &model.ResourceMetadata{
Repository: &model.Repository{
Name: event.Repository,
Metadata: map[string]interface{}{
Metadata: map[string]any{
"public": strconv.FormatBool(public),
},
},
@ -138,7 +138,7 @@ func (r *Handler) handleCreateTag(ctx context.Context, event *event.CreateTagEve
Metadata: &model.ResourceMetadata{
Repository: &model.Repository{
Name: event.Repository,
Metadata: map[string]interface{}{
Metadata: map[string]any{
"public": strconv.FormatBool(public),
},
},

View File

@ -17,7 +17,7 @@ func TestMain(m *testing.M) {
}
func TestBuildImageResourceURL(t *testing.T) {
cfg := map[string]interface{}{
cfg := map[string]any{
common.ExtEndpoint: "https://demo.goharbor.io",
}
config.InitWithSettings(cfg)

View File

@ -39,7 +39,7 @@ func (a *Handler) Name() string {
}
// Handle preprocess artifact event data and then publish hook event
func (a *Handler) Handle(ctx context.Context, value interface{}) error {
func (a *Handler) Handle(ctx context.Context, value any) error {
if !config.NotificationEnable(ctx) {
log.Debug("notification feature is not enabled")
return nil

View File

@ -45,7 +45,7 @@ func (r *ReplicationHandler) Name() string {
}
// Handle ...
func (r *ReplicationHandler) Handle(ctx context.Context, value interface{}) error {
func (r *ReplicationHandler) Handle(ctx context.Context, value any) error {
if !config.NotificationEnable(ctx) {
log.Debug("notification feature is not enabled")
return nil

View File

@ -73,7 +73,7 @@ func TestReplicationHandler_Handle(t *testing.T) {
handler := &ReplicationHandler{}
type args struct {
data interface{}
data any
}
tests := []struct {
name string

View File

@ -40,7 +40,7 @@ func (r *RetentionHandler) Name() string {
}
// Handle ...
func (r *RetentionHandler) Handle(ctx context.Context, value interface{}) error {
func (r *RetentionHandler) Handle(ctx context.Context, value any) error {
if !config.NotificationEnable(ctx) {
log.Debug("notification feature is not enabled")
return nil

View File

@ -61,7 +61,7 @@ func TestRetentionHandler_Handle(t *testing.T) {
}, nil)
type args struct {
data interface{}
data any
}
tests := []struct {
name string

View File

@ -38,7 +38,7 @@ func (qp *Handler) Name() string {
}
// Handle ...
func (qp *Handler) Handle(ctx context.Context, value interface{}) error {
func (qp *Handler) Handle(ctx context.Context, value any) error {
quotaEvent, ok := value.(*event.QuotaEvent)
if !ok {
return errors.New("invalid quota event type")

View File

@ -53,7 +53,7 @@ func TestQuotaPreprocessHandler(t *testing.T) {
// SetupSuite prepares env for test suite.
func (suite *QuotaPreprocessHandlerSuite) SetupSuite() {
common_dao.PrepareTestForPostgresSQL()
cfg := map[string]interface{}{
cfg := map[string]any{
common.NotificationEnable: true,
}
config.InitWithSettings(cfg)
@ -110,7 +110,7 @@ func (m *MockHandler) Name() string {
}
// Handle ...
func (m *MockHandler) Handle(ctx context.Context, value interface{}) error {
func (m *MockHandler) Handle(ctx context.Context, value any) error {
return nil
}
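The handler tests in this area are organized as testify suites whose SetupSuite seeds configuration from a map[string]any before any case runs. A self-contained sketch of that wiring, with a local settings map standing in for Harbor's config package:

package quota

import (
    "testing"

    "github.com/stretchr/testify/suite"
)

// settings is a local stand-in for a configuration store seeded from map[string]any.
var settings map[string]any

type QuotaPreprocessHandlerSuite struct {
    suite.Suite
}

// SetupSuite runs once before the test cases and seeds the fake configuration.
func (s *QuotaPreprocessHandlerSuite) SetupSuite() {
    settings = map[string]any{"notification_enable": true}
}

func (s *QuotaPreprocessHandlerSuite) TestNotificationEnabled() {
    enabled, ok := settings["notification_enable"].(bool)
    s.True(ok)
    s.True(enabled)
}

func TestQuotaPreprocessHandlerSuite(t *testing.T) {
    suite.Run(t, new(QuotaPreprocessHandlerSuite))
}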

View File

@ -42,7 +42,7 @@ func (si *Handler) Name() string {
}
// Handle preprocess chart event data and then publish hook event
func (si *Handler) Handle(ctx context.Context, value interface{}) error {
func (si *Handler) Handle(ctx context.Context, value any) error {
if value == nil {
return errors.New("empty scan artifact event")
}
@ -129,7 +129,7 @@ func constructScanImagePayload(ctx context.Context, event *event.ScanImageEvent,
// Wait for reasonable time to make sure the report is ready
// Interval=500ms and total time = 5s
// If the report is still not ready in the total time, then failed at then
for i := 0; i < 10; i++ {
for range 10 {
// First check in case it is ready
if re, err := scan.DefaultController.GetReport(ctx, art, []string{v1.MimeTypeNativeReport, v1.MimeTypeGenericVulnerabilityReport}); err == nil {
if len(re) > 0 && len(re[0].Report) > 0 {
@ -142,7 +142,7 @@ func constructScanImagePayload(ctx context.Context, event *event.ScanImageEvent,
time.Sleep(500 * time.Millisecond)
}
scanSummaries := map[string]interface{}{}
scanSummaries := map[string]any{}
if event.ScanType == v1.ScanTypeVulnerability {
scanSummaries, err = scan.DefaultController.GetSummary(ctx, art, event.ScanType, []string{v1.MimeTypeNativeReport, v1.MimeTypeGenericVulnerabilityReport})
if err != nil {
@ -150,7 +150,7 @@ func constructScanImagePayload(ctx context.Context, event *event.ScanImageEvent,
}
}
sbomOverview := map[string]interface{}{}
sbomOverview := map[string]any{}
if event.ScanType == v1.ScanTypeSbom {
sbomOverview, err = scan.DefaultController.GetSummary(ctx, art, event.ScanType, []string{v1.MimeTypeSBOMReport})
if err != nil {

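Beyond the any rename, this hunk keeps the bounded polling loop that waits for a scan report (500 ms interval, at most 10 attempts), now written with Go 1.22's integer range form instead of a counted loop. A standalone sketch of that retry shape; getReport is a hypothetical stand-in for the real report lookup:

package main

import (
    "errors"
    "fmt"
    "time"
)

// getReport pretends the report becomes available on the third attempt.
func getReport(attempt int) (string, error) {
    if attempt < 3 {
        return "", errors.New("report not ready")
    }
    return `{"severity":"Low"}`, nil
}

func main() {
    var report string
    // Interval = 500ms, at most 10 attempts (~5s total), as in the handler's comment.
    for i := range 10 { // Go 1.22+: range over an integer
        if r, err := getReport(i); err == nil && r != "" {
            report = r
            break
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("report:", report)
}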
View File

@ -63,7 +63,7 @@ func TestScanImagePreprocessHandler(t *testing.T) {
// SetupSuite prepares env for test suite.
func (suite *ScanImagePreprocessHandlerSuite) SetupSuite() {
common_dao.PrepareTestForPostgresSQL()
cfg := map[string]interface{}{
cfg := map[string]any{
common.NotificationEnable: true,
}
config.InitWithSettings(cfg)
@ -92,7 +92,7 @@ func (suite *ScanImagePreprocessHandlerSuite) SetupSuite() {
mc := &scantesting.Controller{}
var options []report.Option
s := make(map[string]interface{})
s := make(map[string]any)
mc.On("GetSummary", a, []string{v1.MimeTypeNativeReport}, options).Return(s, nil)
mock.OnAnything(mc, "GetSummary").Return(s, nil)
mock.OnAnything(mc, "GetReport").Return(reports, nil)
@ -153,7 +153,7 @@ func (m *MockHTTPHandler) Name() string {
}
// Handle ...
func (m *MockHTTPHandler) Handle(ctx context.Context, value interface{}) error {
func (m *MockHTTPHandler) Handle(ctx context.Context, value any) error {
return nil
}

View File

@ -33,7 +33,7 @@ type robotEventTestSuite struct {
}
func (t *tagEventTestSuite) TestResolveOfCreateRobotEventMetadata() {
cfg := map[string]interface{}{
cfg := map[string]any{
common.RobotPrefix: "robot$",
}
config.InitWithSettings(cfg)
@ -57,7 +57,7 @@ func (t *tagEventTestSuite) TestResolveOfCreateRobotEventMetadata() {
}
func (t *tagEventTestSuite) TestResolveOfDeleteRobotEventMetadata() {
cfg := map[string]interface{}{
cfg := map[string]any{
common.RobotPrefix: "robot$",
}
config.InitWithSettings(cfg)

View File

@ -75,7 +75,7 @@ type controller struct {
// Start starts the manual GC
func (c *controller) Start(ctx context.Context, policy Policy, trigger string) (int64, error) {
para := make(map[string]interface{})
para := make(map[string]any)
para["delete_untagged"] = policy.DeleteUntagged
para["dry_run"] = policy.DryRun
para["workers"] = policy.Workers
@ -129,7 +129,7 @@ func (c *controller) ListExecutions(ctx context.Context, query *q.Query) ([]*Exe
// GetExecution ...
func (c *controller) GetExecution(ctx context.Context, id int64) (*Execution, error) {
execs, err := c.exeMgr.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ID": id,
"VendorType": job.GarbageCollectionVendorType,
},
@ -147,7 +147,7 @@ func (c *controller) GetExecution(ctx context.Context, id int64) (*Execution, er
// GetTask ...
func (c *controller) GetTask(ctx context.Context, id int64) (*Task, error) {
tasks, err := c.taskMgr.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"ID": id,
"VendorType": job.GarbageCollectionVendorType,
},
@ -203,7 +203,7 @@ func (c *controller) GetSchedule(ctx context.Context) (*scheduler.Schedule, erro
// CreateSchedule ...
func (c *controller) CreateSchedule(ctx context.Context, cronType, cron string, policy Policy) (int64, error) {
extras := make(map[string]interface{})
extras := make(map[string]any)
extras["delete_untagged"] = policy.DeleteUntagged
extras["workers"] = policy.Workers
return c.schedulerMgr.Schedule(ctx, job.GarbageCollectionVendorType, -1, cronType, cron, job.GarbageCollectionVendorType, policy, extras)
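The GC controller flattens the policy into a map[string]any of job parameters and later locates executions and tasks with keyword-filtered queries. A simplified sketch of both shapes, independent of Harbor's task and q packages (type and key names are illustrative):

package main

import "fmt"

// Query mimics the shape of a keyword-filtered list query.
type Query struct {
    Keywords map[string]any
}

// Policy mirrors the GC policy fields that get flattened into job parameters.
type Policy struct {
    DeleteUntagged bool
    DryRun         bool
    Workers        int
}

// jobParams flattens a policy into the map[string]any handed to the job service.
func jobParams(p Policy) map[string]any {
    return map[string]any{
        "delete_untagged": p.DeleteUntagged,
        "dry_run":         p.DryRun,
        "workers":         p.Workers,
    }
}

func main() {
    fmt.Println(jobParams(Policy{DeleteUntagged: true, Workers: 2}))

    // Executions are then looked up by ID and vendor type via keyword filters.
    q := Query{Keywords: map[string]any{
        "ID":         int64(1),
        "VendorType": "GARBAGE_COLLECTION",
    }}
    fmt.Println(q.Keywords)
}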

View File

@ -38,7 +38,7 @@ func (g *gcCtrTestSuite) TestStart() {
g.taskMgr.On("Create", mock.Anything, mock.Anything, mock.Anything).Return(int64(1), nil)
g.taskMgr.On("Stop", mock.Anything, mock.Anything).Return(nil)
dataMap := make(map[string]interface{})
dataMap := make(map[string]any)
p := Policy{
DeleteUntagged: true,
ExtraAttrs: dataMap,
@ -146,7 +146,7 @@ func (g *gcCtrTestSuite) TestCreateSchedule() {
g.scheduler.On("Schedule", mock.Anything, mock.Anything, mock.Anything, mock.Anything,
mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(int64(1), nil)
dataMap := make(map[string]interface{})
dataMap := make(map[string]any)
p := Policy{
DeleteUntagged: true,
ExtraAttrs: dataMap,

View File

@ -20,11 +20,11 @@ import (
// Policy ...
type Policy struct {
Trigger *Trigger `json:"trigger"`
DeleteUntagged bool `json:"deleteuntagged"`
DryRun bool `json:"dryrun"`
Workers int `json:"workers"`
ExtraAttrs map[string]interface{} `json:"extra_attrs"`
Trigger *Trigger `json:"trigger"`
DeleteUntagged bool `json:"deleteuntagged"`
DryRun bool `json:"dryrun"`
Workers int `json:"workers"`
ExtraAttrs map[string]any `json:"extra_attrs"`
}
// TriggerType represents the type of trigger.
@ -47,7 +47,7 @@ type Execution struct {
Status string
StatusMessage string
Trigger string
ExtraAttrs map[string]interface{}
ExtraAttrs map[string]any
StartTime time.Time
UpdateTime time.Time
}

View File

@ -48,7 +48,7 @@ func (c *controller) GetHealth(_ context.Context) *OverallHealthStatus {
for name, checker := range registry {
go check(name, checker, timeout, ch)
}
for i := 0; i < len(registry); i++ {
for range len(registry) {
componentStatus := <-ch
if len(componentStatus.Error) != 0 {
isHealthy = false

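The health controller probes every registered checker in its own goroutine and then drains exactly one result per checker from a channel; the drain loop now uses the for range len(registry) form. A compact fan-out/fan-in sketch with hypothetical checkers:

package main

import (
    "fmt"
    "time"
)

type componentStatus struct {
    Name  string
    Error string
}

// check is a stand-in for a single component health probe.
func check(name string, ch chan<- componentStatus) {
    time.Sleep(10 * time.Millisecond) // simulate the probe
    ch <- componentStatus{Name: name}
}

func main() {
    registry := []string{"core", "database", "redis"}
    ch := make(chan componentStatus, len(registry))

    for _, name := range registry {
        go check(name, ch)
    }

    healthy := true
    // Receive exactly one status per registered checker (Go 1.22+ integer range).
    for range len(registry) {
        if s := <-ch; s.Error != "" {
            healthy = false
        }
    }
    fmt.Println("healthy:", healthy)
}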
View File

@ -138,7 +138,7 @@ func (c *controller) Get(ctx context.Context, digest string) (*Icon, error) {
} else {
// read icon from blob
artifacts, err := c.artMgr.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"Icon": digest,
},
})

View File

@ -17,6 +17,7 @@ package jobmonitor
import (
"context"
"fmt"
"slices"
"strings"
"time"
@ -278,12 +279,7 @@ func (w *monitorController) ListQueues(ctx context.Context) ([]*jm.Queue, error)
}
func skippedUnusedJobType(jobType string) bool {
for _, t := range skippedJobTypes {
if jobType == t {
return true
}
}
return false
return slices.Contains(skippedJobTypes, jobType)
}
func (w *monitorController) PauseJobQueues(ctx context.Context, jobType string) error {

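Here a hand-rolled membership loop becomes slices.Contains from the standard library (Go 1.21+). A minimal sketch of the rewritten helper; the job type values below are illustrative, not the controller's actual list:

package main

import (
    "fmt"
    "slices"
)

// Illustrative values; the real list lives in the job monitor controller.
var skippedJobTypes = []string{"DEMO", "SAMPLE"}

// skippedUnusedJobType reports whether jobType is in the skip list,
// equivalent to the removed loop that compared each element.
func skippedUnusedJobType(jobType string) bool {
    return slices.Contains(skippedJobTypes, jobType)
}

func main() {
    fmt.Println(skippedUnusedJobType("DEMO"))               // true
    fmt.Println(skippedUnusedJobType("GARBAGE_COLLECTION")) // false
}

slices.Contains performs the same linear scan as the removed loop; the gain is readability, not complexity.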
View File

@ -22,7 +22,7 @@ type Execution struct {
Status string
StatusMessage string
Trigger string
ExtraAttrs map[string]interface{}
ExtraAttrs map[string]any
StartTime time.Time
EndTime time.Time
}

View File

@ -35,7 +35,7 @@ type SchedulerController interface {
// Get the schedule
Get(ctx context.Context, vendorType string) (*scheduler.Schedule, error)
// Create with cron type & string
Create(ctx context.Context, vendorType, cronType, cron, callbackFuncName string, policy interface{}, extrasParam map[string]interface{}) (int64, error)
Create(ctx context.Context, vendorType, cronType, cron, callbackFuncName string, policy any, extrasParam map[string]any) (int64, error)
// Delete the schedule
Delete(ctx context.Context, vendorType string) error
// List lists schedules
@ -76,7 +76,7 @@ func (s *schedulerController) Get(ctx context.Context, vendorType string) (*sche
}
func (s *schedulerController) Create(ctx context.Context, vendorType, cronType, cron, callbackFuncName string,
policy interface{}, extrasParam map[string]interface{}) (int64, error) {
policy any, extrasParam map[string]any) (int64, error) {
return s.schedulerMgr.Schedule(ctx, vendorType, -1, cronType, cron, callbackFuncName, policy, extrasParam)
}
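The scheduler controller's Create is a thin pass-through: it forwards policy any and extrasParam map[string]any to the scheduler manager with a fixed vendor ID of -1. A sketch of that delegation against a stub manager interface (names are assumptions, not Harbor's exact types):

package main

import (
    "context"
    "fmt"
)

// schedulerManager stubs the underlying scheduler manager dependency.
type schedulerManager interface {
    Schedule(ctx context.Context, vendorType string, vendorID int64,
        cronType, cron, callback string, policy any, extras map[string]any) (int64, error)
}

type fakeMgr struct{}

func (fakeMgr) Schedule(_ context.Context, vendorType string, _ int64,
    _, cron, _ string, _ any, _ map[string]any) (int64, error) {
    fmt.Printf("scheduled %s with cron %q\n", vendorType, cron)
    return 1, nil
}

type schedulerController struct{ mgr schedulerManager }

// Create forwards to the manager with a fixed vendor ID of -1, as in the diff.
func (s *schedulerController) Create(ctx context.Context, vendorType, cronType, cron, callback string,
    policy any, extras map[string]any) (int64, error) {
    return s.mgr.Schedule(ctx, vendorType, -1, cronType, cron, callback, policy, extras)
}

func main() {
    c := &schedulerController{mgr: fakeMgr{}}
    id, _ := c.Create(context.Background(), "GARBAGE_COLLECTION", "Custom",
        "0 0 0 * * *", "GARBAGE_COLLECTION", nil, map[string]any{})
    fmt.Println("schedule id:", id)
}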

View File

@ -49,7 +49,7 @@ func (s *ScheduleTestSuite) TestCreateSchedule() {
s.scheduler.On("Schedule", mock.Anything, mock.Anything, mock.Anything, mock.Anything,
mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(int64(1), nil)
dataMap := make(map[string]interface{})
dataMap := make(map[string]any)
p := purge.JobPolicy{}
id, err := s.ctl.Create(nil, job.PurgeAuditVendorType, "Daily", "* * * * * *", purge.SchedulerCallback, p, dataMap)
s.Nil(err)
@ -76,7 +76,7 @@ func (s *ScheduleTestSuite) TestGetSchedule() {
func (s *ScheduleTestSuite) TestListSchedule() {
mock.OnAnything(s.scheduler, "ListSchedules").Return([]*scheduler.Schedule{
{ID: 1, VendorType: "GARBAGE_COLLECTION", CRON: "0 0 0 * * *", ExtraAttrs: map[string]interface{}{"args": "sample args"}}}, nil).Once()
{ID: 1, VendorType: "GARBAGE_COLLECTION", CRON: "0 0 0 * * *", ExtraAttrs: map[string]any{"args": "sample args"}}}, nil).Once()
schedules, err := s.scheduler.ListSchedules(nil, nil)
s.Assert().Nil(err)
s.Assert().Equal(1, len(schedules))

View File

@ -29,7 +29,7 @@ import (
"github.com/goharbor/harbor/src/testing/pkg/ldap"
)
var defaultConfigWithVerifyCert = map[string]interface{}{
var defaultConfigWithVerifyCert = map[string]any{
common.ExtEndpoint: "https://host01.com",
common.AUTHMode: common.LDAPAuth,
common.DatabaseType: "postgresql",

View File

@ -34,17 +34,17 @@ import (
// Controller defines the operation related to project member
type Controller interface {
// Get gets the project member with ID
Get(ctx context.Context, projectNameOrID interface{}, memberID int) (*models.Member, error)
Get(ctx context.Context, projectNameOrID any, memberID int) (*models.Member, error)
// Create add project member to project
Create(ctx context.Context, projectNameOrID interface{}, req Request) (int, error)
Create(ctx context.Context, projectNameOrID any, req Request) (int, error)
// Delete member from project
Delete(ctx context.Context, projectNameOrID interface{}, memberID int) error
Delete(ctx context.Context, projectNameOrID any, memberID int) error
// List lists all project members with condition
List(ctx context.Context, projectNameOrID interface{}, entityName string, query *q.Query) ([]*models.Member, error)
List(ctx context.Context, projectNameOrID any, entityName string, query *q.Query) ([]*models.Member, error)
// UpdateRole update the project member role
UpdateRole(ctx context.Context, projectNameOrID interface{}, memberID int, role int) error
UpdateRole(ctx context.Context, projectNameOrID any, memberID int, role int) error
// Count get the total amount of project members
Count(ctx context.Context, projectNameOrID interface{}, query *q.Query) (int, error)
Count(ctx context.Context, projectNameOrID any, query *q.Query) (int, error)
// IsProjectAdmin judges if the user is a project admin of any project
IsProjectAdmin(ctx context.Context, member commonmodels.User) (bool, error)
}
@ -89,7 +89,7 @@ func NewController() Controller {
return &controller{mgr: member.Mgr, projectMgr: pkg.ProjectMgr, userManager: user.New(), groupManager: usergroup.Mgr}
}
func (c *controller) Count(ctx context.Context, projectNameOrID interface{}, query *q.Query) (int, error) {
func (c *controller) Count(ctx context.Context, projectNameOrID any, query *q.Query) (int, error) {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return 0, err
@ -97,7 +97,7 @@ func (c *controller) Count(ctx context.Context, projectNameOrID interface{}, que
return c.mgr.GetTotalOfProjectMembers(ctx, p.ProjectID, query)
}
func (c *controller) UpdateRole(ctx context.Context, projectNameOrID interface{}, memberID int, role int) error {
func (c *controller) UpdateRole(ctx context.Context, projectNameOrID any, memberID int, role int) error {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return err
@ -108,7 +108,7 @@ func (c *controller) UpdateRole(ctx context.Context, projectNameOrID interface{}
return c.mgr.UpdateRole(ctx, p.ProjectID, memberID, role)
}
func (c *controller) Get(ctx context.Context, projectNameOrID interface{}, memberID int) (*models.Member, error) {
func (c *controller) Get(ctx context.Context, projectNameOrID any, memberID int) (*models.Member, error) {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return nil, err
@ -119,7 +119,7 @@ func (c *controller) Get(ctx context.Context, projectNameOrID interface{}, membe
return c.mgr.Get(ctx, p.ProjectID, memberID)
}
func (c *controller) Create(ctx context.Context, projectNameOrID interface{}, req Request) (int, error) {
func (c *controller) Create(ctx context.Context, projectNameOrID any, req Request) (int, error) {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return 0, err
@ -239,7 +239,7 @@ func isValidRole(role int) bool {
}
}
func (c *controller) List(ctx context.Context, projectNameOrID interface{}, entityName string, query *q.Query) ([]*models.Member, error) {
func (c *controller) List(ctx context.Context, projectNameOrID any, entityName string, query *q.Query) ([]*models.Member, error) {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return nil, err
@ -254,7 +254,7 @@ func (c *controller) List(ctx context.Context, projectNameOrID interface{}, enti
return c.mgr.List(ctx, pm, query)
}
func (c *controller) Delete(ctx context.Context, projectNameOrID interface{}, memberID int) error {
func (c *controller) Delete(ctx context.Context, projectNameOrID any, memberID int) error {
p, err := c.projectMgr.Get(ctx, projectNameOrID)
if err != nil {
return err

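Throughout the project member controller, projectNameOrID any lets callers pass either a project name (string) or a numeric ID, and the project manager resolves whichever was given. A simplified sketch of that dual-identifier lookup; the in-memory resolver below is a stand-in, not Harbor's projectMgr.Get:

package main

import (
    "errors"
    "fmt"
)

type Project struct {
    ProjectID int64
    Name      string
}

var (
    byName = map[string]*Project{"library": {ProjectID: 1, Name: "library"}}
    byID   = map[int64]*Project{1: {ProjectID: 1, Name: "library"}}
)

// getProject resolves a project from either its name or its numeric ID,
// mirroring how a projectNameOrID any parameter is typically consumed.
func getProject(projectNameOrID any) (*Project, error) {
    switch v := projectNameOrID.(type) {
    case string:
        if p, ok := byName[v]; ok {
            return p, nil
        }
    case int64:
        if p, ok := byID[v]; ok {
            return p, nil
        }
    }
    return nil, errors.New("project not found")
}

func main() {
    p, _ := getProject("library")
    fmt.Println(p.ProjectID) // 1
    p, _ = getProject(int64(1))
    fmt.Println(p.Name) // library
}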
View File

@ -180,7 +180,7 @@ func (c *controller) CreateInstance(ctx context.Context, instance *providerModel
// Avoid duplicated endpoint
var query = &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"endpoint": instance.Endpoint,
},
}
@ -208,7 +208,7 @@ func (c *controller) DeleteInstance(ctx context.Context, id int64) error {
}
// delete instance should check the instance whether be used by policies
policies, err := c.ListPolicies(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"provider_id": id,
},
})
@ -235,7 +235,7 @@ func (c *controller) UpdateInstance(ctx context.Context, instance *providerModel
if !instance.Enabled {
// update instance should check the instance whether be used by policies
policies, err := c.ListPolicies(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"provider_id": instance.ID,
},
})
@ -311,7 +311,7 @@ func (c *controller) CreatePolicy(ctx context.Context, schema *policyModels.Sche
schema.Trigger.Type == policyModels.TriggerTypeScheduled &&
len(schema.Trigger.Settings.Cron) > 0 {
// schedule and update policy
extras := make(map[string]interface{})
extras := make(map[string]any)
if _, err = c.scheduler.Schedule(ctx, job.P2PPreheatVendorType, id, "", schema.Trigger.Settings.Cron,
SchedulerCallback, TriggerParam{PolicyID: id}, extras); err != nil {
return 0, err
@ -409,7 +409,7 @@ func (c *controller) UpdatePolicy(ctx context.Context, schema *policyModels.Sche
// schedule new
if needSch {
extras := make(map[string]interface{})
extras := make(map[string]any)
if _, err := c.scheduler.Schedule(ctx, job.P2PPreheatVendorType, schema.ID, "", cron, SchedulerCallback,
TriggerParam{PolicyID: schema.ID}, extras); err != nil {
return err
@ -465,7 +465,7 @@ func (c *controller) DeletePoliciesOfProject(ctx context.Context, project int64)
// deleteExecs delete executions
func (c *controller) deleteExecs(ctx context.Context, vendorID int64) error {
executions, err := c.executionMgr.List(ctx, &q.Query{
Keywords: map[string]interface{}{
Keywords: map[string]any{
"VendorType": job.P2PPreheatVendorType,
"VendorID": vendorID,
},

View File

@ -82,7 +82,7 @@ func (s *preheatSuite) SetupSuite() {
},
}, nil)
s.fakeInstanceMgr.On("Save", mock.Anything, mock.Anything).Return(int64(1), nil)
s.fakeInstanceMgr.On("Count", mock.Anything, &q.Query{Keywords: map[string]interface{}{
s.fakeInstanceMgr.On("Count", mock.Anything, &q.Query{Keywords: map[string]any{
"endpoint": "http://localhost",
}}).Return(int64(1), nil)
s.fakeInstanceMgr.On("Count", mock.Anything, mock.Anything).Return(int64(0), nil)
@ -117,7 +117,7 @@ func (s *preheatSuite) TearDownSuite() {
func (s *preheatSuite) TestGetAvailableProviders() {
providers, err := s.controller.GetAvailableProviders()
s.Equal(2, len(providers))
expectProviders := map[string]interface{}{}
expectProviders := map[string]any{}
expectProviders["dragonfly"] = nil
expectProviders["kraken"] = nil
_, ok := expectProviders[providers[0].ID]
@ -177,7 +177,7 @@ func (s *preheatSuite) TestCreateInstance() {
func (s *preheatSuite) TestDeleteInstance() {
// instance be used should not be deleted
s.fakeInstanceMgr.On("Get", s.ctx, int64(1)).Return(&providerModel.Instance{ID: 1}, nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]interface{}{"provider_id": int64(1)}}).Return([]*policy.Schema{
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]any{"provider_id": int64(1)}}).Return([]*policy.Schema{
{
ProviderID: 1,
},
@ -186,7 +186,7 @@ func (s *preheatSuite) TestDeleteInstance() {
s.Error(err, "instance should not be deleted")
s.fakeInstanceMgr.On("Get", s.ctx, int64(2)).Return(&providerModel.Instance{ID: 2}, nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]interface{}{"provider_id": int64(2)}}).Return([]*policy.Schema{}, nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]any{"provider_id": int64(2)}}).Return([]*policy.Schema{}, nil)
s.fakeInstanceMgr.On("Delete", s.ctx, int64(2)).Return(nil)
err = s.controller.DeleteInstance(s.ctx, int64(2))
s.NoError(err, "instance can be deleted")
@ -202,7 +202,7 @@ func (s *preheatSuite) TestUpdateInstance() {
// disable instance should error due to with policy used
s.fakeInstanceMgr.On("Get", s.ctx, int64(1001)).Return(&providerModel.Instance{ID: 1001}, nil)
s.fakeInstanceMgr.On("Update", s.ctx, &providerModel.Instance{ID: 1001}).Return(nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]interface{}{"provider_id": int64(1001)}}).Return([]*policy.Schema{
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]any{"provider_id": int64(1001)}}).Return([]*policy.Schema{
{ProviderID: 1001},
}, nil)
err = s.controller.UpdateInstance(s.ctx, &providerModel.Instance{ID: 1001})
@ -211,14 +211,14 @@ func (s *preheatSuite) TestUpdateInstance() {
// disable instance can be deleted if no policy used
s.fakeInstanceMgr.On("Get", s.ctx, int64(1002)).Return(&providerModel.Instance{ID: 1002}, nil)
s.fakeInstanceMgr.On("Update", s.ctx, &providerModel.Instance{ID: 1002}).Return(nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]interface{}{"provider_id": int64(1002)}}).Return([]*policy.Schema{}, nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]any{"provider_id": int64(1002)}}).Return([]*policy.Schema{}, nil)
err = s.controller.UpdateInstance(s.ctx, &providerModel.Instance{ID: 1002})
s.NoError(err, "instance can be disabled")
// not support change vendor type
s.fakeInstanceMgr.On("Get", s.ctx, int64(1003)).Return(&providerModel.Instance{ID: 1003, Vendor: "dragonfly"}, nil)
s.fakeInstanceMgr.On("Update", s.ctx, &providerModel.Instance{ID: 1003, Vendor: "kraken"}).Return(nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]interface{}{"provider_id": int64(1003)}}).Return([]*policy.Schema{}, nil)
s.fakePolicyMgr.On("ListPolicies", s.ctx, &q.Query{Keywords: map[string]any{"provider_id": int64(1003)}}).Return([]*policy.Schema{}, nil)
err = s.controller.UpdateInstance(s.ctx, &providerModel.Instance{ID: 1003, Vendor: "kraken"})
s.Error(err, "provider vendor cannot be changed")
}
@ -347,7 +347,7 @@ func (s *preheatSuite) TestDeletePoliciesOfProject() {
for _, p := range fakePolicies {
s.fakePolicyMgr.On("Get", s.ctx, p.ID).Return(p, nil)
s.fakePolicyMgr.On("Delete", s.ctx, p.ID).Return(nil)
s.fakeExecutionMgr.On("List", s.ctx, &q.Query{Keywords: map[string]interface{}{"VendorID": p.ID, "VendorType": "P2P_PREHEAT"}}).Return([]*taskModel.Execution{}, nil)
s.fakeExecutionMgr.On("List", s.ctx, &q.Query{Keywords: map[string]any{"VendorID": p.ID, "VendorType": "P2P_PREHEAT"}}).Return([]*taskModel.Execution{}, nil)
}
err := s.controller.DeletePoliciesOfProject(s.ctx, 10)
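These preheat tests wire expectations onto mocked managers with testify's On(...).Return(...), registering an exact-argument match first and a mock.Anything catch-all as a fallback. A tiny self-contained sketch of that setup, using a hand-written mock rather than Harbor's generated fakes:

package preheat

import (
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"
)

// instanceMgr is a minimal hand-written testify mock standing in for the
// generated fakes used by the real suite.
type instanceMgr struct{ mock.Mock }

func (m *instanceMgr) Count(keywords map[string]any) (int64, error) {
    args := m.Called(keywords)
    return args.Get(0).(int64), args.Error(1)
}

func TestCountExpectation(t *testing.T) {
    m := &instanceMgr{}
    // Exact-argument expectation, like the Count and ListPolicies setups above.
    m.On("Count", map[string]any{"endpoint": "http://localhost"}).Return(int64(1), nil)
    // Catch-all fallback for any other argument.
    m.On("Count", mock.Anything).Return(int64(0), nil)

    n, err := m.Count(map[string]any{"endpoint": "http://localhost"})
    require.NoError(t, err)
    require.Equal(t, int64(1), n)
}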

Some files were not shown because too many files have changed in this diff.