Compare commits


138 Commits

Author SHA1 Message Date
Gaius ca65aba1ec
feat(api): add security token to ObjectStorage (#537)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-09-01 20:21:31 +08:00
Chlins Zhang 9aac8e2b9c
feat: add proxy port to host in v1 scheduler.AnnounceHostRequest (#536)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-08-25 17:35:02 +08:00
Chlins Zhang a46c1b9c85
feat: add proxy port to host in v1 (#535)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-08-25 17:03:32 +08:00
Chlins Zhang b6d2590f97
feat: add proxy port to common.Host (#530)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-08-18 20:22:31 +08:00
dependabot[bot] d31f58d7dc
chore(deps): bump google.golang.org/protobuf from 1.36.6 to 1.36.7 (#528)
Bumps google.golang.org/protobuf from 1.36.6 to 1.36.7.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-version: 1.36.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-13 11:42:15 +08:00
Gaius 959325fec5
feat: set optional for rx_bandwidth and tx_bandwidth (#525)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-08-08 18:45:13 +08:00
Gaius 913cb29c09
feat: add bandwidth for network (#524)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-08-08 16:44:10 +08:00
this is my name dbd1bc4d38
feat: add remote_ip in DeleteCacheTaskRequest (#523)
Signed-off-by: fu220 <2863318196@qq.com>
2025-08-04 13:34:37 +08:00
Gaius 603db45223
feat: add files for StatImage (#522)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-08-04 11:11:48 +08:00
dependabot[bot] e15d4cec24
chore(deps): bump tokio from 1.47.0 to 1.47.1 (#521)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.47.0 to 1.47.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.47.0...tokio-1.47.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.47.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 10:33:54 +08:00
Gaius e16aad3f23
feat: add insecure_skip_verify and certificate_chain for PreheatImage (#518)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-08-01 14:14:22 +08:00
dependabot[bot] 96a6f1a224
chore(deps): bump tokio from 1.46.1 to 1.47.0 (#515)
2025-08-01 10:11:50 +08:00
dependabot[bot] 1e5544f041
chore(deps): bump google.golang.org/grpc from 1.73.0 to 1.74.2 (#516)
2025-08-01 10:11:44 +08:00
Gaius cd8b9986c1
feat(golang/proto): add PreheatImage and StatImage for scheduler
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-29 19:39:46 +08:00
Gaius 7662562f49
feat: add PreheatImage and StatImage for scheduler (#517)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-29 19:03:37 +08:00
this is my name 80c539463b
feat: add Cache Task mechanism (#512)
* # Description
This pull request introduces APIs for the new Cache Task mechanism, enabling support for memory cache operations. The Download model has been refactored and the load_to_cache field removed, so the Task component now focuses exclusively on disk I/O while the Cache Task handles memory cache operations. Finally, the version number of the dragonfly-api package has been incremented.

## Key Changes

### common.proto
- Added a new Cache value to the TaskType enum to identify CacheTask.
- Introduced a new structure CachePeer to represent peers handling cache tasks.
- Added a new structure CacheTask to define metadata for cache tasks.
- Removed the load_to_cache field from the Download structure, as Tasks will no longer handle cache operations.

### dfdaemon.proto
- Added three RPC calls (DownloadCacheTask, StatCacheTask, DeleteCacheTask) to the DfdaemonUpload and DfdaemonDownload services for cache task management.
- Added two RPC calls (SyncCachePieces, DownloadCachePiece) to the DfdaemonUpload service for cache piece synchronization.
- Introduced nine new message types to support requests and responses for CacheTask downloads.

### scheduler.proto
- Added five RPC calls to the Scheduler service: AnnounceCachePeer, StatCachePeer, and DeleteCachePeer for cache peer management, and StatCacheTask and DeleteCacheTask for cache task management.
- Introduced sixteen new message types to cover the full lifecycle state transitions of cache tasks.

## Version Update
- Incremented the version of the dragonfly-api package from 2.1.47 to 2.1.48 in Cargo.toml to reflect the new changes.

## Related Issue

## Motivation and Context
- Separation of Responsibilities: Tasks now focus exclusively on disk I/O operations, while CacheTask independently handles memory cache management.

Signed-off-by: fu220 <2863318196@qq.com>

* Changed Download and introduced CacheDownload.

- Added more detailed descriptions for the CACHE type in TaskType, while clarifying that the STANDARD type interacts exclusively with disk operations.
- Removed the load_to_cache field and related information from the Download structure.
- Introduced CacheDownload for handling CacheTask downloads.

Signed-off-by: fu220 <2863318196@qq.com>

* Removed CacheDownload and directly merged into the Request

Signed-off-by: fu220 <2863318196@qq.com>

* Allow large_enum_variant in RegisterCachePeerRequest

Signed-off-by: fu220 <2863318196@qq.com>

---------

Signed-off-by: fu220 <2863318196@qq.com>
2025-07-24 18:27:23 +08:00
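The schema additions described in the Cache Task PR above can be sketched as a minimal .proto fragment. This is an illustrative reconstruction from the bullet points only, not the actual dragonfly-api definitions: the field numbers, field names, and the request/response stubs are assumptions, and only three of the listed RPCs are shown.

```protobuf
syntax = "proto3";

package common.v2;

import "google/protobuf/empty.proto";

// TaskType gains a CACHE value; STANDARD remains the disk-backed type.
enum TaskType {
  STANDARD = 0;
  CACHE = 1;
}

// CacheTask defines metadata for a memory-cache task.
message CacheTask {
  string id = 1;
  TaskType type = 2;
  uint64 content_length = 3;
}

// CachePeer represents a peer serving a cache task.
message CachePeer {
  string id = 1;
  CacheTask task = 2;
}

// Illustrative request/response stubs (not the real message shapes).
message DownloadCacheTaskRequest { string task_id = 1; }
message DownloadCacheTaskResponse { bytes content = 1; }
message StatCacheTaskRequest { string task_id = 1; }
message DeleteCacheTaskRequest { string task_id = 1; }

// A subset of the cache task management RPCs added to the daemon services.
service DfdaemonUpload {
  rpc DownloadCacheTask(DownloadCacheTaskRequest) returns (stream DownloadCacheTaskResponse);
  rpc StatCacheTask(StatCacheTaskRequest) returns (CacheTask);
  rpc DeleteCacheTask(DeleteCacheTaskRequest) returns (google.protobuf.Empty);
}
```

The streaming response on DownloadCacheTask mirrors the existing download RPC pattern, where task content is returned piece by piece rather than in a single message.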
Gaius 9c637ef230
feat: add is_finished for DownloadTaskStartedResponse (#514)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-23 17:15:43 +08:00
Gaius c619a3c8ac
feat: local_only specifies whether to query task information from local client only (#513)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-23 11:53:20 +08:00
Gaius 8165c4bff1
feat: remove remote_ip in DownloadPersistentCachePieceRequest
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-22 14:24:43 +08:00
Gaius 71e4d8083f
feat: add remote ip for request (#511)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-22 13:02:08 +08:00
Gaius be780d75c1
feat: add remote ip for request (#510)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-21 17:28:46 +08:00
Chlins Zhang 0b94987d3d
feat: add scheduler cluster ID to list schedulers request (#509)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-18 15:40:38 +08:00
Gaius 928e93e8fe
feat: update ListTaskEntriesResponse by removing unused fields (#508)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-16 19:19:03 +08:00
Chlins Zhang 7d884ac8a9
chore: bump version to 2.1.42 (#507)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-16 18:19:08 +08:00
Chlins Zhang ad356a5030
feat: add scheduler configuration field to UpdateSchedulerRequest (#506)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-07-16 18:08:29 +08:00
dependabot[bot] d71ef77192
chore(deps): bump tokio from 1.45.1 to 1.46.1 (#504)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.45.1 to 1.46.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.45.1...tokio-1.46.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.46.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-15 12:13:42 +08:00
Gaius 20d9c4bc90
feat: add ListTaskEntries for downloading directory (#505)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-07-08 11:45:19 +08:00
Gaius b3635340f7
feat: add digest for download task and supports CRC32, SHA256, and SHA512 algorithms (#502)

Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-06-27 12:26:56 +08:00
dependabot[bot] 6f25e8acc6
chore(deps): bump prost-types from 0.14.0 to 0.14.1 (#501)
Bumps [prost-types](https://github.com/tokio-rs/prost) from 0.14.0 to 0.14.1.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.14.0...v0.14.1)

---
updated-dependencies:
- dependency-name: prost-types
  dependency-version: 0.14.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-23 15:43:38 +08:00
dependabot[bot] 17c5322a9e
chore(deps): bump prost-types from 0.13.5 to 0.14.0 (#500)
Bumps [prost-types](https://github.com/tokio-rs/prost) from 0.13.5 to 0.14.0.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.5...v0.14.0)

---
updated-dependencies:
- dependency-name: prost-types
  dependency-version: 0.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:56:32 +08:00
dependabot[bot] 47585b7eaf
chore(deps): bump google.golang.org/grpc from 1.72.1 to 1.73.0 (#498)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.72.1 to 1.73.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.72.1...v1.73.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.73.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-09 10:35:47 +08:00
dependabot[bot] cc94665915
chore(deps): bump tokio from 1.45.0 to 1.45.1 (#496)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.45.0 to 1.45.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.45.0...tokio-1.45.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.45.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-26 10:32:32 +08:00
Gaius e528000637
chore(README.md): add maintainer google groups for communication channels and remove discussion group (#495)
* chore(README.md): add maintainer google groups for communication channels and remove discussion group

Signed-off-by: Gaius <gaius.qi@gmail.com>

* Update README.md

---------

Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-05-23 16:04:24 +08:00
dependabot[bot] 05a74d22cb
chore(deps): bump google.golang.org/grpc from 1.72.0 to 1.72.1 (#494)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.72.0 to 1.72.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.72.0...v1.72.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.72.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 10:34:13 +08:00
dependabot[bot] 9b789e3418
chore(deps): bump tokio from 1.44.2 to 1.45.0 (#492)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.2 to 1.45.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.2...tokio-1.45.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 10:33:45 +08:00
dependabot[bot] 38508c2d68
chore(deps): bump golangci/golangci-lint-action from 6 to 8 (#488)
* chore(deps): bump golangci/golangci-lint-action from 6 to 8

Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix lint errors

Signed-off-by: yxxhero <aiopsclub@163.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: yxxhero <aiopsclub@163.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: yxxhero <aiopsclub@163.com>
2025-05-05 15:56:39 +08:00
dependabot[bot] 203a7b194b
chore(deps): bump go.uber.org/mock from 0.5.1 to 0.5.2 (#490)
Bumps [go.uber.org/mock](https://github.com/uber/mock) from 0.5.1 to 0.5.2.
- [Release notes](https://github.com/uber/mock/releases)
- [Changelog](https://github.com/uber-go/mock/blob/main/CHANGELOG.md)
- [Commits](https://github.com/uber/mock/compare/v0.5.1...v0.5.2)

---
updated-dependencies:
- dependency-name: go.uber.org/mock
  dependency-version: 0.5.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 15:42:39 +08:00
dependabot[bot] 96fe4250ae
chore(deps): bump prost-wkt-types from 0.6.0 to 0.6.1 (#489)
Bumps [prost-wkt-types](https://github.com/fdeantoni/prost-wkt) from 0.6.0 to 0.6.1.
- [Release notes](https://github.com/fdeantoni/prost-wkt/releases)
- [Changelog](https://github.com/fdeantoni/prost-wkt/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fdeantoni/prost-wkt/compare/v0.6.0...v0.6.1)

---
updated-dependencies:
- dependency-name: prost-wkt-types
  dependency-version: 0.6.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 15:42:13 +08:00
dependabot[bot] d7a9e1c3d2
chore(deps): bump google.golang.org/grpc from 1.71.1 to 1.72.0 (#487)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.71.1 to 1.72.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.71.1...v1.72.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.72.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 10:37:26 +08:00
Gaius 07d2017a8a
feat: add content_for_calculating_task_id for UploadPersistentCacheTaskRequest (#486)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-04-24 18:03:20 +08:00
Gaius 18ebde934b
feat: updates the task ID calculation logic in the Download (#485)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-04-24 17:03:12 +08:00
Gaius 53d7b93edc
feat: add content_for_calculating_task_id for Download message (#484)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-04-24 15:49:30 +08:00
Chlins Zhang e509a2c3cc
chore: bump golang version to 1.23.8 (#483)
Signed-off-by: chlins <chlins.zhang@gmail.com>
2025-04-16 19:54:38 +08:00
dependabot[bot] da640571e2
chore(deps): bump go.uber.org/mock from 0.4.0 to 0.5.1 (#482)
Bumps [go.uber.org/mock](https://github.com/uber/mock) from 0.4.0 to 0.5.1.
- [Release notes](https://github.com/uber/mock/releases)
- [Changelog](https://github.com/uber-go/mock/blob/main/CHANGELOG.md)
- [Commits](https://github.com/uber/mock/compare/v0.4.0...v0.5.1)

---
updated-dependencies:
- dependency-name: go.uber.org/mock
  dependency-version: 0.5.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-14 10:40:20 +08:00
dependabot[bot] 8c8e4d0b7b
chore(deps): bump tokio from 1.44.1 to 1.44.2 (#481)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.1 to 1.44.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.1...tokio-1.44.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 11:05:45 +08:00
dependabot[bot] 86c6246bef
chore(deps): bump google.golang.org/grpc from 1.71.0 to 1.71.1 (#480)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.71.0 to 1.71.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.71.0...v1.71.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.71.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 11:05:32 +08:00
dependabot[bot] 75f50ea5ab
chore(deps): bump google.golang.org/protobuf from 1.36.5 to 1.36.6 (#477)
Bumps google.golang.org/protobuf from 1.36.5 to 1.36.6.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 10:30:36 +08:00
Gaius e264b50709
feat: add force_hard_link for DownloadPersistentCacheTaskRequest (#475)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-28 15:35:55 +08:00
Gaius 8e1782b9d8
feat: add ExchangeIBVerbsQueuePairEndpoint to DfdaemonUpload
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-25 18:52:55 +08:00
Gaius b7701531cf
feat: add ExchangeIBVerbsQueuePairEndpoint for DfdaemonUpload (#474)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-25 18:34:27 +08:00
Gaius 6bd235200f
feat: add IbVerbsConnection service to support connection between ibverbs peers (#473)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-24 20:16:07 +08:00
Gaius 50ab9e7b9c
feat: optimize the comment for output_path and force_hard_link (#472)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-20 14:22:42 +08:00
Gaius 33be225960
feat: add force_hard_link field for Download message (#471)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-03-19 15:40:43 +08:00
dependabot[bot] d69955fe1f
chore(deps): bump tokio from 1.44.0 to 1.44.1 (#470)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.0 to 1.44.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.0...tokio-1.44.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-17 10:39:55 +08:00
dependabot[bot] 2c77b64e97
chore(deps): bump tokio from 1.43.0 to 1.44.0 (#468)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.43.0 to 1.44.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.43.0...tokio-1.44.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 00:09:14 +08:00
dependabot[bot] 9e4f8d11bb
chore(deps): bump serde from 1.0.218 to 1.0.219 (#469)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.218 to 1.0.219.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.218...v1.0.219)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 00:09:00 +08:00
dependabot[bot] 8677f335fc
chore(deps): bump google.golang.org/grpc from 1.70.0 to 1.71.0 (#467)
2025-03-10 09:57:59 +08:00
Gaius be1ca95b4c
feat: remove piece_length in Task (#466)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-26 11:43:43 +08:00
Gaius b040351547
feat: add validation for piece_length (#465)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-24 15:52:00 +08:00
Gaius ff110f324b
feat: add piece_length for Persistent Cache Task (#464)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-24 12:26:26 +08:00
dependabot[bot] 4b5d9d7120
chore(deps): bump serde from 1.0.217 to 1.0.218 (#463)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.217 to 1.0.218.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.217...v1.0.218)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 10:41:37 +08:00
dependabot[bot] 87be7d1d22
chore(deps): bump prost-types from 0.13.4 to 0.13.5 (#462)
Bumps [prost-types](https://github.com/tokio-rs/prost) from 0.13.4 to 0.13.5.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.4...v0.13.5)

---
updated-dependencies:
- dependency-name: prost-types
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 10:41:16 +08:00
dependabot[bot] ee08ab7f31
chore(deps): bump prost from 0.13.4 to 0.13.5 (#461)
Bumps [prost](https://github.com/tokio-rs/prost) from 0.13.4 to 0.13.5.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.4...v0.13.5)

---
updated-dependencies:
- dependency-name: prost
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-17 10:28:59 +08:00
Gaius 9bee47043a
feat: add digest for DownloadPersistentCachePieceResponse (#460)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-14 11:04:22 +08:00
Gaius 8d312b5628
feat: change validation of the ttl in UploadPersistentCacheTaskStartedRequest (#459)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-12 17:05:38 +08:00
Gaius f201aeb1b8
feat: add ttl for PersistentCacheTask (#458)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-02-12 15:33:41 +08:00
KennyMcCormick ff1adb1882
add digest in DownloadPieceResponse (#457)
Signed-off-by: cormick <cormick1080@gmail.com>
2025-02-12 11:22:36 +08:00
dependabot[bot] 96d32ae2af
chore(deps): bump google.golang.org/protobuf from 1.36.4 to 1.36.5 (#456)
Bumps google.golang.org/protobuf from 1.36.4 to 1.36.5.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 10:45:40 +08:00
dependabot[bot] 4d150a7dfd
chore(deps): bump google.golang.org/protobuf from 1.36.3 to 1.36.4 (#454)
Bumps google.golang.org/protobuf from 1.36.3 to 1.36.4.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 10:31:52 +08:00
dependabot[bot] 4df0158d23
chore(deps): bump github.com/envoyproxy/protoc-gen-validate from 1.1.0 to 1.2.1 (#455)

Bumps [github.com/envoyproxy/protoc-gen-validate](https://github.com/envoyproxy/protoc-gen-validate) from 1.1.0 to 1.2.1.
- [Release notes](https://github.com/envoyproxy/protoc-gen-validate/releases)
- [Changelog](https://github.com/bufbuild/protoc-gen-validate/blob/main/.goreleaser.yaml)
- [Commits](https://github.com/envoyproxy/protoc-gen-validate/compare/v1.1.0...v1.2.1)

---
updated-dependencies:
- dependency-name: github.com/envoyproxy/protoc-gen-validate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 10:31:32 +08:00
dependabot[bot] d257df3a42
chore(deps): bump google.golang.org/grpc from 1.69.4 to 1.70.0 (#453)
2025-01-27 09:46:44 +08:00
Gaius 45c2817e5e
feat: add task_id for UploadPersistentCacheTask
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 20:21:14 +08:00
Gaius c6f625f3ad
feat: add task_id for UpdatePersistentCacheTask
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 19:48:49 +08:00
Gaius c8e0b2526f
feat: add UpdatePersistentCacheTask api
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 19:06:56 +08:00
Gaius 572faaa58f
feat: add is_replicated for DownloadPersistentCacheTaskRequest
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 18:08:27 +08:00
Gaius 7fde34770f
feat: add persistent field for RegisterPersistentCachePeerRequest
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 15:12:45 +08:00
Gaius 2c761170dd
feat: add DeletePersistentCacheTask for UploadDfdaemon
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 13:26:42 +08:00
Gaius c8398377dd
feat: remove DeletePersistentCacheTask in DfdaemonDownload (#452)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-26 11:06:44 +08:00
Gaius 05b74d50e3
feat: remove update_persistent_cache_task api (#451)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-24 17:52:18 +08:00
Gaius 2a23158326
feat: remove response from UpdatePersistentCacheTask
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-22 11:13:39 +08:00
Gaius d4e497d037
feat: add UpdatePersistentCacheTask to upload server (#450)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-22 10:36:51 +08:00
Gaius c7f8c2a600
feat: add UpdatePersistentCacheTask for setting the value of persistent (#449)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-21 22:14:01 +08:00
Gaius 1920f90700
feat: remove ReschedulePeerFailedRequest and ReschedulePersistentCachePeerFailedRequest
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-21 20:40:33 +08:00
Gaius 214460fd27
feat: add ReschedulePeerFailedRequest and ReschedulePersistentCachePeerFailedRequest (#448)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-21 12:38:09 +08:00
Gaius 161f9d810f
feat: add DownloadPersistentCachePiece and SyncPersistentCachePieces api (#447)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-20 12:08:33 +08:00
dependabot[bot] 73d6bea208
chore(deps): bump google.golang.org/grpc from 1.69.2 to 1.69.4 (#445)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.69.2 to 1.69.4.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.69.2...v1.69.4)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 10:28:13 +08:00
dependabot[bot] dc2431554e
chore(deps): bump google.golang.org/protobuf from 1.36.2 to 1.36.3 (#446)
Bumps google.golang.org/protobuf from 1.36.2 to 1.36.3.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 10:26:42 +08:00
Gaius 86b994e4ae
feat: remove timeout from PersistentCacheTask (#444)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-17 15:27:13 +08:00
Gaius b9b98411e4
feat: remove digest field in PersistentCacheTask (#443)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-17 11:58:26 +08:00
Gaius 13a45c2c30
feat: remove host_id and task_id in RegisterPersistentCachePeerRequest (#442)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-16 15:32:33 +08:00
Gaius 352bcc2611
feat: set digest to optional for persistent cache task
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-15 14:15:43 +08:00
Gaius 76b2eac2d0
feat: add ReadPersistentCacheTask and WritePersistentCacheTask message (#441)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-15 12:14:58 +08:00
dependabot[bot] 4b7da56505
chore(deps): bump tokio from 1.42.0 to 1.43.0 (#440)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.42.0 to 1.43.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.42.0...tokio-1.43.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-15 10:29:00 +08:00
dependabot[bot] 1a1fa3aaec
chore(deps): bump google.golang.org/protobuf from 1.36.1 to 1.36.2 (#439)
Bumps google.golang.org/protobuf from 1.36.1 to 1.36.2.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-15 10:28:30 +08:00
SouthWest7 bff2301ac7
feat: add 'load_to_cache' field to Download message (#437)
Signed-off-by: southwest <1403572259@qq.com>
2025-01-08 10:41:36 +08:00
dependabot[bot] 1834d538ba
chore(deps): bump serde from 1.0.216 to 1.0.217 (#438)
2025-01-06 09:28:40 +08:00
Gaius 05f657ba25
feat: set output optional for persistent cache (#436)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-03 10:48:02 +08:00
Gaius 502a8fbe6e
feat: set output_path to optional and add need_piece_content for downloading persistent cache (#435)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2025-01-02 15:51:29 +08:00
baowj 190d05d9bf
feat: add method SyncHost to service DfdaemonUpload (#431)
Signed-off-by: baowj <bwj_678@qq.com>
2024-12-31 16:48:34 +08:00
Gaius af929d036c
feat: add need_piece_content for Download message (#434)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-31 12:14:35 +08:00
dependabot[bot] c3487f9ffd
chore(deps): bump google.golang.org/protobuf from 1.36.0 to 1.36.1 (#432) 2024-12-31 09:25:11 +08:00
dependabot[bot] 7f2a53a4eb
chore(deps): bump serde from 1.0.216 to 1.0.217 (#433) 2024-12-31 09:24:51 +08:00
dependabot[bot] de2bebaccf
chore(deps): bump google.golang.org/grpc from 1.69.0 to 1.69.2 (#429)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.69.0 to 1.69.2.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.69.0...v1.69.2)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-24 10:48:19 +08:00
dependabot[bot] 4aa3825a82
chore(deps): bump google.golang.org/protobuf from 1.35.2 to 1.36.0 (#430)
Bumps google.golang.org/protobuf from 1.35.2 to 1.36.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-23 10:46:11 +08:00
Gaius 4a0832895c
chore: rename repo Dragonfly2 to dragonfly (#428)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:13:23 +08:00
dependabot[bot] 956178e875
chore(deps): bump prost-types from 0.13.3 to 0.13.4 (#426)
Bumps [prost-types](https://github.com/tokio-rs/prost) from 0.13.3 to 0.13.4.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.3...v0.13.4)

---
updated-dependencies:
- dependency-name: prost-types
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-17 17:00:42 +08:00
dependabot[bot] bcc8f4c984
chore(deps): bump google.golang.org/grpc from 1.68.1 to 1.69.0 (#427)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.68.1 to 1.69.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.68.1...v1.69.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-17 17:00:23 +08:00
dependabot[bot] f6b1efbe8d
chore(deps): bump serde from 1.0.215 to 1.0.216 (#425) 2024-12-16 17:43:58 +08:00
dependabot[bot] 9d4dd33c94
chore(deps): bump google.golang.org/grpc from 1.68.0 to 1.68.1 (#423)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.68.0 to 1.68.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.68.0...v1.68.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 10:37:20 +08:00
dependabot[bot] f4f537a6fb
chore(deps): bump prost from 0.13.3 to 0.13.4 (#422)
Bumps [prost](https://github.com/tokio-rs/prost) from 0.13.3 to 0.13.4.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.3...v0.13.4)

---
updated-dependencies:
- dependency-name: prost
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 10:37:01 +08:00
dependabot[bot] 1ce6ea0738
chore(deps): bump tokio from 1.41.1 to 1.42.0 (#421)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.1 to 1.42.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.1...tokio-1.42.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 10:34:37 +08:00
Gaius d92317ee72
chore: update cargo version
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-11-29 19:29:16 +08:00
Gaius 8ac708ed9c
fix: partten for crc32 (#420)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-11-29 19:26:27 +08:00
Gaius f79271fd78
feat: support castagnoli for crc32 (#419)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-11-25 21:55:42 +08:00
Chlins Zhang 4bb7e96aa7
feat: add field is_prefetch to the download proto (#418)
feat: add field is_prefetch to the Download proto

Signed-off-by: suhan.zcy <suhan.zcy@antgroup.com>
Co-authored-by: suhan.zcy <suhan.zcy@antgroup.com>
2024-11-25 20:37:44 +08:00
dependabot[bot] 28d0ce80da
chore(deps): bump google.golang.org/protobuf from 1.35.1 to 1.35.2 (#415)
Bumps google.golang.org/protobuf from 1.35.1 to 1.35.2.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 16:00:16 +08:00
Gaius 5934c78ace
feat: add unknow message for errordetails (#417)
feat: add unknown message for errordetails

Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-11-21 14:49:15 +08:00
Gaius 491cee009a
feat: add read bandwidth and write bandwidth for disk (#416)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-11-18 14:16:54 +08:00
dependabot[bot] 12368e38e3
chore(deps): bump serde from 1.0.214 to 1.0.215 (#414)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.214 to 1.0.215.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.214...v1.0.215)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 10:48:00 +08:00
Chongzhi Deng 50fde09a36
feat: add delegation_token for ObjectStorage (#413)
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-11-14 17:27:31 +08:00
dependabot[bot] e491cd9ad7
chore(deps): bump tokio from 1.41.0 to 1.41.1 (#411)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.0...tokio-1.41.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-14 15:38:13 +08:00
dependabot[bot] 8a8e1b428f
chore(deps): bump google.golang.org/grpc from 1.67.1 to 1.68.0 (#412)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.67.1 to 1.68.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.67.1...v1.68.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-14 15:37:38 +08:00
dependabot[bot] d925fd7d64
chore(deps): bump serde from 1.0.213 to 1.0.214 (#410)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.213 to 1.0.214.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.213...v1.0.214)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-04 10:31:01 +08:00
Gaius 48131a6317
feat: add current_replica_count and current_persistent_replica_count for PersistentCacheTask (#409)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-10-30 21:23:35 +08:00
Gaius 8cbd5239ad
feat: add task info for UploadPersistentCacheTaskStarted (#408)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-10-30 11:08:27 +08:00
dependabot[bot] b703fee634
chore(deps): bump serde from 1.0.210 to 1.0.213 (#406)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.210 to 1.0.213.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.210...v1.0.213)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 10:54:03 +08:00
dependabot[bot] 0da9b4db58
chore(deps): bump tokio from 1.40.0 to 1.41.0 (#407)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.40.0 to 1.41.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.40.0...tokio-1.41.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 10:53:42 +08:00
dependabot[bot] 6a1c228126
chore(deps): bump google.golang.org/protobuf from 1.34.2 to 1.35.1 (#404) 2024-10-14 09:55:51 +08:00
Gaius 50b7abc20e
feat: remove security proto (#403)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-10-10 17:54:09 +08:00
dependabot[bot] b246448e87
chore(deps): bump google.golang.org/grpc from 1.67.0 to 1.67.1 (#402) 2024-10-07 10:28:59 +08:00
Gaius 9c71f692e7
chore: update verison to v2.0.166
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-30 18:08:05 +08:00
Gaius 791f3415d7
chore: update verison to v2.0.165
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-30 17:57:45 +08:00
Gaius a65d57a10b
feat: remove sync probes api (#401)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-30 17:13:44 +08:00
Gaius b5432fbf2a
feat: add rate limit for network (#400)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-30 16:29:44 +08:00
dependabot[bot] 4ad01a3404
chore(deps): bump tonic from 0.12.2 to 0.12.3 (#397)
Bumps [tonic](https://github.com/hyperium/tonic) from 0.12.2 to 0.12.3.
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.12.2...v0.12.3)

---
updated-dependencies:
- dependency-name: tonic
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-30 10:30:40 +08:00
dependabot[bot] 28136d14a5
chore(deps): bump prost-types from 0.13.2 to 0.13.3 (#398)
Bumps [prost-types](https://github.com/tokio-rs/prost) from 0.13.2 to 0.13.3.
- [Release notes](https://github.com/tokio-rs/prost/releases)
- [Changelog](https://github.com/tokio-rs/prost/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/prost/compare/v0.13.2...v0.13.3)

---
updated-dependencies:
- dependency-name: prost-types
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-30 10:30:19 +08:00
Gaius cc23703438
feat: fixed the comments for PersistentCacheTask (#396)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-29 14:18:23 +08:00
Gaius 0b031ea044
feat: generate mocks (#395)
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-09-26 16:42:29 +08:00
60 changed files with 24516 additions and 7735 deletions


@@ -18,10 +18,9 @@ jobs:
fetch-depth: '0'
- name: Golangci lint
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
with:
version: v1.54
args: --verbose
version: v2.1.6
rust-lint:
name: Rust Lint


@@ -1,37 +1,53 @@
version: "2"
run:
deadline: 3m
modules-download-mode: readonly
linters-settings:
gocyclo:
min-complexity: 60
gci:
sections:
- standard
- default
linters:
default: none
enable:
- errcheck
- goconst
- gocyclo
- govet
- misspell
- staticcheck
settings:
gocyclo:
min-complexity: 60
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- staticcheck
text: 'SA1019:'
paths:
- third_party$
- builtin$
- examples$
issues:
new: true
exclude-rules:
- linters:
- staticcheck
text: "SA1019:"
linters:
disable-all: true
formatters:
enable:
- gci
- gofmt
- golint
- misspell
- govet
- goconst
- deadcode
- gocyclo
- staticcheck
- errcheck
settings:
gci:
sections:
- standard
- default
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
output:
format: colored-line-number
print-issued-lines: true
print-linter-name: true
formats:
text:
path: stdout
print-linter-name: true
print-issued-lines: true

Cargo.lock (generated)

@@ -1,6 +1,6 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
version = 4
[[package]]
name = "addr2line"
@@ -190,10 +190,10 @@ dependencies = [
[[package]]
name = "dragonfly-api"
version = "2.0.161"
version = "2.1.61"
dependencies = [
"prost",
"prost-types",
"prost 0.13.5",
"prost-types 0.14.1",
"prost-wkt-types",
"serde",
"tokio",
@@ -443,7 +443,7 @@ dependencies = [
"http-body",
"hyper",
"pin-project-lite",
"socket2",
"socket2 0.5.7",
"tokio",
"tower",
"tower-service",
@@ -476,6 +476,17 @@ version = "0.3.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f958d3d68f4167080a18141e10381e7634563984a537f2a49a30fd8e53ac5767"
[[package]]
name = "io-uring"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b86e202f00093dcba4275d4636b93ef9dd75d025ae560d2521b45ea28ab49013"
dependencies = [
"bitflags",
"cfg-if",
"libc",
]
[[package]]
name = "itertools"
version = "0.13.0"
@@ -493,9 +504,9 @@ checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b"
[[package]]
name = "libc"
version = "0.2.158"
version = "0.2.174"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8adc4bb1803a324070e64a98ae98f38934d91957a99cfb3a43dcbc01bc56439"
checksum = "1171693293099992e19cddea4e8b849964e9846f4acee11b3948bcc337be8776"
[[package]]
name = "linux-raw-sys"
@@ -656,12 +667,22 @@ dependencies = [
[[package]]
name = "prost"
version = "0.13.3"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7b0487d90e047de87f984913713b85c601c05609aad5b0df4b4573fbf69aa13f"
checksum = "2796faa41db3ec313a31f7624d9286acf277b52de526150b7e69f3debf891ee5"
dependencies = [
"bytes",
"prost-derive",
"prost-derive 0.13.5",
]
[[package]]
name = "prost"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7231bd9b3d3d33c86b58adbac74b5ec0ad9f496b19d22801d773636feaa95f3d"
dependencies = [
"bytes",
"prost-derive 0.14.1",
]
[[package]]
@@ -678,8 +699,8 @@ dependencies = [
"once_cell",
"petgraph",
"prettyplease",
"prost",
"prost-types",
"prost 0.13.5",
"prost-types 0.13.5",
"regex",
"syn",
"tempfile",
@@ -687,9 +708,22 @@ dependencies = [
[[package]]
name = "prost-derive"
version = "0.13.3"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9552f850d5f0964a4e4d0bf306459ac29323ddfbae05e35a7c0d35cb0803cc5"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "prost-derive"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9120690fafc389a67ba3803df527d0ec9cbbc9cc45e4cc20b332996dfb672425"
dependencies = [
"anyhow",
"itertools",
@@ -700,22 +734,31 @@ dependencies = [
[[package]]
name = "prost-types"
version = "0.13.2"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "60caa6738c7369b940c3d49246a8d1749323674c65cb13010134f5c9bad5b519"
checksum = "52c2c1bf36ddb1a1c396b3601a3cec27c2462e45f07c386894ec3ccf5332bd16"
dependencies = [
"prost",
"prost 0.13.5",
]
[[package]]
name = "prost-types"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9b4db3d6da204ed77bb26ba83b6122a73aeb2e87e25fbf7ad2e84c4ccbf8f72"
dependencies = [
"prost 0.14.1",
]
[[package]]
name = "prost-wkt"
version = "0.6.0"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8d84e2bee181b04c2bac339f2bfe818c46a99750488cc6728ce4181d5aa8299"
checksum = "497e1e938f0c09ef9cabe1d49437b4016e03e8f82fbbe5d1c62a9b61b9decae1"
dependencies = [
"chrono",
"inventory",
"prost",
"prost 0.13.5",
"serde",
"serde_derive",
"serde_json",
@@ -724,27 +767,27 @@ dependencies = [
[[package]]
name = "prost-wkt-build"
version = "0.6.0"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a669d5acbe719010c6f62a64e6d7d88fdedc1fe46e419747949ecb6312e9b14"
checksum = "07b8bf115b70a7aa5af1fd5d6e9418492e9ccb6e4785e858c938e28d132a884b"
dependencies = [
"heck",
"prost",
"prost 0.13.5",
"prost-build",
"prost-types",
"prost-types 0.13.5",
"quote",
]
[[package]]
name = "prost-wkt-types"
version = "0.6.0"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "01ef068e9b82e654614b22e6b13699bd545b6c0e2e721736008b00b38aeb4f64"
checksum = "c8cdde6df0a98311c839392ca2f2f0bcecd545f86a62b4e3c6a49c336e970fe5"
dependencies = [
"chrono",
"prost",
"prost 0.13.5",
"prost-build",
"prost-types",
"prost-types 0.13.5",
"prost-wkt",
"prost-wkt-build",
"regex",
@@ -854,18 +897,18 @@ checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
[[package]]
name = "serde"
version = "1.0.210"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8e3592472072e6e22e0a54d5904d9febf8508f65fb8552499a1abc7d1078c3a"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.210"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "243902eda00fad750862fc144cea25caca5e20d615af0a81bee94ca738f1df1f"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
@@ -916,10 +959,20 @@ dependencies = [
]
[[package]]
name = "syn"
version = "2.0.76"
name = "socket2"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "578e081a14e0cefc3279b0472138c513f37b41a08d5a3cca9b6e4e8ceb6cd525"
checksum = "233504af464074f9d066d7b5416c5f9b894a5862a6506e306f7b816cdd6f1807"
dependencies = [
"libc",
"windows-sys 0.59.0",
]
[[package]]
name = "syn"
version = "2.0.85"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5023162dfcd14ef8f32034d8bcd4cc5ddc61ef7a247c024a33e24e1f24d21b56"
dependencies = [
"proc-macro2",
"quote",
@@ -953,25 +1006,27 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.40.0"
version = "1.47.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2b070231665d27ad9ec9b8df639893f46727666c6767db40317fbe920a5d998"
checksum = "89e49afdadebb872d3145a5638b59eb0691ea23e46ca484037cfab3b76b95038"
dependencies = [
"backtrace",
"bytes",
"io-uring",
"libc",
"mio",
"pin-project-lite",
"socket2",
"slab",
"socket2 0.6.0",
"tokio-macros",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
name = "tokio-macros"
version = "2.4.0"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752"
checksum = "6e06d43f1345a3bcd39f6a56dbb7dcab2ba47e68e8ac134855e7e2bdbaf8cab8"
dependencies = [
"proc-macro2",
"quote",
@@ -980,9 +1035,9 @@ dependencies = [
[[package]]
name = "tokio-stream"
version = "0.1.15"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "267ac89e0bec6e691e5813911606935d77c476ff49024f98abcea3e7b15e37af"
checksum = "4f4e6ce100d0eb49a2734f8c0812bcd324cf357d21810932c5df6b96ef2b86f1"
dependencies = [
"futures-core",
"pin-project-lite",
@@ -1004,9 +1059,9 @@ dependencies = [
[[package]]
name = "tonic"
version = "0.12.2"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c6f6ba989e4b2c58ae83d862d3a3e27690b6e3ae630d0deb59f3697f32aa88ad"
checksum = "877c5b330756d856ffcc4553ab34a5684481ade925ecc54bcd1bf02b1d0d4d52"
dependencies = [
"async-stream",
"async-trait",
@@ -1022,8 +1077,8 @@ dependencies = [
"hyper-util",
"percent-encoding",
"pin-project",
"prost",
"socket2",
"prost 0.13.5",
"socket2 0.5.7",
"tokio",
"tokio-stream",
"tower",


@@ -1,6 +1,6 @@
[package]
name = "dragonfly-api"
version = "2.0.161"
version = "2.1.61"
authors = ["Gaius <gaius.qi@gmail.com>"]
edition = "2021"
license = "Apache-2.0"
@@ -11,10 +11,10 @@ readme = "README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tonic = "0.12.1"
prost = "0.13.3"
prost-types = "0.13.2"
tokio = { version = "1.40.0", features = ["rt-multi-thread", "macros"] }
tonic = "0.12.3"
prost = "0.13.5"
prost-types = "0.14.1"
tokio = { version = "1.47.1", features = ["rt-multi-thread", "macros"] }
serde = { version = "1.0", features = ["derive"] }
prost-wkt-types = "0.6"


@@ -17,7 +17,7 @@ all: help
# Run code lint
lint: markdownlint
@echo "Begin to golangci-lint."
@golangci-lint run
@./hack/golanglint.sh
.PHONY: lint
# Run markdown lint


@@ -1,6 +1,6 @@
# api
[![Discussions](https://img.shields.io/badge/discussions-on%20github-blue?style=flat-square)](https://github.com/dragonflyoss/Dragonfly2/discussions)
[![Discussions](https://img.shields.io/badge/discussions-on%20github-blue?style=flat-square)](https://github.com/dragonflyoss/dragonfly/discussions)
[![LICENSE](https://img.shields.io/github/license/dragonflyoss/api.svg?style=flat-square)](https://github.com/dragonflyoss/api/blob/main/LICENSE)
Canonical location of the Dragonfly API definition.
@@ -9,24 +9,24 @@ The project includes the api definitions of dragonfly services and the mocks of
## Note to developers
If developers need to change dragonfly api definition,
please contact [dragonfly maintainers](https://github.com/dragonflyoss/Dragonfly2/blob/main/MAINTAINERS.md).
please contact [dragonfly maintainers](https://github.com/dragonflyoss/dragonfly/blob/main/MAINTAINERS.md).
## Community
Join the conversation and help the community.
- **Slack Channel**: [#dragonfly](https://cloud-native.slack.com/messages/dragonfly/) on [CNCF Slack](https://slack.cncf.io/)
- **Discussion Group**: <dragonfly-discuss@googlegroups.com>
- **Github Discussions**: [Dragonfly Discussion Forum](https://github.com/dragonflyoss/dragonfly/discussions)
- **Developer Group**: <dragonfly-developers@googlegroups.com>
- **Github Discussions**: [Dragonfly Discussion Forum](https://github.com/dragonflyoss/Dragonfly2/discussions)
- **Maintainer Group**: <dragonfly-maintainers@googlegroups.com>
- **Twitter**: [@dragonfly_oss](https://twitter.com/dragonfly_oss)
- **DingTalk**: [22880028764](https://qr.dingtalk.com/action/joingroup?code=v1,k1,pkV9IbsSyDusFQdByPSK3HfCG61ZCLeb8b/lpQ3uUqI=&_dt_no_comment=1&origin=11)
## Contributing
You should check out our
[CONTRIBUTING](https://github.com/dragonflyoss/Dragonfly2/blob/main/CONTRIBUTING.md) and develop the project together.
[CONTRIBUTING](https://github.com/dragonflyoss/dragonfly/blob/main/CONTRIBUTING.md) and develop the project together.
## Code of Conduct
Please refer to our [Code of Conduct](https://github.com/dragonflyoss/Dragonfly2/blob/main/CODE_OF_CONDUCT.md).
Please refer to our [Code of Conduct](https://github.com/dragonflyoss/dragonfly/blob/main/CODE_OF_CONDUCT.md).


@@ -2,8 +2,15 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
tonic_build::configure()
.file_descriptor_set_path("src/descriptor.bin")
.protoc_arg("--experimental_allow_proto3_optional")
.type_attribute(".", "#[derive(serde::Serialize, serde::Deserialize)]", )
.type_attribute("scheduler.v2.AnnouncePeerRequest.request", "#[allow(clippy::large_enum_variant)]", )
.type_attribute(".", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(
"scheduler.v2.AnnouncePeerRequest.request",
"#[allow(clippy::large_enum_variant)]",
)
.type_attribute(
"scheduler.v2.AnnounceCachePeerRequest.request",
"#[allow(clippy::large_enum_variant)]",
)
.extern_path(".google.protobuf.Timestamp", "::prost_wkt_types::Timestamp")
.extern_path(".google.protobuf.Duration", "::prost_wkt_types::Duration")
.out_dir("src")
@@ -11,7 +18,6 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
&[
"proto/common.proto",
"proto/errordetails.proto",
"proto/security.proto",
"proto/dfdaemon.proto",
"proto/manager.proto",
"proto/scheduler.proto",

go.mod

@@ -1,17 +1,17 @@
module d7y.io/api/v2
go 1.21
go 1.23.8
require (
github.com/envoyproxy/protoc-gen-validate v1.1.0
go.uber.org/mock v0.4.0
google.golang.org/grpc v1.67.0
google.golang.org/protobuf v1.34.2
github.com/envoyproxy/protoc-gen-validate v1.2.1
go.uber.org/mock v0.5.2
google.golang.org/grpc v1.74.2
google.golang.org/protobuf v1.36.7
)
require (
golang.org/x/net v0.28.0 // indirect
golang.org/x/sys v0.24.0 // indirect
golang.org/x/text v0.17.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.25.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a // indirect
)

go.sum

@@ -1,18 +1,38 @@
github.com/envoyproxy/protoc-gen-validate v1.1.0 h1:tntQDh69XqOCOZsDz0lVJQez/2L6Uu2PdjCQwWCJ3bM=
github.com/envoyproxy/protoc-gen-validate v1.1.0/go.mod h1:sXRDRVmzEbkM7CVcM06s9shE/m23dg3wzjl0UWqJ2q4=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU=
go.uber.org/mock v0.4.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 h1:e7S5W7MGGLaSu8j3YjdezkZ+m1/Nm0uRVRMEMGk26Xs=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/grpc v1.67.0 h1:IdH9y6PF5MPSdAntIcpjQ+tXO41pcQsfZV2RxtQgVcw=
google.golang.org/grpc v1.67.0/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a h1:v2PbRU4K3llS09c7zodFpNePeamkAwG3mPrAery9VeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.74.2 h1:WoosgB65DlWVC9FqI82dGsZhWFNBSLjQ84bjROOpMu4=
google.golang.org/grpc v1.74.2/go.mod h1:CtQ+BGjaAIXHs/5YS3i473GqwBBa1zGQNevxdeBEXrM=
google.golang.org/protobuf v1.36.7 h1:IgrO7UwFQGJdRNXH/sQux4R1Dj1WAKcLElzeeRaXV2A=
google.golang.org/protobuf v1.36.7/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=

hack/golanglint.sh (executable file)

@ -0,0 +1,11 @@
#!/bin/bash
LINT_DIR=/data
GOLANGCI_IMAGE=golangci/golangci-lint:v1.54
docker run --rm \
-w "${LINT_DIR}" \
-v "$(pwd):${LINT_DIR}:ro" \
"${GOLANGCI_IMAGE}" \
golangci-lint \
run -v


@ -5,8 +5,7 @@ PROTO_PATH=pkg/apis
LANGUAGE=go
proto_modules="common/v1 common/v2 cdnsystem/v1 dfdaemon/v1 dfdaemon/v2
errordetails/v1 errordetails/v2 manager/v1 manager/v2 scheduler/v1 scheduler/v2
security/v1"
errordetails/v1 errordetails/v2 manager/v1 manager/v2 scheduler/v1 scheduler/v2"
echo "generate protos..."


@ -24,6 +24,7 @@ import (
type MockSeederClient struct {
ctrl *gomock.Controller
recorder *MockSeederClientMockRecorder
isgomock struct{}
}
// MockSeederClientMockRecorder is the mock recorder for MockSeederClient.
@ -107,6 +108,7 @@ func (mr *MockSeederClientMockRecorder) SyncPieceTasks(ctx any, opts ...any) *go
type MockSeeder_ObtainSeedsClient struct {
ctrl *gomock.Controller
recorder *MockSeeder_ObtainSeedsClientMockRecorder
isgomock struct{}
}
// MockSeeder_ObtainSeedsClientMockRecorder is the mock recorder for MockSeeder_ObtainSeedsClient.
@ -230,6 +232,7 @@ func (mr *MockSeeder_ObtainSeedsClientMockRecorder) Trailer() *gomock.Call {
type MockSeeder_SyncPieceTasksClient struct {
ctrl *gomock.Controller
recorder *MockSeeder_SyncPieceTasksClientMockRecorder
isgomock struct{}
}
// MockSeeder_SyncPieceTasksClientMockRecorder is the mock recorder for MockSeeder_SyncPieceTasksClient.
@ -367,6 +370,7 @@ func (mr *MockSeeder_SyncPieceTasksClientMockRecorder) Trailer() *gomock.Call {
type MockSeederServer struct {
ctrl *gomock.Controller
recorder *MockSeederServerMockRecorder
isgomock struct{}
}
// MockSeederServerMockRecorder is the mock recorder for MockSeederServer.
@ -433,6 +437,7 @@ func (mr *MockSeederServerMockRecorder) SyncPieceTasks(arg0 any) *gomock.Call {
type MockUnsafeSeederServer struct {
ctrl *gomock.Controller
recorder *MockUnsafeSeederServerMockRecorder
isgomock struct{}
}
// MockUnsafeSeederServerMockRecorder is the mock recorder for MockUnsafeSeederServer.
@ -468,6 +473,7 @@ func (mr *MockUnsafeSeederServerMockRecorder) mustEmbedUnimplementedSeederServer
type MockSeeder_ObtainSeedsServer struct {
ctrl *gomock.Controller
recorder *MockSeeder_ObtainSeedsServerMockRecorder
isgomock struct{}
}
// MockSeeder_ObtainSeedsServerMockRecorder is the mock recorder for MockSeeder_ObtainSeedsServer.
@ -587,6 +593,7 @@ func (mr *MockSeeder_ObtainSeedsServerMockRecorder) SetTrailer(arg0 any) *gomock
type MockSeeder_SyncPieceTasksServer struct {
ctrl *gomock.Controller
recorder *MockSeeder_SyncPieceTasksServerMockRecorder
isgomock struct{}
}
// MockSeeder_SyncPieceTasksServerMockRecorder is the mock recorder for MockSeeder_SyncPieceTasksServer.


@ -963,6 +963,8 @@ type Host struct {
Location string `protobuf:"bytes,7,opt,name=location,proto3" json:"location,omitempty"`
// IDC where the peer host is located.
Idc string `protobuf:"bytes,8,opt,name=idc,proto3" json:"idc,omitempty"`
// Port of proxy server.
ProxyPort int32 `protobuf:"varint,9,opt,name=proxy_port,json=proxyPort,proto3" json:"proxy_port,omitempty"`
}
func (x *Host) Reset() {
@ -1046,6 +1048,13 @@ func (x *Host) GetIdc() string {
return ""
}
func (x *Host) GetProxyPort() int32 {
if x != nil {
return x.ProxyPort
}
return 0
}
var File_pkg_apis_common_v1_common_proto protoreflect.FileDescriptor
var file_pkg_apis_common_v1_common_proto_rawDesc = []byte{
@ -1152,7 +1161,7 @@ var file_pkg_apis_common_v1_common_proto_rawDesc = []byte{
0x6e, 0x64, 0x5f, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x18, 0x09, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x17, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x45, 0x78, 0x74, 0x65,
0x6e, 0x64, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x52, 0x0f, 0x65, 0x78, 0x74,
0x65, 0x6e, 0x64, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x22, 0xf8, 0x01, 0x0a,
0x65, 0x6e, 0x64, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x22, 0xa5, 0x02, 0x0a,
0x04, 0x48, 0x6f, 0x73, 0x74, 0x12, 0x17, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x10, 0x01, 0x52, 0x02, 0x69, 0x64, 0x12, 0x17,
0x0a, 0x02, 0x69, 0x70, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72,
@ -1168,71 +1177,74 @@ var file_pkg_apis_common_v1_common_proto_rawDesc = []byte{
0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x10, 0x01, 0xd0, 0x01, 0x01, 0x52,
0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1c, 0x0a, 0x03, 0x69, 0x64, 0x63,
0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x10, 0x01, 0xd0,
0x01, 0x01, 0x52, 0x03, 0x69, 0x64, 0x63, 0x2a, 0xd9, 0x05, 0x0a, 0x04, 0x43, 0x6f, 0x64, 0x65,
0x12, 0x11, 0x0a, 0x0d, 0x58, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45,
0x44, 0x10, 0x00, 0x12, 0x0c, 0x0a, 0x07, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x10, 0xc8,
0x01, 0x12, 0x16, 0x0a, 0x11, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x55, 0x6e, 0x61, 0x76, 0x61,
0x69, 0x6c, 0x61, 0x62, 0x6c, 0x65, 0x10, 0xf4, 0x03, 0x12, 0x13, 0x0a, 0x0e, 0x52, 0x65, 0x73,
0x6f, 0x75, 0x72, 0x63, 0x65, 0x4c, 0x61, 0x63, 0x6b, 0x65, 0x64, 0x10, 0xe8, 0x07, 0x12, 0x18,
0x0a, 0x13, 0x42, 0x61, 0x63, 0x6b, 0x54, 0x6f, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x62,
0x6f, 0x72, 0x74, 0x65, 0x64, 0x10, 0xe9, 0x07, 0x12, 0x0f, 0x0a, 0x0a, 0x42, 0x61, 0x64, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x10, 0xf8, 0x0a, 0x12, 0x15, 0x0a, 0x10, 0x50, 0x65, 0x65,
0x72, 0x54, 0x61, 0x73, 0x6b, 0x4e, 0x6f, 0x74, 0x46, 0x6f, 0x75, 0x6e, 0x64, 0x10, 0xfc, 0x0a,
0x12, 0x11, 0x0a, 0x0c, 0x55, 0x6e, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x45, 0x72, 0x72, 0x6f, 0x72,
0x10, 0xdc, 0x0b, 0x12, 0x13, 0x0a, 0x0e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x54, 0x69,
0x6d, 0x65, 0x4f, 0x75, 0x74, 0x10, 0xe0, 0x0b, 0x12, 0x10, 0x0a, 0x0b, 0x43, 0x6c, 0x69, 0x65,
0x6e, 0x74, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xa0, 0x1f, 0x12, 0x1b, 0x0a, 0x16, 0x43, 0x6c,
0x69, 0x65, 0x6e, 0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x46, 0x61, 0x69, 0x6c, 0x10, 0xa1, 0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e,
0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74,
0x10, 0xa2, 0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x43, 0x6f, 0x6e,
0x74, 0x65, 0x78, 0x74, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x65, 0x64, 0x10, 0xa3, 0x1f, 0x12,
0x19, 0x0a, 0x14, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x57, 0x61, 0x69, 0x74, 0x50, 0x69, 0x65,
0x63, 0x65, 0x52, 0x65, 0x61, 0x64, 0x79, 0x10, 0xa4, 0x1f, 0x12, 0x1c, 0x0a, 0x17, 0x43, 0x6c,
0x69, 0x65, 0x6e, 0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x44, 0x6f, 0x77, 0x6e, 0x6c, 0x6f, 0x61,
0x64, 0x46, 0x61, 0x69, 0x6c, 0x10, 0xa5, 0x1f, 0x12, 0x1b, 0x0a, 0x16, 0x43, 0x6c, 0x69, 0x65,
0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x46, 0x61,
0x69, 0x6c, 0x10, 0xa6, 0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x43,
0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xa7,
0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x42, 0x61, 0x63, 0x6b, 0x53,
0x6f, 0x75, 0x72, 0x63, 0x65, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xa8, 0x1f, 0x12, 0x18, 0x0a,
0x13, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x4e, 0x6f, 0x74, 0x46,
0x6f, 0x75, 0x6e, 0x64, 0x10, 0xb4, 0x22, 0x12, 0x0f, 0x0a, 0x0a, 0x53, 0x63, 0x68, 0x65, 0x64,
0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0x88, 0x27, 0x12, 0x18, 0x0a, 0x13, 0x53, 0x63, 0x68, 0x65,
0x64, 0x4e, 0x65, 0x65, 0x64, 0x42, 0x61, 0x63, 0x6b, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x10,
0x89, 0x27, 0x12, 0x12, 0x0a, 0x0d, 0x53, 0x63, 0x68, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x47,
0x6f, 0x6e, 0x65, 0x10, 0x8a, 0x27, 0x12, 0x16, 0x0a, 0x11, 0x53, 0x63, 0x68, 0x65, 0x64, 0x50,
0x65, 0x65, 0x72, 0x4e, 0x6f, 0x74, 0x46, 0x6f, 0x75, 0x6e, 0x64, 0x10, 0x8c, 0x27, 0x12, 0x23,
0x0a, 0x1e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x50, 0x69, 0x65, 0x63, 0x65,
0x52, 0x65, 0x73, 0x75, 0x6c, 0x74, 0x52, 0x65, 0x70, 0x6f, 0x72, 0x74, 0x46, 0x61, 0x69, 0x6c,
0x10, 0x8d, 0x27, 0x12, 0x19, 0x0a, 0x14, 0x53, 0x63, 0x68, 0x65, 0x64, 0x54, 0x61, 0x73, 0x6b,
0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0x8e, 0x27, 0x12, 0x14,
0x0a, 0x0f, 0x53, 0x63, 0x68, 0x65, 0x64, 0x52, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65,
0x72, 0x10, 0x8f, 0x27, 0x12, 0x13, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x46, 0x6f, 0x72,
0x62, 0x69, 0x64, 0x64, 0x65, 0x6e, 0x10, 0x90, 0x27, 0x12, 0x18, 0x0a, 0x13, 0x43, 0x44, 0x4e,
0x54, 0x61, 0x73, 0x6b, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x72, 0x79, 0x46, 0x61, 0x69, 0x6c,
0x10, 0xf1, 0x2e, 0x12, 0x14, 0x0a, 0x0f, 0x43, 0x44, 0x4e, 0x54, 0x61, 0x73, 0x6b, 0x4e, 0x6f,
0x74, 0x46, 0x6f, 0x75, 0x6e, 0x64, 0x10, 0x84, 0x32, 0x12, 0x18, 0x0a, 0x13, 0x49, 0x6e, 0x76,
0x61, 0x6c, 0x69, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65,
0x10, 0xd9, 0x36, 0x2a, 0x17, 0x0a, 0x0a, 0x50, 0x69, 0x65, 0x63, 0x65, 0x53, 0x74, 0x79, 0x6c,
0x65, 0x12, 0x09, 0x0a, 0x05, 0x50, 0x4c, 0x41, 0x49, 0x4e, 0x10, 0x00, 0x2a, 0x43, 0x0a, 0x09,
0x53, 0x69, 0x7a, 0x65, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x4f, 0x52,
0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x53, 0x4d, 0x41, 0x4c, 0x4c, 0x10, 0x01,
0x12, 0x08, 0x0a, 0x04, 0x54, 0x49, 0x4e, 0x59, 0x10, 0x02, 0x12, 0x09, 0x0a, 0x05, 0x45, 0x4d,
0x50, 0x54, 0x59, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x10,
0x04, 0x2a, 0x30, 0x0a, 0x08, 0x54, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0a, 0x0a,
0x06, 0x4e, 0x6f, 0x72, 0x6d, 0x61, 0x6c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x66, 0x43,
0x61, 0x63, 0x68, 0x65, 0x10, 0x01, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x66, 0x53, 0x74, 0x6f, 0x72,
0x65, 0x10, 0x02, 0x2a, 0x5e, 0x0a, 0x08, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12,
0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x30, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x4c,
0x45, 0x56, 0x45, 0x4c, 0x31, 0x10, 0x01, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c,
0x32, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x33, 0x10, 0x03, 0x12,
0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x34, 0x10, 0x04, 0x12, 0x0a, 0x0a, 0x06, 0x4c,
0x45, 0x56, 0x45, 0x4c, 0x35, 0x10, 0x05, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c,
0x36, 0x10, 0x06, 0x42, 0x29, 0x5a, 0x27, 0x64, 0x37, 0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70,
0x69, 0x2f, 0x76, 0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x63, 0x6f,
0x6d, 0x6d, 0x6f, 0x6e, 0x2f, 0x76, 0x31, 0x3b, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x01, 0x01, 0x52, 0x03, 0x69, 0x64, 0x63, 0x12, 0x2b, 0x0a, 0x0a, 0x70, 0x72, 0x6f, 0x78, 0x79,
0x5f, 0x70, 0x6f, 0x72, 0x74, 0x18, 0x09, 0x20, 0x01, 0x28, 0x05, 0x42, 0x0c, 0xfa, 0x42, 0x09,
0x1a, 0x07, 0x10, 0xff, 0xff, 0x03, 0x28, 0x80, 0x08, 0x52, 0x09, 0x70, 0x72, 0x6f, 0x78, 0x79,
0x50, 0x6f, 0x72, 0x74, 0x2a, 0xd9, 0x05, 0x0a, 0x04, 0x43, 0x6f, 0x64, 0x65, 0x12, 0x11, 0x0a,
0x0d, 0x58, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00,
0x12, 0x0c, 0x0a, 0x07, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x10, 0xc8, 0x01, 0x12, 0x16,
0x0a, 0x11, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x55, 0x6e, 0x61, 0x76, 0x61, 0x69, 0x6c, 0x61,
0x62, 0x6c, 0x65, 0x10, 0xf4, 0x03, 0x12, 0x13, 0x0a, 0x0e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72,
0x63, 0x65, 0x4c, 0x61, 0x63, 0x6b, 0x65, 0x64, 0x10, 0xe8, 0x07, 0x12, 0x18, 0x0a, 0x13, 0x42,
0x61, 0x63, 0x6b, 0x54, 0x6f, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x62, 0x6f, 0x72, 0x74,
0x65, 0x64, 0x10, 0xe9, 0x07, 0x12, 0x0f, 0x0a, 0x0a, 0x42, 0x61, 0x64, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x10, 0xf8, 0x0a, 0x12, 0x15, 0x0a, 0x10, 0x50, 0x65, 0x65, 0x72, 0x54, 0x61,
0x73, 0x6b, 0x4e, 0x6f, 0x74, 0x46, 0x6f, 0x75, 0x6e, 0x64, 0x10, 0xfc, 0x0a, 0x12, 0x11, 0x0a,
0x0c, 0x55, 0x6e, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xdc, 0x0b,
0x12, 0x13, 0x0a, 0x0e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x4f,
0x75, 0x74, 0x10, 0xe0, 0x0b, 0x12, 0x10, 0x0a, 0x0b, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x45,
0x72, 0x72, 0x6f, 0x72, 0x10, 0xa0, 0x1f, 0x12, 0x1b, 0x0a, 0x16, 0x43, 0x6c, 0x69, 0x65, 0x6e,
0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x46, 0x61, 0x69,
0x6c, 0x10, 0xa1, 0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x53, 0x63,
0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x10, 0xa2, 0x1f,
0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x78,
0x74, 0x43, 0x61, 0x6e, 0x63, 0x65, 0x6c, 0x65, 0x64, 0x10, 0xa3, 0x1f, 0x12, 0x19, 0x0a, 0x14,
0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x57, 0x61, 0x69, 0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x52,
0x65, 0x61, 0x64, 0x79, 0x10, 0xa4, 0x1f, 0x12, 0x1c, 0x0a, 0x17, 0x43, 0x6c, 0x69, 0x65, 0x6e,
0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x44, 0x6f, 0x77, 0x6e, 0x6c, 0x6f, 0x61, 0x64, 0x46, 0x61,
0x69, 0x6c, 0x10, 0xa5, 0x1f, 0x12, 0x1b, 0x0a, 0x16, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x46, 0x61, 0x69, 0x6c, 0x10,
0xa6, 0x1f, 0x12, 0x1a, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x43, 0x6f, 0x6e, 0x6e,
0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xa7, 0x1f, 0x12, 0x1a,
0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x42, 0x61, 0x63, 0x6b, 0x53, 0x6f, 0x75, 0x72,
0x63, 0x65, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0xa8, 0x1f, 0x12, 0x18, 0x0a, 0x13, 0x43, 0x6c,
0x69, 0x65, 0x6e, 0x74, 0x50, 0x69, 0x65, 0x63, 0x65, 0x4e, 0x6f, 0x74, 0x46, 0x6f, 0x75, 0x6e,
0x64, 0x10, 0xb4, 0x22, 0x12, 0x0f, 0x0a, 0x0a, 0x53, 0x63, 0x68, 0x65, 0x64, 0x45, 0x72, 0x72,
0x6f, 0x72, 0x10, 0x88, 0x27, 0x12, 0x18, 0x0a, 0x13, 0x53, 0x63, 0x68, 0x65, 0x64, 0x4e, 0x65,
0x65, 0x64, 0x42, 0x61, 0x63, 0x6b, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x10, 0x89, 0x27, 0x12,
0x12, 0x0a, 0x0d, 0x53, 0x63, 0x68, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x47, 0x6f, 0x6e, 0x65,
0x10, 0x8a, 0x27, 0x12, 0x16, 0x0a, 0x11, 0x53, 0x63, 0x68, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72,
0x4e, 0x6f, 0x74, 0x46, 0x6f, 0x75, 0x6e, 0x64, 0x10, 0x8c, 0x27, 0x12, 0x23, 0x0a, 0x1e, 0x53,
0x63, 0x68, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x50, 0x69, 0x65, 0x63, 0x65, 0x52, 0x65, 0x73,
0x75, 0x6c, 0x74, 0x52, 0x65, 0x70, 0x6f, 0x72, 0x74, 0x46, 0x61, 0x69, 0x6c, 0x10, 0x8d, 0x27,
0x12, 0x19, 0x0a, 0x14, 0x53, 0x63, 0x68, 0x65, 0x64, 0x54, 0x61, 0x73, 0x6b, 0x53, 0x74, 0x61,
0x74, 0x75, 0x73, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x10, 0x8e, 0x27, 0x12, 0x14, 0x0a, 0x0f, 0x53,
0x63, 0x68, 0x65, 0x64, 0x52, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x10, 0x8f,
0x27, 0x12, 0x13, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x46, 0x6f, 0x72, 0x62, 0x69, 0x64,
0x64, 0x65, 0x6e, 0x10, 0x90, 0x27, 0x12, 0x18, 0x0a, 0x13, 0x43, 0x44, 0x4e, 0x54, 0x61, 0x73,
0x6b, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x72, 0x79, 0x46, 0x61, 0x69, 0x6c, 0x10, 0xf1, 0x2e,
0x12, 0x14, 0x0a, 0x0f, 0x43, 0x44, 0x4e, 0x54, 0x61, 0x73, 0x6b, 0x4e, 0x6f, 0x74, 0x46, 0x6f,
0x75, 0x6e, 0x64, 0x10, 0x84, 0x32, 0x12, 0x18, 0x0a, 0x13, 0x49, 0x6e, 0x76, 0x61, 0x6c, 0x69,
0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x10, 0xd9, 0x36,
0x2a, 0x17, 0x0a, 0x0a, 0x50, 0x69, 0x65, 0x63, 0x65, 0x53, 0x74, 0x79, 0x6c, 0x65, 0x12, 0x09,
0x0a, 0x05, 0x50, 0x4c, 0x41, 0x49, 0x4e, 0x10, 0x00, 0x2a, 0x43, 0x0a, 0x09, 0x53, 0x69, 0x7a,
0x65, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c,
0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x53, 0x4d, 0x41, 0x4c, 0x4c, 0x10, 0x01, 0x12, 0x08, 0x0a,
0x04, 0x54, 0x49, 0x4e, 0x59, 0x10, 0x02, 0x12, 0x09, 0x0a, 0x05, 0x45, 0x4d, 0x50, 0x54, 0x59,
0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x10, 0x04, 0x2a, 0x30,
0x0a, 0x08, 0x54, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x6f,
0x72, 0x6d, 0x61, 0x6c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x66, 0x43, 0x61, 0x63, 0x68,
0x65, 0x10, 0x01, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x66, 0x53, 0x74, 0x6f, 0x72, 0x65, 0x10, 0x02,
0x2a, 0x5e, 0x0a, 0x08, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x0a, 0x0a, 0x06,
0x4c, 0x45, 0x56, 0x45, 0x4c, 0x30, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45,
0x4c, 0x31, 0x10, 0x01, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x32, 0x10, 0x02,
0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x33, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06,
0x4c, 0x45, 0x56, 0x45, 0x4c, 0x34, 0x10, 0x04, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45,
0x4c, 0x35, 0x10, 0x05, 0x12, 0x0a, 0x0a, 0x06, 0x4c, 0x45, 0x56, 0x45, 0x4c, 0x36, 0x10, 0x06,
0x42, 0x29, 0x5a, 0x27, 0x64, 0x37, 0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76,
0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x63, 0x6f, 0x6d, 0x6d, 0x6f,
0x6e, 0x2f, 0x76, 0x31, 0x3b, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x62, 0x06, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x33,
}
var (


@ -1013,6 +1013,17 @@ func (m *Host) validate(all bool) error {
}
if val := m.GetProxyPort(); val < 1024 || val >= 65535 {
err := HostValidationError{
field: "ProxyPort",
reason: "value must be inside range [1024, 65535)",
}
if !all {
return err
}
errors = append(errors, err)
}
if len(errors) > 0 {
return HostMultiError(errors)
}


@ -57,7 +57,7 @@ enum Code {
// Scheduler response error 5000-5999.
SchedError = 5000;
// Client should try to download from source.
SchedNeedBackSource = 5001;
// Client should disconnect from scheduler.
SchedPeerGone = 5002;
// Peer not found in scheduler.
@ -245,4 +245,6 @@ message Host {
string location = 7 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// IDC where the peer host is located.
string idc = 8 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Port of proxy server.
int32 proxy_port = 9 [(validate.rules).int32 = {gte: 1024, lt: 65535}];
}
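The new `proxy_port` rule constrains values to the half-open range [1024, 65535). A minimal sketch of the equivalent check, using a simplified stand-in for the generated `Host` type (the struct and function names here are illustrative, not the real generated API):

```go
package main

import "fmt"

// Host is a simplified stand-in for the generated common.v1 Host message;
// only the field relevant to the new rule is shown.
type Host struct {
	ProxyPort int32
}

// validateProxyPort mirrors the generated protoc-gen-validate check for
// (validate.rules).int32 = {gte: 1024, lt: 65535}.
func validateProxyPort(h *Host) error {
	if val := h.ProxyPort; val < 1024 || val >= 65535 {
		return fmt.Errorf("invalid Host.ProxyPort: value must be inside range [1024, 65535)")
	}
	return nil
}

func main() {
	fmt.Println(validateProxyPort(&Host{ProxyPort: 8001}))  // in range: nil
	fmt.Println(validateProxyPort(&Host{ProxyPort: 80}))    // below 1024: error
	fmt.Println(validateProxyPort(&Host{ProxyPort: 65535})) // upper bound is excluded: error
}
```

Note the asymmetry: 1024 is accepted (gte) while 65535 is rejected (lt), matching the "[1024, 65535)" reason string in the generated validator.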

File diff suppressed because it is too large.


@ -363,6 +363,335 @@ var _ interface {
ErrorName() string
} = PeerValidationError{}
// Validate checks the field values on CachePeer with the rules defined in the
// proto definition for this message. If any rules are violated, the first
// error encountered is returned, or nil if there are no violations.
func (m *CachePeer) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on CachePeer with the rules defined in
// the proto definition for this message. If any rules are violated, the
// result is a list of violation errors wrapped in CachePeerMultiError, or nil
// if none found.
func (m *CachePeer) ValidateAll() error {
return m.validate(true)
}
func (m *CachePeer) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if utf8.RuneCountInString(m.GetId()) < 1 {
err := CachePeerValidationError{
field: "Id",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
if _, ok := Priority_name[int32(m.GetPriority())]; !ok {
err := CachePeerValidationError{
field: "Priority",
reason: "value must be one of the defined enum values",
}
if !all {
return err
}
errors = append(errors, err)
}
if len(m.GetPieces()) > 0 {
if len(m.GetPieces()) < 1 {
err := CachePeerValidationError{
field: "Pieces",
reason: "value must contain at least 1 item(s)",
}
if !all {
return err
}
errors = append(errors, err)
}
for idx, item := range m.GetPieces() {
_, _ = idx, item
if all {
switch v := interface{}(item).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, CachePeerValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, CachePeerValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(item).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return CachePeerValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
}
}
}
}
}
if m.GetCost() == nil {
err := CachePeerValidationError{
field: "Cost",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if utf8.RuneCountInString(m.GetState()) < 1 {
err := CachePeerValidationError{
field: "State",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.GetTask() == nil {
err := CachePeerValidationError{
field: "Task",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if all {
switch v := interface{}(m.GetTask()).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Task",
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Task",
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(m.GetTask()).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return CachePeerValidationError{
field: "Task",
reason: "embedded message failed validation",
cause: err,
}
}
}
if m.GetHost() == nil {
err := CachePeerValidationError{
field: "Host",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if all {
switch v := interface{}(m.GetHost()).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Host",
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Host",
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(m.GetHost()).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return CachePeerValidationError{
field: "Host",
reason: "embedded message failed validation",
cause: err,
}
}
}
// no validation rules for NeedBackToSource
if m.GetCreatedAt() == nil {
err := CachePeerValidationError{
field: "CreatedAt",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.GetUpdatedAt() == nil {
err := CachePeerValidationError{
field: "UpdatedAt",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.Range != nil {
if all {
switch v := interface{}(m.GetRange()).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Range",
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, CachePeerValidationError{
field: "Range",
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(m.GetRange()).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return CachePeerValidationError{
field: "Range",
reason: "embedded message failed validation",
cause: err,
}
}
}
}
if len(errors) > 0 {
return CachePeerMultiError(errors)
}
return nil
}
// CachePeerMultiError is an error wrapping multiple validation errors returned
// by CachePeer.ValidateAll() if the designated constraints aren't met.
type CachePeerMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m CachePeerMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m CachePeerMultiError) AllErrors() []error { return m }
// CachePeerValidationError is the validation error returned by
// CachePeer.Validate if the designated constraints aren't met.
type CachePeerValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e CachePeerValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e CachePeerValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e CachePeerValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e CachePeerValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e CachePeerValidationError) ErrorName() string { return "CachePeerValidationError" }
// Error satisfies the builtin error interface
func (e CachePeerValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sCachePeer.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = CachePeerValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = CachePeerValidationError{}
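The generated `CachePeer` validators above follow the standard protoc-gen-validate shape: `Validate` returns on the first violation, while `ValidateAll` collects every violation into a `MultiError`. A simplified, self-contained sketch of that control flow (the field checks are hypothetical stand-ins, not the real generated rules):

```go
package main

import (
	"fmt"
	"strings"
)

// MultiError mirrors the generated *MultiError types: it wraps every
// violation found when validating with all=true.
type MultiError []error

// Error joins the wrapped messages, as the generated code does.
func (m MultiError) Error() string {
	var msgs []string
	for _, err := range m {
		msgs = append(msgs, err.Error())
	}
	return strings.Join(msgs, "; ")
}

// validate mirrors the generated validate(all bool): with all=false it
// returns the first violation immediately, with all=true it keeps going
// and wraps everything it found.
func validate(id, state string, all bool) error {
	var errors []error
	if len(id) < 1 {
		err := fmt.Errorf("invalid Id: value length must be at least 1 runes")
		if !all {
			return err
		}
		errors = append(errors, err)
	}
	if len(state) < 1 {
		err := fmt.Errorf("invalid State: value length must be at least 1 runes")
		if !all {
			return err
		}
		errors = append(errors, err)
	}
	if len(errors) > 0 {
		return MultiError(errors)
	}
	return nil
}

func main() {
	fmt.Println(validate("", "", false)) // first violation only
	fmt.Println(validate("", "", true))  // both violations, joined with "; "
}
```

This is why `ValidateAll` is preferred at API boundaries: a caller gets every problem in one round trip instead of fixing violations one at a time.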
// Validate checks the field values on PersistentCachePeer with the rules
// defined in the proto definition for this message. If any rules are
// violated, the first error encountered is returned, or nil if there are no violations.
@ -668,17 +997,6 @@ func (m *Task) validate(all bool) error {
// no validation rules for RequestHeader
if m.GetPieceLength() < 1 {
err := TaskValidationError{
field: "PieceLength",
reason: "value must be greater than or equal to 1",
}
if !all {
return err
}
errors = append(errors, err)
}
// no validation rules for ContentLength
// no validation rules for PieceCount
@ -778,7 +1096,7 @@ func (m *Task) validate(all bool) error {
if !_Task_Digest_Pattern.MatchString(m.GetDigest()) {
err := TaskValidationError{
field: "Digest",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$\"",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$\"",
}
if !all {
return err
@ -875,7 +1193,272 @@ var _ interface {
ErrorName() string
} = TaskValidationError{}
var _Task_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$")
var _Task_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$")
// Validate checks the field values on CacheTask with the rules defined in the
// proto definition for this message. If any rules are violated, the first
// error encountered is returned, or nil if there are no violations.
func (m *CacheTask) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on CacheTask with the rules defined in
// the proto definition for this message. If any rules are violated, the
// result is a list of violation errors wrapped in CacheTaskMultiError, or nil
// if none found.
func (m *CacheTask) ValidateAll() error {
return m.validate(true)
}
func (m *CacheTask) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if utf8.RuneCountInString(m.GetId()) < 1 {
err := CacheTaskValidationError{
field: "Id",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
if _, ok := TaskType_name[int32(m.GetType())]; !ok {
err := CacheTaskValidationError{
field: "Type",
reason: "value must be one of the defined enum values",
}
if !all {
return err
}
errors = append(errors, err)
}
if uri, err := url.Parse(m.GetUrl()); err != nil {
err = CacheTaskValidationError{
field: "Url",
reason: "value must be a valid URI",
cause: err,
}
if !all {
return err
}
errors = append(errors, err)
} else if !uri.IsAbs() {
err := CacheTaskValidationError{
field: "Url",
reason: "value must be absolute",
}
if !all {
return err
}
errors = append(errors, err)
}
// no validation rules for RequestHeader
// no validation rules for ContentLength
// no validation rules for PieceCount
// no validation rules for SizeScope
if len(m.GetPieces()) > 0 {
if len(m.GetPieces()) < 1 {
err := CacheTaskValidationError{
field: "Pieces",
reason: "value must contain at least 1 item(s)",
}
if !all {
return err
}
errors = append(errors, err)
}
for idx, item := range m.GetPieces() {
_, _ = idx, item
if all {
switch v := interface{}(item).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, CacheTaskValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, CacheTaskValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(item).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return CacheTaskValidationError{
field: fmt.Sprintf("Pieces[%v]", idx),
reason: "embedded message failed validation",
cause: err,
}
}
}
}
}
if utf8.RuneCountInString(m.GetState()) < 1 {
err := CacheTaskValidationError{
field: "State",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
// no validation rules for PeerCount
// no validation rules for HasAvailablePeer
if m.GetCreatedAt() == nil {
err := CacheTaskValidationError{
field: "CreatedAt",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.GetUpdatedAt() == nil {
err := CacheTaskValidationError{
field: "UpdatedAt",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.Digest != nil {
if m.GetDigest() != "" {
if !_CacheTask_Digest_Pattern.MatchString(m.GetDigest()) {
err := CacheTaskValidationError{
field: "Digest",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$\"",
}
if !all {
return err
}
errors = append(errors, err)
}
}
}
if m.Tag != nil {
// no validation rules for Tag
}
if m.Application != nil {
// no validation rules for Application
}
if len(errors) > 0 {
return CacheTaskMultiError(errors)
}
return nil
}
// CacheTaskMultiError is an error wrapping multiple validation errors returned
// by CacheTask.ValidateAll() if the designated constraints aren't met.
type CacheTaskMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m CacheTaskMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m CacheTaskMultiError) AllErrors() []error { return m }
// CacheTaskValidationError is the validation error returned by
// CacheTask.Validate if the designated constraints aren't met.
type CacheTaskValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e CacheTaskValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e CacheTaskValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e CacheTaskValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e CacheTaskValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e CacheTaskValidationError) ErrorName() string { return "CacheTaskValidationError" }
// Error satisfies the builtin error interface
func (e CacheTaskValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sCacheTask.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = CacheTaskValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = CacheTaskValidationError{}
var _CacheTask_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$")
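The digest pattern above pins each algorithm prefix to its exact hex length, except crc32, which this change set relaxes from exactly 8 hex characters (`{8}`) to any non-empty hex run (`+`). A quick check of what the compiled pattern accepts:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as _CacheTask_Digest_Pattern: an algorithm prefix followed
// by a hex hash; crc32 accepts any non-empty hex run after the relaxation.
var digestPattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$")

func main() {
	fmt.Println(digestPattern.MatchString("crc32:9ae0daaf")) // 8 hex chars: matches
	fmt.Println(digestPattern.MatchString("crc32:9ae0"))     // shorter hex run still matches
	fmt.Println(digestPattern.MatchString("sha256:ff"))      // wrong length for sha256: no match
}
```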
// Validate checks the field values on PersistentCacheTask with the rules
// defined in the proto definition for this message. If any rules are
@ -921,32 +1504,14 @@ func (m *PersistentCacheTask) validate(all bool) error {
errors = append(errors, err)
}
if m.GetReplicaCount() < 1 {
err := PersistentCacheTaskValidationError{
field: "ReplicaCount",
reason: "value must be greater than or equal to 1",
}
if !all {
return err
}
errors = append(errors, err)
}
// no validation rules for CurrentPersistentReplicaCount
if !_PersistentCacheTask_Digest_Pattern.MatchString(m.GetDigest()) {
err := PersistentCacheTaskValidationError{
field: "Digest",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$\"",
}
if !all {
return err
}
errors = append(errors, err)
}
// no validation rules for CurrentReplicaCount
if m.GetPieceLength() < 1 {
if val := m.GetPieceLength(); val < 4194304 || val > 67108864 {
err := PersistentCacheTaskValidationError{
field: "PieceLength",
reason: "value must be greater than or equal to 1",
reason: "value must be inside range [4194304, 67108864]",
}
if !all {
return err
@ -969,6 +1534,17 @@ func (m *PersistentCacheTask) validate(all bool) error {
errors = append(errors, err)
}
if m.GetTtl() == nil {
err := PersistentCacheTaskValidationError{
field: "Ttl",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.GetCreatedAt() == nil {
err := PersistentCacheTaskValidationError{
field: "CreatedAt",
@ -1079,8 +1655,6 @@ var _ interface {
ErrorName() string
} = PersistentCacheTaskValidationError{}
var _PersistentCacheTask_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$")
// Validate checks the field values on Host with the rules defined in the proto
// definition for this message. If any rules are violated, the first error
// encountered is returned, or nil if there are no violations.
@ -1182,6 +1756,8 @@ func (m *Host) validate(all bool) error {
// no validation rules for DisableShared
// no validation rules for ProxyPort
if m.Cpu != nil {
if all {
@ -1943,6 +2519,10 @@ func (m *Network) validate(all bool) error {
// no validation rules for UploadTcpConnectionCount
// no validation rules for MaxRxBandwidth
// no validation rules for MaxTxBandwidth
if m.Location != nil {
// no validation rules for Location
}
@ -1951,6 +2531,14 @@ func (m *Network) validate(all bool) error {
// no validation rules for Idc
}
if m.RxBandwidth != nil {
// no validation rules for RxBandwidth
}
if m.TxBandwidth != nil {
// no validation rules for TxBandwidth
}
if len(errors) > 0 {
return NetworkMultiError(errors)
}
@ -2083,6 +2671,10 @@ func (m *Disk) validate(all bool) error {
errors = append(errors, err)
}
// no validation rules for ReadBandwidth
// no validation rules for WriteBandwidth
if len(errors) > 0 {
return DiskMultiError(errors)
}
@ -2349,6 +2941,12 @@ func (m *Download) validate(all bool) error {
// no validation rules for Prefetch
// no validation rules for IsPrefetch
// no validation rules for NeedPieceContent
// no validation rules for ForceHardLink
if m.Digest != nil {
if m.GetDigest() != "" {
@ -2356,7 +2954,7 @@ func (m *Download) validate(all bool) error {
if !_Download_Digest_Pattern.MatchString(m.GetDigest()) {
err := DownloadValidationError{
field: "Digest",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$\"",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$\"",
}
if !all {
return err
@ -2413,10 +3011,10 @@ func (m *Download) validate(all bool) error {
if m.GetPieceLength() != 0 {
if m.GetPieceLength() < 1 {
if val := m.GetPieceLength(); val < 4194304 || val > 67108864 {
err := DownloadValidationError{
field: "PieceLength",
reason: "value must be greater than or equal to 1",
reason: "value must be inside range [4194304, 67108864]",
}
if !all {
return err
@ -2513,6 +3111,62 @@ func (m *Download) validate(all bool) error {
}
if m.Hdfs != nil {
if all {
switch v := interface{}(m.GetHdfs()).(type) {
case interface{ ValidateAll() error }:
if err := v.ValidateAll(); err != nil {
errors = append(errors, DownloadValidationError{
field: "Hdfs",
reason: "embedded message failed validation",
cause: err,
})
}
case interface{ Validate() error }:
if err := v.Validate(); err != nil {
errors = append(errors, DownloadValidationError{
field: "Hdfs",
reason: "embedded message failed validation",
cause: err,
})
}
}
} else if v, ok := interface{}(m.GetHdfs()).(interface{ Validate() error }); ok {
if err := v.Validate(); err != nil {
return DownloadValidationError{
field: "Hdfs",
reason: "embedded message failed validation",
cause: err,
}
}
}
}
if m.ContentForCalculatingTaskId != nil {
// no validation rules for ContentForCalculatingTaskId
}
if m.RemoteIp != nil {
if m.GetRemoteIp() != "" {
if ip := net.ParseIP(m.GetRemoteIp()); ip == nil {
err := DownloadValidationError{
field: "RemoteIp",
reason: "value must be a valid IP address",
}
if !all {
return err
}
errors = append(errors, err)
}
}
}
if len(errors) > 0 {
return DownloadMultiError(errors)
}
@ -2590,7 +3244,7 @@ var _ interface {
ErrorName() string
} = DownloadValidationError{}
var _Download_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$")
var _Download_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$")
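The new `RemoteIp` rule above combines `ip: true` with `ignore_empty: true`: an unset or empty string passes, anything else must parse with `net.ParseIP`, which accepts both IPv4 and IPv6. A minimal sketch of that check:

```go
package main

import (
	"fmt"
	"net"
)

// validRemoteIP mirrors the generated RemoteIp rule: an empty value is
// ignored (ignore_empty), otherwise the string must parse as an IPv4 or
// IPv6 address.
func validRemoteIP(s string) bool {
	if s == "" {
		return true
	}
	return net.ParseIP(s) != nil
}

func main() {
	fmt.Println(validRemoteIP("192.168.1.1")) // IPv4: true
	fmt.Println(validRemoteIP("2001:db8::1")) // IPv6: true
	fmt.Println(validRemoteIP("not-an-ip"))   // false
}
```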
// Validate checks the field values on ObjectStorage with the rules defined in
// the proto definition for this message. If any rules are violated, the first
@ -2739,6 +3393,25 @@ func (m *ObjectStorage) validate(all bool) error {
}
if m.SecurityToken != nil {
if m.GetSecurityToken() != "" {
if utf8.RuneCountInString(m.GetSecurityToken()) < 1 {
err := ObjectStorageValidationError{
field: "SecurityToken",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
}
}
if len(errors) > 0 {
return ObjectStorageMultiError(errors)
}
@ -2817,6 +3490,123 @@ var _ interface {
ErrorName() string
} = ObjectStorageValidationError{}
// Validate checks the field values on HDFS with the rules defined in the proto
// definition for this message. If any rules are violated, the first error
// encountered is returned, or nil if there are no violations.
func (m *HDFS) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on HDFS with the rules defined in the
// proto definition for this message. If any rules are violated, the result is
// a list of violation errors wrapped in HDFSMultiError, or nil if none found.
func (m *HDFS) ValidateAll() error {
return m.validate(true)
}
func (m *HDFS) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if m.DelegationToken != nil {
if m.GetDelegationToken() != "" {
if utf8.RuneCountInString(m.GetDelegationToken()) < 1 {
err := HDFSValidationError{
field: "DelegationToken",
reason: "value length must be at least 1 runes",
}
if !all {
return err
}
errors = append(errors, err)
}
}
}
if len(errors) > 0 {
return HDFSMultiError(errors)
}
return nil
}
// HDFSMultiError is an error wrapping multiple validation errors returned by
// HDFS.ValidateAll() if the designated constraints aren't met.
type HDFSMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m HDFSMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m HDFSMultiError) AllErrors() []error { return m }
// HDFSValidationError is the validation error returned by HDFS.Validate if the
// designated constraints aren't met.
type HDFSValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e HDFSValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e HDFSValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e HDFSValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e HDFSValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e HDFSValidationError) ErrorName() string { return "HDFSValidationError" }
// Error satisfies the builtin error interface
func (e HDFSValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sHDFS.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = HDFSValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = HDFSValidationError{}
// Validate checks the field values on Range with the rules defined in the
// proto definition for this message. If any rules are violated, the first
// error encountered is returned, or nil if there are no violations.
@ -2951,7 +3741,7 @@ func (m *Piece) validate(all bool) error {
if !_Piece_Digest_Pattern.MatchString(m.GetDigest()) {
err := PieceValidationError{
field: "Digest",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$\"",
reason: "value does not match regex pattern \"^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$\"",
}
if !all {
return err
@ -3102,4 +3892,4 @@ var _ interface {
ErrorName() string
} = PieceValidationError{}
var _Piece_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$")
var _Piece_Digest_Pattern = regexp.MustCompile("^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$")


@ -48,7 +48,7 @@ enum TaskType {
// local peer(local cache). When the standard task is never downloaded in the
// P2P cluster, dfdaemon will download the task from the source. When the standard
// task is downloaded in the P2P cluster, dfdaemon will download the task from
// the remote peer or local peer(local cache).
// the remote peer or local peer(local cache), where peers use disk storage to store tasks.
STANDARD = 0;
// PERSISTENT is persistent type of task, it can import file and export file in P2P cluster.
@ -57,11 +57,18 @@ enum TaskType {
// prevent data loss.
PERSISTENT = 1;
// PERSIST_CACHE is persistent cache type of task, it can import file and export file in P2P cluster.
// PERSISTENT_CACHE is persistent cache type of task, it can import file and export file in P2P cluster.
// When the persistent cache task is imported into the P2P cluster, dfdaemon will store
// the task in the peer's disk and copy multiple replicas to remote peers to prevent data loss.
// When the expiration time is reached, task will be deleted in the P2P cluster.
PERSISTENT_CACHE = 2;
// CACHE is cache type of task, it can download from source, remote peer and
// local peer(local cache). When the cache task is never downloaded in the
// P2P cluster, dfdaemon will download the cache task from the source. When the cache
// task is downloaded in the P2P cluster, dfdaemon will download the cache task from
// the remote peer or local peer(local cache), where peers use memory storage to store cache tasks.
CACHE = 3;
}
// TrafficType represents type of traffic.
@ -138,13 +145,40 @@ message Peer {
google.protobuf.Timestamp updated_at = 11 [(validate.rules).timestamp.required = true];
}
// CachePeer metadata.
message CachePeer {
// Peer id.
string id = 1 [(validate.rules).string.min_len = 1];
// Range is url range of request.
optional Range range = 2;
// Peer priority.
Priority priority = 3 [(validate.rules).enum.defined_only = true];
// Pieces of peer.
repeated Piece pieces = 4 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
// Time cost of the peer download.
google.protobuf.Duration cost = 5 [(validate.rules).duration.required = true];
// Peer state.
string state = 6 [(validate.rules).string.min_len = 1];
// CacheTask info.
CacheTask task = 7 [(validate.rules).message.required = true];
// Host info.
Host host = 8 [(validate.rules).message.required = true];
// need_back_to_source indicates whether the peer needs to download from the source.
bool need_back_to_source = 9;
// Peer create time.
google.protobuf.Timestamp created_at = 10 [(validate.rules).timestamp.required = true];
// Peer update time.
google.protobuf.Timestamp updated_at = 11 [(validate.rules).timestamp.required = true];
}
// PersistentCachePeer metadata.
message PersistentCachePeer {
// Peer id.
string id = 1 [(validate.rules).string.min_len = 1];
// Persistent represents whether the persistent cache peer is persistent.
// If the persistent cache peer is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection.
// not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
bool persistent = 2;
// Time cost of the peer download.
google.protobuf.Duration cost = 3 [(validate.rules).duration.required = true];
@ -168,8 +202,13 @@ message Task {
TaskType type = 2 [(validate.rules).enum.defined_only = true];
// Download url.
string url = 3 [(validate.rules).string.uri = true];
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
optional string digest = 4 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$", ignore_empty: true}];
// Verifies task data integrity after download using a digest. Supports MD5, SHA1, SHA256, SHA512, BLAKE3, and CRC32 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 4 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
@ -182,54 +221,106 @@ message Task {
repeated string filtered_query_params = 7;
// Task request headers.
map<string, string> request_header = 8;
// Task piece length.
uint64 piece_length = 9 [(validate.rules).uint64.gte = 1];
// Task content length.
uint64 content_length = 10;
uint64 content_length = 9;
// Task piece count.
uint32 piece_count = 11;
uint32 piece_count = 10;
// Task size scope.
SizeScope size_scope = 12;
SizeScope size_scope = 11;
// Pieces of task.
repeated Piece pieces = 13 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
repeated Piece pieces = 12 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
// Task state.
string state = 14 [(validate.rules).string.min_len = 1];
string state = 13 [(validate.rules).string.min_len = 1];
// Task peer count.
uint32 peer_count = 15;
uint32 peer_count = 14;
// Task contains available peer.
bool has_available_peer = 16;
bool has_available_peer = 15;
// Task create time.
google.protobuf.Timestamp created_at = 17 [(validate.rules).timestamp.required = true];
google.protobuf.Timestamp created_at = 16 [(validate.rules).timestamp.required = true];
// Task update time.
google.protobuf.Timestamp updated_at = 18 [(validate.rules).timestamp.required = true];
google.protobuf.Timestamp updated_at = 17 [(validate.rules).timestamp.required = true];
}
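The `filtered_query_params` comment above describes how two URLs that differ only in volatile parameters (e.g. `Signature`, `Expires`) are treated as the same task. A minimal sketch of that normalization step, before any task-id hashing (illustrative only; Dragonfly's real task-id code lives in dfdaemon, not in this proto):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeURL drops the filtered query parameters so that URLs differing
// only in volatile fields normalize to the same string. Matching is
// case-insensitive here; that is an assumption of this sketch.
func normalizeURL(raw string, filtered []string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	drop := map[string]bool{}
	for _, k := range filtered {
		drop[strings.ToLower(k)] = true
	}
	q := u.Query()
	for k := range q {
		if drop[strings.ToLower(k)] {
			q.Del(k)
		}
	}
	u.RawQuery = q.Encode() // Encode sorts keys, giving a stable form
	return u.String(), nil
}

func main() {
	f := []string{"Signature", "Expires"}
	a, _ := normalizeURL("http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io", f)
	b, _ := normalizeURL("http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io", f)
	fmt.Println(a == b) // true: both normalize to the same URL
	fmt.Println(a)
}
```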
// CacheTask metadata.
message CacheTask {
// Task id.
string id = 1 [(validate.rules).string.min_len = 1];
// Task type.
TaskType type = 2 [(validate.rules).enum.defined_only = true];
// Download url.
string url = 3 [(validate.rules).string.uri = true];
// Verifies task data integrity after download using a digest. Supports MD5, SHA1, SHA256, SHA512, BLAKE3, and CRC32 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 4 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 7;
// Task request headers.
map<string, string> request_header = 8;
// Task content length.
uint64 content_length = 9;
// Task piece count.
uint32 piece_count = 10;
// Task size scope.
SizeScope size_scope = 11;
// Pieces of task.
repeated Piece pieces = 12 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
// Task state.
string state = 13 [(validate.rules).string.min_len = 1];
// Task peer count.
uint32 peer_count = 14;
// Task contains available peer.
bool has_available_peer = 15;
// Task create time.
google.protobuf.Timestamp created_at = 16 [(validate.rules).timestamp.required = true];
// Task update time.
google.protobuf.Timestamp updated_at = 17 [(validate.rules).timestamp.required = true];
}
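The digest fields above expect the `<algorithm>:<hash>` form, with the hash in lowercase or uppercase hex. A short sketch producing such a value with sha256 (one of the supported algorithms):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// taskDigest formats a payload's hash in the "<algorithm>:<hash>" form the
// digest field expects, here using sha256.
func taskDigest(payload []byte) string {
	sum := sha256.Sum256(payload)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	// 64 hex characters after the "sha256:" prefix, matching the
	// {64}-length branch of the validation pattern.
	fmt.Println(taskDigest([]byte("hello")))
}
```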
// PersistentCacheTask metadata.
message PersistentCacheTask {
// Task id.
string id = 1 [(validate.rules).string.min_len = 1];
// Replica count of the persistent cache task.
// Replica count of the persistent cache task. The persistent cache task will
// not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
uint64 persistent_replica_count = 2 [(validate.rules).uint64.gte = 1];
// Replica count of the persistent cache task.
uint64 replica_count = 3 [(validate.rules).uint64.gte = 1];
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
string digest = 4 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$"}];
// Current replica count of the persistent cache task. The persistent cache task
// will not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
uint64 current_persistent_replica_count = 3;
// Current replica count of the cache task. If cache task is not persistent,
// the persistent cache task will be deleted when dfdaemon runs garbage collection.
uint64 current_replica_count = 4;
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Task piece length.
uint64 piece_length = 7 [(validate.rules).uint64.gte = 1];
uint64 piece_length = 7 [(validate.rules).uint64 = {gte: 4194304, lte: 67108864}];
// Task content length.
uint64 content_length = 8;
// Task piece count.
uint32 piece_count = 9;
// Task state.
string state = 10 [(validate.rules).string.min_len = 1];
// TTL of the persistent cache task.
google.protobuf.Duration ttl = 11 [(validate.rules).duration.required = true];
// Task create time.
google.protobuf.Timestamp created_at = 11 [(validate.rules).timestamp.required = true];
google.protobuf.Timestamp created_at = 12 [(validate.rules).timestamp.required = true];
// Task update time.
google.protobuf.Timestamp updated_at = 12 [(validate.rules).timestamp.required = true];
google.protobuf.Timestamp updated_at = 13 [(validate.rules).timestamp.required = true];
}
// Host metadata.
@ -270,6 +361,8 @@ message Host {
uint64 scheduler_cluster_id = 17;
// Disable shared data for other peers.
bool disable_shared = 18;
// Port of proxy server.
int32 proxy_port = 19;
}
// CPU Stat.
@ -337,6 +430,23 @@ message Network {
optional string location = 3;
// IDC where the peer host is located
optional string idc = 4;
// Maximum bandwidth of the network interface, in bps (bits per second).
uint64 max_rx_bandwidth = 9;
// Receive bandwidth of the network interface, in bps (bits per second).
optional uint64 rx_bandwidth = 10;
// Maximum bandwidth of the network interface for transmission, in bps (bits per second).
uint64 max_tx_bandwidth = 11;
// Transmit bandwidth of the network interface, in bps (bits per second).
optional uint64 tx_bandwidth = 12;
reserved 5;
reserved "download_rate";
reserved 6;
reserved "download_rate_limit";
reserved 7;
reserved "upload_rate";
reserved 8;
reserved "upload_rate_limit";
}
// Disk Stat.
@ -357,6 +467,10 @@ message Disk {
uint64 inodes_free = 7;
// Used percent of inodes on the data path of dragonfly directory.
double inodes_used_percent = 8 [(validate.rules).double = {gte: 0, lte: 100}];
// Disk read bandwidth, in bytes per second.
uint64 read_bandwidth = 9;
// Disk write bandwidth, in bytes per second.
uint64 write_bandwidth = 10;
}
// Build information.
@ -378,7 +492,7 @@ message Download {
// Download url.
string url = 1 [(validate.rules).string.uri = true];
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$", ignore_empty: true}];
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// Range is url range of request. If protocol is http, range
// will set in request header. If protocol is others, range
// will set in range field.
@ -399,22 +513,48 @@ message Download {
repeated string filtered_query_params = 8;
// Task request headers.
map<string, string> request_header = 9;
// Task piece length.
optional uint64 piece_length = 10 [(validate.rules).uint64 = {gte: 1, ignore_empty: true}];
// File path to be exported.
// Piece Length is the piece length (bytes) for the downloading file. The value must
// be greater than or equal to 4 MiB (4,194,304 bytes) and less than or equal to
// 64 MiB (67,108,864 bytes), for example: 4194304 (4 MiB), 8388608 (8 MiB). If the
// piece length is not specified, it will be calculated according to the file size.
optional uint64 piece_length = 10 [(validate.rules).uint64 = {gte: 4194304, lte: 67108864, ignore_empty: true}];
// File path for the downloaded file. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon will disable download back-to-source by itself, if disable_back_to_source is true.
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler will triggers peer to download back-to-source, if need_back_to_source is true.
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when download task with range request.
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object Storage related information.
// Object storage protocol information.
optional ObjectStorage object_storage = 17;
// HDFS protocol information.
optional HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// force_hard_link is the flag to indicate whether the download file must be hard linked to the output path.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
bool force_hard_link = 22;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 23;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 24 [(validate.rules).string = {ip: true, ignore_empty: true}];
reserved 21;
reserved "load_to_cache";
}
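The tightened `piece_length` rule above combines `gte: 4194304, lte: 67108864` with `ignore_empty: true`, so an unset (zero) value is still accepted and the piece length is derived from the file size instead. A minimal sketch of that rule:

```go
package main

import "fmt"

const (
	minPieceLength = 4 * 1024 * 1024  // 4 MiB = 4194304 bytes
	maxPieceLength = 64 * 1024 * 1024 // 64 MiB = 67108864 bytes
)

// checkPieceLength mirrors the {gte: 4194304, lte: 67108864, ignore_empty: true}
// rule: zero means "unset, derive from file size"; any other value must fall
// in [4 MiB, 64 MiB].
func checkPieceLength(v uint64) error {
	if v == 0 {
		return nil // ignore_empty: unset is allowed
	}
	if v < minPieceLength || v > maxPieceLength {
		return fmt.Errorf("piece_length %d outside [%d, %d]", v, minPieceLength, maxPieceLength)
	}
	return nil
}

func main() {
	fmt.Println(checkPieceLength(0) == nil)       // unset: ok
	fmt.Println(checkPieceLength(8388608) == nil) // 8 MiB: ok
	fmt.Println(checkPieceLength(1024) == nil)    // too small: rejected
}
```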
// Object Storage related information.
@ -428,11 +568,19 @@ message ObjectStorage {
// Access secret that used to access the object storage service.
optional string access_key_secret = 4 [(validate.rules).string.min_len = 1];
// Session token that used to access s3 storage service.
optional string session_token = 5 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
optional string session_token = 5 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Local path to credential file for Google Cloud Storage service OAuth2 authentication.
optional string credential_path = 6 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Predefined ACL that used for the Google Cloud Storage service.
optional string predefined_acl = 7 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Temporary STS security token for accessing OSS.
optional string security_token = 8 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// HDFS related information.
message HDFS {
// Delegation token for WebHDFS operations.
optional string delegation_token = 1 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// Range represents download range.
@ -454,7 +602,7 @@ message Piece {
// Piece length.
uint64 length = 4;
// Digest of the piece data, for example blake3:xxx or sha256:yyy.
string digest = 5 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]{8})$", ignore_empty: true}];
string digest = 5 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// Piece content.
optional bytes content = 6 [(validate.rules).bytes = {min_len: 1, ignore_empty: true}];
// Traffic type.


@ -25,6 +25,7 @@ import (
type MockDaemonClient struct {
ctrl *gomock.Controller
recorder *MockDaemonClientMockRecorder
isgomock struct{}
}
// MockDaemonClientMockRecorder is the mock recorder for MockDaemonClient.
@ -248,6 +249,7 @@ func (mr *MockDaemonClientMockRecorder) SyncPieceTasks(ctx any, opts ...any) *go
type MockDaemon_DownloadClient struct {
ctrl *gomock.Controller
recorder *MockDaemon_DownloadClientMockRecorder
isgomock struct{}
}
// MockDaemon_DownloadClientMockRecorder is the mock recorder for MockDaemon_DownloadClient.
@ -371,6 +373,7 @@ func (mr *MockDaemon_DownloadClientMockRecorder) Trailer() *gomock.Call {
type MockDaemon_SyncPieceTasksClient struct {
ctrl *gomock.Controller
recorder *MockDaemon_SyncPieceTasksClientMockRecorder
isgomock struct{}
}
// MockDaemon_SyncPieceTasksClientMockRecorder is the mock recorder for MockDaemon_SyncPieceTasksClient.
@ -508,6 +511,7 @@ func (mr *MockDaemon_SyncPieceTasksClientMockRecorder) Trailer() *gomock.Call {
type MockDaemon_PeerExchangeClient struct {
ctrl *gomock.Controller
recorder *MockDaemon_PeerExchangeClientMockRecorder
isgomock struct{}
}
// MockDaemon_PeerExchangeClientMockRecorder is the mock recorder for MockDaemon_PeerExchangeClient.
@ -645,6 +649,7 @@ func (mr *MockDaemon_PeerExchangeClientMockRecorder) Trailer() *gomock.Call {
type MockDaemonServer struct {
ctrl *gomock.Controller
recorder *MockDaemonServerMockRecorder
isgomock struct{}
}
// MockDaemonServerMockRecorder is the mock recorder for MockDaemonServer.
@ -815,6 +820,7 @@ func (mr *MockDaemonServerMockRecorder) SyncPieceTasks(arg0 any) *gomock.Call {
type MockUnsafeDaemonServer struct {
ctrl *gomock.Controller
recorder *MockUnsafeDaemonServerMockRecorder
isgomock struct{}
}
// MockUnsafeDaemonServerMockRecorder is the mock recorder for MockUnsafeDaemonServer.
@ -850,6 +856,7 @@ func (mr *MockUnsafeDaemonServerMockRecorder) mustEmbedUnimplementedDaemonServer
type MockDaemon_DownloadServer struct {
ctrl *gomock.Controller
recorder *MockDaemon_DownloadServerMockRecorder
isgomock struct{}
}
// MockDaemon_DownloadServerMockRecorder is the mock recorder for MockDaemon_DownloadServer.
@ -969,6 +976,7 @@ func (mr *MockDaemon_DownloadServerMockRecorder) SetTrailer(arg0 any) *gomock.Ca
type MockDaemon_SyncPieceTasksServer struct {
ctrl *gomock.Controller
recorder *MockDaemon_SyncPieceTasksServerMockRecorder
isgomock struct{}
}
// MockDaemon_SyncPieceTasksServerMockRecorder is the mock recorder for MockDaemon_SyncPieceTasksServer.
@ -1103,6 +1111,7 @@ func (mr *MockDaemon_SyncPieceTasksServerMockRecorder) SetTrailer(arg0 any) *gom
type MockDaemon_PeerExchangeServer struct {
ctrl *gomock.Controller
recorder *MockDaemon_PeerExchangeServerMockRecorder
isgomock struct{}
}
// MockDaemon_PeerExchangeServerMockRecorder is the mock recorder for MockDaemon_PeerExchangeServer.

File diff suppressed because it is too large

File diff suppressed because it is too large Load Diff

View File

@ -46,6 +46,9 @@ message DownloadTaskStartedResponse {
// Need to download pieces.
repeated common.v2.Piece pieces = 4;
// is_finished indicates whether the download task is finished.
bool is_finished = 5;
}
// DownloadPieceFinishedResponse represents piece download finished response of DownloadTaskResponse.
@ -105,24 +108,228 @@ message DownloadPieceRequest{
message DownloadPieceResponse {
// Piece information.
common.v2.Piece piece = 1 [(validate.rules).message.required = true];
}
// UploadTaskRequest represents request of UploadTask.
message UploadTaskRequest {
// Task metadata.
common.v2.Task task = 1 [(validate.rules).message.required = true];
// Piece metadata digest; it is used to verify the integrity of the piece metadata.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
}
// StatTaskRequest represents request of StatTask.
message StatTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
// local_only specifies whether to query task information from local client only.
// If true, the query will be restricted to the local client.
// By default (false), the query may be forwarded to the scheduler
// for a cluster-wide search.
bool local_only = 3;
}
// ListTaskEntriesRequest represents request of ListTaskEntries.
message ListTaskEntriesRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// URL whose entries are to be listed.
string url = 2;
// HTTP header to be sent with the request.
map<string, string> request_header = 3;
// List timeout.
optional google.protobuf.Duration timeout = 4;
// certificate_chain is the client certs with DER format for the backend client to list the entries.
repeated bytes certificate_chain = 5;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 6;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 7;
// Remote IP represents the IP address of the client initiating the list request.
optional string remote_ip = 8 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// ListTaskEntriesResponse represents response of ListTaskEntries.
message ListTaskEntriesResponse {
// Content length is the content length of the response.
uint64 content_length = 1;
// HTTP header to be sent with the request.
map<string, string> response_header = 2;
// Backend HTTP status code.
optional int32 status_code = 3 [(validate.rules).int32 = {gte: 100, lt: 599, ignore_empty: true}];
// Entries is the information of the entries in the directory.
repeated Entry entries = 4;
}
// Entry represents an entry in a directory.
message Entry {
// URL of the entry.
string url = 1;
// Size of the entry.
uint64 content_length = 2;
// Is directory or not.
bool is_dir = 3;
}
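ListTaskEntriesResponse returns one Entry per item in the listed directory. A self-contained sketch of how a caller might aggregate those entries, e.g. to pre-allocate space before downloading a directory (the struct mirrors the message fields; the helper name is illustrative):

```go
package main

import "fmt"

// Entry mirrors the proto message: URL, size, and directory flag.
type Entry struct {
	URL           string
	ContentLength uint64
	IsDir         bool
}

// totalFileBytes sums content_length over non-directory entries.
func totalFileBytes(entries []Entry) uint64 {
	var n uint64
	for _, e := range entries {
		if !e.IsDir {
			n += e.ContentLength
		}
	}
	return n
}

func main() {
	entries := []Entry{
		{URL: "s3://bucket/dir/", IsDir: true},
		{URL: "s3://bucket/dir/a.bin", ContentLength: 1024},
		{URL: "s3://bucket/dir/b.bin", ContentLength: 2048},
	}
	fmt.Println(totalFileBytes(entries)) // 3072
}
```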
// DeleteTaskRequest represents request of DeleteTask.
message DeleteTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// DownloadCacheTaskRequest represents request of DownloadCacheTask.
message DownloadCacheTaskRequest {
// Download url.
string url = 1 [(validate.rules).string.uri = true];
// Digest of the task content, for example blake3:xxx or sha256:yyy.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// Range is url range of request. If protocol is http, range
// will be set in the request header. If protocol is others, range
// will be set in the range field.
optional common.v2.Range range = 3;
// Task type.
common.v2.TaskType type = 4 [(validate.rules).enum.defined_only = true];
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Peer priority.
common.v2.Priority priority = 7 [(validate.rules).enum.defined_only = true];
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 8;
// Task request headers.
map<string, string> request_header = 9;
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB).
optional uint64 piece_length = 10 [(validate.rules).uint64 = {gte: 4194304, ignore_empty: true}];
// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 17;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 21;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 22 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
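The filtered_query_params comment above explains why the two example URLs collapse to one task id once Signature, Expires, and ns are dropped. A self-contained Go sketch of that normalization (illustrative only: dfdaemon's real task-id derivation also folds in piece_length, tag, application, and content_for_calculating_task_id, and its exact hashing is not shown here):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// taskID normalizes the URL by dropping the filtered query params,
// re-encoding the rest with sorted keys, and hashing the result.
func taskID(rawURL string, filtered []string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		panic(err)
	}
	drop := map[string]bool{}
	for _, k := range filtered {
		drop[k] = true
	}
	q := u.Query()
	for k := range q {
		if drop[k] {
			q.Del(k)
		}
	}
	keys := make([]string, 0, len(q))
	for k := range q {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var parts []string
	for _, k := range keys {
		for _, v := range q[k] {
			parts = append(parts, k+"="+v)
		}
	}
	u.RawQuery = strings.Join(parts, "&")
	return fmt.Sprintf("%x", sha256.Sum256([]byte(u.String())))
}

func main() {
	filtered := []string{"Signature", "Expires", "ns"}
	a := taskID("http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io", filtered)
	b := taskID("http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io", filtered)
	fmt.Println(a == b) // true: signed-URL params do not change the task id
}
```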
// DownloadCacheTaskStartedResponse represents cache task download started response of DownloadCacheTaskResponse.
message DownloadCacheTaskStartedResponse {
// Task content length.
uint64 content_length = 1;
// Range is url range of request. If protocol is http, range
// is parsed from http header. If other protocol, range comes
// from download range field.
optional common.v2.Range range = 2;
// Task response headers.
map<string, string> response_header = 3;
// Need to download pieces.
repeated common.v2.Piece pieces = 4;
// is_finished indicates whether the download task is finished.
bool is_finished = 5;
}
// DownloadCacheTaskResponse represents response of DownloadCacheTask.
message DownloadCacheTaskResponse {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Peer id.
string peer_id = 3 [(validate.rules).string.min_len = 1];
oneof response {
option (validate.required) = true;
DownloadCacheTaskStartedResponse download_cache_task_started_response = 4;
DownloadPieceFinishedResponse download_piece_finished_response = 5;
}
}
// SyncCachePiecesRequest represents request of SyncCachePieces.
message SyncCachePiecesRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Interested piece numbers.
repeated uint32 interested_cache_piece_numbers = 3 [(validate.rules).repeated = {min_items: 1}];
}
// SyncCachePiecesResponse represents response of SyncCachePieces.
message SyncCachePiecesResponse {
// Exist piece number.
uint32 number = 1;
// Piece offset.
uint64 offset = 2;
// Piece length.
uint64 length = 3;
}
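SyncCachePieces is a server-streaming RPC: the client sends one request and receives SyncCachePiecesResponse messages until the stream ends. A dependency-free sketch of the standard client-side Recv loop, with a stub in place of the real generated grpc.ClientStream (all names beyond the response shape are illustrative):

```go
package main

import (
	"fmt"
	"io"
)

// SyncCachePiecesResponse mirrors the proto message shape.
type SyncCachePiecesResponse struct {
	Number uint32 // exist piece number
	Offset uint64 // piece offset
	Length uint64 // piece length
}

// pieceStream is the minimal Recv surface of the generated stream client.
type pieceStream interface {
	Recv() (*SyncCachePiecesResponse, error)
}

// stubStream replays a fixed set of responses, then io.EOF,
// exactly as a finished gRPC server stream would.
type stubStream struct{ pieces []SyncCachePiecesResponse }

func (s *stubStream) Recv() (*SyncCachePiecesResponse, error) {
	if len(s.pieces) == 0 {
		return nil, io.EOF
	}
	p := s.pieces[0]
	s.pieces = s.pieces[1:]
	return &p, nil
}

// collect drains the stream until EOF, the standard client-side loop.
func collect(st pieceStream) ([]SyncCachePiecesResponse, error) {
	var out []SyncCachePiecesResponse
	for {
		p, err := st.Recv()
		if err == io.EOF {
			return out, nil
		}
		if err != nil {
			return nil, err
		}
		out = append(out, *p)
	}
}

func main() {
	st := &stubStream{pieces: []SyncCachePiecesResponse{
		{Number: 0, Offset: 0, Length: 4194304},
		{Number: 1, Offset: 4194304, Length: 4194304},
	}}
	got, err := collect(st)
	fmt.Println(len(got), err == nil) // 2 true
}
```

The same loop applies to the other streaming RPCs here (SyncPersistentCachePieces, SyncHost, DownloadCacheTask).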
// DownloadCachePieceRequest represents request of DownloadCachePiece.
message DownloadCachePieceRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Piece number.
uint32 piece_number = 3;
}
// DownloadCachePieceResponse represents response of DownloadCachePiece.
message DownloadCachePieceResponse {
// Piece information.
common.v2.Piece piece = 1 [(validate.rules).message.required = true];
// Piece metadata digest; it is used to verify the integrity of the piece metadata.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
}
// StatCacheTaskRequest represents request of StatCacheTask.
message StatCacheTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
// local_only specifies whether to query task information from local client only.
// If true, the query will be restricted to the local client.
// By default (false), the query may be forwarded to the scheduler
// for a cluster-wide search.
bool local_only = 3;
}
// DeleteCacheTaskRequest represents request of DeleteCacheTask.
message DeleteCacheTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// DownloadPersistentCacheTaskRequest represents request of DownloadPersistentCacheTask.
@ -137,10 +344,27 @@ message DownloadPersistentCacheTaskRequest {
optional string tag = 3;
// Application of task.
optional string application = 4;
// File path to be exported.
string output_path = 5 [(validate.rules).string.min_len = 1];
// File path to be exported. If output_path is set, the exported file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the export. If hard link creation fails,
// it will copy the file to the output path after the export is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 5 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Download timeout.
optional google.protobuf.Duration timeout = 6;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 7;
// force_hard_link is the flag to indicate whether the exported file must be hard linked to the output path.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
bool force_hard_link = 8;
// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 9;
// Remote IP represents the IP address of the client initiating the download request.
optional string remote_ip = 10 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// DownloadPersistentCacheTaskStartedResponse represents task download started response of DownloadPersistentCacheTaskResponse.
@ -168,30 +392,123 @@ message DownloadPersistentCacheTaskResponse {
// UploadPersistentCacheTaskRequest represents request of UploadPersistentCacheTask.
message UploadPersistentCacheTaskRequest {
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on the file content, tag and application by the crc32 algorithm.
optional string content_for_calculating_task_id = 1;
// Upload file path of persistent cache task.
string path = 1 [(validate.rules).string = {min_len: 1}];
string path = 2 [(validate.rules).string = {min_len: 1}];
// Replica count of the persistent cache task.
uint64 persistent_replica_count = 2 [(validate.rules).uint64.gte = 1];
uint64 persistent_replica_count = 3 [(validate.rules).uint64 = {gte: 1, lte: 5}];
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 3;
// Application of task.
optional string application = 4;
optional string tag = 4;
// Application of the persistent cache task.
optional string application = 5;
// Piece length of the persistent cache task, the value needs to be greater than or equal to 4194304(4MiB).
optional uint64 piece_length = 6 [(validate.rules).uint64 = {gte: 4194304, ignore_empty: true}];
// TTL of the persistent cache task.
google.protobuf.Duration ttl = 5 [(validate.rules).duration = {gte:{seconds: 60}, lte:{seconds: 604800}}];
// Upload timeout.
optional google.protobuf.Duration timeout = 6;
google.protobuf.Duration ttl = 7 [(validate.rules).duration = {gte:{seconds: 60}, lte:{seconds: 604800}}];
// Download timeout.
optional google.protobuf.Duration timeout = 8;
// Remote IP represents the IP address of the client initiating the upload request.
optional string remote_ip = 9 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
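The comment on content_for_calculating_task_id above says that when it is unset, the id falls back to the file content, tag, and application via crc32. A minimal sketch of that fallback, assuming a simple concatenated input (the exact byte layout dfdaemon feeds into crc32 is an assumption here):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// persistentCacheTaskID derives an id from file content, tag, and
// application with crc32, as the proto comment describes. Illustrative
// only: the real input layout is not specified in this file.
func persistentCacheTaskID(content []byte, tag, application string) string {
	h := crc32.NewIEEE()
	h.Write(content)
	h.Write([]byte(tag))
	h.Write([]byte(application))
	return fmt.Sprintf("%08x", h.Sum32())
}

func main() {
	a := persistentCacheTaskID([]byte("layer-bytes"), "v1", "registry")
	b := persistentCacheTaskID([]byte("layer-bytes"), "v1", "registry")
	c := persistentCacheTaskID([]byte("layer-bytes"), "v2", "registry")
	fmt.Println(a == b, a == c) // true false
}
```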
// UpdatePersistentCacheTaskRequest represents request of UpdatePersistentCacheTask.
message UpdatePersistentCacheTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Persistent represents whether the persistent cache peer is persistent.
// If the persistent cache peer is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection. It will only be deleted
// when the task is deleted by the user.
bool persistent = 2;
// Remote IP represents the IP address of the client initiating the update request.
optional string remote_ip = 3 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// StatPersistentCacheTaskRequest represents request of StatPersistentCacheTask.
message StatPersistentCacheTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// DeletePersistentCacheTaskRequest represents request of DeletePersistentCacheTask.
message DeletePersistentCacheTaskRequest {
// Task id.
string task_id = 1 [(validate.rules).string.min_len = 1];
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
// SyncPersistentCachePiecesRequest represents request of SyncPersistentCachePieces.
message SyncPersistentCachePiecesRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Interested piece numbers.
repeated uint32 interested_piece_numbers = 3 [(validate.rules).repeated = {min_items: 1}];
}
// SyncPersistentCachePiecesResponse represents response of SyncPersistentCachePieces.
message SyncPersistentCachePiecesResponse {
// Exist piece number.
uint32 number = 1;
// Piece offset.
uint64 offset = 2;
// Piece length.
uint64 length = 3;
}
// DownloadPersistentCachePieceRequest represents request of DownloadPersistentCachePiece.
message DownloadPersistentCachePieceRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Piece number.
uint32 piece_number = 3;
}
// DownloadPersistentCachePieceResponse represents response of DownloadPersistentCachePiece.
message DownloadPersistentCachePieceResponse {
// Piece information.
common.v2.Piece piece = 1 [(validate.rules).message.required = true];
// Piece metadata digest; it is used to verify the integrity of the piece metadata.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
}
// SyncHostRequest represents request of SyncHost.
message SyncHostRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Peer id.
string peer_id = 2 [(validate.rules).string.min_len = 1];
}
// IBVerbsQueuePairEndpoint represents queue pair endpoint of IBVerbs.
message IBVerbsQueuePairEndpoint {
// Number of the queue pair.
uint32 num = 1;
// Local identifier of the context.
uint32 lid = 2;
// Global identifier of the context.
bytes gid = 3 [(validate.rules).bytes.len = 16];
}
// ExchangeIBVerbsQueuePairEndpointRequest represents request of ExchangeIBVerbsQueuePairEndpoint.
message ExchangeIBVerbsQueuePairEndpointRequest {
// Information of the source's queue pair endpoint of IBVerbs.
IBVerbsQueuePairEndpoint endpoint = 1 [(validate.rules).message.required = true];
}
// ExchangeIBVerbsQueuePairEndpointResponse represents response of ExchangeIBVerbsQueuePairEndpoint.
message ExchangeIBVerbsQueuePairEndpointResponse {
// Information of the destination's queue pair endpoint of IBVerbs.
IBVerbsQueuePairEndpoint endpoint = 1 [(validate.rules).message.required = true];
}
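The gid field above is constrained to exactly 16 bytes: an InfiniBand GID is 128 bits wide, the same width as an IPv6 address, so Go's net.IP can render one for logging. A small sketch of the length rule (helper name is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// validGID enforces the bytes.len = 16 rule on IBVerbsQueuePairEndpoint.gid.
func validGID(gid []byte) bool {
	return len(gid) == 16
}

func main() {
	gid := make([]byte, 16)
	gid[0] = 0xfe
	gid[1] = 0x80 // link-local GID prefix fe80::/10
	// A 128-bit GID prints naturally in IPv6 notation.
	fmt.Println(validGID(gid), net.IP(gid).String())
}
```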
// DfdaemonUpload represents dfdaemon upload service.
@ -211,14 +528,44 @@ service DfdaemonUpload {
// DownloadPiece downloads piece from the remote peer.
rpc DownloadPiece(DownloadPieceRequest) returns(DownloadPieceResponse);
// DownloadCacheTask downloads cache task from p2p network.
rpc DownloadCacheTask(DownloadCacheTaskRequest) returns(stream DownloadCacheTaskResponse);
// StatCacheTask stats cache task information.
rpc StatCacheTask(StatCacheTaskRequest) returns(common.v2.CacheTask);
// DeleteCacheTask deletes cache task from p2p network.
rpc DeleteCacheTask(DeleteCacheTaskRequest) returns(google.protobuf.Empty);
// SyncCachePieces syncs cache piece metadata from the remote peer.
rpc SyncCachePieces(SyncCachePiecesRequest) returns(stream SyncCachePiecesResponse);
// DownloadCachePiece downloads cache piece from the remote peer.
rpc DownloadCachePiece(DownloadCachePieceRequest) returns(DownloadCachePieceResponse);
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
rpc DownloadPersistentCacheTask(DownloadPersistentCacheTaskRequest) returns(stream DownloadPersistentCacheTaskResponse);
// UpdatePersistentCacheTask updates metadata of the persistent cache task in p2p network.
rpc UpdatePersistentCacheTask(UpdatePersistentCacheTaskRequest) returns(google.protobuf.Empty);
// StatPersistentCacheTask stats persistent cache task information.
rpc StatPersistentCacheTask(StatPersistentCacheTaskRequest) returns(common.v2.PersistentCacheTask);
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest) returns(google.protobuf.Empty);
// SyncPersistentCachePieces syncs persistent cache pieces from remote peer.
rpc SyncPersistentCachePieces(SyncPersistentCachePiecesRequest) returns(stream SyncPersistentCachePiecesResponse);
// DownloadPersistentCachePiece downloads persistent cache piece from the remote peer.
rpc DownloadPersistentCachePiece(DownloadPersistentCachePieceRequest) returns(DownloadPersistentCachePieceResponse);
// SyncHost syncs host info from parents.
rpc SyncHost(SyncHostRequest) returns (stream common.v2.Host);
// ExchangeIBVerbsQueuePairEndpoint exchanges queue pair endpoint of IBVerbs with remote peer.
rpc ExchangeIBVerbsQueuePairEndpoint(ExchangeIBVerbsQueuePairEndpointRequest) returns(ExchangeIBVerbsQueuePairEndpointResponse);
}
// DfdaemonDownload represents dfdaemon download service.
@ -226,18 +573,27 @@ service DfdaemonDownload {
// DownloadTask downloads task from p2p network.
rpc DownloadTask(DownloadTaskRequest) returns(stream DownloadTaskResponse);
// UploadTask uploads task to p2p network.
rpc UploadTask(UploadTaskRequest) returns(google.protobuf.Empty);
// StatTask stats task information.
rpc StatTask(StatTaskRequest) returns(common.v2.Task);
// ListTaskEntries lists task entries for downloading directory.
rpc ListTaskEntries(ListTaskEntriesRequest) returns(ListTaskEntriesResponse);
// DeleteTask deletes task from p2p network.
rpc DeleteTask(DeleteTaskRequest) returns(google.protobuf.Empty);
// DeleteHost releases host in scheduler.
rpc DeleteHost(google.protobuf.Empty) returns(google.protobuf.Empty);
// DownloadCacheTask downloads cache task from p2p network.
rpc DownloadCacheTask(DownloadCacheTaskRequest) returns(stream DownloadCacheTaskResponse);
// StatCacheTask stats cache task information.
rpc StatCacheTask(StatCacheTaskRequest) returns(common.v2.CacheTask);
// DeleteCacheTask deletes cache task from p2p network.
rpc DeleteCacheTask(DeleteCacheTaskRequest) returns(google.protobuf.Empty);
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
rpc DownloadPersistentCacheTask(DownloadPersistentCacheTaskRequest) returns(stream DownloadPersistentCacheTaskResponse);
@ -246,7 +602,4 @@ service DfdaemonDownload {
// StatPersistentCacheTask stats persistent cache task information.
rpc StatPersistentCacheTask(StatPersistentCacheTaskRequest) returns(common.v2.PersistentCacheTask);
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest) returns(google.protobuf.Empty);
}


@ -34,12 +34,32 @@ type DfdaemonUploadClient interface {
SyncPieces(ctx context.Context, in *SyncPiecesRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncPiecesClient, error)
// DownloadPiece downloads piece from the remote peer.
DownloadPiece(ctx context.Context, in *DownloadPieceRequest, opts ...grpc.CallOption) (*DownloadPieceResponse, error)
// DownloadCacheTask downloads cache task from p2p network.
DownloadCacheTask(ctx context.Context, in *DownloadCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonUpload_DownloadCacheTaskClient, error)
// StatCacheTask stats cache task information.
StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error)
// DeleteCacheTask deletes cache task from p2p network.
DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// SyncCachePieces syncs cache piece metadata from the remote peer.
SyncCachePieces(ctx context.Context, in *SyncCachePiecesRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncCachePiecesClient, error)
// DownloadCachePiece downloads cache piece from the remote peer.
DownloadCachePiece(ctx context.Context, in *DownloadCachePieceRequest, opts ...grpc.CallOption) (*DownloadCachePieceResponse, error)
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
DownloadPersistentCacheTask(ctx context.Context, in *DownloadPersistentCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonUpload_DownloadPersistentCacheTaskClient, error)
// UpdatePersistentCacheTask updates metadata of the persistent cache task in p2p network.
UpdatePersistentCacheTask(ctx context.Context, in *UpdatePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// StatPersistentCacheTask stats persistent cache task information.
StatPersistentCacheTask(ctx context.Context, in *StatPersistentCacheTaskRequest, opts ...grpc.CallOption) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
DeletePersistentCacheTask(ctx context.Context, in *DeletePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// SyncPersistentCachePieces syncs persistent cache pieces from remote peer.
SyncPersistentCachePieces(ctx context.Context, in *SyncPersistentCachePiecesRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncPersistentCachePiecesClient, error)
// DownloadPersistentCachePiece downloads persistent cache piece from the remote peer.
DownloadPersistentCachePiece(ctx context.Context, in *DownloadPersistentCachePieceRequest, opts ...grpc.CallOption) (*DownloadPersistentCachePieceResponse, error)
// SyncHost syncs host info from parents.
SyncHost(ctx context.Context, in *SyncHostRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncHostClient, error)
// ExchangeIBVerbsQueuePairEndpoint exchanges queue pair endpoint of IBVerbs with remote peer.
ExchangeIBVerbsQueuePairEndpoint(ctx context.Context, in *ExchangeIBVerbsQueuePairEndpointRequest, opts ...grpc.CallOption) (*ExchangeIBVerbsQueuePairEndpointResponse, error)
}
type dfdaemonUploadClient struct {
@ -141,8 +161,99 @@ func (c *dfdaemonUploadClient) DownloadPiece(ctx context.Context, in *DownloadPi
return out, nil
}
func (c *dfdaemonUploadClient) DownloadCacheTask(ctx context.Context, in *DownloadCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonUpload_DownloadCacheTaskClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[2], "/dfdaemon.v2.DfdaemonUpload/DownloadCacheTask", opts...)
if err != nil {
return nil, err
}
x := &dfdaemonUploadDownloadCacheTaskClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DfdaemonUpload_DownloadCacheTaskClient interface {
Recv() (*DownloadCacheTaskResponse, error)
grpc.ClientStream
}
type dfdaemonUploadDownloadCacheTaskClient struct {
grpc.ClientStream
}
func (x *dfdaemonUploadDownloadCacheTaskClient) Recv() (*DownloadCacheTaskResponse, error) {
m := new(DownloadCacheTaskResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *dfdaemonUploadClient) StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error) {
out := new(v2.CacheTask)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/StatCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonUploadClient) DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/DeleteCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonUploadClient) SyncCachePieces(ctx context.Context, in *SyncCachePiecesRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncCachePiecesClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[3], "/dfdaemon.v2.DfdaemonUpload/SyncCachePieces", opts...)
if err != nil {
return nil, err
}
x := &dfdaemonUploadSyncCachePiecesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DfdaemonUpload_SyncCachePiecesClient interface {
Recv() (*SyncCachePiecesResponse, error)
grpc.ClientStream
}
type dfdaemonUploadSyncCachePiecesClient struct {
grpc.ClientStream
}
func (x *dfdaemonUploadSyncCachePiecesClient) Recv() (*SyncCachePiecesResponse, error) {
m := new(SyncCachePiecesResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *dfdaemonUploadClient) DownloadCachePiece(ctx context.Context, in *DownloadCachePieceRequest, opts ...grpc.CallOption) (*DownloadCachePieceResponse, error) {
out := new(DownloadCachePieceResponse)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/DownloadCachePiece", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonUploadClient) DownloadPersistentCacheTask(ctx context.Context, in *DownloadPersistentCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonUpload_DownloadPersistentCacheTaskClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[2], "/dfdaemon.v2.DfdaemonUpload/DownloadPersistentCacheTask", opts...)
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[4], "/dfdaemon.v2.DfdaemonUpload/DownloadPersistentCacheTask", opts...)
if err != nil {
return nil, err
}
@ -173,6 +284,15 @@ func (x *dfdaemonUploadDownloadPersistentCacheTaskClient) Recv() (*DownloadPersi
return m, nil
}
func (c *dfdaemonUploadClient) UpdatePersistentCacheTask(ctx context.Context, in *UpdatePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/UpdatePersistentCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonUploadClient) StatPersistentCacheTask(ctx context.Context, in *StatPersistentCacheTaskRequest, opts ...grpc.CallOption) (*v2.PersistentCacheTask, error) {
out := new(v2.PersistentCacheTask)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/StatPersistentCacheTask", in, out, opts...)
@ -191,6 +311,88 @@ func (c *dfdaemonUploadClient) DeletePersistentCacheTask(ctx context.Context, in
return out, nil
}
func (c *dfdaemonUploadClient) SyncPersistentCachePieces(ctx context.Context, in *SyncPersistentCachePiecesRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncPersistentCachePiecesClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[5], "/dfdaemon.v2.DfdaemonUpload/SyncPersistentCachePieces", opts...)
if err != nil {
return nil, err
}
x := &dfdaemonUploadSyncPersistentCachePiecesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DfdaemonUpload_SyncPersistentCachePiecesClient interface {
Recv() (*SyncPersistentCachePiecesResponse, error)
grpc.ClientStream
}
type dfdaemonUploadSyncPersistentCachePiecesClient struct {
grpc.ClientStream
}
func (x *dfdaemonUploadSyncPersistentCachePiecesClient) Recv() (*SyncPersistentCachePiecesResponse, error) {
m := new(SyncPersistentCachePiecesResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *dfdaemonUploadClient) DownloadPersistentCachePiece(ctx context.Context, in *DownloadPersistentCachePieceRequest, opts ...grpc.CallOption) (*DownloadPersistentCachePieceResponse, error) {
out := new(DownloadPersistentCachePieceResponse)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/DownloadPersistentCachePiece", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonUploadClient) SyncHost(ctx context.Context, in *SyncHostRequest, opts ...grpc.CallOption) (DfdaemonUpload_SyncHostClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonUpload_ServiceDesc.Streams[6], "/dfdaemon.v2.DfdaemonUpload/SyncHost", opts...)
if err != nil {
return nil, err
}
x := &dfdaemonUploadSyncHostClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DfdaemonUpload_SyncHostClient interface {
Recv() (*v2.Host, error)
grpc.ClientStream
}
type dfdaemonUploadSyncHostClient struct {
grpc.ClientStream
}
func (x *dfdaemonUploadSyncHostClient) Recv() (*v2.Host, error) {
m := new(v2.Host)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
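The Recv wrappers above follow the standard grpc-go server-streaming pattern: the caller loops on Recv until it returns io.EOF. A minimal, stdlib-only sketch of that consumption loop, with a stubbed stream standing in for the generated DfdaemonUpload_SyncHostClient (all names here are hypothetical, not part of the generated API):

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// hostStream abstracts the Recv method of a server-streaming client
// such as the generated DfdaemonUpload_SyncHostClient.
type hostStream interface {
	Recv() (string, error)
}

// stubStream yields a fixed set of host announcements, then io.EOF,
// mimicking how a real stream signals clean termination.
type stubStream struct{ hosts []string }

func (s *stubStream) Recv() (string, error) {
	if len(s.hosts) == 0 {
		return "", io.EOF
	}
	h := s.hosts[0]
	s.hosts = s.hosts[1:]
	return h, nil
}

// drain consumes a server stream until io.EOF, collecting each message.
// io.EOF means the server closed the stream normally; any other error
// is propagated to the caller.
func drain(st hostStream) ([]string, error) {
	var out []string
	for {
		h, err := st.Recv()
		if errors.Is(err, io.EOF) {
			return out, nil
		}
		if err != nil {
			return nil, err
		}
		out = append(out, h)
	}
}

func main() {
	got, err := drain(&stubStream{hosts: []string{"host-a", "host-b"}})
	fmt.Println(got, err)
}
```

Against the real client, the loop body is identical; only the message type changes from string to *v2.Host.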
func (c *dfdaemonUploadClient) ExchangeIBVerbsQueuePairEndpoint(ctx context.Context, in *ExchangeIBVerbsQueuePairEndpointRequest, opts ...grpc.CallOption) (*ExchangeIBVerbsQueuePairEndpointResponse, error) {
out := new(ExchangeIBVerbsQueuePairEndpointResponse)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonUpload/ExchangeIBVerbsQueuePairEndpoint", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// DfdaemonUploadServer is the server API for DfdaemonUpload service.
// All implementations should embed UnimplementedDfdaemonUploadServer
// for forward compatibility
@@ -205,12 +407,32 @@ type DfdaemonUploadServer interface {
SyncPieces(*SyncPiecesRequest, DfdaemonUpload_SyncPiecesServer) error
// DownloadPiece downloads piece from the remote peer.
DownloadPiece(context.Context, *DownloadPieceRequest) (*DownloadPieceResponse, error)
// DownloadCacheTask downloads cache task from p2p network.
DownloadCacheTask(*DownloadCacheTaskRequest, DfdaemonUpload_DownloadCacheTaskServer) error
// StatCacheTask stats cache task information.
StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error)
// DeleteCacheTask deletes cache task from p2p network.
DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error)
// SyncCachePieces syncs cache piece metadata from the remote peer.
SyncCachePieces(*SyncCachePiecesRequest, DfdaemonUpload_SyncCachePiecesServer) error
// DownloadCachePiece downloads cache piece from the remote peer.
DownloadCachePiece(context.Context, *DownloadCachePieceRequest) (*DownloadCachePieceResponse, error)
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
DownloadPersistentCacheTask(*DownloadPersistentCacheTaskRequest, DfdaemonUpload_DownloadPersistentCacheTaskServer) error
// UpdatePersistentCacheTask updates metadata of the persistent cache task in p2p network.
UpdatePersistentCacheTask(context.Context, *UpdatePersistentCacheTaskRequest) (*emptypb.Empty, error)
// StatPersistentCacheTask stats persistent cache task information.
StatPersistentCacheTask(context.Context, *StatPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error)
// SyncPersistentCachePieces syncs persistent cache pieces from the remote peer.
SyncPersistentCachePieces(*SyncPersistentCachePiecesRequest, DfdaemonUpload_SyncPersistentCachePiecesServer) error
// DownloadPersistentCachePiece downloads persistent cache piece from the remote peer.
DownloadPersistentCachePiece(context.Context, *DownloadPersistentCachePieceRequest) (*DownloadPersistentCachePieceResponse, error)
// SyncHost syncs host info from parents.
SyncHost(*SyncHostRequest, DfdaemonUpload_SyncHostServer) error
// ExchangeIBVerbsQueuePairEndpoint exchanges queue pair endpoint of IBVerbs with remote peer.
ExchangeIBVerbsQueuePairEndpoint(context.Context, *ExchangeIBVerbsQueuePairEndpointRequest) (*ExchangeIBVerbsQueuePairEndpointResponse, error)
}
// UnimplementedDfdaemonUploadServer should be embedded to have forward compatible implementations.
@@ -232,15 +454,45 @@ func (UnimplementedDfdaemonUploadServer) SyncPieces(*SyncPiecesRequest, Dfdaemon
func (UnimplementedDfdaemonUploadServer) DownloadPiece(context.Context, *DownloadPieceRequest) (*DownloadPieceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DownloadPiece not implemented")
}
func (UnimplementedDfdaemonUploadServer) DownloadCacheTask(*DownloadCacheTaskRequest, DfdaemonUpload_DownloadCacheTaskServer) error {
return status.Errorf(codes.Unimplemented, "method DownloadCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) SyncCachePieces(*SyncCachePiecesRequest, DfdaemonUpload_SyncCachePiecesServer) error {
return status.Errorf(codes.Unimplemented, "method SyncCachePieces not implemented")
}
func (UnimplementedDfdaemonUploadServer) DownloadCachePiece(context.Context, *DownloadCachePieceRequest) (*DownloadCachePieceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DownloadCachePiece not implemented")
}
func (UnimplementedDfdaemonUploadServer) DownloadPersistentCacheTask(*DownloadPersistentCacheTaskRequest, DfdaemonUpload_DownloadPersistentCacheTaskServer) error {
return status.Errorf(codes.Unimplemented, "method DownloadPersistentCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) UpdatePersistentCacheTask(context.Context, *UpdatePersistentCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method UpdatePersistentCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) StatPersistentCacheTask(context.Context, *StatPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatPersistentCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeletePersistentCacheTask not implemented")
}
func (UnimplementedDfdaemonUploadServer) SyncPersistentCachePieces(*SyncPersistentCachePiecesRequest, DfdaemonUpload_SyncPersistentCachePiecesServer) error {
return status.Errorf(codes.Unimplemented, "method SyncPersistentCachePieces not implemented")
}
func (UnimplementedDfdaemonUploadServer) DownloadPersistentCachePiece(context.Context, *DownloadPersistentCachePieceRequest) (*DownloadPersistentCachePieceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DownloadPersistentCachePiece not implemented")
}
func (UnimplementedDfdaemonUploadServer) SyncHost(*SyncHostRequest, DfdaemonUpload_SyncHostServer) error {
return status.Errorf(codes.Unimplemented, "method SyncHost not implemented")
}
func (UnimplementedDfdaemonUploadServer) ExchangeIBVerbsQueuePairEndpoint(context.Context, *ExchangeIBVerbsQueuePairEndpointRequest) (*ExchangeIBVerbsQueuePairEndpointResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ExchangeIBVerbsQueuePairEndpoint not implemented")
}
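The Unimplemented stubs above exist so that concrete servers can embed the struct and override only the methods they support: new RPCs added to the interface then fall through to the stub instead of breaking compilation. A stdlib-only sketch of that embedding pattern (the two-method interface and all names here are illustrative, not the generated API):

```go
package main

import (
	"errors"
	"fmt"
)

var errUnimplemented = errors.New("unimplemented")

// unimplementedServer plays the role of UnimplementedDfdaemonUploadServer:
// it satisfies the full interface by stubbing every method with an error.
type unimplementedServer struct{}

func (unimplementedServer) StatCacheTask(id string) (string, error) {
	return "", errUnimplemented
}

func (unimplementedServer) DeleteCacheTask(id string) error {
	return errUnimplemented
}

// server embeds the stubs and overrides only the method it supports,
// so the interface can grow without breaking this implementation.
type server struct{ unimplementedServer }

func (server) StatCacheTask(id string) (string, error) {
	return "task:" + id, nil
}

func main() {
	var s interface {
		StatCacheTask(string) (string, error)
		DeleteCacheTask(string) error
	} = server{}
	got, _ := s.StatCacheTask("42")
	fmt.Println(got)                  // overridden method runs
	fmt.Println(s.DeleteCacheTask("42")) // falls through to the stub
}
```

This is why the generated comments say implementations "should embed" the Unimplemented struct for forward compatibility.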
// UnsafeDfdaemonUploadServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to DfdaemonUploadServer will
@@ -349,6 +601,102 @@ func _DfdaemonUpload_DownloadPiece_Handler(srv interface{}, ctx context.Context,
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_DownloadCacheTask_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(DownloadCacheTaskRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DfdaemonUploadServer).DownloadCacheTask(m, &dfdaemonUploadDownloadCacheTaskServer{stream})
}
type DfdaemonUpload_DownloadCacheTaskServer interface {
Send(*DownloadCacheTaskResponse) error
grpc.ServerStream
}
type dfdaemonUploadDownloadCacheTaskServer struct {
grpc.ServerStream
}
func (x *dfdaemonUploadDownloadCacheTaskServer) Send(m *DownloadCacheTaskResponse) error {
return x.ServerStream.SendMsg(m)
}
func _DfdaemonUpload_StatCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).StatCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/StatCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).StatCacheTask(ctx, req.(*StatCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_DeleteCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).DeleteCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/DeleteCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).DeleteCacheTask(ctx, req.(*DeleteCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_SyncCachePieces_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(SyncCachePiecesRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DfdaemonUploadServer).SyncCachePieces(m, &dfdaemonUploadSyncCachePiecesServer{stream})
}
type DfdaemonUpload_SyncCachePiecesServer interface {
Send(*SyncCachePiecesResponse) error
grpc.ServerStream
}
type dfdaemonUploadSyncCachePiecesServer struct {
grpc.ServerStream
}
func (x *dfdaemonUploadSyncCachePiecesServer) Send(m *SyncCachePiecesResponse) error {
return x.ServerStream.SendMsg(m)
}
func _DfdaemonUpload_DownloadCachePiece_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DownloadCachePieceRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).DownloadCachePiece(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/DownloadCachePiece",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).DownloadCachePiece(ctx, req.(*DownloadCachePieceRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_DownloadPersistentCacheTask_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(DownloadPersistentCacheTaskRequest)
if err := stream.RecvMsg(m); err != nil {
@@ -370,6 +718,24 @@ func (x *dfdaemonUploadDownloadPersistentCacheTaskServer) Send(m *DownloadPersis
return x.ServerStream.SendMsg(m)
}
func _DfdaemonUpload_UpdatePersistentCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(UpdatePersistentCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).UpdatePersistentCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/UpdatePersistentCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).UpdatePersistentCacheTask(ctx, req.(*UpdatePersistentCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_StatPersistentCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatPersistentCacheTaskRequest)
if err := dec(in); err != nil {
@@ -406,6 +772,84 @@ func _DfdaemonUpload_DeletePersistentCacheTask_Handler(srv interface{}, ctx cont
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_SyncPersistentCachePieces_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(SyncPersistentCachePiecesRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DfdaemonUploadServer).SyncPersistentCachePieces(m, &dfdaemonUploadSyncPersistentCachePiecesServer{stream})
}
type DfdaemonUpload_SyncPersistentCachePiecesServer interface {
Send(*SyncPersistentCachePiecesResponse) error
grpc.ServerStream
}
type dfdaemonUploadSyncPersistentCachePiecesServer struct {
grpc.ServerStream
}
func (x *dfdaemonUploadSyncPersistentCachePiecesServer) Send(m *SyncPersistentCachePiecesResponse) error {
return x.ServerStream.SendMsg(m)
}
func _DfdaemonUpload_DownloadPersistentCachePiece_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DownloadPersistentCachePieceRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).DownloadPersistentCachePiece(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/DownloadPersistentCachePiece",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).DownloadPersistentCachePiece(ctx, req.(*DownloadPersistentCachePieceRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonUpload_SyncHost_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(SyncHostRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DfdaemonUploadServer).SyncHost(m, &dfdaemonUploadSyncHostServer{stream})
}
type DfdaemonUpload_SyncHostServer interface {
Send(*v2.Host) error
grpc.ServerStream
}
type dfdaemonUploadSyncHostServer struct {
grpc.ServerStream
}
func (x *dfdaemonUploadSyncHostServer) Send(m *v2.Host) error {
return x.ServerStream.SendMsg(m)
}
func _DfdaemonUpload_ExchangeIBVerbsQueuePairEndpoint_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ExchangeIBVerbsQueuePairEndpointRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonUploadServer).ExchangeIBVerbsQueuePairEndpoint(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonUpload/ExchangeIBVerbsQueuePairEndpoint",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonUploadServer).ExchangeIBVerbsQueuePairEndpoint(ctx, req.(*ExchangeIBVerbsQueuePairEndpointRequest))
}
return interceptor(ctx, in, info, handler)
}
// DfdaemonUpload_ServiceDesc is the grpc.ServiceDesc for DfdaemonUpload service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -425,6 +869,22 @@ var DfdaemonUpload_ServiceDesc = grpc.ServiceDesc{
MethodName: "DownloadPiece",
Handler: _DfdaemonUpload_DownloadPiece_Handler,
},
{
MethodName: "StatCacheTask",
Handler: _DfdaemonUpload_StatCacheTask_Handler,
},
{
MethodName: "DeleteCacheTask",
Handler: _DfdaemonUpload_DeleteCacheTask_Handler,
},
{
MethodName: "DownloadCachePiece",
Handler: _DfdaemonUpload_DownloadCachePiece_Handler,
},
{
MethodName: "UpdatePersistentCacheTask",
Handler: _DfdaemonUpload_UpdatePersistentCacheTask_Handler,
},
{
MethodName: "StatPersistentCacheTask",
Handler: _DfdaemonUpload_StatPersistentCacheTask_Handler,
@@ -433,6 +893,14 @@ var DfdaemonUpload_ServiceDesc = grpc.ServiceDesc{
MethodName: "DeletePersistentCacheTask",
Handler: _DfdaemonUpload_DeletePersistentCacheTask_Handler,
},
{
MethodName: "DownloadPersistentCachePiece",
Handler: _DfdaemonUpload_DownloadPersistentCachePiece_Handler,
},
{
MethodName: "ExchangeIBVerbsQueuePairEndpoint",
Handler: _DfdaemonUpload_ExchangeIBVerbsQueuePairEndpoint_Handler,
},
},
Streams: []grpc.StreamDesc{
{
@@ -445,11 +913,31 @@ var DfdaemonUpload_ServiceDesc = grpc.ServiceDesc{
Handler: _DfdaemonUpload_SyncPieces_Handler,
ServerStreams: true,
},
{
StreamName: "DownloadCacheTask",
Handler: _DfdaemonUpload_DownloadCacheTask_Handler,
ServerStreams: true,
},
{
StreamName: "SyncCachePieces",
Handler: _DfdaemonUpload_SyncCachePieces_Handler,
ServerStreams: true,
},
{
StreamName: "DownloadPersistentCacheTask",
Handler: _DfdaemonUpload_DownloadPersistentCacheTask_Handler,
ServerStreams: true,
},
{
StreamName: "SyncPersistentCachePieces",
Handler: _DfdaemonUpload_SyncPersistentCachePieces_Handler,
ServerStreams: true,
},
{
StreamName: "SyncHost",
Handler: _DfdaemonUpload_SyncHost_Handler,
ServerStreams: true,
},
},
Metadata: "pkg/apis/dfdaemon/v2/dfdaemon.proto",
}
@@ -460,22 +948,26 @@ var DfdaemonUpload_ServiceDesc = grpc.ServiceDesc{
type DfdaemonDownloadClient interface {
// DownloadTask downloads task from p2p network.
DownloadTask(ctx context.Context, in *DownloadTaskRequest, opts ...grpc.CallOption) (DfdaemonDownload_DownloadTaskClient, error)
// UploadTask uploads task to p2p network.
UploadTask(ctx context.Context, in *UploadTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// StatTask stats task information.
StatTask(ctx context.Context, in *StatTaskRequest, opts ...grpc.CallOption) (*v2.Task, error)
// ListTaskEntries lists task entries for downloading directory.
ListTaskEntries(ctx context.Context, in *ListTaskEntriesRequest, opts ...grpc.CallOption) (*ListTaskEntriesResponse, error)
// DeleteTask deletes task from p2p network.
DeleteTask(ctx context.Context, in *DeleteTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// DeleteHost releases host in scheduler.
DeleteHost(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*emptypb.Empty, error)
// DownloadCacheTask downloads cache task from p2p network.
DownloadCacheTask(ctx context.Context, in *DownloadCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonDownload_DownloadCacheTaskClient, error)
// StatCacheTask stats cache task information.
StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error)
// DeleteCacheTask deletes cache task from p2p network.
DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
DownloadPersistentCacheTask(ctx context.Context, in *DownloadPersistentCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonDownload_DownloadPersistentCacheTaskClient, error)
// UploadPersistentCacheTask uploads persistent cache task to p2p network.
UploadPersistentCacheTask(ctx context.Context, in *UploadPersistentCacheTaskRequest, opts ...grpc.CallOption) (*v2.PersistentCacheTask, error)
// StatPersistentCacheTask stats persistent cache task information.
StatPersistentCacheTask(ctx context.Context, in *StatPersistentCacheTaskRequest, opts ...grpc.CallOption) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
DeletePersistentCacheTask(ctx context.Context, in *DeletePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
}
type dfdaemonDownloadClient struct {
@@ -518,18 +1010,18 @@ func (x *dfdaemonDownloadDownloadTaskClient) Recv() (*DownloadTaskResponse, erro
return m, nil
}
func (c *dfdaemonDownloadClient) UploadTask(ctx context.Context, in *UploadTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/UploadTask", in, out, opts...)
func (c *dfdaemonDownloadClient) StatTask(ctx context.Context, in *StatTaskRequest, opts ...grpc.CallOption) (*v2.Task, error) {
out := new(v2.Task)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/StatTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonDownloadClient) StatTask(ctx context.Context, in *StatTaskRequest, opts ...grpc.CallOption) (*v2.Task, error) {
out := new(v2.Task)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/StatTask", in, out, opts...)
func (c *dfdaemonDownloadClient) ListTaskEntries(ctx context.Context, in *ListTaskEntriesRequest, opts ...grpc.CallOption) (*ListTaskEntriesResponse, error) {
out := new(ListTaskEntriesResponse)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/ListTaskEntries", in, out, opts...)
if err != nil {
return nil, err
}
@@ -554,8 +1046,58 @@ func (c *dfdaemonDownloadClient) DeleteHost(ctx context.Context, in *emptypb.Emp
return out, nil
}
func (c *dfdaemonDownloadClient) DownloadCacheTask(ctx context.Context, in *DownloadCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonDownload_DownloadCacheTaskClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonDownload_ServiceDesc.Streams[1], "/dfdaemon.v2.DfdaemonDownload/DownloadCacheTask", opts...)
if err != nil {
return nil, err
}
x := &dfdaemonDownloadDownloadCacheTaskClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DfdaemonDownload_DownloadCacheTaskClient interface {
Recv() (*DownloadCacheTaskResponse, error)
grpc.ClientStream
}
type dfdaemonDownloadDownloadCacheTaskClient struct {
grpc.ClientStream
}
func (x *dfdaemonDownloadDownloadCacheTaskClient) Recv() (*DownloadCacheTaskResponse, error) {
m := new(DownloadCacheTaskResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *dfdaemonDownloadClient) StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error) {
out := new(v2.CacheTask)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/StatCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonDownloadClient) DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/DeleteCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *dfdaemonDownloadClient) DownloadPersistentCacheTask(ctx context.Context, in *DownloadPersistentCacheTaskRequest, opts ...grpc.CallOption) (DfdaemonDownload_DownloadPersistentCacheTaskClient, error) {
stream, err := c.cc.NewStream(ctx, &DfdaemonDownload_ServiceDesc.Streams[1], "/dfdaemon.v2.DfdaemonDownload/DownloadPersistentCacheTask", opts...)
stream, err := c.cc.NewStream(ctx, &DfdaemonDownload_ServiceDesc.Streams[2], "/dfdaemon.v2.DfdaemonDownload/DownloadPersistentCacheTask", opts...)
if err != nil {
return nil, err
}
@@ -604,37 +1146,32 @@ func (c *dfdaemonDownloadClient) StatPersistentCacheTask(ctx context.Context, in
return out, nil
}
func (c *dfdaemonDownloadClient) DeletePersistentCacheTask(ctx context.Context, in *DeletePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/dfdaemon.v2.DfdaemonDownload/DeletePersistentCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// DfdaemonDownloadServer is the server API for DfdaemonDownload service.
// All implementations should embed UnimplementedDfdaemonDownloadServer
// for forward compatibility
type DfdaemonDownloadServer interface {
// DownloadTask downloads task from p2p network.
DownloadTask(*DownloadTaskRequest, DfdaemonDownload_DownloadTaskServer) error
// UploadTask uploads task to p2p network.
UploadTask(context.Context, *UploadTaskRequest) (*emptypb.Empty, error)
// StatTask stats task information.
StatTask(context.Context, *StatTaskRequest) (*v2.Task, error)
// ListTaskEntries lists task entries for downloading directory.
ListTaskEntries(context.Context, *ListTaskEntriesRequest) (*ListTaskEntriesResponse, error)
// DeleteTask deletes task from p2p network.
DeleteTask(context.Context, *DeleteTaskRequest) (*emptypb.Empty, error)
// DeleteHost releases host in scheduler.
DeleteHost(context.Context, *emptypb.Empty) (*emptypb.Empty, error)
// DownloadCacheTask downloads cache task from p2p network.
DownloadCacheTask(*DownloadCacheTaskRequest, DfdaemonDownload_DownloadCacheTaskServer) error
// StatCacheTask stats cache task information.
StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error)
// DeleteCacheTask deletes cache task from p2p network.
DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error)
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
DownloadPersistentCacheTask(*DownloadPersistentCacheTaskRequest, DfdaemonDownload_DownloadPersistentCacheTaskServer) error
// UploadPersistentCacheTask uploads persistent cache task to p2p network.
UploadPersistentCacheTask(context.Context, *UploadPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error)
// StatPersistentCacheTask stats persistent cache task information.
StatPersistentCacheTask(context.Context, *StatPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error)
}
// UnimplementedDfdaemonDownloadServer should be embedded to have forward compatible implementations.
@@ -644,18 +1181,27 @@ type UnimplementedDfdaemonDownloadServer struct {
func (UnimplementedDfdaemonDownloadServer) DownloadTask(*DownloadTaskRequest, DfdaemonDownload_DownloadTaskServer) error {
return status.Errorf(codes.Unimplemented, "method DownloadTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) UploadTask(context.Context, *UploadTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method UploadTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) StatTask(context.Context, *StatTaskRequest) (*v2.Task, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) ListTaskEntries(context.Context, *ListTaskEntriesRequest) (*ListTaskEntriesResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListTaskEntries not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DeleteTask(context.Context, *DeleteTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DeleteHost(context.Context, *emptypb.Empty) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteHost not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DownloadCacheTask(*DownloadCacheTaskRequest, DfdaemonDownload_DownloadCacheTaskServer) error {
return status.Errorf(codes.Unimplemented, "method DownloadCacheTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatCacheTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteCacheTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DownloadPersistentCacheTask(*DownloadPersistentCacheTaskRequest, DfdaemonDownload_DownloadPersistentCacheTaskServer) error {
return status.Errorf(codes.Unimplemented, "method DownloadPersistentCacheTask not implemented")
}
@@ -665,9 +1211,6 @@ func (UnimplementedDfdaemonDownloadServer) UploadPersistentCacheTask(context.Con
func (UnimplementedDfdaemonDownloadServer) StatPersistentCacheTask(context.Context, *StatPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatPersistentCacheTask not implemented")
}
func (UnimplementedDfdaemonDownloadServer) DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeletePersistentCacheTask not implemented")
}
// UnsafeDfdaemonDownloadServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to DfdaemonDownloadServer will
@@ -701,24 +1244,6 @@ func (x *dfdaemonDownloadDownloadTaskServer) Send(m *DownloadTaskResponse) error
return x.ServerStream.SendMsg(m)
}
func _DfdaemonDownload_UploadTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(UploadTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonDownloadServer).UploadTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonDownload/UploadTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonDownloadServer).UploadTask(ctx, req.(*UploadTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_StatTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatTaskRequest)
if err := dec(in); err != nil {
@@ -737,6 +1262,24 @@ func _DfdaemonDownload_StatTask_Handler(srv interface{}, ctx context.Context, de
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_ListTaskEntries_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListTaskEntriesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonDownloadServer).ListTaskEntries(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonDownload/ListTaskEntries",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonDownloadServer).ListTaskEntries(ctx, req.(*ListTaskEntriesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_DeleteTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteTaskRequest)
if err := dec(in); err != nil {
@@ -773,6 +1316,63 @@ func _DfdaemonDownload_DeleteHost_Handler(srv interface{}, ctx context.Context,
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_DownloadCacheTask_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(DownloadCacheTaskRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DfdaemonDownloadServer).DownloadCacheTask(m, &dfdaemonDownloadDownloadCacheTaskServer{stream})
}
type DfdaemonDownload_DownloadCacheTaskServer interface {
Send(*DownloadCacheTaskResponse) error
grpc.ServerStream
}
type dfdaemonDownloadDownloadCacheTaskServer struct {
grpc.ServerStream
}
func (x *dfdaemonDownloadDownloadCacheTaskServer) Send(m *DownloadCacheTaskResponse) error {
return x.ServerStream.SendMsg(m)
}
func _DfdaemonDownload_StatCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonDownloadServer).StatCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonDownload/StatCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonDownloadServer).StatCacheTask(ctx, req.(*StatCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_DeleteCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonDownloadServer).DeleteCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonDownload/DeleteCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonDownloadServer).DeleteCacheTask(ctx, req.(*DeleteCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_DownloadPersistentCacheTask_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(DownloadPersistentCacheTaskRequest)
if err := stream.RecvMsg(m); err != nil {
@@ -830,24 +1430,6 @@ func _DfdaemonDownload_StatPersistentCacheTask_Handler(srv interface{}, ctx cont
return interceptor(ctx, in, info, handler)
}
func _DfdaemonDownload_DeletePersistentCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeletePersistentCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(DfdaemonDownloadServer).DeletePersistentCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/dfdaemon.v2.DfdaemonDownload/DeletePersistentCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(DfdaemonDownloadServer).DeletePersistentCacheTask(ctx, req.(*DeletePersistentCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
// DfdaemonDownload_ServiceDesc is the grpc.ServiceDesc for DfdaemonDownload service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -855,14 +1437,14 @@ var DfdaemonDownload_ServiceDesc = grpc.ServiceDesc{
ServiceName: "dfdaemon.v2.DfdaemonDownload",
HandlerType: (*DfdaemonDownloadServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "UploadTask",
Handler: _DfdaemonDownload_UploadTask_Handler,
},
{
MethodName: "StatTask",
Handler: _DfdaemonDownload_StatTask_Handler,
},
{
MethodName: "ListTaskEntries",
Handler: _DfdaemonDownload_ListTaskEntries_Handler,
},
{
MethodName: "DeleteTask",
Handler: _DfdaemonDownload_DeleteTask_Handler,
@@ -871,6 +1453,14 @@ var DfdaemonDownload_ServiceDesc = grpc.ServiceDesc{
MethodName: "DeleteHost",
Handler: _DfdaemonDownload_DeleteHost_Handler,
},
{
MethodName: "StatCacheTask",
Handler: _DfdaemonDownload_StatCacheTask_Handler,
},
{
MethodName: "DeleteCacheTask",
Handler: _DfdaemonDownload_DeleteCacheTask_Handler,
},
{
MethodName: "UploadPersistentCacheTask",
Handler: _DfdaemonDownload_UploadPersistentCacheTask_Handler,
@@ -879,10 +1469,6 @@ var DfdaemonDownload_ServiceDesc = grpc.ServiceDesc{
MethodName: "StatPersistentCacheTask",
Handler: _DfdaemonDownload_StatPersistentCacheTask_Handler,
},
{
MethodName: "DeletePersistentCacheTask",
Handler: _DfdaemonDownload_DeletePersistentCacheTask_Handler,
},
},
Streams: []grpc.StreamDesc{
{
@@ -890,6 +1476,11 @@ var DfdaemonDownload_ServiceDesc = grpc.ServiceDesc{
Handler: _DfdaemonDownload_DownloadTask_Handler,
ServerStreams: true,
},
{
StreamName: "DownloadCacheTask",
Handler: _DfdaemonDownload_DownloadCacheTask_Handler,
ServerStreams: true,
},
{
StreamName: "DownloadPersistentCacheTask",
Handler: _DfdaemonDownload_DownloadPersistentCacheTask_Handler,



@@ -103,6 +103,55 @@ func (x *Backend) GetStatusCode() int32 {
return 0
}
// Unknown is the error detail for an unknown error.
type Unknown struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Unknown error message.
Message *string `protobuf:"bytes,1,opt,name=message,proto3,oneof" json:"message,omitempty"`
}
func (x *Unknown) Reset() {
*x = Unknown{}
if protoimpl.UnsafeEnabled {
mi := &file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Unknown) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Unknown) ProtoMessage() {}
func (x *Unknown) ProtoReflect() protoreflect.Message {
mi := &file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Unknown.ProtoReflect.Descriptor instead.
func (*Unknown) Descriptor() ([]byte, []int) {
return file_pkg_apis_errordetails_v2_errordetails_proto_rawDescGZIP(), []int{1}
}
func (x *Unknown) GetMessage() string {
if x != nil && x.Message != nil {
return *x.Message
}
return ""
}
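For proto3 `optional` fields the generated struct stores a pointer, and the getter collapses both a nil receiver and an unset field to the zero value, as `GetMessage` does above. A self-contained sketch of that convention (the lowercase `unknown` type is a hypothetical stand-in, not the generated message):

```go
package main

import "fmt"

// unknown mimics the generated message: an optional string is stored as a
// *string, so "unset" (nil) is distinguishable from an empty string.
type unknown struct {
	Message *string
}

// GetMessage follows the generated getter convention: a nil receiver or an
// unset field both yield the zero value instead of panicking.
func (x *unknown) GetMessage() string {
	if x != nil && x.Message != nil {
		return *x.Message
	}
	return ""
}

func main() {
	var unset unknown
	msg := "backend error"
	set := unknown{Message: &msg}
	fmt.Printf("unset=%q set=%q\n", unset.GetMessage(), set.GetMessage())

	// A nil receiver is also safe:
	var nilMsg *unknown
	fmt.Printf("nil=%q\n", nilMsg.GetMessage())
}
```

This is what makes chained calls like `resp.GetDetail().GetMessage()` safe in caller code even when intermediate messages are absent.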
var File_pkg_apis_errordetails_v2_errordetails_proto protoreflect.FileDescriptor
var file_pkg_apis_errordetails_v2_errordetails_proto_rawDesc = []byte{
@@ -125,11 +174,14 @@ var file_pkg_apis_errordetails_v2_errordetails_proto_rawDesc = []byte{
0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79,
0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0x0e, 0x0a, 0x0c, 0x5f, 0x73,
0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x63, 0x6f, 0x64, 0x65, 0x42, 0x35, 0x5a, 0x33, 0x64, 0x37,
0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f,
0x61, 0x70, 0x69, 0x73, 0x2f, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c,
0x73, 0x2f, 0x76, 0x32, 0x3b, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c,
0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x63, 0x6f, 0x64, 0x65, 0x22, 0x34, 0x0a, 0x07, 0x55, 0x6e,
0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x12, 0x1d, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67,
0x65, 0x88, 0x01, 0x01, 0x42, 0x0a, 0x0a, 0x08, 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
0x42, 0x35, 0x5a, 0x33, 0x64, 0x37, 0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76,
0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x65, 0x72, 0x72, 0x6f, 0x72,
0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x2f, 0x76, 0x32, 0x3b, 0x65, 0x72, 0x72, 0x6f, 0x72,
0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -144,13 +196,14 @@ func file_pkg_apis_errordetails_v2_errordetails_proto_rawDescGZIP() []byte {
return file_pkg_apis_errordetails_v2_errordetails_proto_rawDescData
}
var file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_pkg_apis_errordetails_v2_errordetails_proto_goTypes = []interface{}{
(*Backend)(nil), // 0: errordetails.v2.Backend
nil, // 1: errordetails.v2.Backend.HeaderEntry
(*Unknown)(nil), // 1: errordetails.v2.Unknown
nil, // 2: errordetails.v2.Backend.HeaderEntry
}
var file_pkg_apis_errordetails_v2_errordetails_proto_depIdxs = []int32{
1, // 0: errordetails.v2.Backend.header:type_name -> errordetails.v2.Backend.HeaderEntry
2, // 0: errordetails.v2.Backend.header:type_name -> errordetails.v2.Backend.HeaderEntry
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
@@ -176,15 +229,28 @@ func file_pkg_apis_errordetails_v2_errordetails_proto_init() {
return nil
}
}
file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Unknown); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes[0].OneofWrappers = []interface{}{}
file_pkg_apis_errordetails_v2_errordetails_proto_msgTypes[1].OneofWrappers = []interface{}{}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_pkg_apis_errordetails_v2_errordetails_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumMessages: 3,
NumExtensions: 0,
NumServices: 0,
},


@@ -155,3 +155,105 @@ var _ interface {
Cause() error
ErrorName() string
} = BackendValidationError{}
// Validate checks the field values on Unknown with the rules defined in the
// proto definition for this message. If any rules are violated, the first
// error encountered is returned, or nil if there are no violations.
func (m *Unknown) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on Unknown with the rules defined in the
// proto definition for this message. If any rules are violated, the result is
// a list of violation errors wrapped in UnknownMultiError, or nil if none found.
func (m *Unknown) ValidateAll() error {
return m.validate(true)
}
func (m *Unknown) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if m.Message != nil {
// no validation rules for Message
}
if len(errors) > 0 {
return UnknownMultiError(errors)
}
return nil
}
// UnknownMultiError is an error wrapping multiple validation errors returned
// by Unknown.ValidateAll() if the designated constraints aren't met.
type UnknownMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m UnknownMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m UnknownMultiError) AllErrors() []error { return m }
// UnknownValidationError is the validation error returned by Unknown.Validate
// if the designated constraints aren't met.
type UnknownValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e UnknownValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e UnknownValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e UnknownValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e UnknownValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e UnknownValidationError) ErrorName() string { return "UnknownValidationError" }
// Error satisfies the builtin error interface
func (e UnknownValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sUnknown.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = UnknownValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = UnknownValidationError{}


@@ -32,3 +32,8 @@ message Backend {
optional int32 status_code = 3 [(validate.rules).int32 = {gte: 100, lt: 599, ignore_empty: true}];
}
// Unknown is the error detail for an unknown error.
message Unknown {
// Unknown error message.
optional string message = 1;
}


@@ -24,6 +24,7 @@ import (
type MockManagerClient struct {
ctrl *gomock.Controller
recorder *MockManagerClientMockRecorder
isgomock struct{}
}
// MockManagerClientMockRecorder is the mock recorder for MockManagerClient.
@@ -247,6 +248,7 @@ func (mr *MockManagerClientMockRecorder) UpdateSeedPeer(ctx, in any, opts ...any
type MockManager_KeepAliveClient struct {
ctrl *gomock.Controller
recorder *MockManager_KeepAliveClientMockRecorder
isgomock struct{}
}
// MockManager_KeepAliveClientMockRecorder is the mock recorder for MockManager_KeepAliveClient.
@@ -384,6 +386,7 @@ func (mr *MockManager_KeepAliveClientMockRecorder) Trailer() *gomock.Call {
type MockManagerServer struct {
ctrl *gomock.Controller
recorder *MockManagerServerMockRecorder
isgomock struct{}
}
// MockManagerServerMockRecorder is the mock recorder for MockManagerServer.
@@ -556,6 +559,7 @@ func (mr *MockManagerServerMockRecorder) UpdateSeedPeer(arg0, arg1 any) *gomock.
type MockUnsafeManagerServer struct {
ctrl *gomock.Controller
recorder *MockUnsafeManagerServerMockRecorder
isgomock struct{}
}
// MockUnsafeManagerServerMockRecorder is the mock recorder for MockUnsafeManagerServer.
@@ -591,6 +595,7 @@ func (mr *MockUnsafeManagerServerMockRecorder) mustEmbedUnimplementedManagerServ
type MockManager_KeepAliveServer struct {
ctrl *gomock.Controller
recorder *MockManager_KeepAliveServerMockRecorder
isgomock struct{}
}
// MockManager_KeepAliveServerMockRecorder is the mock recorder for MockManager_KeepAliveServer.


@@ -1053,6 +1053,8 @@ type UpdateSchedulerRequest struct {
Port int32 `protobuf:"varint,7,opt,name=port,proto3" json:"port,omitempty"`
// Scheduler features.
Features []string `protobuf:"bytes,8,rep,name=features,proto3" json:"features,omitempty"`
// Scheduler configuration.
Config []byte `protobuf:"bytes,9,opt,name=config,proto3" json:"config,omitempty"`
}
func (x *UpdateSchedulerRequest) Reset() {
@@ -1143,6 +1145,13 @@ func (x *UpdateSchedulerRequest) GetFeatures() []string {
return nil
}
func (x *UpdateSchedulerRequest) GetConfig() []byte {
if x != nil {
return x.Config
}
return nil
}
// ListSchedulersRequest represents request of ListSchedulers.
type ListSchedulersRequest struct {
state protoimpl.MessageState
@@ -1163,6 +1172,8 @@ type ListSchedulersRequest struct {
Version string `protobuf:"bytes,6,opt,name=version,proto3" json:"version,omitempty"`
// Dfdaemon commit.
Commit string `protobuf:"bytes,7,opt,name=commit,proto3" json:"commit,omitempty"`
// ID of the cluster to which the scheduler belongs.
SchedulerClusterId uint64 `protobuf:"varint,8,opt,name=scheduler_cluster_id,json=schedulerClusterId,proto3" json:"scheduler_cluster_id,omitempty"`
}
func (x *ListSchedulersRequest) Reset() {
@@ -1246,6 +1257,13 @@ func (x *ListSchedulersRequest) GetCommit() string {
return ""
}
func (x *ListSchedulersRequest) GetSchedulerClusterId() uint64 {
if x != nil {
return x.SchedulerClusterId
}
return 0
}
// ListSchedulersResponse represents response of ListSchedulers.
type ListSchedulersResponse struct {
state protoimpl.MessageState
@@ -1855,7 +1873,7 @@ var file_pkg_apis_manager_v2_manager_proto_rawDesc = []byte{
0x12, 0x73, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x43, 0x6c, 0x75, 0x73, 0x74, 0x65,
0x72, 0x49, 0x64, 0x12, 0x1a, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x42,
0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x70, 0x01, 0xd0, 0x01, 0x01, 0x52, 0x02, 0x69, 0x70, 0x22,
0xfd, 0x02, 0x0a, 0x16, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x95, 0x03, 0x0a, 0x16, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x6c, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f,
0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32,
0x16, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75,
@@ -1877,140 +1895,144 @@ var file_pkg_apis_manager_v2_manager_proto_rawDesc = []byte{
0x70, 0x6f, 0x72, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28, 0x05, 0x42, 0x0c, 0xfa, 0x42, 0x09, 0x1a,
0x07, 0x10, 0xff, 0xff, 0x03, 0x28, 0x80, 0x08, 0x52, 0x04, 0x70, 0x6f, 0x72, 0x74, 0x12, 0x1a,
0x0a, 0x08, 0x66, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x18, 0x08, 0x20, 0x03, 0x28, 0x09,
0x52, 0x08, 0x66, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x42, 0x06, 0x0a, 0x04, 0x5f, 0x69,
0x64, 0x63, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x22,
0xd3, 0x02, 0x0a, 0x15, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65,
0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75,
0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16,
0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72,
0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01,
0x52, 0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08,
0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07,
0xfa, 0x42, 0x04, 0x72, 0x02, 0x68, 0x01, 0x52, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d,
0x65, 0x12, 0x17, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa,
0x42, 0x04, 0x72, 0x02, 0x70, 0x01, 0x52, 0x02, 0x69, 0x70, 0x12, 0x24, 0x0a, 0x03, 0x69, 0x64,
0x63, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08, 0x10, 0x01,
0x18, 0x80, 0x08, 0xd0, 0x01, 0x01, 0x48, 0x00, 0x52, 0x03, 0x69, 0x64, 0x63, 0x88, 0x01, 0x01,
0x12, 0x2e, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x05, 0x20, 0x01,
0x28, 0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08, 0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01,
0x01, 0x48, 0x01, 0x52, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x88, 0x01, 0x01,
0x12, 0x27, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x06, 0x20, 0x01, 0x28,
0x52, 0x08, 0x66, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x63, 0x6f,
0x6e, 0x66, 0x69, 0x67, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66,
0x69, 0x67, 0x42, 0x06, 0x0a, 0x04, 0x5f, 0x69, 0x64, 0x63, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x6c,
0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x85, 0x03, 0x0a, 0x15, 0x4c, 0x69, 0x73, 0x74,
0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72,
0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x42, 0x08,
0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01, 0x52, 0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x68, 0x01, 0x52,
0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x17, 0x0a, 0x02, 0x69, 0x70, 0x18,
0x03, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x70, 0x01, 0x52, 0x02,
0x69, 0x70, 0x12, 0x24, 0x0a, 0x03, 0x69, 0x64, 0x63, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x42,
0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08, 0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01, 0x01, 0x48, 0x00,
0x52, 0x03, 0x69, 0x64, 0x63, 0x88, 0x01, 0x01, 0x12, 0x2e, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72,
0x08, 0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01, 0x01, 0x48, 0x01, 0x52, 0x08, 0x6c, 0x6f, 0x63,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x88, 0x01, 0x01, 0x12, 0x27, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73,
0x69, 0x6f, 0x6e, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08,
0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01, 0x01, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f,
0x6e, 0x12, 0x25, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x69, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28,
0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08, 0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01, 0x01,
0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x25, 0x0a, 0x06, 0x63, 0x6f, 0x6d,
0x6d, 0x69, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0d, 0xfa, 0x42, 0x0a, 0x72, 0x08,
0x10, 0x01, 0x18, 0x80, 0x08, 0xd0, 0x01, 0x01, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x69, 0x74,
0x42, 0x06, 0x0a, 0x04, 0x5f, 0x69, 0x64, 0x63, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x6c, 0x6f, 0x63,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x4f, 0x0a, 0x16, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68,
0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12,
0x35, 0x0a, 0x0a, 0x73, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x18, 0x01, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32,
0x2e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x52, 0x0a, 0x73, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x22, 0x57, 0x0a, 0x0b, 0x55, 0x52, 0x4c, 0x50, 0x72, 0x69,
0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x1d, 0x0a, 0x05, 0x72, 0x65, 0x67, 0x65, 0x78, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x10, 0x01, 0x52, 0x05, 0x72,
0x65, 0x67, 0x65, 0x78, 0x12, 0x29, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0e, 0x32, 0x13, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x76, 0x32, 0x2e,
0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22,
0x6d, 0x0a, 0x13, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72,
0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x29, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18,
0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x13, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x76,
0x32, 0x2e, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75,
0x65, 0x12, 0x2b, 0x0a, 0x04, 0x75, 0x72, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x17, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x55, 0x52, 0x4c,
0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x52, 0x04, 0x75, 0x72, 0x6c, 0x73, 0x22, 0xbb,
0x01, 0x0a, 0x0b, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x17,
0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x32,
0x02, 0x28, 0x01, 0x52, 0x02, 0x69, 0x64, 0x12, 0x1e, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18,
0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x10, 0x01, 0x18, 0x80,
0x08, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x1a, 0x0a, 0x03, 0x75, 0x72, 0x6c, 0x18, 0x03,
0x20, 0x01, 0x28, 0x09, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x72, 0x03, 0x88, 0x01, 0x01, 0x52, 0x03,
0x75, 0x72, 0x6c, 0x12, 0x10, 0x0a, 0x03, 0x62, 0x69, 0x6f, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09,
0x52, 0x03, 0x62, 0x69, 0x6f, 0x12, 0x45, 0x0a, 0x08, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74,
0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65,
0x72, 0x2e, 0x76, 0x32, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x8a, 0x01, 0x02,
0x10, 0x01, 0x52, 0x08, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x22, 0x9a, 0x01, 0x0a,
0x17, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72,
0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16, 0x2e,
0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63,
0x65, 0x54, 0x79, 0x70, 0x65, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01, 0x52,
0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08, 0x68,
0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa,
0x42, 0x04, 0x72, 0x02, 0x68, 0x01, 0x52, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65,
0x12, 0x17, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42,
0x04, 0x72, 0x02, 0x70, 0x01, 0x52, 0x02, 0x69, 0x70, 0x22, 0x57, 0x0a, 0x18, 0x4c, 0x69, 0x73,
0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x69, 0x74, 0x12, 0x30, 0x0a, 0x14, 0x73, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x5f, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64,
0x18, 0x08, 0x20, 0x01, 0x28, 0x04, 0x52, 0x12, 0x73, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65,
0x72, 0x43, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x49, 0x64, 0x42, 0x06, 0x0a, 0x04, 0x5f, 0x69,
0x64, 0x63, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x22,
0x4f, 0x0a, 0x16, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72,
0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x35, 0x0a, 0x0a, 0x73, 0x63, 0x68,
0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e,
0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x64,
0x75, 0x6c, 0x65, 0x72, 0x52, 0x0a, 0x73, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73,
0x22, 0x57, 0x0a, 0x0b, 0x55, 0x52, 0x4c, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12,
0x1d, 0x0a, 0x05, 0x72, 0x65, 0x67, 0x65, 0x78, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07,
0xfa, 0x42, 0x04, 0x72, 0x02, 0x10, 0x01, 0x52, 0x05, 0x72, 0x65, 0x67, 0x65, 0x78, 0x12, 0x29,
0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x13, 0x2e,
0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x76, 0x32, 0x2e, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69,
0x74, 0x79, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22, 0x6d, 0x0a, 0x13, 0x41, 0x70, 0x70,
0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79,
0x12, 0x29, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32,
0x13, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x76, 0x32, 0x2e, 0x50, 0x72, 0x69, 0x6f,
0x72, 0x69, 0x74, 0x79, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x2b, 0x0a, 0x04, 0x75,
0x72, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x6d, 0x61, 0x6e, 0x61,
0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x55, 0x52, 0x4c, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69,
0x74, 0x79, 0x52, 0x04, 0x75, 0x72, 0x6c, 0x73, 0x22, 0xbb, 0x01, 0x0a, 0x0b, 0x41, 0x70, 0x70,
0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x17, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01,
0x20, 0x01, 0x28, 0x04, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x32, 0x02, 0x28, 0x01, 0x52, 0x02, 0x69,
0x64, 0x12, 0x1e, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42,
0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x10, 0x01, 0x18, 0x80, 0x08, 0x52, 0x04, 0x6e, 0x61, 0x6d,
0x65, 0x12, 0x1a, 0x0a, 0x03, 0x75, 0x72, 0x6c, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x42, 0x08,
0xfa, 0x42, 0x05, 0x72, 0x03, 0x88, 0x01, 0x01, 0x52, 0x03, 0x75, 0x72, 0x6c, 0x12, 0x10, 0x0a,
0x03, 0x62, 0x69, 0x6f, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x62, 0x69, 0x6f, 0x12,
0x45, 0x0a, 0x08, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1f, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x41,
0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x69, 0x6f, 0x72, 0x69,
0x74, 0x79, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x8a, 0x01, 0x02, 0x10, 0x01, 0x52, 0x08, 0x70, 0x72,
0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x22, 0x9a, 0x01, 0x0a, 0x17, 0x4c, 0x69, 0x73, 0x74, 0x41,
0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70,
0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65,
0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x42,
0x08, 0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01, 0x52, 0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63,
0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d,
0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x68, 0x01,
0x52, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x17, 0x0a, 0x02, 0x69, 0x70,
0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x70, 0x01, 0x52,
0x02, 0x69, 0x70, 0x22, 0x57, 0x0a, 0x18, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x70, 0x6c, 0x69,
0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12,
0x3b, 0x0a, 0x0c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18,
0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e,
0x76, 0x32, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c,
0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0xbe, 0x01, 0x0a,
0x10, 0x4b, 0x65, 0x65, 0x70, 0x41, 0x6c, 0x69, 0x76, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72,
0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x42, 0x08,
0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01, 0x52, 0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x72, 0x02, 0x68, 0x01, 0x52,
0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x0a, 0x63, 0x6c, 0x75,
0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x42, 0x07, 0xfa,
0x42, 0x04, 0x32, 0x02, 0x28, 0x01, 0x52, 0x09, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x49,
0x64, 0x12, 0x1a, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa,
0x42, 0x07, 0x72, 0x05, 0x70, 0x01, 0xd0, 0x01, 0x01, 0x52, 0x02, 0x69, 0x70, 0x2a, 0x49, 0x0a,
0x0a, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x14, 0x0a, 0x10, 0x53,
0x43, 0x48, 0x45, 0x44, 0x55, 0x4c, 0x45, 0x52, 0x5f, 0x53, 0x4f, 0x55, 0x52, 0x43, 0x45, 0x10,
0x00, 0x12, 0x0f, 0x0a, 0x0b, 0x50, 0x45, 0x45, 0x52, 0x5f, 0x53, 0x4f, 0x55, 0x52, 0x43, 0x45,
0x10, 0x01, 0x12, 0x14, 0x0a, 0x10, 0x53, 0x45, 0x45, 0x44, 0x5f, 0x50, 0x45, 0x45, 0x52, 0x5f,
0x53, 0x4f, 0x55, 0x52, 0x43, 0x45, 0x10, 0x02, 0x32, 0xcf, 0x05, 0x0a, 0x07, 0x4d, 0x61, 0x6e,
0x61, 0x67, 0x65, 0x72, 0x12, 0x43, 0x0a, 0x0b, 0x47, 0x65, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50,
0x65, 0x65, 0x72, 0x12, 0x1e, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32,
0x2e, 0x47, 0x65, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x14, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32,
0x2e, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12, 0x54, 0x0a, 0x0d, 0x4c, 0x69, 0x73,
0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x73, 0x12, 0x20, 0x2e, 0x6d, 0x61, 0x6e,
0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x65, 0x64,
0x50, 0x65, 0x65, 0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x6d,
0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x65,
0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12,
0x49, 0x0a, 0x0e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65,
0x72, 0x12, 0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x55,
0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x1a, 0x14, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76,
0x32, 0x2e, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12, 0x4b, 0x0a, 0x0e, 0x44, 0x65,
0x6c, 0x65, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12, 0x21, 0x2e, 0x6d,
0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65,
0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x12, 0x46, 0x0a, 0x0c, 0x47, 0x65, 0x74, 0x53, 0x63,
0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x12, 0x1f, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65,
0x72, 0x2e, 0x76, 0x32, 0x2e, 0x47, 0x65, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65,
0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x15, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x12,
0x4c, 0x0a, 0x0f, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c,
0x65, 0x72, 0x12, 0x22, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e,
0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x15, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72,
0x2e, 0x76, 0x32, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x12, 0x57, 0x0a,
0x0e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x12,
0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73,
0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x1a, 0x22, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e,
0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x5d, 0x0a, 0x10, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70,
0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x23, 0x2e, 0x6d, 0x61, 0x6e,
0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x70, 0x6c,
0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x24, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73,
0x74, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x3b, 0x0a, 0x0c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x6d, 0x61,
0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x73, 0x22, 0xbe, 0x01, 0x0a, 0x10, 0x4b, 0x65, 0x65, 0x70, 0x41, 0x6c, 0x69, 0x76, 0x65,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x41, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72, 0x63,
0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x16, 0x2e, 0x6d,
0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x54, 0x79, 0x70, 0x65, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x82, 0x01, 0x02, 0x10, 0x01, 0x52, 0x0a,
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x08, 0x68, 0x6f,
0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x07, 0xfa, 0x42,
0x04, 0x72, 0x02, 0x68, 0x01, 0x52, 0x08, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x12,
0x26, 0x0a, 0x0a, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20,
0x01, 0x28, 0x04, 0x42, 0x07, 0xfa, 0x42, 0x04, 0x32, 0x02, 0x28, 0x01, 0x52, 0x09, 0x63, 0x6c,
0x75, 0x73, 0x74, 0x65, 0x72, 0x49, 0x64, 0x12, 0x1a, 0x0a, 0x02, 0x69, 0x70, 0x18, 0x04, 0x20,
0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa, 0x42, 0x07, 0x72, 0x05, 0x70, 0x01, 0xd0, 0x01, 0x01, 0x52,
0x02, 0x69, 0x70, 0x2a, 0x49, 0x0a, 0x0a, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70,
0x65, 0x12, 0x14, 0x0a, 0x10, 0x53, 0x43, 0x48, 0x45, 0x44, 0x55, 0x4c, 0x45, 0x52, 0x5f, 0x53,
0x4f, 0x55, 0x52, 0x43, 0x45, 0x10, 0x00, 0x12, 0x0f, 0x0a, 0x0b, 0x50, 0x45, 0x45, 0x52, 0x5f,
0x53, 0x4f, 0x55, 0x52, 0x43, 0x45, 0x10, 0x01, 0x12, 0x14, 0x0a, 0x10, 0x53, 0x45, 0x45, 0x44,
0x5f, 0x50, 0x45, 0x45, 0x52, 0x5f, 0x53, 0x4f, 0x55, 0x52, 0x43, 0x45, 0x10, 0x02, 0x32, 0xcf,
0x05, 0x0a, 0x07, 0x4d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x12, 0x43, 0x0a, 0x0b, 0x47, 0x65,
0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12, 0x1e, 0x2e, 0x6d, 0x61, 0x6e, 0x61,
0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x47, 0x65, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65,
0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x14, 0x2e, 0x6d, 0x61, 0x6e, 0x61,
0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12,
0x54, 0x0a, 0x0d, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x73,
0x12, 0x20, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69,
0x73, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x1a, 0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e,
0x4c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x49, 0x0a, 0x0e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53,
0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x12, 0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65,
0x72, 0x2e, 0x76, 0x32, 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50,
0x65, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x14, 0x2e, 0x6d, 0x61, 0x6e,
0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72,
0x12, 0x4b, 0x0a, 0x0e, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65,
0x65, 0x72, 0x12, 0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e,
0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x53, 0x65, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x12, 0x46, 0x0a,
0x0c, 0x47, 0x65, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x12, 0x1f, 0x2e,
0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x47, 0x65, 0x74, 0x53, 0x63,
0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x15,
0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x12, 0x4c, 0x0a, 0x0f, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53,
0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x12, 0x22, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x15, 0x2e, 0x6d,
0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x6c, 0x65, 0x72, 0x12, 0x57, 0x0a, 0x0e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64,
0x75, 0x6c, 0x65, 0x72, 0x73, 0x12, 0x21, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e,
0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c, 0x65, 0x72,
0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x6c, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x5d, 0x0a, 0x10,
0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73,
0x12, 0x23, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4c, 0x69,
0x73, 0x74, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x24, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e,
0x76, 0x32, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x43, 0x0a, 0x09, 0x4b,
0x65, 0x65, 0x70, 0x41, 0x6c, 0x69, 0x76, 0x65, 0x12, 0x1c, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x4b, 0x65, 0x65, 0x70, 0x41, 0x6c, 0x69, 0x76, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x28, 0x01,
0x42, 0x2b, 0x5a, 0x29, 0x64, 0x37, 0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76,
0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2f, 0x76, 0x32, 0x3b, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x62, 0x06, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x33,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x43, 0x0a, 0x09, 0x4b, 0x65, 0x65, 0x70, 0x41, 0x6c, 0x69,
0x76, 0x65, 0x12, 0x1c, 0x2e, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e,
0x4b, 0x65, 0x65, 0x70, 0x41, 0x6c, 0x69, 0x76, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x28, 0x01, 0x42, 0x2b, 0x5a, 0x29, 0x64, 0x37,
0x79, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f,
0x61, 0x70, 0x69, 0x73, 0x2f, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x2f, 0x76, 0x32, 0x3b,
0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x72, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (


@@ -1836,6 +1836,8 @@ func (m *UpdateSchedulerRequest) validate(all bool) error {
errors = append(errors, err)
}
// no validation rules for Config
if m.Idc != nil {
if m.GetIdc() != "" {
@@ -2070,6 +2072,8 @@ func (m *ListSchedulersRequest) validate(all bool) error {
}
// no validation rules for SchedulerClusterId
if m.Idc != nil {
if m.GetIdc() != "" {


@@ -214,6 +214,8 @@ message UpdateSchedulerRequest {
int32 port = 7 [(validate.rules).int32 = {gte: 1024, lt: 65535}];
// Scheduler features.
repeated string features = 8;
// Scheduler configuration.
bytes config = 9;
}
// ListSchedulersRequest represents request of ListSchedulers.
@@ -232,6 +234,8 @@ message ListSchedulersRequest {
string version = 6 [(validate.rules).string = {min_len: 1, max_len: 1024, ignore_empty: true}];
// Dfdaemon commit.
string commit = 7 [(validate.rules).string = {min_len: 1, max_len: 1024, ignore_empty: true}];
// ID of the cluster to which the scheduler belongs.
uint64 scheduler_cluster_id = 8;
}
// ListSchedulersResponse represents response of ListSchedulers.


@@ -24,6 +24,7 @@ import (
type MockManagerClient struct {
ctrl *gomock.Controller
recorder *MockManagerClientMockRecorder
isgomock struct{}
}
// MockManagerClientMockRecorder is the mock recorder for MockManagerClient.
@@ -227,6 +228,7 @@ func (mr *MockManagerClientMockRecorder) UpdateSeedPeer(ctx, in any, opts ...any
type MockManager_KeepAliveClient struct {
ctrl *gomock.Controller
recorder *MockManager_KeepAliveClientMockRecorder
isgomock struct{}
}
// MockManager_KeepAliveClientMockRecorder is the mock recorder for MockManager_KeepAliveClient.
@@ -364,6 +366,7 @@ func (mr *MockManager_KeepAliveClientMockRecorder) Trailer() *gomock.Call {
type MockManagerServer struct {
ctrl *gomock.Controller
recorder *MockManagerServerMockRecorder
isgomock struct{}
}
// MockManagerServerMockRecorder is the mock recorder for MockManagerServer.
@@ -521,6 +524,7 @@ func (mr *MockManagerServerMockRecorder) UpdateSeedPeer(arg0, arg1 any) *gomock.
type MockUnsafeManagerServer struct {
ctrl *gomock.Controller
recorder *MockUnsafeManagerServerMockRecorder
isgomock struct{}
}
// MockUnsafeManagerServerMockRecorder is the mock recorder for MockUnsafeManagerServer.
@@ -556,6 +560,7 @@ func (mr *MockUnsafeManagerServerMockRecorder) mustEmbedUnimplementedManagerServ
type MockManager_KeepAliveServer struct {
ctrl *gomock.Controller
recorder *MockManager_KeepAliveServerMockRecorder
isgomock struct{}
}
// MockManager_KeepAliveServerMockRecorder is the mock recorder for MockManager_KeepAliveServer.


@@ -24,6 +24,7 @@ import (
type MockSchedulerClient struct {
ctrl *gomock.Controller
recorder *MockSchedulerClientMockRecorder
isgomock struct{}
}
// MockSchedulerClientMockRecorder is the mock recorder for MockSchedulerClient.
@@ -203,30 +204,11 @@ func (mr *MockSchedulerClientMockRecorder) StatTask(ctx, in any, opts ...any) *g
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StatTask", reflect.TypeOf((*MockSchedulerClient)(nil).StatTask), varargs...)
}
// SyncProbes mocks base method.
func (m *MockSchedulerClient) SyncProbes(ctx context.Context, opts ...grpc.CallOption) (scheduler.Scheduler_SyncProbesClient, error) {
m.ctrl.T.Helper()
varargs := []any{ctx}
for _, a := range opts {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "SyncProbes", varargs...)
ret0, _ := ret[0].(scheduler.Scheduler_SyncProbesClient)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// SyncProbes indicates an expected call of SyncProbes.
func (mr *MockSchedulerClientMockRecorder) SyncProbes(ctx any, opts ...any) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]any{ctx}, opts...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SyncProbes", reflect.TypeOf((*MockSchedulerClient)(nil).SyncProbes), varargs...)
}
// MockScheduler_ReportPieceResultClient is a mock of Scheduler_ReportPieceResultClient interface.
type MockScheduler_ReportPieceResultClient struct {
ctrl *gomock.Controller
recorder *MockScheduler_ReportPieceResultClientMockRecorder
isgomock struct{}
}
// MockScheduler_ReportPieceResultClientMockRecorder is the mock recorder for MockScheduler_ReportPieceResultClient.
@@ -360,147 +342,11 @@ func (mr *MockScheduler_ReportPieceResultClientMockRecorder) Trailer() *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Trailer", reflect.TypeOf((*MockScheduler_ReportPieceResultClient)(nil).Trailer))
}
// MockScheduler_SyncProbesClient is a mock of Scheduler_SyncProbesClient interface.
type MockScheduler_SyncProbesClient struct {
ctrl *gomock.Controller
recorder *MockScheduler_SyncProbesClientMockRecorder
}
// MockScheduler_SyncProbesClientMockRecorder is the mock recorder for MockScheduler_SyncProbesClient.
type MockScheduler_SyncProbesClientMockRecorder struct {
mock *MockScheduler_SyncProbesClient
}
// NewMockScheduler_SyncProbesClient creates a new mock instance.
func NewMockScheduler_SyncProbesClient(ctrl *gomock.Controller) *MockScheduler_SyncProbesClient {
mock := &MockScheduler_SyncProbesClient{ctrl: ctrl}
mock.recorder = &MockScheduler_SyncProbesClientMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockScheduler_SyncProbesClient) EXPECT() *MockScheduler_SyncProbesClientMockRecorder {
return m.recorder
}
// CloseSend mocks base method.
func (m *MockScheduler_SyncProbesClient) CloseSend() error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CloseSend")
ret0, _ := ret[0].(error)
return ret0
}
// CloseSend indicates an expected call of CloseSend.
func (mr *MockScheduler_SyncProbesClientMockRecorder) CloseSend() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CloseSend", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).CloseSend))
}
// Context mocks base method.
func (m *MockScheduler_SyncProbesClient) Context() context.Context {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Context")
ret0, _ := ret[0].(context.Context)
return ret0
}
// Context indicates an expected call of Context.
func (mr *MockScheduler_SyncProbesClientMockRecorder) Context() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Context", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).Context))
}
// Header mocks base method.
func (m *MockScheduler_SyncProbesClient) Header() (metadata.MD, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Header")
ret0, _ := ret[0].(metadata.MD)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// Header indicates an expected call of Header.
func (mr *MockScheduler_SyncProbesClientMockRecorder) Header() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Header", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).Header))
}
// Recv mocks base method.
func (m *MockScheduler_SyncProbesClient) Recv() (*scheduler.SyncProbesResponse, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Recv")
ret0, _ := ret[0].(*scheduler.SyncProbesResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// Recv indicates an expected call of Recv.
func (mr *MockScheduler_SyncProbesClientMockRecorder) Recv() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Recv", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).Recv))
}
// RecvMsg mocks base method.
func (m_2 *MockScheduler_SyncProbesClient) RecvMsg(m any) error {
m_2.ctrl.T.Helper()
ret := m_2.ctrl.Call(m_2, "RecvMsg", m)
ret0, _ := ret[0].(error)
return ret0
}
// RecvMsg indicates an expected call of RecvMsg.
func (mr *MockScheduler_SyncProbesClientMockRecorder) RecvMsg(m any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RecvMsg", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).RecvMsg), m)
}
// Send mocks base method.
func (m *MockScheduler_SyncProbesClient) Send(arg0 *scheduler.SyncProbesRequest) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Send", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// Send indicates an expected call of Send.
func (mr *MockScheduler_SyncProbesClientMockRecorder) Send(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Send", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).Send), arg0)
}
// SendMsg mocks base method.
func (m_2 *MockScheduler_SyncProbesClient) SendMsg(m any) error {
m_2.ctrl.T.Helper()
ret := m_2.ctrl.Call(m_2, "SendMsg", m)
ret0, _ := ret[0].(error)
return ret0
}
// SendMsg indicates an expected call of SendMsg.
func (mr *MockScheduler_SyncProbesClientMockRecorder) SendMsg(m any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SendMsg", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).SendMsg), m)
}
// Trailer mocks base method.
func (m *MockScheduler_SyncProbesClient) Trailer() metadata.MD {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Trailer")
ret0, _ := ret[0].(metadata.MD)
return ret0
}
// Trailer indicates an expected call of Trailer.
func (mr *MockScheduler_SyncProbesClientMockRecorder) Trailer() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Trailer", reflect.TypeOf((*MockScheduler_SyncProbesClient)(nil).Trailer))
}
// MockSchedulerServer is a mock of SchedulerServer interface.
type MockSchedulerServer struct {
ctrl *gomock.Controller
recorder *MockSchedulerServerMockRecorder
isgomock struct{}
}
// MockSchedulerServerMockRecorder is the mock recorder for MockSchedulerServer.
@@ -639,24 +485,11 @@ func (mr *MockSchedulerServerMockRecorder) StatTask(arg0, arg1 any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StatTask", reflect.TypeOf((*MockSchedulerServer)(nil).StatTask), arg0, arg1)
}
// SyncProbes mocks base method.
func (m *MockSchedulerServer) SyncProbes(arg0 scheduler.Scheduler_SyncProbesServer) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SyncProbes", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// SyncProbes indicates an expected call of SyncProbes.
func (mr *MockSchedulerServerMockRecorder) SyncProbes(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SyncProbes", reflect.TypeOf((*MockSchedulerServer)(nil).SyncProbes), arg0)
}
// MockUnsafeSchedulerServer is a mock of UnsafeSchedulerServer interface.
type MockUnsafeSchedulerServer struct {
ctrl *gomock.Controller
recorder *MockUnsafeSchedulerServerMockRecorder
isgomock struct{}
}
// MockUnsafeSchedulerServerMockRecorder is the mock recorder for MockUnsafeSchedulerServer.
@@ -692,6 +525,7 @@ func (mr *MockUnsafeSchedulerServerMockRecorder) mustEmbedUnimplementedScheduler
type MockScheduler_ReportPieceResultServer struct {
ctrl *gomock.Controller
recorder *MockScheduler_ReportPieceResultServerMockRecorder
isgomock struct{}
}
// MockScheduler_ReportPieceResultServerMockRecorder is the mock recorder for MockScheduler_ReportPieceResultServer.
@@ -821,137 +655,3 @@ func (mr *MockScheduler_ReportPieceResultServerMockRecorder) SetTrailer(arg0 any
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetTrailer", reflect.TypeOf((*MockScheduler_ReportPieceResultServer)(nil).SetTrailer), arg0)
}
// MockScheduler_SyncProbesServer is a mock of Scheduler_SyncProbesServer interface.
type MockScheduler_SyncProbesServer struct {
ctrl *gomock.Controller
recorder *MockScheduler_SyncProbesServerMockRecorder
}
// MockScheduler_SyncProbesServerMockRecorder is the mock recorder for MockScheduler_SyncProbesServer.
type MockScheduler_SyncProbesServerMockRecorder struct {
mock *MockScheduler_SyncProbesServer
}
// NewMockScheduler_SyncProbesServer creates a new mock instance.
func NewMockScheduler_SyncProbesServer(ctrl *gomock.Controller) *MockScheduler_SyncProbesServer {
mock := &MockScheduler_SyncProbesServer{ctrl: ctrl}
mock.recorder = &MockScheduler_SyncProbesServerMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockScheduler_SyncProbesServer) EXPECT() *MockScheduler_SyncProbesServerMockRecorder {
return m.recorder
}
// Context mocks base method.
func (m *MockScheduler_SyncProbesServer) Context() context.Context {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Context")
ret0, _ := ret[0].(context.Context)
return ret0
}
// Context indicates an expected call of Context.
func (mr *MockScheduler_SyncProbesServerMockRecorder) Context() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Context", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).Context))
}
// Recv mocks base method.
func (m *MockScheduler_SyncProbesServer) Recv() (*scheduler.SyncProbesRequest, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Recv")
ret0, _ := ret[0].(*scheduler.SyncProbesRequest)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// Recv indicates an expected call of Recv.
func (mr *MockScheduler_SyncProbesServerMockRecorder) Recv() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Recv", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).Recv))
}
// RecvMsg mocks base method.
func (m_2 *MockScheduler_SyncProbesServer) RecvMsg(m any) error {
m_2.ctrl.T.Helper()
ret := m_2.ctrl.Call(m_2, "RecvMsg", m)
ret0, _ := ret[0].(error)
return ret0
}
// RecvMsg indicates an expected call of RecvMsg.
func (mr *MockScheduler_SyncProbesServerMockRecorder) RecvMsg(m any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RecvMsg", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).RecvMsg), m)
}
// Send mocks base method.
func (m *MockScheduler_SyncProbesServer) Send(arg0 *scheduler.SyncProbesResponse) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Send", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// Send indicates an expected call of Send.
func (mr *MockScheduler_SyncProbesServerMockRecorder) Send(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Send", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).Send), arg0)
}
// SendHeader mocks base method.
func (m *MockScheduler_SyncProbesServer) SendHeader(arg0 metadata.MD) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SendHeader", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// SendHeader indicates an expected call of SendHeader.
func (mr *MockScheduler_SyncProbesServerMockRecorder) SendHeader(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SendHeader", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).SendHeader), arg0)
}
// SendMsg mocks base method.
func (m_2 *MockScheduler_SyncProbesServer) SendMsg(m any) error {
m_2.ctrl.T.Helper()
ret := m_2.ctrl.Call(m_2, "SendMsg", m)
ret0, _ := ret[0].(error)
return ret0
}
// SendMsg indicates an expected call of SendMsg.
func (mr *MockScheduler_SyncProbesServerMockRecorder) SendMsg(m any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SendMsg", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).SendMsg), m)
}
// SetHeader mocks base method.
func (m *MockScheduler_SyncProbesServer) SetHeader(arg0 metadata.MD) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SetHeader", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// SetHeader indicates an expected call of SetHeader.
func (mr *MockScheduler_SyncProbesServerMockRecorder) SetHeader(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetHeader", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).SetHeader), arg0)
}
// SetTrailer mocks base method.
func (m *MockScheduler_SyncProbesServer) SetTrailer(arg0 metadata.MD) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetTrailer", arg0)
}
// SetTrailer indicates an expected call of SetTrailer.
func (mr *MockScheduler_SyncProbesServerMockRecorder) SetTrailer(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetTrailer", reflect.TypeOf((*MockScheduler_SyncProbesServer)(nil).SetTrailer), arg0)
}
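
The mock types in this diff all follow gomock's mock/recorder split: the mock struct implements the interface and forwards calls to a controller, while the companion `*MockRecorder` type is what tests reach through `EXPECT()` to register expectations. A dependency-free miniature of that split (the `MockGreeter` names are hypothetical illustrations, not part of gomock or this repository; real gomock routes calls through `*gomock.Controller` and reflection):

```go
package main

import "fmt"

// MockGreeter plays the role of the generated mock: it records calls
// and returns canned results.
type MockGreeter struct {
	calls    []string
	results  map[string]string
	recorder *MockGreeterRecorder
}

// MockGreeterRecorder mirrors the *MockRecorder types in the diff:
// tests use it to declare expected calls.
type MockGreeterRecorder struct{ mock *MockGreeter }

// NewMockGreeter wires mock and recorder together, like the generated
// NewMockXxx constructors above.
func NewMockGreeter() *MockGreeter {
	m := &MockGreeter{results: map[string]string{}}
	m.recorder = &MockGreeterRecorder{mock: m}
	return m
}

// EXPECT returns the recorder, matching the generated API shape.
func (m *MockGreeter) EXPECT() *MockGreeterRecorder { return m.recorder }

// Greet mocks the base method: record the call, return the canned
// result (generated code does this via m.ctrl.Call).
func (m *MockGreeter) Greet(name string) string {
	m.calls = append(m.calls, name)
	return m.results[name]
}

// Greet on the recorder registers an expectation and its result.
func (r *MockGreeterRecorder) Greet(name, result string) {
	r.mock.results[name] = result
}

func main() {
	m := NewMockGreeter()
	m.EXPECT().Greet("alice", "hello alice")
	fmt.Println(m.Greet("alice")) // hello alice
}
```

The new `isgomock struct{}` fields added throughout the diff are inert markers mockgen emits so tooling can recognize generated mocks; they carry no behavior.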

File diff suppressed because it is too large
File diff suppressed because it is too large

@@ -22,8 +22,6 @@ import "pkg/apis/common/v1/common.proto";
import "pkg/apis/errordetails/v1/errordetails.proto";
import "validate/validate.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
option go_package = "d7y.io/api/v2/pkg/apis/scheduler/v1;scheduler";
@@ -91,6 +89,8 @@ message PeerHost{
string location = 7;
// IDC where the peer host is located
string idc = 8;
// Port of proxy server.
int32 proxy_port = 9 [(validate.rules).int32 = {gte: 1024, lt: 65535}];
}
// PieceResult represents request of ReportPieceResult.
@@ -269,6 +269,8 @@ message AnnounceHostRequest{
uint64 scheduler_cluster_id = 17;
// Port of object storage server.
int32 object_storage_port = 18 [(validate.rules).int32 = {gte: 1024, lt: 65535, ignore_empty: true}];
// Port of proxy server.
int32 proxy_port = 19 [(validate.rules).int32 = {gte: 1024, lt: 65535, ignore_empty: true}];
}
// CPU Stat.
@@ -370,60 +372,6 @@ message Build {
string platform = 4;
}
// ProbeStartedRequest represents started request of SyncProbesRequest.
message ProbeStartedRequest {
}
// Probe information.
message Probe {
// Destination host metadata.
common.Host host = 1 [(validate.rules).message.required = true];
// RTT is the round-trip time sent via this pinger.
google.protobuf.Duration rtt = 2 [(validate.rules).duration.required = true];
// Probe create time.
google.protobuf.Timestamp created_at = 3 [(validate.rules).timestamp.required = true];
}
// ProbeFinishedRequest represents finished request of SyncProbesRequest.
message ProbeFinishedRequest {
// Probes information.
repeated Probe probes = 1 [(validate.rules).repeated = {min_items: 1}];
}
// FailedProbe information.
message FailedProbe {
// Destination host metadata.
common.Host host = 1 [(validate.rules).message.required = true];
// The description of probing failed.
string description = 2 [(validate.rules).string.min_len = 1];
}
// ProbeFailedRequest represents failed request of SyncProbesRequest.
message ProbeFailedRequest {
// Failed probes information.
repeated FailedProbe probes = 1 [(validate.rules).repeated = {min_items: 1}];
}
// SyncProbesRequest represents request of SyncProbes.
message SyncProbesRequest {
// Source host metadata.
common.Host host = 1 [(validate.rules).message.required = true];
oneof request {
option (validate.required) = true;
ProbeStartedRequest probe_started_request = 2;
ProbeFinishedRequest probe_finished_request = 3;
ProbeFailedRequest probe_failed_request = 4;
}
}
// SyncProbesResponse represents response of SyncProbes.
message SyncProbesResponse {
// Hosts needs to be probed.
repeated common.Host hosts = 1 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
}
// Scheduler RPC Service.
service Scheduler{
// RegisterPeerTask registers a peer into task.
@@ -449,7 +397,4 @@ service Scheduler{
// LeaveHost makes the peers leaving from host.
rpc LeaveHost(LeaveHostRequest)returns(google.protobuf.Empty);
// SyncProbes sync probes of the host.
rpc SyncProbes(stream SyncProbesRequest)returns(stream SyncProbesResponse);
}
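
The new `proxy_port` fields carry protoc-gen-validate rules; on `AnnounceHostRequest` the rule is `{gte: 1024, lt: 65535, ignore_empty: true}`. The generated `*.pb.validate.go` enforces roughly the following check (a hand-written approximation for illustration, not the generated code):

```go
package main

import (
	"errors"
	"fmt"
)

// validateProxyPort approximates the protoc-gen-validate rule
// {gte: 1024, lt: 65535, ignore_empty: true} attached to the new
// proxy_port field. The real check lives in generated code.
func validateProxyPort(port int32) error {
	if port == 0 {
		// ignore_empty: an unset (zero) port passes validation.
		return nil
	}
	if port < 1024 || port >= 65535 {
		// gte: 1024, lt: 65535 — note the upper bound is exclusive.
		return errors.New("proxy_port must be in [1024, 65535)")
	}
	return nil
}

func main() {
	fmt.Println(validateProxyPort(4001)) // <nil>
	fmt.Println(validateProxyPort(80))   // non-nil error
}
```

The `PeerHost.proxy_port` variant in this diff omits `ignore_empty`, so there the zero-value shortcut would not apply.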


@@ -39,8 +39,6 @@ type SchedulerClient interface {
AnnounceHost(ctx context.Context, in *AnnounceHostRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// LeaveHost makes the peers leaving from host.
LeaveHost(ctx context.Context, in *LeaveHostRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// SyncProbes sync probes of the host.
SyncProbes(ctx context.Context, opts ...grpc.CallOption) (Scheduler_SyncProbesClient, error)
}
type schedulerClient struct {
@@ -145,37 +143,6 @@ func (c *schedulerClient) LeaveHost(ctx context.Context, in *LeaveHostRequest, o
return out, nil
}
func (c *schedulerClient) SyncProbes(ctx context.Context, opts ...grpc.CallOption) (Scheduler_SyncProbesClient, error) {
stream, err := c.cc.NewStream(ctx, &Scheduler_ServiceDesc.Streams[1], "/scheduler.Scheduler/SyncProbes", opts...)
if err != nil {
return nil, err
}
x := &schedulerSyncProbesClient{stream}
return x, nil
}
type Scheduler_SyncProbesClient interface {
Send(*SyncProbesRequest) error
Recv() (*SyncProbesResponse, error)
grpc.ClientStream
}
type schedulerSyncProbesClient struct {
grpc.ClientStream
}
func (x *schedulerSyncProbesClient) Send(m *SyncProbesRequest) error {
return x.ClientStream.SendMsg(m)
}
func (x *schedulerSyncProbesClient) Recv() (*SyncProbesResponse, error) {
m := new(SyncProbesResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// SchedulerServer is the server API for Scheduler service.
// All implementations should embed UnimplementedSchedulerServer
// for forward compatibility
@@ -196,8 +163,6 @@ type SchedulerServer interface {
AnnounceHost(context.Context, *AnnounceHostRequest) (*emptypb.Empty, error)
// LeaveHost makes the peers leaving from host.
LeaveHost(context.Context, *LeaveHostRequest) (*emptypb.Empty, error)
// SyncProbes sync probes of the host.
SyncProbes(Scheduler_SyncProbesServer) error
}
// UnimplementedSchedulerServer should be embedded to have forward compatible implementations.
@@ -228,9 +193,6 @@ func (UnimplementedSchedulerServer) AnnounceHost(context.Context, *AnnounceHostR
func (UnimplementedSchedulerServer) LeaveHost(context.Context, *LeaveHostRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method LeaveHost not implemented")
}
func (UnimplementedSchedulerServer) SyncProbes(Scheduler_SyncProbesServer) error {
return status.Errorf(codes.Unimplemented, "method SyncProbes not implemented")
}
// UnsafeSchedulerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to SchedulerServer will
@@ -395,32 +357,6 @@ func _Scheduler_LeaveHost_Handler(srv interface{}, ctx context.Context, dec func
return interceptor(ctx, in, info, handler)
}
func _Scheduler_SyncProbes_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(SchedulerServer).SyncProbes(&schedulerSyncProbesServer{stream})
}
type Scheduler_SyncProbesServer interface {
Send(*SyncProbesResponse) error
Recv() (*SyncProbesRequest, error)
grpc.ServerStream
}
type schedulerSyncProbesServer struct {
grpc.ServerStream
}
func (x *schedulerSyncProbesServer) Send(m *SyncProbesResponse) error {
return x.ServerStream.SendMsg(m)
}
func (x *schedulerSyncProbesServer) Recv() (*SyncProbesRequest, error) {
m := new(SyncProbesRequest)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// Scheduler_ServiceDesc is the grpc.ServiceDesc for Scheduler service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -464,12 +400,6 @@ var Scheduler_ServiceDesc = grpc.ServiceDesc{
ServerStreams: true,
ClientStreams: true,
},
{
StreamName: "SyncProbes",
Handler: _Scheduler_SyncProbes_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "pkg/apis/scheduler/v1/scheduler.proto",
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -23,7 +23,6 @@ import "pkg/apis/errordetails/v2/errordetails.proto";
import "validate/validate.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
option go_package = "d7y.io/api/v2/pkg/apis/scheduler/v2;scheduler";
@@ -111,6 +110,7 @@ message DownloadPieceBackToSourceFailedRequest {
option (validate.required) = true;
errordetails.v2.Backend backend = 2;
errordetails.v2.Unknown unknown = 3;
}
}
@@ -224,76 +224,214 @@ message DeleteHostRequest{
string host_id = 1 [(validate.rules).string.min_len = 1];
}
// ProbeStartedRequest represents started request of SyncProbesRequest.
message ProbeStartedRequest {
// RegisterCachePeerRequest represents cache peer registered request of AnnounceCachePeerRequest.
message RegisterCachePeerRequest {
// Download url.
string url = 1 [(validate.rules).string.uri = true];
// Digest of the task content, for example blake3:xxx or sha256:yyy.
optional string digest = 2 [(validate.rules).string = {pattern: "^(md5:[a-fA-F0-9]{32}|sha1:[a-fA-F0-9]{40}|sha256:[a-fA-F0-9]{64}|sha512:[a-fA-F0-9]{128}|blake3:[a-fA-F0-9]{64}|crc32:[a-fA-F0-9]+)$", ignore_empty: true}];
// Range is the url range of the request. If the protocol is http, the range
// is set in the request header. For other protocols, the range is set in the
// range field.
optional common.v2.Range range = 3;
// Task type.
common.v2.TaskType type = 4 [(validate.rules).enum.defined_only = true];
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Peer priority.
common.v2.Priority priority = 7 [(validate.rules).enum.defined_only = true];
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 8;
// Task request headers.
map<string, string> request_header = 9;
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB).
optional uint64 piece_length = 10 [(validate.rules).uint64 = {gte: 4194304, ignore_empty: true}];
// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 17;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 21;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 22 [(validate.rules).string = {ip: true, ignore_empty: true}];
}
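The comment on `content_for_calculating_task_id` and `filtered_query_params` describes how a task id collapses signed variants of the same URL to one task. A minimal sketch of that idea, assuming a sha256 over the filtered URL plus the other inputs (the exact hash input Dragonfly uses is an assumption here):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/url"
	"strings"
)

// taskID sketches task-id derivation: filtered query params are stripped from
// the URL so that differently signed requests for the same object map to the
// same task. The concatenation-and-sha256 scheme below is illustrative only.
func taskID(rawURL string, pieceLength uint64, tag, application string, filtered []string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	drop := make(map[string]bool, len(filtered))
	for _, k := range filtered {
		drop[k] = true
	}
	q := u.Query()
	for k := range q {
		if drop[k] {
			q.Del(k)
		}
	}
	// Encode sorts the remaining keys, giving a deterministic URL form.
	u.RawQuery = q.Encode()
	h := sha256.Sum256([]byte(strings.Join([]string{
		u.String(), fmt.Sprint(pieceLength), tag, application,
	}, "|")))
	return hex.EncodeToString(h[:]), nil
}

func main() {
	filtered := []string{"Signature", "Expires", "ns"}
	a, _ := taskID("http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io", 4194304, "", "", filtered)
	b, _ := taskID("http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io", 4194304, "", "", filtered)
	fmt.Println(a == b) // both URLs collapse to the same task id
}
```

With `["Signature", "Expires", "ns"]` filtered, the two example URLs from the comment produce identical ids, which is the behavior the field documentation promises.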
// Probe information.
message Probe {
// Destination host metadata.
common.v2.Host host = 1 [(validate.rules).message.required = true];
// RTT is the round-trip time sent via this pinger.
google.protobuf.Duration rtt = 2 [(validate.rules).duration.required = true];
// Probe create time.
google.protobuf.Timestamp created_at = 3 [(validate.rules).timestamp.required = true];
// DownloadCachePeerStartedRequest represents cache peer download started request of AnnounceCachePeerRequest.
message DownloadCachePeerStartedRequest {
}
// ProbeFinishedRequest represents finished request of SyncProbesRequest.
message ProbeFinishedRequest {
// Probes information.
repeated Probe probes = 1 [(validate.rules).repeated = {min_items: 1}];
// DownloadCachePeerBackToSourceStartedRequest represents cache peer download back-to-source started request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceStartedRequest {
// The description of the back-to-source reason.
optional string description = 1 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// FailedProbe information.
message FailedProbe {
// Destination host metadata.
common.v2.Host host = 1 [(validate.rules).message.required = true];
// The description of probing failed.
// RescheduleCachePeerRequest represents reschedule request of AnnounceCachePeerRequest.
message RescheduleCachePeerRequest {
// Candidate parent ids.
repeated common.v2.CachePeer candidate_parents = 1;
// The description of the reschedule reason.
optional string description = 2 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// ProbeFailedRequest represents failed request of SyncProbesRequest.
message ProbeFailedRequest {
// Failed probes information.
repeated FailedProbe probes = 1 [(validate.rules).repeated = {min_items: 1}];
// DownloadCachePeerFinishedRequest represents cache peer download finished request of AnnounceCachePeerRequest.
message DownloadCachePeerFinishedRequest {
// Total content length.
uint64 content_length = 1;
// Total piece count.
uint32 piece_count = 2;
}
// SyncProbesRequest represents request of SyncProbes.
message SyncProbesRequest {
// Source host metadata.
common.v2.Host host = 1 [(validate.rules).message.required = true];
oneof request {
option (validate.required) = true;
ProbeStartedRequest probe_started_request = 2;
ProbeFinishedRequest probe_finished_request = 3;
ProbeFailedRequest probe_failed_request = 4;
}
// DownloadCachePeerBackToSourceFinishedRequest represents cache peer download back-to-source finished request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceFinishedRequest {
// Total content length.
uint64 content_length = 1;
// Total piece count.
uint32 piece_count = 2;
}
// SyncProbesResponse represents response of SyncProbes.
message SyncProbesResponse {
// Hosts needs to be probed.
repeated common.v2.Host hosts = 1 [(validate.rules).repeated = {min_items: 1, ignore_empty: true}];
// DownloadCachePeerFailedRequest represents cache peer download failed request of AnnounceCachePeerRequest.
message DownloadCachePeerFailedRequest {
// The description of the download failed.
optional string description = 1 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// RegisterPersistentCachePeerRequest represents persistent cache peer registered request of AnnouncePersistentCachePeerRequest.
message RegisterPersistentCachePeerRequest {
// DownloadCachePeerBackToSourceFailedRequest represents cache peer download back-to-source failed request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceFailedRequest {
// The description of the download back-to-source failed.
optional string description = 1 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
}
// AnnounceCachePeerRequest represents request of AnnounceCachePeer.
message AnnounceCachePeerRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Peer id.
string peer_id = 3 [(validate.rules).string.min_len = 1];
oneof request {
option (validate.required) = true;
RegisterCachePeerRequest register_cache_peer_request = 4;
DownloadCachePeerStartedRequest download_cache_peer_started_request = 5;
DownloadCachePeerBackToSourceStartedRequest download_cache_peer_back_to_source_started_request = 6;
RescheduleCachePeerRequest reschedule_cache_peer_request = 7;
DownloadCachePeerFinishedRequest download_cache_peer_finished_request = 8;
DownloadCachePeerBackToSourceFinishedRequest download_cache_peer_back_to_source_finished_request = 9;
DownloadCachePeerFailedRequest download_cache_peer_failed_request = 10;
DownloadCachePeerBackToSourceFailedRequest download_cache_peer_back_to_source_failed_request = 11;
DownloadPieceFinishedRequest download_piece_finished_request = 12;
DownloadPieceBackToSourceFinishedRequest download_piece_back_to_source_finished_request = 13;
DownloadPieceFailedRequest download_piece_failed_request = 14;
DownloadPieceBackToSourceFailedRequest download_piece_back_to_source_failed_request = 15;
}
}
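`AnnounceCachePeerRequest` wraps its many request variants in a `oneof`; in the generated Go code each branch becomes a distinct wrapper type, and a server handler dispatches on the concrete type of each stream message. A simplified sketch of that dispatch, using local stand-in types rather than the generated ones:

```go
package main

import "fmt"

// announceCachePeerRequest mimics the oneof: each branch is a distinct type
// implementing a marker method. These are stand-ins for the generated types.
type announceCachePeerRequest interface{ isAnnounceRequest() }

type registerCachePeer struct{ URL string }
type downloadCachePeerStarted struct{}
type downloadCachePeerFinished struct{ ContentLength uint64 }

func (registerCachePeer) isAnnounceRequest()         {}
func (downloadCachePeerStarted) isAnnounceRequest()  {}
func (downloadCachePeerFinished) isAnnounceRequest() {}

// handle dispatches one stream message the way a scheduler handler would:
// a type switch over the oneof branches.
func handle(req announceCachePeerRequest) string {
	switch r := req.(type) {
	case registerCachePeer:
		return "register: " + r.URL
	case downloadCachePeerStarted:
		return "download started"
	case downloadCachePeerFinished:
		return fmt.Sprintf("finished: %d bytes", r.ContentLength)
	default:
		return "unknown request"
	}
}

func main() {
	fmt.Println(handle(registerCachePeer{URL: "http://example.com/xyz"}))
	fmt.Println(handle(downloadCachePeerFinished{ContentLength: 4194304}))
}
```

The real generated code uses pointer wrapper structs (e.g. `*AnnounceCachePeerRequest_RegisterCachePeerRequest`) around the same pattern.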
// EmptyCacheTaskResponse represents empty cache task response of AnnounceCachePeerResponse.
message EmptyCacheTaskResponse {
}
// NormalCacheTaskResponse represents normal cache task response of AnnounceCachePeerResponse.
message NormalCacheTaskResponse {
// Candidate parents.
repeated common.v2.CachePeer candidate_parents = 1 [(validate.rules).repeated.min_items = 1];
}
// AnnounceCachePeerResponse represents response of AnnounceCachePeer.
message AnnounceCachePeerResponse {
oneof response {
option (validate.required) = true;
EmptyCacheTaskResponse empty_cache_task_response = 1;
NormalCacheTaskResponse normal_cache_task_response = 2;
NeedBackToSourceResponse need_back_to_source_response = 3;
}
}
// StatCachePeerRequest represents request of StatCachePeer.
message StatCachePeerRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Peer id.
string peer_id = 3 [(validate.rules).string.min_len = 1];
}
// DeleteCachePeerRequest represents request of DeleteCachePeer.
message DeleteCachePeerRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
// Peer id.
string peer_id = 3 [(validate.rules).string.min_len = 1];
}
// StatCacheTaskRequest represents request of StatCacheTask.
message StatCacheTaskRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
}
// DeleteCacheTaskRequest represents request of DeleteCacheTask.
message DeleteCacheTaskRequest {
// Host id.
string host_id = 1 [(validate.rules).string.min_len = 1];
// Task id.
string task_id = 2 [(validate.rules).string.min_len = 1];
}
// RegisterPersistentCachePeerRequest represents persistent cache peer registered request of AnnouncePersistentCachePeerRequest.
message RegisterPersistentCachePeerRequest {
// Persistent represents whether the persistent cache task is persistent.
// If the persistent cache task is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection.
bool persistent = 1;
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 3;
optional string tag = 2;
// Application of task.
optional string application = 4;
// Task piece length.
uint64 piece_length = 5 [(validate.rules).uint64.gte = 1];
optional string application = 3;
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB)
uint64 piece_length = 4 [(validate.rules).uint64.gte = 4194304];
// File path to be exported.
string output_path = 6 [(validate.rules).string.min_len = 1];
optional string output_path = 5 [(validate.rules).string = {min_len: 1, ignore_empty: true}];
// Download timeout.
optional google.protobuf.Duration timeout = 7;
optional google.protobuf.Duration timeout = 6;
}
// DownloadPersistentCachePeerStartedRequest represents persistent cache peer download started request of AnnouncePersistentCachePeerRequest.
@@ -397,12 +535,14 @@ message UploadPersistentCacheTaskStartedRequest {
optional string tag = 5;
// Application of task.
optional string application = 6;
// Task piece length.
uint64 piece_length = 7 [(validate.rules).uint64.gte = 1];
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB)
uint64 piece_length = 7 [(validate.rules).uint64.gte = 4194304];
// Task content length.
uint64 content_length = 8;
// Task piece count.
uint32 piece_count = 9;
// TTL of the persistent cache task.
google.protobuf.Duration ttl = 8 [(validate.rules).duration = {gte:{seconds: 60}, lte:{seconds: 604800}}];
// Upload timeout.
optional google.protobuf.Duration timeout = 9;
google.protobuf.Duration ttl = 10 [(validate.rules).duration = {gte:{seconds: 300}, lte:{seconds: 604800}}];
}
// UploadPersistentCacheTaskFinishedRequest represents upload persistent cache task finished request of UploadPersistentCacheTaskFinishedRequest.
@@ -443,6 +583,145 @@ message DeletePersistentCacheTaskRequest {
string task_id = 2 [(validate.rules).string.min_len = 1];
}
// PreheatImageRequest represents request of PreheatImage.
message PreheatImageRequest {
// URL is the image manifest url for preheating.
string url = 1 [(validate.rules).string.uri = true];
// Piece Length is the piece length (bytes) for downloading the file. The value must be
// greater than or equal to 4 MiB (4,194,304 bytes) and less than or equal to 64 MiB
// (67,108,864 bytes), for example: 4194304 (4 MiB), 8388608 (8 MiB). If the piece length
// is not specified, it will be calculated according to the file size.
optional uint64 piece_length = 2 [(validate.rules).uint64 = {gte: 4194304, lte: 67108864, ignore_empty: true}];
// Tag is the tag for preheating.
optional string tag = 3;
// Application is the application string for preheating.
optional string application = 4;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 5;
// Header is the http headers for authentication.
map<string, string> header = 6;
// Username is the username for authentication.
optional string username = 7;
// Password is the password for authentication.
optional string password = 8;
// Platform is the platform for preheating, such as linux/amd64, linux/arm64, etc.
optional string platform = 9;
// Scope is the scope for preheating, it can be one of the following values:
// - single_seed_peer: preheat from a single seed peer.
// - all_seed_peers: preheat from all seed peers.
// - all_peers: preheat from all available peers.
string scope = 10 [(validate.rules).string.pattern = "^(single_seed_peer|all_seed_peers|all_peers)$"];
// IPs is a list of specific peer IPs for preheating.
// This field has the highest priority: if provided, both 'Count' and 'Percentage' will be ignored.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
repeated string ips = 11;
// Percentage is the percentage of available peers to preheat.
// This field has the lowest priority and is only used if both 'IPs' and 'Count' are not provided.
// It must be a value between 1 and 100 (inclusive) if provided.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
optional uint32 percentage = 12 [(validate.rules).uint32 = {gte: 1, lte: 100, ignore_empty: true}];
// Count is the desired number of peers to preheat.
// This field is used only when 'IPs' is not specified. It has priority over 'Percentage'.
// It must be a value between 1 and 200 (inclusive) if provided.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
optional uint32 count = 13 [(validate.rules).uint32 = {gte: 1, lte: 200, ignore_empty: true}];
// ConcurrentTaskCount specifies the maximum number of tasks (e.g., image layers) to preheat concurrently.
// For example, if preheating 100 layers with ConcurrentTaskCount set to 10, up to 10 layers are processed simultaneously.
// If ConcurrentPeerCount is 10 for 1000 peers, each layer is preheated by 10 peers concurrently.
// Default is 8, maximum is 100.
optional int64 concurrent_task_count = 14 [(validate.rules).int64 = {gte: 1, lte: 100, ignore_empty: true}];
// ConcurrentPeerCount specifies the maximum number of peers to preheat concurrently for a single task (e.g., an image layer).
// For example, if preheating a layer with ConcurrentPeerCount set to 10, up to 10 peers process that layer simultaneously.
// Default is 500, maximum is 1000.
optional int64 concurrent_peer_count = 15 [(validate.rules).int64 = {gte: 1, lte: 1000, ignore_empty: true}];
// Timeout is the timeout for preheating, default is 30 minutes.
optional google.protobuf.Duration timeout = 16;
// Preheat priority.
common.v2.Priority priority = 17 [(validate.rules).enum.defined_only = true];
// certificate_chain is the client certs with DER format for the backend client.
repeated bytes certificate_chain = 18;
// insecure_skip_verify indicates whether to skip TLS verification.
bool insecure_skip_verify = 19;
}
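The `ips`, `count`, and `percentage` fields of `PreheatImageRequest` carry an explicit precedence: IPs win outright, then Count, then Percentage. A small sketch of that selection logic under the documented rules (the helper and its plain-string peer type are illustrative, not the scheduler's implementation):

```go
package main

import "fmt"

// selectPreheatPeers applies the documented precedence for the all_peers and
// all_seed_peers scopes: explicit IPs first, then Count, then Percentage.
// If none is set, all available peers are used.
func selectPreheatPeers(available []string, ips []string, count, percentage *uint32) []string {
	if len(ips) > 0 {
		// Highest priority: keep only the explicitly requested peers;
		// Count and Percentage are ignored.
		keep := make(map[string]bool, len(ips))
		for _, ip := range ips {
			keep[ip] = true
		}
		var out []string
		for _, ip := range available {
			if keep[ip] {
				out = append(out, ip)
			}
		}
		return out
	}
	if count != nil {
		// Count has priority over Percentage.
		n := int(*count)
		if n > len(available) {
			n = len(available)
		}
		return available[:n]
	}
	if percentage != nil {
		// Lowest priority: a percentage of the available peers, at least one.
		n := len(available) * int(*percentage) / 100
		if n < 1 {
			n = 1
		}
		return available[:n]
	}
	return available
}

func main() {
	peers := []string{"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"}
	count := uint32(3)
	pct := uint32(50)
	// Count takes precedence over Percentage when both are set.
	fmt.Println(len(selectPreheatPeers(peers, nil, &count, &pct))) // 3
}
```

Using pointers for `count` and `percentage` mirrors how proto3 `optional` scalar fields surface in generated Go code.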
// StatImageRequest represents request of StatImage.
message StatImageRequest {
// URL is the image manifest url of the image to stat.
string url = 1 [(validate.rules).string.uri = true];
// Piece Length is the piece length (bytes) for downloading the file. The value must be
// greater than or equal to 4 MiB (4,194,304 bytes) and less than or equal to 64 MiB
// (67,108,864 bytes), for example: 4194304 (4 MiB), 8388608 (8 MiB). If the piece length
// is not specified, it will be calculated according to the file size.
optional uint64 piece_length = 2 [(validate.rules).uint64 = {gte: 4194304, lte: 67108864, ignore_empty: true}];
// Tag is the tag for preheating.
optional string tag = 3;
// Application is the application string for preheating.
optional string application = 4;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 5;
// Header is the http headers for authentication.
map<string, string> header = 6;
// Username is the username for authentication.
optional string username = 7;
// Password is the password for authentication.
optional string password = 8;
// Platform is the platform for preheating, such as linux/amd64, linux/arm64, etc.
optional string platform = 9;
// ConcurrentLayerCount specifies the maximum number of layers to get concurrently.
// For example, if statting 100 layers with ConcurrentLayerCount set to 10, up to 10 layers are processed simultaneously.
// If ConcurrentPeerCount is 10 for 1000 peers, each layer is queried on 10 peers concurrently.
// Default is 8, maximum is 100.
optional int64 concurrent_layer_count = 10 [(validate.rules).int64 = {gte: 1, lte: 100, ignore_empty: true}];
// ConcurrentPeerCount specifies the maximum number of peers queried concurrently for a single task (e.g., an image layer).
// For example, if statting a layer with ConcurrentPeerCount set to 10, up to 10 peers process that layer simultaneously.
// Default is 500, maximum is 1000.
optional int64 concurrent_peer_count = 11 [(validate.rules).int64 = {gte: 1, lte: 1000, ignore_empty: true}];
// Timeout is the timeout for the stat operation, default is 30 minutes.
optional google.protobuf.Duration timeout = 12;
// certificate_chain is the client certs with DER format for the backend client.
repeated bytes certificate_chain = 13;
// insecure_skip_verify indicates whether to skip TLS verification.
bool insecure_skip_verify = 14;
}
// StatImageResponse represents response of StatImage.
message StatImageResponse {
// Image is the image information.
Image image = 1;
// Peers is the peers that have downloaded the image.
repeated PeerImage peers = 2;
}
// PeerImage represents a peer in the get image job.
message PeerImage {
// IP is the IP address of the peer.
string ip = 1 [(validate.rules).string.ip = true];
// Hostname is the hostname of the peer.
string hostname = 2 [(validate.rules).string.min_len = 1];
// CachedLayers is the list of layers that the peer has downloaded.
repeated Layer cached_layers = 3;
}
// Image represents the image information.
message Image {
// Layers is the list of layers of the image.
repeated Layer layers = 1;
}
// Layer represents a layer of the image.
message Layer {
// URL is the URL of the layer.
string url = 1 [(validate.rules).string.uri = true];
}
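The `StatImage` response pairs the image's layer list with the layers each peer has cached, so a caller can compute per-peer coverage. A sketch of that computation, using local structs as stand-ins for the generated `Layer` and `PeerImage` types:

```go
package main

import "fmt"

// Layer and PeerImage mirror the StatImage response messages just enough for
// the example; they are stand-ins for the generated types.
type Layer struct{ URL string }

type PeerImage struct {
	IP           string
	CachedLayers []Layer
}

// missingLayers returns the image layers a peer has not yet cached,
// comparing layers by URL.
func missingLayers(image []Layer, peer PeerImage) []Layer {
	cached := make(map[string]bool, len(peer.CachedLayers))
	for _, l := range peer.CachedLayers {
		cached[l.URL] = true
	}
	var missing []Layer
	for _, l := range image {
		if !cached[l.URL] {
			missing = append(missing, l)
		}
	}
	return missing
}

func main() {
	image := []Layer{{"https://registry.example.com/blobs/a"}, {"https://registry.example.com/blobs/b"}}
	peer := PeerImage{IP: "10.0.0.1", CachedLayers: []Layer{{"https://registry.example.com/blobs/a"}}}
	fmt.Println(len(missingLayers(image, peer))) // 1 layer still to download
}
```

A caller could use this to decide whether a subsequent `PreheatImage` is worthwhile for a given set of peers.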
// Scheduler RPC Service.
service Scheduler {
// AnnouncePeer announces peer to scheduler.
@@ -469,8 +748,20 @@ service Scheduler {
// DeleteHost releases host in scheduler.
rpc DeleteHost(DeleteHostRequest)returns(google.protobuf.Empty);
// SyncProbes sync probes of the host.
rpc SyncProbes(stream SyncProbesRequest)returns(stream SyncProbesResponse);
// AnnounceCachePeer announces cache peer to scheduler.
rpc AnnounceCachePeer(stream AnnounceCachePeerRequest) returns(stream AnnounceCachePeerResponse);
// Checks information of cache peer.
rpc StatCachePeer(StatCachePeerRequest)returns(common.v2.CachePeer);
// DeleteCachePeer releases cache peer in scheduler.
rpc DeleteCachePeer(DeleteCachePeerRequest)returns(google.protobuf.Empty);
// Checks information of cache task.
rpc StatCacheTask(StatCacheTaskRequest)returns(common.v2.CacheTask);
// DeleteCacheTask releases cache task in scheduler.
rpc DeleteCacheTask(DeleteCacheTaskRequest)returns(google.protobuf.Empty);
// AnnouncePersistentCachePeer announces persistent cache peer to scheduler.
rpc AnnouncePersistentCachePeer(stream AnnouncePersistentCachePeerRequest) returns(stream AnnouncePersistentCachePeerResponse);
@@ -495,4 +786,23 @@ service Scheduler {
// DeletePersistentCacheTask releases persistent cache task in scheduler.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest)returns(google.protobuf.Empty);
// PreheatImage synchronously resolves an image manifest and triggers an asynchronous preheat task.
//
// This is a blocking call. The RPC will not return until the server has completed the
// initial synchronous work: resolving the image manifest and preparing all layer URLs.
//
// After this call successfully returns, a scheduler on the server begins the actual
// preheating process, instructing peers to download the layers in the background.
//
// A successful response (google.protobuf.Empty) confirms that the preparation is complete
// and the asynchronous download task has been scheduled.
rpc PreheatImage(PreheatImageRequest)returns(google.protobuf.Empty);
// StatImage provides detailed status for a container image's distribution in peers.
//
// This is a blocking call that first resolves the image manifest and then queries
// all peers to collect the image's download state across the network.
// The response includes both layer information and the status on each peer.
rpc StatImage(StatImageRequest)returns(StatImageResponse);
}


@@ -40,8 +40,16 @@ type SchedulerClient interface {
ListHosts(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (*ListHostsResponse, error)
// DeleteHost releases host in scheduler.
DeleteHost(ctx context.Context, in *DeleteHostRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// SyncProbes sync probes of the host.
SyncProbes(ctx context.Context, opts ...grpc.CallOption) (Scheduler_SyncProbesClient, error)
// AnnounceCachePeer announces cache peer to scheduler.
AnnounceCachePeer(ctx context.Context, opts ...grpc.CallOption) (Scheduler_AnnounceCachePeerClient, error)
// Checks information of cache peer.
StatCachePeer(ctx context.Context, in *StatCachePeerRequest, opts ...grpc.CallOption) (*v2.CachePeer, error)
// DeleteCachePeer releases cache peer in scheduler.
DeleteCachePeer(ctx context.Context, in *DeleteCachePeerRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// Checks information of cache task.
StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error)
// DeleteCacheTask releases cache task in scheduler.
DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// AnnouncePersistentCachePeer announces persistent cache peer to scheduler.
AnnouncePersistentCachePeer(ctx context.Context, opts ...grpc.CallOption) (Scheduler_AnnouncePersistentCachePeerClient, error)
// Checks information of persistent cache peer.
@@ -58,6 +66,23 @@ type SchedulerClient interface {
StatPersistentCacheTask(ctx context.Context, in *StatPersistentCacheTaskRequest, opts ...grpc.CallOption) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask releases persistent cache task in scheduler.
DeletePersistentCacheTask(ctx context.Context, in *DeletePersistentCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// PreheatImage synchronously resolves an image manifest and triggers an asynchronous preheat task.
//
// This is a blocking call. The RPC will not return until the server has completed the
// initial synchronous work: resolving the image manifest and preparing all layer URLs.
//
// After this call successfully returns, a scheduler on the server begins the actual
// preheating process, instructing peers to download the layers in the background.
//
// A successful response (google.protobuf.Empty) confirms that the preparation is complete
// and the asynchronous download task has been scheduled.
PreheatImage(ctx context.Context, in *PreheatImageRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// StatImage provides detailed status for a container image's distribution in peers.
//
// This is a blocking call that first resolves the image manifest and then queries
// all peers to collect the image's download state across the network.
// The response includes both layer information and the status on each peer.
StatImage(ctx context.Context, in *StatImageRequest, opts ...grpc.CallOption) (*StatImageResponse, error)
}
type schedulerClient struct {
@@ -162,37 +187,73 @@ func (c *schedulerClient) DeleteHost(ctx context.Context, in *DeleteHostRequest,
return out, nil
}
func (c *schedulerClient) SyncProbes(ctx context.Context, opts ...grpc.CallOption) (Scheduler_SyncProbesClient, error) {
stream, err := c.cc.NewStream(ctx, &Scheduler_ServiceDesc.Streams[1], "/scheduler.v2.Scheduler/SyncProbes", opts...)
func (c *schedulerClient) AnnounceCachePeer(ctx context.Context, opts ...grpc.CallOption) (Scheduler_AnnounceCachePeerClient, error) {
stream, err := c.cc.NewStream(ctx, &Scheduler_ServiceDesc.Streams[1], "/scheduler.v2.Scheduler/AnnounceCachePeer", opts...)
if err != nil {
return nil, err
}
x := &schedulerSyncProbesClient{stream}
x := &schedulerAnnounceCachePeerClient{stream}
return x, nil
}
type Scheduler_SyncProbesClient interface {
Send(*SyncProbesRequest) error
Recv() (*SyncProbesResponse, error)
type Scheduler_AnnounceCachePeerClient interface {
Send(*AnnounceCachePeerRequest) error
Recv() (*AnnounceCachePeerResponse, error)
grpc.ClientStream
}
type schedulerSyncProbesClient struct {
type schedulerAnnounceCachePeerClient struct {
grpc.ClientStream
}
func (x *schedulerSyncProbesClient) Send(m *SyncProbesRequest) error {
func (x *schedulerAnnounceCachePeerClient) Send(m *AnnounceCachePeerRequest) error {
return x.ClientStream.SendMsg(m)
}
func (x *schedulerSyncProbesClient) Recv() (*SyncProbesResponse, error) {
m := new(SyncProbesResponse)
func (x *schedulerAnnounceCachePeerClient) Recv() (*AnnounceCachePeerResponse, error) {
m := new(AnnounceCachePeerResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *schedulerClient) StatCachePeer(ctx context.Context, in *StatCachePeerRequest, opts ...grpc.CallOption) (*v2.CachePeer, error) {
out := new(v2.CachePeer)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/StatCachePeer", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *schedulerClient) DeleteCachePeer(ctx context.Context, in *DeleteCachePeerRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/DeleteCachePeer", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *schedulerClient) StatCacheTask(ctx context.Context, in *StatCacheTaskRequest, opts ...grpc.CallOption) (*v2.CacheTask, error) {
out := new(v2.CacheTask)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/StatCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *schedulerClient) DeleteCacheTask(ctx context.Context, in *DeleteCacheTaskRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/DeleteCacheTask", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *schedulerClient) AnnouncePersistentCachePeer(ctx context.Context, opts ...grpc.CallOption) (Scheduler_AnnouncePersistentCachePeerClient, error) {
stream, err := c.cc.NewStream(ctx, &Scheduler_ServiceDesc.Streams[2], "/scheduler.v2.Scheduler/AnnouncePersistentCachePeer", opts...)
if err != nil {
@@ -287,6 +348,24 @@ func (c *schedulerClient) DeletePersistentCacheTask(ctx context.Context, in *Del
return out, nil
}
func (c *schedulerClient) PreheatImage(ctx context.Context, in *PreheatImageRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
out := new(emptypb.Empty)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/PreheatImage", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *schedulerClient) StatImage(ctx context.Context, in *StatImageRequest, opts ...grpc.CallOption) (*StatImageResponse, error) {
out := new(StatImageResponse)
err := c.cc.Invoke(ctx, "/scheduler.v2.Scheduler/StatImage", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// SchedulerServer is the server API for Scheduler service.
// All implementations should embed UnimplementedSchedulerServer
// for forward compatibility
@@ -307,8 +386,16 @@ type SchedulerServer interface {
ListHosts(context.Context, *emptypb.Empty) (*ListHostsResponse, error)
// DeleteHost releases host in scheduler.
DeleteHost(context.Context, *DeleteHostRequest) (*emptypb.Empty, error)
// SyncProbes sync probes of the host.
SyncProbes(Scheduler_SyncProbesServer) error
// AnnounceCachePeer announces cache peer to scheduler.
AnnounceCachePeer(Scheduler_AnnounceCachePeerServer) error
// Checks information of cache peer.
StatCachePeer(context.Context, *StatCachePeerRequest) (*v2.CachePeer, error)
// DeleteCachePeer releases cache peer in scheduler.
DeleteCachePeer(context.Context, *DeleteCachePeerRequest) (*emptypb.Empty, error)
// Checks information of cache task.
StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error)
// DeleteCacheTask releases cache task in scheduler.
DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error)
// AnnouncePersistentCachePeer announces persistent cache peer to scheduler.
AnnouncePersistentCachePeer(Scheduler_AnnouncePersistentCachePeerServer) error
// Checks information of persistent cache peer.
@ -325,6 +412,23 @@ type SchedulerServer interface {
StatPersistentCacheTask(context.Context, *StatPersistentCacheTaskRequest) (*v2.PersistentCacheTask, error)
// DeletePersistentCacheTask releases persistent cache task in scheduler.
DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error)
// PreheatImage synchronously resolves an image manifest and triggers an asynchronous preheat task.
//
// This is a blocking call. The RPC will not return until the server has completed the
// initial synchronous work: resolving the image manifest and preparing all layer URLs.
//
// After this call successfully returns, a scheduler on the server begins the actual
// preheating process, instructing peers to download the layers in the background.
//
// A successful response (google.protobuf.Empty) confirms that the preparation is complete
// and the asynchronous download task has been scheduled.
PreheatImage(context.Context, *PreheatImageRequest) (*emptypb.Empty, error)
// StatImage provides detailed status for a container image's distribution in peers.
//
// This is a blocking call that first resolves the image manifest and then queries
// all peers to collect the image's download state across the network.
// The response includes both layer information and the status on each peer.
StatImage(context.Context, *StatImageRequest) (*StatImageResponse, error)
}
// UnimplementedSchedulerServer should be embedded to have forward compatible implementations.
@ -355,8 +459,20 @@ func (UnimplementedSchedulerServer) ListHosts(context.Context, *emptypb.Empty) (
func (UnimplementedSchedulerServer) DeleteHost(context.Context, *DeleteHostRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteHost not implemented")
}
func (UnimplementedSchedulerServer) AnnounceCachePeer(Scheduler_AnnounceCachePeerServer) error {
return status.Errorf(codes.Unimplemented, "method AnnounceCachePeer not implemented")
}
func (UnimplementedSchedulerServer) StatCachePeer(context.Context, *StatCachePeerRequest) (*v2.CachePeer, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatCachePeer not implemented")
}
func (UnimplementedSchedulerServer) DeleteCachePeer(context.Context, *DeleteCachePeerRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteCachePeer not implemented")
}
func (UnimplementedSchedulerServer) StatCacheTask(context.Context, *StatCacheTaskRequest) (*v2.CacheTask, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatCacheTask not implemented")
}
func (UnimplementedSchedulerServer) DeleteCacheTask(context.Context, *DeleteCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeleteCacheTask not implemented")
}
func (UnimplementedSchedulerServer) AnnouncePersistentCachePeer(Scheduler_AnnouncePersistentCachePeerServer) error {
return status.Errorf(codes.Unimplemented, "method AnnouncePersistentCachePeer not implemented")
@ -382,6 +498,12 @@ func (UnimplementedSchedulerServer) StatPersistentCacheTask(context.Context, *St
func (UnimplementedSchedulerServer) DeletePersistentCacheTask(context.Context, *DeletePersistentCacheTaskRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeletePersistentCacheTask not implemented")
}
func (UnimplementedSchedulerServer) PreheatImage(context.Context, *PreheatImageRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method PreheatImage not implemented")
}
func (UnimplementedSchedulerServer) StatImage(context.Context, *StatImageRequest) (*StatImageResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method StatImage not implemented")
}
// UnsafeSchedulerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to SchedulerServer will
@ -546,32 +668,104 @@ func _Scheduler_DeleteHost_Handler(srv interface{}, ctx context.Context, dec fun
return interceptor(ctx, in, info, handler)
}
func _Scheduler_AnnounceCachePeer_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(SchedulerServer).AnnounceCachePeer(&schedulerAnnounceCachePeerServer{stream})
}
type Scheduler_AnnounceCachePeerServer interface {
Send(*AnnounceCachePeerResponse) error
Recv() (*AnnounceCachePeerRequest, error)
grpc.ServerStream
}
type schedulerAnnounceCachePeerServer struct {
grpc.ServerStream
}
func (x *schedulerAnnounceCachePeerServer) Send(m *AnnounceCachePeerResponse) error {
return x.ServerStream.SendMsg(m)
}
func (x *schedulerAnnounceCachePeerServer) Recv() (*AnnounceCachePeerRequest, error) {
m := new(AnnounceCachePeerRequest)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func _Scheduler_StatCachePeer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatCachePeerRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).StatCachePeer(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/StatCachePeer",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).StatCachePeer(ctx, req.(*StatCachePeerRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Scheduler_DeleteCachePeer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteCachePeerRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).DeleteCachePeer(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/DeleteCachePeer",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).DeleteCachePeer(ctx, req.(*DeleteCachePeerRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Scheduler_StatCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).StatCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/StatCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).StatCacheTask(ctx, req.(*StatCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Scheduler_DeleteCacheTask_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteCacheTaskRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).DeleteCacheTask(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/DeleteCacheTask",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).DeleteCacheTask(ctx, req.(*DeleteCacheTaskRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Scheduler_AnnouncePersistentCachePeer_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(SchedulerServer).AnnouncePersistentCachePeer(&schedulerAnnouncePersistentCachePeerServer{stream})
}
@ -724,6 +918,42 @@ func _Scheduler_DeletePersistentCacheTask_Handler(srv interface{}, ctx context.C
return interceptor(ctx, in, info, handler)
}
func _Scheduler_PreheatImage_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PreheatImageRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).PreheatImage(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/PreheatImage",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).PreheatImage(ctx, req.(*PreheatImageRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Scheduler_StatImage_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(StatImageRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(SchedulerServer).StatImage(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/scheduler.v2.Scheduler/StatImage",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(SchedulerServer).StatImage(ctx, req.(*StatImageRequest))
}
return interceptor(ctx, in, info, handler)
}
// Scheduler_ServiceDesc is the grpc.ServiceDesc for Scheduler service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@ -759,6 +989,22 @@ var Scheduler_ServiceDesc = grpc.ServiceDesc{
MethodName: "DeleteHost",
Handler: _Scheduler_DeleteHost_Handler,
},
{
MethodName: "StatCachePeer",
Handler: _Scheduler_StatCachePeer_Handler,
},
{
MethodName: "DeleteCachePeer",
Handler: _Scheduler_DeleteCachePeer_Handler,
},
{
MethodName: "StatCacheTask",
Handler: _Scheduler_StatCacheTask_Handler,
},
{
MethodName: "DeleteCacheTask",
Handler: _Scheduler_DeleteCacheTask_Handler,
},
{
MethodName: "StatPersistentCachePeer",
Handler: _Scheduler_StatPersistentCachePeer_Handler,
@ -787,6 +1033,14 @@ var Scheduler_ServiceDesc = grpc.ServiceDesc{
MethodName: "DeletePersistentCacheTask",
Handler: _Scheduler_DeletePersistentCacheTask_Handler,
},
{
MethodName: "PreheatImage",
Handler: _Scheduler_PreheatImage_Handler,
},
{
MethodName: "StatImage",
Handler: _Scheduler_StatImage_Handler,
},
},
Streams: []grpc.StreamDesc{
{
@ -796,8 +1050,8 @@ var Scheduler_ServiceDesc = grpc.ServiceDesc{
ClientStreams: true,
},
{
StreamName: "AnnounceCachePeer",
Handler: _Scheduler_AnnounceCachePeer_Handler,
ServerStreams: true,
ClientStreams: true,
},


@ -1,264 +0,0 @@
//
// Copyright 2022 The Dragonfly Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc v3.21.6
// source: pkg/apis/security/v1/security.proto
package security
import (
_ "github.com/envoyproxy/protoc-gen-validate/validate"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
durationpb "google.golang.org/protobuf/types/known/durationpb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// Certificate request type.
// Dragonfly supports peer authentication with mutual TLS (mTLS).
// For mTLS, all peers need to request TLS certificates for communication.
// The server side may overwrite any requested certificate field based on its policies.
type CertificateRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// ASN.1 DER form certificate request.
// The public key in the CSR is used to generate the certificate,
// and other fields in the generated certificate may be overwritten by the CA.
Csr []byte `protobuf:"bytes,1,opt,name=csr,proto3" json:"csr,omitempty"`
// Optional: requested certificate validity period.
ValidityPeriod *durationpb.Duration `protobuf:"bytes,2,opt,name=validity_period,json=validityPeriod,proto3" json:"validity_period,omitempty"`
}
func (x *CertificateRequest) Reset() {
*x = CertificateRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_pkg_apis_security_v1_security_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CertificateRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CertificateRequest) ProtoMessage() {}
func (x *CertificateRequest) ProtoReflect() protoreflect.Message {
mi := &file_pkg_apis_security_v1_security_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CertificateRequest.ProtoReflect.Descriptor instead.
func (*CertificateRequest) Descriptor() ([]byte, []int) {
return file_pkg_apis_security_v1_security_proto_rawDescGZIP(), []int{0}
}
func (x *CertificateRequest) GetCsr() []byte {
if x != nil {
return x.Csr
}
return nil
}
func (x *CertificateRequest) GetValidityPeriod() *durationpb.Duration {
if x != nil {
return x.ValidityPeriod
}
return nil
}
// Certificate response type.
type CertificateResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// ASN.1 DER form certificate chain.
CertificateChain [][]byte `protobuf:"bytes,1,rep,name=certificate_chain,json=certificateChain,proto3" json:"certificate_chain,omitempty"`
}
func (x *CertificateResponse) Reset() {
*x = CertificateResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_pkg_apis_security_v1_security_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CertificateResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CertificateResponse) ProtoMessage() {}
func (x *CertificateResponse) ProtoReflect() protoreflect.Message {
mi := &file_pkg_apis_security_v1_security_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CertificateResponse.ProtoReflect.Descriptor instead.
func (*CertificateResponse) Descriptor() ([]byte, []int) {
return file_pkg_apis_security_v1_security_proto_rawDescGZIP(), []int{1}
}
func (x *CertificateResponse) GetCertificateChain() [][]byte {
if x != nil {
return x.CertificateChain
}
return nil
}
var File_pkg_apis_security_v1_security_proto protoreflect.FileDescriptor
var file_pkg_apis_security_v1_security_proto_rawDesc = []byte{
0x0a, 0x23, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x73, 0x65, 0x63, 0x75, 0x72,
0x69, 0x74, 0x79, 0x2f, 0x76, 0x31, 0x2f, 0x73, 0x65, 0x63, 0x75, 0x72, 0x69, 0x74, 0x79, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x08, 0x73, 0x65, 0x63, 0x75, 0x72, 0x69, 0x74, 0x79, 0x1a,
0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a,
0x17, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61,
0x74, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x7d, 0x0a, 0x12, 0x43, 0x65, 0x72, 0x74,
0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x19,
0x0a, 0x03, 0x63, 0x73, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x07, 0xfa, 0x42, 0x04,
0x7a, 0x02, 0x10, 0x01, 0x52, 0x03, 0x63, 0x73, 0x72, 0x12, 0x4c, 0x0a, 0x0f, 0x76, 0x61, 0x6c,
0x69, 0x64, 0x69, 0x74, 0x79, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x02, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x08, 0xfa,
0x42, 0x05, 0xaa, 0x01, 0x02, 0x08, 0x01, 0x52, 0x0e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x69, 0x74,
0x79, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x22, 0x4c, 0x0a, 0x13, 0x43, 0x65, 0x72, 0x74, 0x69,
0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x35,
0x0a, 0x11, 0x63, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x5f, 0x63, 0x68,
0x61, 0x69, 0x6e, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0c, 0x42, 0x08, 0xfa, 0x42, 0x05, 0x92, 0x01,
0x02, 0x08, 0x01, 0x52, 0x10, 0x63, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65,
0x43, 0x68, 0x61, 0x69, 0x6e, 0x32, 0x60, 0x0a, 0x0b, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69,
0x63, 0x61, 0x74, 0x65, 0x12, 0x51, 0x0a, 0x10, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72,
0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12, 0x1c, 0x2e, 0x73, 0x65, 0x63, 0x75, 0x72,
0x69, 0x74, 0x79, 0x2e, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1d, 0x2e, 0x73, 0x65, 0x63, 0x75, 0x72, 0x69, 0x74,
0x79, 0x2e, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x42, 0x2d, 0x5a, 0x2b, 0x64, 0x37, 0x79, 0x2e, 0x69,
0x6f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x32, 0x2f, 0x70, 0x6b, 0x67, 0x2f, 0x61, 0x70, 0x69,
0x73, 0x2f, 0x73, 0x65, 0x63, 0x75, 0x72, 0x69, 0x74, 0x79, 0x2f, 0x76, 0x31, 0x3b, 0x73, 0x65,
0x63, 0x75, 0x72, 0x69, 0x74, 0x79, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_pkg_apis_security_v1_security_proto_rawDescOnce sync.Once
file_pkg_apis_security_v1_security_proto_rawDescData = file_pkg_apis_security_v1_security_proto_rawDesc
)
func file_pkg_apis_security_v1_security_proto_rawDescGZIP() []byte {
file_pkg_apis_security_v1_security_proto_rawDescOnce.Do(func() {
file_pkg_apis_security_v1_security_proto_rawDescData = protoimpl.X.CompressGZIP(file_pkg_apis_security_v1_security_proto_rawDescData)
})
return file_pkg_apis_security_v1_security_proto_rawDescData
}
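The `rawDescGZIP` helper above compresses the descriptor at most once and caches the result via `sync.Once`. The same lazy-initialization idiom can be sketched with the standard library alone (`rawData` and `rawDataGZIP` are stand-ins, not the generated names):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"sync"
)

var (
	rawData = []byte("raw file descriptor bytes (stand-in)")
	gzOnce  sync.Once
	gzData  []byte
)

// rawDataGZIP mirrors the generated rawDescGZIP: compression runs on
// the first call only; later calls return the cached slice.
func rawDataGZIP() []byte {
	gzOnce.Do(func() {
		var buf bytes.Buffer
		zw := gzip.NewWriter(&buf)
		zw.Write(rawData)
		zw.Close()
		gzData = buf.Bytes()
	})
	return gzData
}

func main() {
	a, b := rawDataGZIP(), rawDataGZIP()
	fmt.Println(len(a) > 0, &a[0] == &b[0]) // second call hits the cache
}
```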
var file_pkg_apis_security_v1_security_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_pkg_apis_security_v1_security_proto_goTypes = []interface{}{
(*CertificateRequest)(nil), // 0: security.CertificateRequest
(*CertificateResponse)(nil), // 1: security.CertificateResponse
(*durationpb.Duration)(nil), // 2: google.protobuf.Duration
}
var file_pkg_apis_security_v1_security_proto_depIdxs = []int32{
2, // 0: security.CertificateRequest.validity_period:type_name -> google.protobuf.Duration
0, // 1: security.Certificate.IssueCertificate:input_type -> security.CertificateRequest
1, // 2: security.Certificate.IssueCertificate:output_type -> security.CertificateResponse
2, // [2:3] is the sub-list for method output_type
1, // [1:2] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_pkg_apis_security_v1_security_proto_init() }
func file_pkg_apis_security_v1_security_proto_init() {
if File_pkg_apis_security_v1_security_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_pkg_apis_security_v1_security_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CertificateRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_pkg_apis_security_v1_security_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CertificateResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_pkg_apis_security_v1_security_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_pkg_apis_security_v1_security_proto_goTypes,
DependencyIndexes: file_pkg_apis_security_v1_security_proto_depIdxs,
MessageInfos: file_pkg_apis_security_v1_security_proto_msgTypes,
}.Build()
File_pkg_apis_security_v1_security_proto = out.File
file_pkg_apis_security_v1_security_proto_rawDesc = nil
file_pkg_apis_security_v1_security_proto_goTypes = nil
file_pkg_apis_security_v1_security_proto_depIdxs = nil
}


@ -1,273 +0,0 @@
// Code generated by protoc-gen-validate. DO NOT EDIT.
// source: pkg/apis/security/v1/security.proto
package security
import (
"bytes"
"errors"
"fmt"
"net"
"net/mail"
"net/url"
"regexp"
"sort"
"strings"
"time"
"unicode/utf8"
"google.golang.org/protobuf/types/known/anypb"
)
// ensure the imports are used
var (
_ = bytes.MinRead
_ = errors.New("")
_ = fmt.Print
_ = utf8.UTFMax
_ = (*regexp.Regexp)(nil)
_ = (*strings.Reader)(nil)
_ = net.IPv4len
_ = time.Duration(0)
_ = (*url.URL)(nil)
_ = (*mail.Address)(nil)
_ = anypb.Any{}
_ = sort.Sort
)
// Validate checks the field values on CertificateRequest with the rules
// defined in the proto definition for this message. If any rules are
// violated, the first error encountered is returned, or nil if there are no violations.
func (m *CertificateRequest) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on CertificateRequest with the rules
// defined in the proto definition for this message. If any rules are
// violated, the result is a list of violation errors wrapped in
// CertificateRequestMultiError, or nil if none found.
func (m *CertificateRequest) ValidateAll() error {
return m.validate(true)
}
func (m *CertificateRequest) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if len(m.GetCsr()) < 1 {
err := CertificateRequestValidationError{
field: "Csr",
reason: "value length must be at least 1 bytes",
}
if !all {
return err
}
errors = append(errors, err)
}
if m.GetValidityPeriod() == nil {
err := CertificateRequestValidationError{
field: "ValidityPeriod",
reason: "value is required",
}
if !all {
return err
}
errors = append(errors, err)
}
if len(errors) > 0 {
return CertificateRequestMultiError(errors)
}
return nil
}
// CertificateRequestMultiError is an error wrapping multiple validation errors
// returned by CertificateRequest.ValidateAll() if the designated constraints
// aren't met.
type CertificateRequestMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m CertificateRequestMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m CertificateRequestMultiError) AllErrors() []error { return m }
// CertificateRequestValidationError is the validation error returned by
// CertificateRequest.Validate if the designated constraints aren't met.
type CertificateRequestValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e CertificateRequestValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e CertificateRequestValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e CertificateRequestValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e CertificateRequestValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e CertificateRequestValidationError) ErrorName() string {
return "CertificateRequestValidationError"
}
// Error satisfies the builtin error interface
func (e CertificateRequestValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sCertificateRequest.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = CertificateRequestValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = CertificateRequestValidationError{}
// Validate checks the field values on CertificateResponse with the rules
// defined in the proto definition for this message. If any rules are
// violated, the first error encountered is returned, or nil if there are no violations.
func (m *CertificateResponse) Validate() error {
return m.validate(false)
}
// ValidateAll checks the field values on CertificateResponse with the rules
// defined in the proto definition for this message. If any rules are
// violated, the result is a list of violation errors wrapped in
// CertificateResponseMultiError, or nil if none found.
func (m *CertificateResponse) ValidateAll() error {
return m.validate(true)
}
func (m *CertificateResponse) validate(all bool) error {
if m == nil {
return nil
}
var errors []error
if len(m.GetCertificateChain()) < 1 {
err := CertificateResponseValidationError{
field: "CertificateChain",
reason: "value must contain at least 1 item(s)",
}
if !all {
return err
}
errors = append(errors, err)
}
if len(errors) > 0 {
return CertificateResponseMultiError(errors)
}
return nil
}
// CertificateResponseMultiError is an error wrapping multiple validation
// errors returned by CertificateResponse.ValidateAll() if the designated
// constraints aren't met.
type CertificateResponseMultiError []error
// Error returns a concatenation of all the error messages it wraps.
func (m CertificateResponseMultiError) Error() string {
var msgs []string
for _, err := range m {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// AllErrors returns a list of validation violation errors.
func (m CertificateResponseMultiError) AllErrors() []error { return m }
// CertificateResponseValidationError is the validation error returned by
// CertificateResponse.Validate if the designated constraints aren't met.
type CertificateResponseValidationError struct {
field string
reason string
cause error
key bool
}
// Field function returns field value.
func (e CertificateResponseValidationError) Field() string { return e.field }
// Reason function returns reason value.
func (e CertificateResponseValidationError) Reason() string { return e.reason }
// Cause function returns cause value.
func (e CertificateResponseValidationError) Cause() error { return e.cause }
// Key function returns key value.
func (e CertificateResponseValidationError) Key() bool { return e.key }
// ErrorName returns error name.
func (e CertificateResponseValidationError) ErrorName() string {
return "CertificateResponseValidationError"
}
// Error satisfies the builtin error interface
func (e CertificateResponseValidationError) Error() string {
cause := ""
if e.cause != nil {
cause = fmt.Sprintf(" | caused by: %v", e.cause)
}
key := ""
if e.key {
key = "key for "
}
return fmt.Sprintf(
"invalid %sCertificateResponse.%s: %s%s",
key,
e.field,
e.reason,
cause)
}
var _ error = CertificateResponseValidationError{}
var _ interface {
Field() string
Reason() string
Key() bool
Cause() error
ErrorName() string
} = CertificateResponseValidationError{}


@ -1,54 +0,0 @@
/*
* Copyright 2022 The Dragonfly Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
syntax = "proto3";
package security;
import "google/protobuf/duration.proto";
import "validate/validate.proto";
option go_package = "d7y.io/api/v2/pkg/apis/security/v1;security";
// Refer: https://github.com/istio/api/blob/master/security/v1alpha1/ca.proto
// Istio defines similar api for signing certificate, but it's not applicable in Dragonfly.
// Certificate request type.
// Dragonfly supports peer authentication with mutual TLS (mTLS).
// For mTLS, all peers need to request TLS certificates for communication.
// The server side may overwrite any requested certificate field based on its policies.
message CertificateRequest {
// ASN.1 DER form certificate request.
// The public key in the CSR is used to generate the certificate,
// and other fields in the generated certificate may be overwritten by the CA.
bytes csr = 1 [(validate.rules).bytes.min_len = 1];
// Optional: requested certificate validity period.
google.protobuf.Duration validity_period = 2 [(validate.rules).duration.required = true];
}
// Certificate response type.
message CertificateResponse {
// ASN.1 DER form certificate chain.
repeated bytes certificate_chain = 1 [(validate.rules).repeated.min_items = 1];
}
// Service for managing certificates issued by the CA.
service Certificate {
// Using provided CSR, returns a signed certificate.
rpc IssueCertificate(CertificateRequest)
returns (CertificateResponse) {
}
}


@ -1,105 +0,0 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.2.0
// - protoc v3.21.6
// source: pkg/apis/security/v1/security.proto
package security
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
// CertificateClient is the client API for Certificate service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type CertificateClient interface {
// Using provided CSR, returns a signed certificate.
IssueCertificate(ctx context.Context, in *CertificateRequest, opts ...grpc.CallOption) (*CertificateResponse, error)
}
type certificateClient struct {
cc grpc.ClientConnInterface
}
func NewCertificateClient(cc grpc.ClientConnInterface) CertificateClient {
return &certificateClient{cc}
}
func (c *certificateClient) IssueCertificate(ctx context.Context, in *CertificateRequest, opts ...grpc.CallOption) (*CertificateResponse, error) {
out := new(CertificateResponse)
err := c.cc.Invoke(ctx, "/security.Certificate/IssueCertificate", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// CertificateServer is the server API for Certificate service.
// All implementations should embed UnimplementedCertificateServer
// for forward compatibility
type CertificateServer interface {
// Using provided CSR, returns a signed certificate.
IssueCertificate(context.Context, *CertificateRequest) (*CertificateResponse, error)
}
// UnimplementedCertificateServer should be embedded to have forward compatible implementations.
type UnimplementedCertificateServer struct {
}
func (UnimplementedCertificateServer) IssueCertificate(context.Context, *CertificateRequest) (*CertificateResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method IssueCertificate not implemented")
}
// UnsafeCertificateServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to CertificateServer will
// result in compilation errors.
type UnsafeCertificateServer interface {
mustEmbedUnimplementedCertificateServer()
}
func RegisterCertificateServer(s grpc.ServiceRegistrar, srv CertificateServer) {
s.RegisterService(&Certificate_ServiceDesc, srv)
}
func _Certificate_IssueCertificate_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CertificateRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(CertificateServer).IssueCertificate(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/security.Certificate/IssueCertificate",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(CertificateServer).IssueCertificate(ctx, req.(*CertificateRequest))
}
return interceptor(ctx, in, info, handler)
}
// Certificate_ServiceDesc is the grpc.ServiceDesc for Certificate service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Certificate_ServiceDesc = grpc.ServiceDesc{
ServiceName: "security.Certificate",
HandlerType: (*CertificateServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "IssueCertificate",
Handler: _Certificate_IssueCertificate_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "pkg/apis/security/v1/security.proto",
}


@ -42,7 +42,7 @@ enum TaskType {
// local peer(local cache). When the standard task is never downloaded in the
// P2P cluster, dfdaemon will download the task from the source. When the standard
// task is downloaded in the P2P cluster, dfdaemon will download the task from
// the remote peer or local peer(local cache).
// the remote peer or local peer(local cache), where peers use disk storage to store tasks.
STANDARD = 0;
// PERSISTENT is persistent type of task. It can import and export files in the P2P cluster.
@ -56,6 +56,13 @@ enum TaskType {
// the task in the peer's disk and copy multiple replicas to remote peers to prevent data loss.
// When the expiration time is reached, task will be deleted in the P2P cluster.
PERSISTENT_CACHE = 2;
// CACHE is cache type of task, it can download from source, remote peer and
// local peer(local cache). When the cache task is never downloaded in the
// P2P cluster, dfdaemon will download the cache task from the source. When the cache
// task is downloaded in the P2P cluster, dfdaemon will download the cache task from
// the remote peer or local peer(local cache), where peers use memory storage to store cache tasks.
CACHE = 3;
}
// TrafficType represents type of traffic.
@ -132,13 +139,40 @@ message Peer {
google.protobuf.Timestamp updated_at = 11;
}
// CachePeer metadata.
message CachePeer {
// Peer id.
string id = 1;
// Range is url range of request.
optional Range range = 2;
// Peer priority.
Priority priority = 3;
// Pieces of peer.
repeated Piece pieces = 4;
// Peer downloads costs time.
google.protobuf.Duration cost = 5;
// Peer state.
string state = 6;
// Cache Task info.
CacheTask task = 7;
// Host info.
Host host = 8;
// NeedBackToSource indicates whether the peer needs to download from the source.
bool need_back_to_source = 9;
// Peer create time.
google.protobuf.Timestamp created_at = 10;
// Peer update time.
google.protobuf.Timestamp updated_at = 11;
}
// PersistentCachePeer metadata.
message PersistentCachePeer {
// Peer id.
string id = 1;
// Persistent represents whether the persistent cache peer is persistent.
// If the persistent cache peer is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection.
// not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
bool persistent = 2;
// Peer downloads costs time.
google.protobuf.Duration cost = 3;
@ -162,7 +196,12 @@ message Task {
TaskType type = 2;
// Download url.
string url = 3;
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 4;
// URL tag identifies different task for same url.
optional string tag = 5;
@ -176,38 +215,88 @@ message Task {
repeated string filtered_query_params = 7;
// Task request headers.
map<string, string> request_header = 8;
// Task piece length.
uint64 piece_length = 9;
// Task content length.
uint64 content_length = 10;
uint64 content_length = 9;
// Task piece count.
uint32 piece_count = 11;
uint32 piece_count = 10;
// Task size scope.
SizeScope size_scope = 12;
SizeScope size_scope = 11;
// Pieces of task.
repeated Piece pieces = 13;
repeated Piece pieces = 12;
// Task state.
string state = 14;
string state = 13;
// Task peer count.
uint32 peer_count = 15;
uint32 peer_count = 14;
// Task contains available peer.
bool has_available_peer = 16;
bool has_available_peer = 15;
// Task create time.
google.protobuf.Timestamp created_at = 17;
google.protobuf.Timestamp created_at = 16;
// Task update time.
google.protobuf.Timestamp updated_at = 18;
google.protobuf.Timestamp updated_at = 17;
}
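The digest comment above specifies the `<algorithm>:<hash>` format. A minimal sketch of producing such a string in Go, assuming the hex encodings shown here (`formatDigest` is a hypothetical helper, not part of this API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash/crc32"
)

// formatDigest renders task data as an "<algorithm>:<hash>" digest
// string, matching the format documented for the digest field.
func formatDigest(algorithm string, data []byte) string {
	switch algorithm {
	case "crc32":
		// CRC32 (IEEE polynomial), hex-encoded to 8 characters.
		return fmt.Sprintf("crc32:%08x", crc32.ChecksumIEEE(data))
	case "sha256":
		// SHA-256, hex-encoded.
		return fmt.Sprintf("sha256:%x", sha256.Sum256(data))
	default:
		return ""
	}
}

func main() {
	data := []byte("hello")
	fmt.Println(formatDigest("crc32", data))
	fmt.Println(formatDigest("sha256", data))
}
```

A verifier would recompute the digest over the downloaded bytes and compare it against this string, returning an error on mismatch as the field comment describes.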
// CacheTask metadata.
message CacheTask {
// Task id.
string id = 1;
// Task type.
TaskType type = 2;
// Download url.
string url = 3;
// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 4;
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 7;
// Task request headers.
map<string, string> request_header = 8;
// Task content length.
uint64 content_length = 9;
// Task piece count.
uint32 piece_count = 10;
// Task size scope.
SizeScope size_scope = 11;
// Pieces of task.
repeated Piece pieces = 12;
// Task state.
string state = 13;
// Task peer count.
uint32 peer_count = 14;
// Task contains available peer.
bool has_available_peer = 15;
// Task create time.
google.protobuf.Timestamp created_at = 16;
// Task update time.
google.protobuf.Timestamp updated_at = 17;
}
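The `filtered_query_params` comment above says that signed URLs differing only in params like `Signature` and `Expires` collapse to one task id. A sketch of that normalization, assuming a SHA-256 key over the stripped URL (`taskIDKey` is illustrative, not dfdaemon's actual id algorithm, which also folds in piece length, tag, and application):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"net/url"
)

// taskIDKey drops the filtered query params, re-encodes the rest in
// sorted order, and hashes the normalized URL, so that otherwise
// identical signed URLs map to the same key.
func taskIDKey(rawURL string, filtered []string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	drop := make(map[string]bool, len(filtered))
	for _, f := range filtered {
		drop[f] = true
	}
	q := u.Query()
	for k := range q {
		if drop[k] {
			q.Del(k)
		}
	}
	// Encode sorts keys, giving a stable normalized form.
	u.RawQuery = q.Encode()
	sum := sha256.Sum256([]byte(u.String()))
	return fmt.Sprintf("%x", sum), nil
}

func main() {
	filtered := []string{"Signature", "Expires", "ns"}
	a, _ := taskIDKey("http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io", filtered)
	b, _ := taskIDKey("http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io", filtered)
	fmt.Println(a == b)
}
```

Both example URLs from the comment normalize to `http://example.com/xyz`, so they produce the same key.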
// PersistentCacheTask metadata.
message PersistentCacheTask {
// Task id.
string id = 1;
// Replica count of the persistent cache task.
// Replica count of the persistent cache task. The persistent cache task will
// not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
uint64 persistent_replica_count = 2;
// Replica count of the persistent cache task.
uint64 replica_count = 3;
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
string digest = 4;
// Current replica count of the persistent cache task. The persistent cache task
// will not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
uint64 current_persistent_replica_count = 3;
// Current replica count of the cache task. If cache task is not persistent,
// the persistent cache task will be deleted when dfdaemon runs garbage collection.
uint64 current_replica_count = 4;
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 5;
// Application of task.
@ -266,6 +355,8 @@ message Host {
uint64 scheduler_cluster_id = 17;
// Disable shared data for other peers.
bool disable_shared = 18;
// Port of proxy server.
int32 proxy_port = 19;
}
// CPU Stat.
@ -333,6 +424,23 @@ message Network {
optional string location = 3;
// IDC where the peer host is located
optional string idc = 4;
// Maximum bandwidth of the network interface, in bps (bits per second).
uint64 max_rx_bandwidth = 9;
// Receive bandwidth of the network interface, in bps (bits per second).
optional uint64 rx_bandwidth = 10;
// Maximum bandwidth of the network interface for transmission, in bps (bits per second).
uint64 max_tx_bandwidth = 11;
// Transmit bandwidth of the network interface, in bps (bits per second).
optional uint64 tx_bandwidth = 12;
reserved 5;
reserved "download_rate";
reserved 6;
reserved "download_rate_limit";
reserved 7;
reserved "upload_rate";
reserved 8;
reserved "upload_rate_limit";
}
// Disk Stat.
@ -353,6 +461,10 @@ message Disk {
uint64 inodes_free = 7;
// Used percent of inodes on the data path of dragonfly directory.
double inodes_used_percent = 8;
// Disk read bandwidth, in bytes per second.
uint64 read_bandwidth = 9;
// Disk write bandwidth, in bytes per second.
uint64 write_bandwidth = 10;
}
// Build information.
@ -397,20 +509,43 @@ message Download {
map<string, string> request_header = 9;
// Task piece length.
optional uint64 piece_length = 10;
// File path to be exported.
// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11;
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon will disable download back-to-source by itself, if disable_back_to_source is true.
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler will triggers peer to download back-to-source, if need_back_to_source is true.
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when download task with range request.
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object Storage related information.
// Object storage protocol information.
optional ObjectStorage object_storage = 17;
// HDFS protocol information.
optional HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// force_hard_link is the flag to indicate whether the download file must be hard linked to the output path.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
bool force_hard_link = 22;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 23;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 24;
reserved 21;
reserved "load_to_cache";
}
// Object Storage related information.
@ -424,11 +559,19 @@ message ObjectStorage {
// Access secret that used to access the object storage service.
optional string access_key_secret = 4;
// Session token that used to access s3 storage service.
optional string session_token = 5;
optional string session_token = 5;
// Local path to credential file for Google Cloud Storage service OAuth2 authentication.
optional string credential_path = 6;
// Predefined ACL that used for the Google Cloud Storage service.
optional string predefined_acl = 7;
// Temporary STS security token for accessing OSS.
optional string security_token = 8;
}
// HDFS related information.
message HDFS {
// Delegation token for Web HDFS operator.
optional string delegation_token = 1;
}
// Range represents download range.


@ -43,6 +43,9 @@ message DownloadTaskStartedResponse {
// Need to download pieces.
repeated common.v2.Piece pieces = 4;
// is_finished indicates whether the download task is finished.
bool is_finished = 5;
}
// DownloadPieceFinishedResponse represents piece download finished response of DownloadTaskResponse.
@ -100,18 +103,226 @@ message DownloadPieceRequest{
message DownloadPieceResponse {
// Piece information.
common.v2.Piece piece = 1;
// Piece metadata digest, it is used to verify the integrity of the piece metadata.
optional string digest = 2;
}
// StatTaskRequest represents request of StatTask.
message StatTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2;
// local_only specifies whether to query task information from local client only.
// If true, the query will be restricted to the local client.
// By default (false), the query may be forwarded to the scheduler
// for a cluster-wide search.
bool local_only = 3;
}
// ListTaskEntriesRequest represents request of ListTaskEntries.
message ListTaskEntriesRequest {
// Task id.
string task_id = 1;
// URL to be listed the entries.
string url = 2;
// HTTP header to be sent with the request.
map<string, string> request_header = 3;
// List timeout.
optional google.protobuf.Duration timeout = 4;
// certificate_chain is the client certs with DER format for the backend client to list the entries.
repeated bytes certificate_chain = 5;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 6;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 7;
// Remote IP represents the IP address of the client initiating the list request.
optional string remote_ip = 8;
}
// ListTaskEntriesResponse represents response of ListTaskEntries.
message ListTaskEntriesResponse {
// Content length is the content length of the response.
uint64 content_length = 1;
// HTTP header to be sent with the request.
map<string, string> response_header = 2;
// Backend HTTP status code.
optional int32 status_code = 3;
// Entries is the information of the entries in the directory.
repeated Entry entries = 4;
}
// Entry represents an entry in a directory.
message Entry {
// URL of the entry.
string url = 1;
// Size of the entry.
uint64 content_length = 2;
// Is directory or not.
bool is_dir = 3;
}
// DeleteTaskRequest represents request of DeleteTask.
message DeleteTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2;
}
// DownloadCacheTaskRequest represents request of DownloadCacheTask.
message DownloadCacheTaskRequest {
// Download url.
string url = 1;
// Digest of the task, for example crc32:xxx or sha256:yyy.
optional string digest = 2;
// Range is url range of request. If protocol is http, range
// will set in request header. If protocol is others, range
// will set in range field.
optional common.v2.Range range = 3;
// Task type.
common.v2.TaskType type = 4;
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Peer priority.
common.v2.Priority priority = 7;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 8;
// Task request headers.
map<string, string> request_header = 9;
// Task piece length.
optional uint64 piece_length = 10;
// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11;
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 17;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 21;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 22;
}
// DownloadCacheTaskStartedResponse represents cache task download started response of DownloadCacheTaskResponse.
message DownloadCacheTaskStartedResponse {
// Task content length.
uint64 content_length = 1;
// Range is url range of request. If protocol is http, range
// is parsed from http header. If other protocol, range comes
// from download range field.
optional common.v2.Range range = 2;
// Task response headers.
map<string, string> response_header = 3;
// Need to download pieces.
repeated common.v2.Piece pieces = 4;
// is_finished indicates whether the download task is finished.
bool is_finished = 5;
}
// DownloadCacheTaskResponse represents response of DownloadCacheTask.
message DownloadCacheTaskResponse {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Peer id.
string peer_id = 3;
oneof response {
DownloadCacheTaskStartedResponse download_cache_task_started_response = 4;
DownloadPieceFinishedResponse download_piece_finished_response = 5;
}
}
// SyncCachePiecesRequest represents request of SyncCachePieces.
message SyncCachePiecesRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Interested piece numbers.
repeated uint32 interested_cache_piece_numbers = 3;
}
// SyncCachePiecesResponse represents response of SyncCachePieces.
message SyncCachePiecesResponse {
// Exist piece number.
uint32 number = 1;
// Piece offset.
uint64 offset = 2;
// Piece length.
uint64 length = 3;
}
// DownloadCachePieceRequest represents request of DownloadCachePiece.
message DownloadCachePieceRequest{
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Piece number.
uint32 piece_number = 3;
}
// DownloadCachePieceResponse represents response of DownloadCachePieces.
message DownloadCachePieceResponse {
// Piece information.
common.v2.Piece piece = 1;
// Piece metadata digest, it is used to verify the integrity of the piece metadata.
optional string digest = 2;
}
// StatCacheTaskRequest represents request of StatCacheTask.
message StatCacheTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2;
// local_only specifies whether to query task information from local client only.
// If true, the query will be restricted to the local client.
// By default (false), the query may be forwarded to the scheduler
// for a cluster-wide search.
bool local_only = 3;
}
// DeleteCacheTaskRequest represents request of DeleteCacheTask.
message DeleteCacheTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2;
}
// DownloadPersistentCacheTaskRequest represents request of DownloadPersistentCacheTask.
@ -126,10 +337,27 @@ message DownloadPersistentCacheTaskRequest {
optional string tag = 3;
// Application of task.
optional string application = 4;
// File path to be exported.
string output_path = 5;
// File path to be exported. If output_path is set, the exported file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the export. If hard link creation fails,
// it will copy the file to the output path after the export is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 5;
// Download timeout.
optional google.protobuf.Duration timeout = 6;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 7;
// force_hard_link is the flag to indicate whether the exported file must be hard linked to the output path.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
bool force_hard_link = 8;
// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
// Returns an error if the computed digest mismatches the expected value.
//
// Performance
// Digest calculation increases processing time. Enable only when data integrity verification is critical.
optional string digest = 9;
// Remote IP represents the IP address of the client initiating the download request.
optional string remote_ip = 10;
}
// DownloadPersistentCacheTaskStartedResponse represents task download started response of DownloadPersistentCacheTaskResponse.
@ -155,34 +383,127 @@ message DownloadPersistentCacheTaskResponse {
// UploadPersistentCacheTaskRequest represents request of UploadPersistentCacheTask.
message UploadPersistentCacheTaskRequest {
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on the file content, tag and application using the crc32 algorithm.
optional string content_for_calculating_task_id = 1;
// Upload file path of persistent cache task.
string path = 1;
string path = 2;
// Replica count of the persistent cache task.
uint64 persistent_replica_count = 2;
uint64 persistent_replica_count = 3;
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 3;
// Application of task.
optional string application = 4;
optional string tag = 4;
// Application of the persistent cache task.
optional string application = 5;
// Piece length of the persistent cache task, the value needs to be greater than or equal to 4194304(4MiB).
optional uint64 piece_length = 6;
// TTL of the persistent cache task.
google.protobuf.Duration ttl = 5;
// Upload timeout.
optional google.protobuf.Duration timeout = 6;
google.protobuf.Duration ttl = 7;
// Download timeout.
optional google.protobuf.Duration timeout = 8;
// Remote IP represents the IP address of the client initiating the upload request.
optional string remote_ip = 9;
}
// UpdatePersistentCacheTaskRequest represents request of UpdatePersistentCacheTask.
message UpdatePersistentCacheTaskRequest {
// Task id.
string task_id = 1;
// Persistent represents whether the persistent cache peer is persistent.
// If the persistent cache peer is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection. It is only deleted
// when the task is deleted by the user.
bool persistent = 2;
// Remote IP represents the IP address of the client initiating the list request.
optional string remote_ip = 3;
}
// StatPersistentCacheTaskRequest represents request of StatPersistentCacheTask.
message StatPersistentCacheTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the stat request.
optional string remote_ip = 2;
}
// DeletePersistentCacheTaskRequest represents request of DeletePersistentCacheTask.
message DeletePersistentCacheTaskRequest {
// Task id.
string task_id = 1;
// Remote IP represents the IP address of the client initiating the delete request.
optional string remote_ip = 2;
}
// SyncPersistentCachePiecesRequest represents request of SyncPersistentCachePieces.
message SyncPersistentCachePiecesRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Interested piece numbers.
repeated uint32 interested_piece_numbers = 3;
}
// SyncPersistentCachePiecesResponse represents response of SyncPersistentCachePieces.
message SyncPersistentCachePiecesResponse {
// Exist piece number.
uint32 number = 1;
// Piece offset.
uint64 offset = 2;
// Piece length.
uint64 length = 3;
}
// DownloadPersistentCachePieceRequest represents request of DownloadPersistentCachePiece.
message DownloadPersistentCachePieceRequest{
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Piece number.
uint32 piece_number = 3;
}
// DownloadPersistentCachePieceResponse represents response of DownloadPersistentCachePieces.
message DownloadPersistentCachePieceResponse {
// Piece information.
common.v2.Piece piece = 1;
// Piece metadata digest, it is used to verify the integrity of the piece metadata.
optional string digest = 2;
}
// SyncHostRequest represents request of SyncHost.
message SyncHostRequest {
// Host id.
string host_id = 1;
// Peer id.
string peer_id = 2;
}
// IBVerbsQueuePairEndpoint represents queue pair endpoint of IBVerbs.
message IBVerbsQueuePairEndpoint {
// Number of the queue pair.
uint32 num = 1;
// Local identifier of the context.
uint32 lid = 2;
// Global identifier of the context.
bytes gid = 3;
}
// ExchangeIBVerbsQueuePairEndpointRequest represents request of ExchangeIBVerbsQueuePairEndpoint.
message ExchangeIBVerbsQueuePairEndpointRequest {
// Information of the source's queue pair endpoint of IBVerbs.
IBVerbsQueuePairEndpoint endpoint = 1;
}
// ExchangeIBVerbsQueuePairEndpointResponse represents response of ExchangeIBVerbsQueuePairEndpoint.
message ExchangeIBVerbsQueuePairEndpointResponse {
// Information of the destination's queue pair endpoint of IBVerbs.
IBVerbsQueuePairEndpoint endpoint = 1;
}
// DfdaemonUpload represents upload service of dfdaemon.
service DfdaemonUpload{
service DfdaemonUpload {
// DownloadTask downloads task from p2p network.
rpc DownloadTask(DownloadTaskRequest) returns(stream DownloadTaskResponse);
@ -198,30 +519,72 @@ service DfdaemonUpload{
// DownloadPiece downloads piece from the remote peer.
rpc DownloadPiece(DownloadPieceRequest)returns(DownloadPieceResponse);
// DownloadCacheTask downloads cache task from p2p network.
rpc DownloadCacheTask(DownloadCacheTaskRequest) returns(stream DownloadCacheTaskResponse);
// StatCacheTask stats cache task information.
rpc StatCacheTask(StatCacheTaskRequest) returns(common.v2.CacheTask);
// DeleteCacheTask deletes cache task from p2p network.
rpc DeleteCacheTask(DeleteCacheTaskRequest) returns(google.protobuf.Empty);
// SyncCachePieces syncs cache piece metadata from the remote peer.
rpc SyncCachePieces(SyncCachePiecesRequest) returns(stream SyncCachePiecesResponse);
// DownloadCachePiece downloads cache piece from the remote peer.
rpc DownloadCachePiece(DownloadCachePieceRequest)returns(DownloadCachePieceResponse);
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
rpc DownloadPersistentCacheTask(DownloadPersistentCacheTaskRequest) returns(stream DownloadPersistentCacheTaskResponse);
// UpdatePersistentCacheTask updates metadata of the persistent cache task in the p2p network.
rpc UpdatePersistentCacheTask(UpdatePersistentCacheTaskRequest) returns(google.protobuf.Empty);
// StatPersistentCacheTask stats persistent cache task information.
rpc StatPersistentCacheTask(StatPersistentCacheTaskRequest) returns(common.v2.PersistentCacheTask);
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest) returns(google.protobuf.Empty);
// SyncPersistentCachePieces syncs persistent cache pieces from remote peer.
rpc SyncPersistentCachePieces(SyncPersistentCachePiecesRequest) returns(stream SyncPersistentCachePiecesResponse);
// DownloadPersistentCachePiece downloads persistent cache piece from p2p network.
rpc DownloadPersistentCachePiece(DownloadPersistentCachePieceRequest)returns(DownloadPersistentCachePieceResponse);
// SyncHost syncs host info from parents.
rpc SyncHost(SyncHostRequest) returns (stream common.v2.Host);
// ExchangeIBVerbsQueuePairEndpoint exchanges queue pair endpoint of IBVerbs with remote peer.
rpc ExchangeIBVerbsQueuePairEndpoint(ExchangeIBVerbsQueuePairEndpointRequest) returns(ExchangeIBVerbsQueuePairEndpointResponse);
}
// DfdaemonDownload represents download service of dfdaemon.
service DfdaemonDownload{
service DfdaemonDownload {
// DownloadTask downloads task from p2p network.
rpc DownloadTask(DownloadTaskRequest) returns(stream DownloadTaskResponse);
// StatTask stats task information.
rpc StatTask(StatTaskRequest) returns(common.v2.Task);
// ListTaskEntries lists task entries for downloading directory.
rpc ListTaskEntries(ListTaskEntriesRequest) returns(ListTaskEntriesResponse);
// DeleteTask deletes task from p2p network.
rpc DeleteTask(DeleteTaskRequest) returns(google.protobuf.Empty);
// DeleteHost releases host in scheduler.
rpc DeleteHost(google.protobuf.Empty)returns(google.protobuf.Empty);
// DownloadCacheTask downloads cache task from p2p network.
rpc DownloadCacheTask(DownloadCacheTaskRequest) returns(stream DownloadCacheTaskResponse);
// StatCacheTask stats cache task information.
rpc StatCacheTask(StatCacheTaskRequest) returns(common.v2.CacheTask);
// DeleteCacheTask deletes cache task from p2p network.
rpc DeleteCacheTask(DeleteCacheTaskRequest) returns(google.protobuf.Empty);
// DownloadPersistentCacheTask downloads persistent cache task from p2p network.
rpc DownloadPersistentCacheTask(DownloadPersistentCacheTaskRequest) returns(stream DownloadPersistentCacheTaskResponse);
@ -230,7 +593,4 @@ service DfdaemonDownload{
// StatPersistentCacheTask stats persistent cache task information.
rpc StatPersistentCacheTask(StatPersistentCacheTaskRequest) returns(common.v2.PersistentCacheTask);
// DeletePersistentCacheTask deletes persistent cache task from p2p network.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest) returns(google.protobuf.Empty);
}


@ -27,3 +27,9 @@ message Backend {
// Backend HTTP status code.
optional int32 status_code = 3;
}
// Unknown is error detail for Unknown.
message Unknown {
// Unknown error message.
optional string message = 1;
}


@ -211,6 +211,8 @@ message UpdateSchedulerRequest {
int32 port = 7;
// Scheduler features.
repeated string features = 8;
// Scheduler Configuration.
bytes config = 9;
}
// ListSchedulersRequest represents request of ListSchedulers.
@ -229,6 +231,8 @@ message ListSchedulersRequest {
string version = 6;
// Dfdaemon commit.
string commit = 7;
// ID of the cluster to which the scheduler belongs.
uint64 scheduler_cluster_id = 8;
}
// ListSchedulersResponse represents response of ListSchedulers.


@ -105,6 +105,7 @@ message DownloadPieceBackToSourceFailedRequest {
oneof response {
errordetails.v2.Backend backend = 2;
errordetails.v2.Unknown unknown = 3;
}
}
@ -214,74 +215,209 @@ message DeleteHostRequest{
string host_id = 1;
}
// RegisterCachePeerRequest represents cache peer registered request of AnnounceCachePeerRequest.
message RegisterCachePeerRequest {
// Download url.
string url = 1;
// Digest of the task digest, for example blake3:xxx or sha256:yyy.
optional string digest = 2;
// Range is url range of request. If protocol is http, range
// will set in request header. If protocol is others, range
// will set in range field.
optional common.v2.Range range = 3;
// Task type.
common.v2.TaskType type = 4;
// URL tag identifies different task for same url.
optional string tag = 5;
// Application of task.
optional string application = 6;
// Peer priority.
common.v2.Priority priority = 7;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 8;
// Task request headers.
map<string, string> request_header = 9;
// Task piece length.
optional uint64 piece_length = 10;
// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
// Dfdaemon will try to create hard link to the output path before starting the download. If hard link creation fails,
// it will copy the file to the output path after the download is completed.
// For more details refer to https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md.
optional string output_path = 11;
// Download timeout.
optional google.protobuf.Duration timeout = 12;
// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
bool disable_back_to_source = 13;
// Scheduler needs to schedule the task downloads from the source if need_back_to_source is true.
bool need_back_to_source = 14;
// certificate_chain is the client certs with DER format for the backend client to download back-to-source.
repeated bytes certificate_chain = 15;
// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
bool prefetch = 16;
// Object storage protocol information.
optional common.v2.ObjectStorage object_storage = 17;
// HDFS protocol information.
optional common.v2.HDFS hdfs = 18;
// is_prefetch is the flag to indicate whether the request is a prefetch request.
bool is_prefetch = 19;
// need_piece_content is the flag to indicate whether the response needs to return piece content.
bool need_piece_content = 20;
// content_for_calculating_task_id is the content used to calculate the task id.
// If content_for_calculating_task_id is set, use its value to calculate the task ID.
// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
optional string content_for_calculating_task_id = 21;
// remote_ip represents the IP address of the client initiating the download request.
// For proxy requests, it is set to the IP address of the request source.
// For dfget requests, it is set to the IP address of the dfget.
optional string remote_ip = 22;
}
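The `output_path` comment above describes a hard-link-first export strategy: link the cached file into place when possible, and fall back to a copy otherwise. A minimal std-only sketch of that fallback (the function name is illustrative, not dfdaemon's actual code, and the real flow attempts the link before the download starts):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Place `cache_path` at `output_path`: try a hard link first (zero-copy,
/// same-filesystem only); fall back to a full copy if linking fails.
fn export_to_output_path(cache_path: &Path, output_path: &Path) -> io::Result<u64> {
    match fs::hard_link(cache_path, output_path) {
        // No bytes copied; both paths now share the same inode.
        Ok(()) => Ok(0),
        // Cross-device links, permission errors, etc. fall back to copying.
        Err(_) => fs::copy(cache_path, output_path),
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir();
    let src = dir.join("piece.cache");
    let dst = dir.join("artifact.bin");
    let _ = fs::remove_file(&dst); // hard_link fails if the target exists
    fs::write(&src, b"piece data")?;
    export_to_output_path(&src, &dst)?;
    println!("{}", fs::read_to_string(&dst)?);
    Ok(())
}
```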
// DownloadCachePeerStartedRequest represents cache peer download started request of AnnounceCachePeerRequest.
message DownloadCachePeerStartedRequest {
}
// DownloadCachePeerBackToSourceStartedRequest represents cache peer download back-to-source started request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceStartedRequest {
// The description of the back-to-source reason.
optional string description = 1;
}
// RescheduleCachePeerRequest represents reschedule request of AnnounceCachePeerRequest.
message RescheduleCachePeerRequest {
// Candidate parent ids.
repeated common.v2.CachePeer candidate_parents = 1;
// The description of the reschedule reason.
optional string description = 2;
}
// DownloadCachePeerFinishedRequest represents cache peer download finished request of AnnounceCachePeerRequest.
message DownloadCachePeerFinishedRequest {
// Total content length.
uint64 content_length = 1;
// Total piece count.
uint32 piece_count = 2;
}
// DownloadCachePeerBackToSourceFinishedRequest represents cache peer download back-to-source finished request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceFinishedRequest {
// Total content length.
uint64 content_length = 1;
// Total piece count.
uint32 piece_count = 2;
}
// DownloadCachePeerFailedRequest represents cache peer download failed request of AnnounceCachePeerRequest.
message DownloadCachePeerFailedRequest {
// The description of the download failed.
optional string description = 1;
}
// DownloadCachePeerBackToSourceFailedRequest represents cache peer download back-to-source failed request of AnnounceCachePeerRequest.
message DownloadCachePeerBackToSourceFailedRequest {
// The description of the download back-to-source failed.
optional string description = 1;
}
// AnnounceCachePeerRequest represents request of AnnounceCachePeer.
message AnnounceCachePeerRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Peer id.
string peer_id = 3;
oneof request {
RegisterCachePeerRequest register_cache_peer_request = 4;
DownloadCachePeerStartedRequest download_cache_peer_started_request = 5;
DownloadCachePeerBackToSourceStartedRequest download_cache_peer_back_to_source_started_request = 6;
RescheduleCachePeerRequest reschedule_cache_peer_request = 7;
DownloadCachePeerFinishedRequest download_cache_peer_finished_request = 8;
DownloadCachePeerBackToSourceFinishedRequest download_cache_peer_back_to_source_finished_request = 9;
DownloadCachePeerFailedRequest download_cache_peer_failed_request = 10;
DownloadCachePeerBackToSourceFailedRequest download_cache_peer_back_to_source_failed_request = 11;
DownloadPieceFinishedRequest download_piece_finished_request = 12;
DownloadPieceBackToSourceFinishedRequest download_piece_back_to_source_finished_request = 13;
DownloadPieceFailedRequest download_piece_failed_request = 14;
DownloadPieceBackToSourceFailedRequest download_piece_back_to_source_failed_request = 15;
}
}
// EmptyCacheTaskResponse represents empty cache task response of AnnounceCachePeerResponse.
message EmptyCacheTaskResponse {
}
// NormalCacheTaskResponse represents normal cache task response of AnnounceCachePeerResponse.
message NormalCacheTaskResponse {
// Candidate parents.
repeated common.v2.CachePeer candidate_parents = 1;
}
// AnnounceCachePeerResponse represents response of AnnounceCachePeer.
message AnnounceCachePeerResponse {
oneof response {
EmptyCacheTaskResponse empty_cache_task_response = 1;
NormalCacheTaskResponse normal_cache_task_response = 2;
NeedBackToSourceResponse need_back_to_source_response = 3;
}
}
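The `oneof response` above drives three distinct client behaviors. A sketch of that dispatch, using a hypothetical enum standing in for the prost-generated `AnnounceCachePeerResponse` oneof:

```rust
/// Hypothetical mirror of the AnnounceCachePeerResponse oneof; the real
/// type is generated by prost from the scheduler.v2 proto.
enum CacheTaskResponse {
    Empty,                    // no schedulable state yet: wait for the next message
    Normal(Vec<String>),      // candidate parent peer ids to download pieces from
    NeedBackToSource(String), // scheduler asks the peer to download from the origin
}

fn next_action(resp: &CacheTaskResponse) -> &'static str {
    match resp {
        CacheTaskResponse::Empty => "wait",
        CacheTaskResponse::Normal(parents) if !parents.is_empty() => "download_from_parents",
        CacheTaskResponse::Normal(_) => "wait",
        CacheTaskResponse::NeedBackToSource(_) => "download_from_source",
    }
}

fn main() {
    assert_eq!(next_action(&CacheTaskResponse::Empty), "wait");
    assert_eq!(
        next_action(&CacheTaskResponse::Normal(vec!["peer-1".to_string()])),
        "download_from_parents"
    );
    println!("ok");
}
```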
// StatCachePeerRequest represents request of StatCachePeer.
message StatCachePeerRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Peer id.
string peer_id = 3;
}
// DeleteCachePeerRequest represents request of DeleteCachePeer.
message DeleteCachePeerRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
// Peer id.
string peer_id = 3;
}
// StatCacheTaskRequest represents request of StatCacheTask.
message StatCacheTaskRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
}
// DeleteCacheTaskRequest represents request of DeleteCacheTask.
message DeleteCacheTaskRequest {
// Host id.
string host_id = 1;
// Task id.
string task_id = 2;
}
// RegisterPersistentCachePeerRequest represents persistent cache peer registered request of AnnouncePersistentCachePeerRequest.
message RegisterPersistentCachePeerRequest {
// Persistent represents whether the persistent cache task is persistent.
// If the persistent cache task is persistent, the persistent cache peer will
// not be deleted when dfdaemon runs garbage collection.
bool persistent = 1;
// Tag is used to distinguish different persistent cache tasks.
optional string tag = 2;
// Application of task.
optional string application = 3;
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB).
uint64 piece_length = 4;
// File path to be exported.
optional string output_path = 5;
// Download timeout.
optional google.protobuf.Duration timeout = 6;
}
// DownloadPersistentCachePeerStartedRequest represents persistent cache peer download started request of AnnouncePersistentCachePeerRequest.
@ -381,12 +517,14 @@ message UploadPersistentCacheTaskStartedRequest {
optional string tag = 5;
// Application of task.
optional string application = 6;
// Task piece length, the value needs to be greater than or equal to 4194304(4MiB).
uint64 piece_length = 7;
// Task content length.
uint64 content_length = 8;
// Task piece count.
uint32 piece_count = 9;
// TTL of the persistent cache task.
google.protobuf.Duration ttl = 10;
}
// UploadPersistentCacheTaskFinishedRequest represents upload persistent cache task finished request of UploadPersistentCacheTaskFinished.
@ -427,8 +565,147 @@ message DeletePersistentCacheTaskRequest {
string task_id = 2;
}
// PreheatImageRequest represents request of PreheatImage.
message PreheatImageRequest {
// URL is the image manifest url for preheating.
string url = 1;
// Piece Length is the piece length(bytes) for downloading file. The value needs to
// be greater than 4MiB (4,194,304 bytes) and less than 64MiB (67,108,864 bytes),
// for example: 4194304(4MiB), 8388608(8MiB). If the piece length is not specified,
// the piece length will be calculated according to the file size.
optional uint64 piece_length = 2;
// Tag is the tag for preheating.
optional string tag = 3;
// Application is the application string for preheating.
optional string application = 4;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 5;
// Header is the http headers for authentication.
map<string, string> header = 6;
// Username is the username for authentication.
optional string username = 7;
// Password is the password for authentication.
optional string password = 8;
// Platform is the platform for preheating, such as linux/amd64, linux/arm64, etc.
optional string platform = 9;
// Scope is the scope for preheating, it can be one of the following values:
// - single_seed_peer: preheat from a single seed peer.
// - all_peers: preheat from all available peers.
// - all_seed_peers: preheat from all seed peers.
string scope = 10;
// IPs is a list of specific peer IPs for preheating.
// This field has the highest priority: if provided, both 'Count' and 'Percentage' will be ignored.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
repeated string ips = 11;
// Percentage is the percentage of available peers to preheat.
// This field has the lowest priority and is only used if both 'IPs' and 'Count' are not provided.
// It must be a value between 1 and 100 (inclusive) if provided.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
optional uint32 percentage = 12;
// Count is the desired number of peers to preheat.
// This field is used only when 'IPs' is not specified. It has priority over 'Percentage'.
// It must be a value between 1 and 200 (inclusive) if provided.
// Applies to 'all_peers' and 'all_seed_peers' scopes.
optional uint32 count = 13;
// ConcurrentTaskCount specifies the maximum number of tasks (e.g., image layers) to preheat concurrently.
// For example, if preheating 100 layers with ConcurrentTaskCount set to 10, up to 10 layers are processed simultaneously.
// If ConcurrentPeerCount is 10 for 1000 peers, each layer is preheated by 10 peers concurrently.
// Default is 8, maximum is 100.
optional int64 concurrent_task_count = 14;
// ConcurrentPeerCount specifies the maximum number of peers to preheat concurrently for a single task (e.g., an image layer).
// For example, if preheating a layer with ConcurrentPeerCount set to 10, up to 10 peers process that layer simultaneously.
// Default is 500, maximum is 1000.
optional int64 concurrent_peer_count = 15;
// Timeout is the timeout for preheating, default is 30 minutes.
optional google.protobuf.Duration timeout = 16;
// Preheat priority.
common.v2.Priority priority = 17;
// certificate_chain is the client certs with DER format for the backend client.
repeated bytes certificate_chain = 18;
// insecure_skip_verify indicates whether to skip TLS verification.
bool insecure_skip_verify = 19;
}
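The `ips`/`count`/`percentage` comments above encode a strict precedence order for choosing preheat targets. A sketch of that selection logic under the documented rules (function and types are illustrative, not the scheduler's actual implementation):

```rust
/// Resolve the preheat target set for the `all_peers`/`all_seed_peers` scopes:
/// explicit `ips` win outright; otherwise `count` beats `percentage`;
/// with no selector given, every available peer is preheated.
fn select_preheat_targets(
    available: &[String],
    ips: &[String],
    count: Option<u32>,
    percentage: Option<u32>,
) -> Vec<String> {
    if !ips.is_empty() {
        // Highest priority: keep only the explicitly listed peers.
        return available.iter().filter(|p| ips.contains(*p)).cloned().collect();
    }
    if let Some(n) = count {
        // Count has priority over percentage (documented range 1..=200).
        return available.iter().take(n as usize).cloned().collect();
    }
    if let Some(pct) = percentage {
        // Lowest priority: a 1..=100 percentage of available peers, at least one.
        let n = ((available.len() * pct as usize) / 100).max(1);
        return available.iter().take(n).cloned().collect();
    }
    available.to_vec()
}

fn main() {
    let peers: Vec<String> = (1..=10).map(|i| format!("10.0.0.{i}")).collect();
    // ips beats both count and percentage.
    assert_eq!(select_preheat_targets(&peers, &["10.0.0.3".to_string()], Some(5), Some(50)).len(), 1);
    // count beats percentage.
    assert_eq!(select_preheat_targets(&peers, &[], Some(5), Some(50)).len(), 5);
    println!("ok");
}
```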
// StatImageRequest represents request of StatImage.
message StatImageRequest {
// URL is the image manifest url for preheating.
string url = 1;
// Piece Length is the piece length(bytes) for downloading file. The value needs to
// be greater than 4MiB (4,194,304 bytes) and less than 64MiB (67,108,864 bytes),
// for example: 4194304(4MiB), 8388608(8MiB). If the piece length is not specified,
// the piece length will be calculated according to the file size.
optional uint64 piece_length = 2;
// Tag is the tag for preheating.
optional string tag = 3;
// Application is the application string for preheating.
optional string application = 4;
// Filtered query params to generate the task id.
// When filter is ["Signature", "Expires", "ns"], for example:
// http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io and http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io
// will generate the same task id.
// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
repeated string filtered_query_params = 5;
// Header is the http headers for authentication.
map<string, string> header = 6;
// Username is the username for authentication.
optional string username = 7;
// Password is the password for authentication.
optional string password = 8;
// Platform is the platform for preheating, such as linux/amd64, linux/arm64, etc.
optional string platform = 9;
// ConcurrentLayerCount specifies the maximum number of layers to get concurrently.
// For example, if stat 100 layers with ConcurrentLayerCount set to 10, up to 10 layers are processed simultaneously.
// If ConcurrentPeerCount is 10 for 1000 peers, each layer is stated by 10 peers concurrently.
// Default is 8, maximum is 100.
optional int64 concurrent_layer_count = 10;
// ConcurrentPeerCount specifies the maximum number of peers stat concurrently for a single task (e.g., an image layer).
// For example, if stat a layer with ConcurrentPeerCount set to 10, up to 10 peers process that layer simultaneously.
// Default is 500, maximum is 1000.
optional int64 concurrent_peer_count = 11;
// Timeout is the timeout for preheating, default is 30 minutes.
optional google.protobuf.Duration timeout = 12;
// certificate_chain is the client certs with DER format for the backend client.
repeated bytes certificate_chain = 13;
// insecure_skip_verify indicates whether to skip TLS verification.
bool insecure_skip_verify = 14;
}
// StatImageResponse represents response of StatImage.
message StatImageResponse {
// Image is the image information.
Image image = 1;
// Peers is the peers that have downloaded the image.
repeated PeerImage peers = 2;
}
// PeerImage represents a peer in the get image job.
message PeerImage {
// IP is the IP address of the peer.
string ip = 1;
// Hostname is the hostname of the peer.
string hostname = 2;
// CachedLayers is the list of layers that the peer has downloaded.
repeated Layer cached_layers = 3;
}
// Image represents the image information.
message Image {
// Layers is the list of layers of the image.
repeated Layer layers = 1;
}
// Layer represents a layer of the image.
message Layer {
// URL is the URL of the layer.
string url = 1;
}
// Scheduler RPC Service.
service Scheduler {
// AnnouncePeer announces peer to scheduler.
rpc AnnouncePeer(stream AnnouncePeerRequest) returns(stream AnnouncePeerResponse);
@ -453,8 +730,20 @@ service Scheduler{
// DeleteHost releases host in scheduler.
rpc DeleteHost(DeleteHostRequest)returns(google.protobuf.Empty);
// AnnounceCachePeer announces cache peer to scheduler.
rpc AnnounceCachePeer(stream AnnounceCachePeerRequest) returns(stream AnnounceCachePeerResponse);
// Checks information of cache peer.
rpc StatCachePeer(StatCachePeerRequest)returns(common.v2.CachePeer);
// DeleteCachePeer releases cache peer in scheduler.
rpc DeleteCachePeer(DeleteCachePeerRequest)returns(google.protobuf.Empty);
// Checks information of cache task.
rpc StatCacheTask(StatCacheTaskRequest)returns(common.v2.CacheTask);
// DeleteCacheTask releases cache task in scheduler.
rpc DeleteCacheTask(DeleteCacheTaskRequest)returns(google.protobuf.Empty);
// AnnouncePersistentCachePeer announces persistent cache peer to scheduler.
rpc AnnouncePersistentCachePeer(stream AnnouncePersistentCachePeerRequest) returns(stream AnnouncePersistentCachePeerResponse);
@ -479,4 +768,23 @@ service Scheduler{
// DeletePersistentCacheTask releases persistent cache task in scheduler.
rpc DeletePersistentCacheTask(DeletePersistentCacheTaskRequest)returns(google.protobuf.Empty);
// PreheatImage synchronously resolves an image manifest and triggers an asynchronous preheat task.
//
// This is a blocking call. The RPC will not return until the server has completed the
// initial synchronous work: resolving the image manifest and preparing all layer URLs.
//
// After this call successfully returns, a scheduler on the server begins the actual
// preheating process, instructing peers to download the layers in the background.
//
// A successful response (google.protobuf.Empty) confirms that the preparation is complete
// and the asynchronous download task has been scheduled.
rpc PreheatImage(PreheatImageRequest)returns(google.protobuf.Empty);
// StatImage provides detailed status for a container image's distribution in peers.
//
// This is a blocking call that first resolves the image manifest and then queries
// all peers to collect the image's download state across the network.
// The response includes both layer information and the status on each peer.
rpc StatImage(StatImageRequest)returns(StatImageResponse);
}


@ -1,51 +0,0 @@
/*
* Copyright 2022 The Dragonfly Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
syntax = "proto3";
package security;
import "google/protobuf/duration.proto";
// Refer: https://github.com/istio/api/blob/master/security/v1alpha1/ca.proto
// Istio defines similar api for signing certificate, but it's not applicable in Dragonfly.
// Certificate request type.
// Dragonfly supports peer authentication with mutual TLS (mTLS).
// For mTLS, all peers need to request TLS certificates for communicating.
// The server side may overwrite any requested certificate field based on its policies.
message CertificateRequest {
// ASN.1 DER form certificate request.
// The public key in the CSR is used to generate the certificate,
// and other fields in the generated certificate may be overwritten by the CA.
bytes csr = 1;
// Optional: requested certificate validity period.
google.protobuf.Duration validity_period = 2;
}
// Certificate response type.
message CertificateResponse {
// ASN.1 DER form certificate chain.
repeated bytes certificate_chain = 1;
}
// Service for managing certificates issued by the CA.
service Certificate {
// Using provided CSR, returns a signed certificate.
rpc IssueCertificate(CertificateRequest)
returns (CertificateResponse) {
}
}


@ -38,6 +38,45 @@ pub struct Peer {
#[prost(message, optional, tag = "11")]
pub updated_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
}
/// CachePeer metadata.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CachePeer {
/// Peer id.
#[prost(string, tag = "1")]
pub id: ::prost::alloc::string::String,
/// Range is url range of request.
#[prost(message, optional, tag = "2")]
pub range: ::core::option::Option<Range>,
/// Peer priority.
#[prost(enumeration = "Priority", tag = "3")]
pub priority: i32,
/// Pieces of peer.
#[prost(message, repeated, tag = "4")]
pub pieces: ::prost::alloc::vec::Vec<Piece>,
/// Peer downloads costs time.
#[prost(message, optional, tag = "5")]
pub cost: ::core::option::Option<::prost_wkt_types::Duration>,
/// Peer state.
#[prost(string, tag = "6")]
pub state: ::prost::alloc::string::String,
/// Cache Task info.
#[prost(message, optional, tag = "7")]
pub task: ::core::option::Option<CacheTask>,
/// Host info.
#[prost(message, optional, tag = "8")]
pub host: ::core::option::Option<Host>,
    /// NeedBackToSource indicates whether the peer needs to download from the source.
#[prost(bool, tag = "9")]
pub need_back_to_source: bool,
/// Peer create time.
#[prost(message, optional, tag = "10")]
pub created_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
/// Peer update time.
#[prost(message, optional, tag = "11")]
pub updated_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
}
/// PersistentCachePeer metadata.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
@ -48,7 +87,8 @@ pub struct PersistentCachePeer {
pub id: ::prost::alloc::string::String,
/// Persistent represents whether the persistent cache peer is persistent.
/// If the persistent cache peer is persistent, the persistent cache peer will
/// not be deleted when dfdaemon runs garbage collection. It is only deleted
/// when the task is deleted by the user.
#[prost(bool, tag = "2")]
pub persistent: bool,
/// Peer downloads costs time.
@ -84,7 +124,12 @@ pub struct Task {
/// Download url.
#[prost(string, tag = "3")]
pub url: ::prost::alloc::string::String,
/// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
/// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
/// Returns an error if the computed digest mismatches the expected value.
///
/// Performance
/// Digest calculation increases processing time. Enable only when data integrity verification is critical.
#[prost(string, optional, tag = "4")]
pub digest: ::core::option::Option<::prost::alloc::string::String>,
/// URL tag identifies different task for same url.
@ -106,35 +151,101 @@ pub struct Task {
::prost::alloc::string::String,
::prost::alloc::string::String,
>,
/// Task content length.
#[prost(uint64, tag = "9")]
pub content_length: u64,
/// Task piece count.
#[prost(uint32, tag = "10")]
pub piece_count: u32,
/// Task size scope.
#[prost(enumeration = "SizeScope", tag = "11")]
pub size_scope: i32,
/// Pieces of task.
#[prost(message, repeated, tag = "12")]
pub pieces: ::prost::alloc::vec::Vec<Piece>,
/// Task state.
#[prost(string, tag = "13")]
pub state: ::prost::alloc::string::String,
/// Task peer count.
#[prost(uint32, tag = "14")]
pub peer_count: u32,
/// Task contains available peer.
#[prost(bool, tag = "15")]
pub has_available_peer: bool,
/// Task create time.
#[prost(message, optional, tag = "16")]
pub created_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
/// Task update time.
#[prost(message, optional, tag = "17")]
pub updated_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
}
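The digest comment on `Task` specifies a `<algorithm>:<hash>` string with a fixed algorithm set. A minimal std-only parser/validator for that format (the function is a sketch, not the library's actual API; the accepted algorithms follow the comment: crc32, sha256, sha512):

```rust
/// Split a digest like `sha256:yyy` into its algorithm and hash parts,
/// rejecting algorithms outside the documented set and empty hashes.
fn parse_digest(digest: &str) -> Result<(&str, &str), String> {
    let (algo, hash) = digest
        .split_once(':')
        .ok_or_else(|| format!("malformed digest: {digest}"))?;
    match algo {
        "crc32" | "sha256" | "sha512" if !hash.is_empty() => Ok((algo, hash)),
        _ => Err(format!("unsupported digest algorithm: {algo}")),
    }
}

fn main() {
    assert_eq!(parse_digest("sha256:abc123"), Ok(("sha256", "abc123")));
    assert!(parse_digest("md5:abc123").is_err()); // not in the documented set
    assert!(parse_digest("sha256").is_err());     // missing the `:` separator
    println!("ok");
}
```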
/// CacheTask metadata.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CacheTask {
/// Task id.
#[prost(string, tag = "1")]
pub id: ::prost::alloc::string::String,
/// Task type.
#[prost(enumeration = "TaskType", tag = "2")]
pub r#type: i32,
/// Download url.
#[prost(string, tag = "3")]
pub url: ::prost::alloc::string::String,
/// Verifies task data integrity after download using a digest. Supports CRC32, SHA256, and SHA512 algorithms.
/// Format: `<algorithm>:<hash>`, e.g., `crc32:xxx`, `sha256:yyy`, `sha512:zzz`.
/// Returns an error if the computed digest mismatches the expected value.
///
/// Performance
/// Digest calculation increases processing time. Enable only when data integrity verification is critical.
#[prost(string, optional, tag = "4")]
pub digest: ::core::option::Option<::prost::alloc::string::String>,
/// URL tag identifies different task for same url.
#[prost(string, optional, tag = "5")]
pub tag: ::core::option::Option<::prost::alloc::string::String>,
/// Application of task.
#[prost(string, optional, tag = "6")]
pub application: ::core::option::Option<::prost::alloc::string::String>,
/// Filtered query params to generate the task id.
/// When filter is \["Signature", "Expires", "ns"\], for example:
/// <http://example.com/xyz?Expires=e1&Signature=s1&ns=docker.io> and <http://example.com/xyz?Expires=e2&Signature=s2&ns=docker.io>
/// will generate the same task id.
/// Default value includes the filtered query params of s3, gcs, oss, obs, cos.
#[prost(string, repeated, tag = "7")]
pub filtered_query_params: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
/// Task request headers.
#[prost(map = "string, string", tag = "8")]
pub request_header: ::std::collections::HashMap<
::prost::alloc::string::String,
::prost::alloc::string::String,
>,
/// Task content length.
#[prost(uint64, tag = "9")]
pub content_length: u64,
/// Task piece count.
#[prost(uint32, tag = "10")]
pub piece_count: u32,
/// Task size scope.
#[prost(enumeration = "SizeScope", tag = "11")]
pub size_scope: i32,
/// Pieces of task.
#[prost(message, repeated, tag = "12")]
pub pieces: ::prost::alloc::vec::Vec<Piece>,
/// Task state.
#[prost(string, tag = "13")]
pub state: ::prost::alloc::string::String,
/// Task peer count.
#[prost(uint32, tag = "14")]
pub peer_count: u32,
/// Task contains available peer.
#[prost(bool, tag = "15")]
pub has_available_peer: bool,
/// Task create time.
#[prost(message, optional, tag = "16")]
pub created_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
/// Task update time.
#[prost(message, optional, tag = "17")]
pub updated_at: ::core::option::Option<::prost_wkt_types::Timestamp>,
}
/// PersistentCacheTask metadata.
@ -145,15 +256,20 @@ pub struct PersistentCacheTask {
/// Task id.
#[prost(string, tag = "1")]
pub id: ::prost::alloc::string::String,
/// Replica count of the persistent cache task. The persistent cache task will
/// not be deleted when dfdaemon runs garbage collection. It is only deleted
/// when the task is deleted by the user.
#[prost(uint64, tag = "2")]
pub persistent_replica_count: u64,
/// Current replica count of the persistent cache task. The persistent cache task
/// will not be deleted when dfdaemon runs garbage collection. It is only deleted
/// when the task is deleted by the user.
#[prost(uint64, tag = "3")]
pub current_persistent_replica_count: u64,
/// Current replica count of the cache task. If cache task is not persistent,
/// the persistent cache task will be deleted when dfdaemon runs garbage collection.
#[prost(uint64, tag = "4")]
pub current_replica_count: u64,
/// Tag is used to distinguish different persistent cache tasks.
#[prost(string, optional, tag = "5")]
pub tag: ::core::option::Option<::prost::alloc::string::String>,
@ -241,6 +357,9 @@ pub struct Host {
/// Disable shared data for other peers.
#[prost(bool, tag = "18")]
pub disable_shared: bool,
/// Port of proxy server.
#[prost(int32, tag = "19")]
pub proxy_port: i32,
}
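The new `proxy_port` travels as a plain `int32` next to the host's other addressing fields. A minimal sketch of turning it into a dialable endpoint, assuming the surrounding `Host` message also carries the host's IP (the helper below is illustrative, not part of the generated API):

```rust
// Illustrative helper, not part of the generated code: build a dialable
// proxy endpoint from a host IP and the new `proxy_port` field.
fn proxy_address(ip: &str, proxy_port: i32) -> Option<String> {
    // proxy_port is an int32 on the wire; only 1..=65535 is a usable TCP port,
    // and 0 is the natural "unset" value for a proto3 scalar.
    if (1..=65535).contains(&proxy_port) {
        Some(format!("{}:{}", ip, proxy_port))
    } else {
        None
    }
}

fn main() {
    assert_eq!(proxy_address("10.0.0.2", 8001), Some("10.0.0.2:8001".to_string()));
    assert_eq!(proxy_address("10.0.0.2", 0), None);
}
```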
/// CPU Stat.
#[derive(serde::Serialize, serde::Deserialize)]
@ -341,6 +460,18 @@ pub struct Network {
/// IDC where the peer host is located.
#[prost(string, optional, tag = "4")]
pub idc: ::core::option::Option<::prost::alloc::string::String>,
/// Maximum bandwidth of the network interface, in bps (bits per second).
#[prost(uint64, tag = "9")]
pub max_rx_bandwidth: u64,
/// Receive bandwidth of the network interface, in bps (bits per second).
#[prost(uint64, optional, tag = "10")]
pub rx_bandwidth: ::core::option::Option<u64>,
/// Maximum bandwidth of the network interface for transmission, in bps (bits per second).
#[prost(uint64, tag = "11")]
pub max_tx_bandwidth: u64,
/// Transmit bandwidth of the network interface, in bps (bits per second).
#[prost(uint64, optional, tag = "12")]
pub tx_bandwidth: ::core::option::Option<u64>,
}
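Note the asymmetry in the new bandwidth fields: the maxima are required `uint64`s while the current rates are `optional`, so they decode to `Option<u64>`. A sketch of consuming them, using a local stand-in for the generated struct:

```rust
// Sketch only: `Net` is a local stand-in for the generated `Network` struct,
// keeping just the two receive-side bandwidth fields (both in bits per second).
struct Net {
    max_rx_bandwidth: u64,
    rx_bandwidth: Option<u64>, // optional on the wire, hence Option<u64>
}

// Returns receive-link utilization as a percentage, or None when the current
// bandwidth was not reported or the maximum is unknown (zero).
fn rx_utilization_percent(n: &Net) -> Option<f64> {
    match (n.rx_bandwidth, n.max_rx_bandwidth) {
        (Some(rx), max) if max > 0 => Some(rx as f64 / max as f64 * 100.0),
        _ => None,
    }
}

fn main() {
    let n = Net { max_rx_bandwidth: 1_000_000_000, rx_bandwidth: Some(250_000_000) };
    assert_eq!(rx_utilization_percent(&n), Some(25.0));
    let unreported = Net { max_rx_bandwidth: 0, rx_bandwidth: None };
    assert_eq!(rx_utilization_percent(&unreported), None);
}
```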
/// Disk Stat.
#[derive(serde::Serialize, serde::Deserialize)]
@ -371,6 +502,12 @@ pub struct Disk {
/// Used percent of inodes on the data path of dragonfly directory.
#[prost(double, tag = "8")]
pub inodes_used_percent: f64,
/// Disk read bandwidth, in bytes per second.
#[prost(uint64, tag = "9")]
pub read_bandwidth: u64,
/// Disk write bandwidth, in bytes per second.
#[prost(uint64, tag = "10")]
pub write_bandwidth: u64,
}
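Unlike the network fields above, the disk bandwidths are in bytes per second and are not optional, so a zero value is the only way to signal "not reported". A small sketch of using them for a transfer-time estimate (illustrative, not part of the generated API):

```rust
// Sketch only: estimate how long writing `bytes` takes at the reported
// `write_bandwidth` (bytes per second). Zero means the rate was not reported.
fn seconds_to_write(bytes: u64, write_bandwidth: u64) -> Option<f64> {
    if write_bandwidth == 0 {
        None
    } else {
        Some(bytes as f64 / write_bandwidth as f64)
    }
}

fn main() {
    // 200 MB at 100 MB/s takes 2 seconds.
    assert_eq!(seconds_to_write(200_000_000, 100_000_000), Some(2.0));
    assert_eq!(seconds_to_write(200_000_000, 0), None);
}
```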
/// Build information.
#[derive(serde::Serialize, serde::Deserialize)]
@ -437,27 +574,55 @@ pub struct Download {
/// Task piece length.
#[prost(uint64, optional, tag = "10")]
pub piece_length: ::core::option::Option<u64>,
/// File path to be downloaded. If output_path is set, the downloaded file will be saved to the specified path.
/// Dfdaemon will try to create a hard link to the output path before starting the download. If hard link creation fails,
/// it will copy the file to the output path after the download is completed.
/// For more details refer to <https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md>.
#[prost(string, optional, tag = "11")]
pub output_path: ::core::option::Option<::prost::alloc::string::String>,
/// Download timeout.
#[prost(message, optional, tag = "12")]
pub timeout: ::core::option::Option<::prost_wkt_types::Duration>,
/// Dfdaemon cannot download the task from the source if disable_back_to_source is true.
#[prost(bool, tag = "13")]
pub disable_back_to_source: bool,
/// Scheduler needs to schedule the task to download from the source if need_back_to_source is true.
#[prost(bool, tag = "14")]
pub need_back_to_source: bool,
/// certificate_chain is the client certificates in DER format for the backend client to download back-to-source.
#[prost(bytes = "vec", repeated, tag = "15")]
pub certificate_chain: ::prost::alloc::vec::Vec<::prost::alloc::vec::Vec<u8>>,
/// Prefetch pre-downloads all pieces of the task when the download task request is a range request.
#[prost(bool, tag = "16")]
pub prefetch: bool,
/// Object storage protocol information.
#[prost(message, optional, tag = "17")]
pub object_storage: ::core::option::Option<ObjectStorage>,
/// HDFS protocol information.
#[prost(message, optional, tag = "18")]
pub hdfs: ::core::option::Option<Hdfs>,
/// is_prefetch is the flag to indicate whether the request is a prefetch request.
#[prost(bool, tag = "19")]
pub is_prefetch: bool,
/// need_piece_content is the flag to indicate whether the response needs to return piece content.
#[prost(bool, tag = "20")]
pub need_piece_content: bool,
/// force_hard_link is the flag to indicate whether the download file must be hard linked to the output path.
/// For more details refer to <https://github.com/dragonflyoss/design/blob/main/systems-analysis/file-download-workflow-with-hard-link/README.md>.
#[prost(bool, tag = "22")]
pub force_hard_link: bool,
/// content_for_calculating_task_id is the content used to calculate the task id.
/// If content_for_calculating_task_id is set, use its value to calculate the task ID.
/// Otherwise, calculate the task ID based on url, piece_length, tag, application, and filtered_query_params.
#[prost(string, optional, tag = "23")]
pub content_for_calculating_task_id: ::core::option::Option<
::prost::alloc::string::String,
>,
/// remote_ip represents the IP address of the client initiating the download request.
/// For proxy requests, it is set to the IP address of the request source.
/// For dfget requests, it is set to the IP address of the dfget.
#[prost(string, optional, tag = "24")]
pub remote_ip: ::core::option::Option<::prost::alloc::string::String>,
}
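The comment on `content_for_calculating_task_id` describes a selection rule rather than a concrete digest. A sketch of that rule, with std's `DefaultHasher` standing in for whatever digest dfdaemon actually uses (the real task ID is a string, not a `u64`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// If content_for_calculating_task_id is set, the ID derives from it alone;
// otherwise it derives from url, piece_length, tag, application and
// filtered_query_params, matching the field comment above.
fn task_id(
    url: &str,
    piece_length: Option<u64>,
    tag: Option<&str>,
    application: Option<&str>,
    filtered_query_params: &[&str],
    content_for_calculating_task_id: Option<&str>,
) -> u64 {
    let mut h = DefaultHasher::new();
    match content_for_calculating_task_id {
        Some(content) => content.hash(&mut h),
        None => {
            url.hash(&mut h);
            piece_length.hash(&mut h);
            tag.hash(&mut h);
            application.hash(&mut h);
            filtered_query_params.hash(&mut h);
        }
    }
    h.finish()
}

fn main() {
    // With the override set, different URLs still map to the same task ID.
    let a = task_id("http://a/file", None, None, None, &[], Some("shared"));
    let b = task_id("http://b/file", None, None, None, &[], Some("shared"));
    assert_eq!(a, b);
    // Without it, the URL participates in the ID.
    assert_ne!(
        task_id("http://a/file", None, None, None, &[], None),
        task_id("http://b/file", None, None, None, &[], None)
    );
}
```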
/// Object Storage related information.
#[derive(serde::Serialize, serde::Deserialize)]
@ -485,6 +650,18 @@ pub struct ObjectStorage {
/// Predefined ACL that is used for the Google Cloud Storage service.
#[prost(string, optional, tag = "7")]
pub predefined_acl: ::core::option::Option<::prost::alloc::string::String>,
/// Temporary STS security token for accessing OSS.
#[prost(string, optional, tag = "8")]
pub security_token: ::core::option::Option<::prost::alloc::string::String>,
}
/// HDFS related information.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Hdfs {
/// Delegation token for Web HDFS operator.
#[prost(string, optional, tag = "1")]
pub delegation_token: ::core::option::Option<::prost::alloc::string::String>,
}
/// Range represents download range.
#[derive(serde::Serialize, serde::Deserialize)]
@ -578,7 +755,7 @@ pub enum TaskType {
/// local peer(local cache). When the standard task is never downloaded in the
/// P2P cluster, dfdaemon will download the task from the source. When the standard
/// task is downloaded in the P2P cluster, dfdaemon will download the task from
/// the remote peer or local peer(local cache), where peers use disk storage to store tasks.
Standard = 0,
/// PERSISTENT is persistent type of task, it can import file and export file in P2P cluster.
/// When the persistent task is imported into the P2P cluster, dfdaemon will store
@ -590,6 +767,12 @@ pub enum TaskType {
/// the task in the peer's disk and copy multiple replicas to remote peers to prevent data loss.
/// When the expiration time is reached, task will be deleted in the P2P cluster.
PersistentCache = 2,
/// CACHE is cache type of task, it can download from source, remote peer and
/// local peer(local cache). When the cache task is never downloaded in the
/// P2P cluster, dfdaemon will download the cache task from the source. When the cache
/// task is downloaded in the P2P cluster, dfdaemon will download the cache task from
/// the remote peer or local peer(local cache), where peers use memory storage to store cache tasks.
Cache = 3,
}
impl TaskType {
/// String value of the enum field names used in the ProtoBuf definition.
@ -601,6 +784,7 @@ impl TaskType {
TaskType::Standard => "STANDARD",
TaskType::Persistent => "PERSISTENT",
TaskType::PersistentCache => "PERSISTENT_CACHE",
TaskType::Cache => "CACHE",
}
}
/// Creates an enum from field names used in the ProtoBuf definition.
@ -609,6 +793,7 @@ impl TaskType {
"STANDARD" => Some(Self::Standard),
"PERSISTENT" => Some(Self::Persistent),
"PERSISTENT_CACHE" => Some(Self::PersistentCache),
"CACHE" => Some(Self::Cache),
_ => None,
}
}
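The `as_str_name`/`from_str_name` pair is meant to round-trip, including the new `CACHE` variant. A self-contained sketch of that contract, using a local copy of the enum since the generated type lives in the crate:

```rust
// Local copy of the enum for illustration; the generated type provides the
// same as_str_name/from_str_name pair shown in the diff above.
#[derive(Clone, Copy, Debug, PartialEq)]
enum TaskType {
    Standard,
    Persistent,
    PersistentCache,
    Cache,
}

impl TaskType {
    fn as_str_name(self) -> &'static str {
        match self {
            TaskType::Standard => "STANDARD",
            TaskType::Persistent => "PERSISTENT",
            TaskType::PersistentCache => "PERSISTENT_CACHE",
            TaskType::Cache => "CACHE",
        }
    }
    fn from_str_name(value: &str) -> Option<Self> {
        match value {
            "STANDARD" => Some(Self::Standard),
            "PERSISTENT" => Some(Self::Persistent),
            "PERSISTENT_CACHE" => Some(Self::PersistentCache),
            "CACHE" => Some(Self::Cache),
            _ => None,
        }
    }
}

fn main() {
    // The two helpers round-trip for every variant, including the new CACHE.
    let variants = [
        TaskType::Standard,
        TaskType::Persistent,
        TaskType::PersistentCache,
        TaskType::Cache,
    ];
    for t in variants {
        assert_eq!(TaskType::from_str_name(t.as_str_name()), Some(t));
    }
    assert_eq!(TaskType::from_str_name("BOGUS"), None);
}
```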

Binary file not shown.

File diff suppressed because it is too large


@ -17,3 +17,12 @@ pub struct Backend {
#[prost(int32, optional, tag = "3")]
pub status_code: ::core::option::Option<i32>,
}
/// Unknown is error detail for Unknown.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Unknown {
/// Unknown error message.
#[prost(string, optional, tag = "1")]
pub message: ::core::option::Option<::prost::alloc::string::String>,
}


@ -28,7 +28,5 @@ pub mod scheduler {
pub mod v2;
}
pub mod security;
// FILE_DESCRIPTOR_SET is the serialized FileDescriptorSet of the proto files.
pub const FILE_DESCRIPTOR_SET: &[u8] = include_bytes!("descriptor.bin");


@ -268,6 +268,9 @@ pub struct UpdateSchedulerRequest {
/// Scheduler features.
#[prost(string, repeated, tag = "8")]
pub features: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
/// Scheduler configuration.
#[prost(bytes = "vec", tag = "9")]
pub config: ::prost::alloc::vec::Vec<u8>,
}
/// ListSchedulersRequest represents request of ListSchedulers.
#[derive(serde::Serialize, serde::Deserialize)]
@ -295,6 +298,9 @@ pub struct ListSchedulersRequest {
/// Dfdaemon commit.
#[prost(string, tag = "7")]
pub commit: ::prost::alloc::string::String,
/// ID of the cluster to which the scheduler belongs.
#[prost(uint64, tag = "8")]
pub scheduler_cluster_id: u64,
}
/// ListSchedulersResponse represents response of ListSchedulers.
#[derive(serde::Serialize, serde::Deserialize)]

File diff suppressed because it is too large


@ -1,315 +0,0 @@
// This file is @generated by prost-build.
/// Certificate request type.
/// Dragonfly supports peer authentication with Mutual TLS (mTLS).
/// For mTLS, all peers need to request TLS certificates for communication.
/// The server side may overwrite any requested certificate field based on its policies.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CertificateRequest {
/// ASN.1 DER form certificate request.
/// The public key in the CSR is used to generate the certificate,
/// and other fields in the generated certificate may be overwritten by the CA.
#[prost(bytes = "vec", tag = "1")]
pub csr: ::prost::alloc::vec::Vec<u8>,
/// Optional: requested certificate validity period.
#[prost(message, optional, tag = "2")]
pub validity_period: ::core::option::Option<::prost_wkt_types::Duration>,
}
/// Certificate response type.
#[derive(serde::Serialize, serde::Deserialize)]
#[allow(clippy::derive_partial_eq_without_eq)]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CertificateResponse {
/// ASN.1 DER form certificate chain.
#[prost(bytes = "vec", repeated, tag = "1")]
pub certificate_chain: ::prost::alloc::vec::Vec<::prost::alloc::vec::Vec<u8>>,
}
/// Generated client implementations.
pub mod certificate_client {
#![allow(unused_variables, dead_code, missing_docs, clippy::let_unit_value)]
use tonic::codegen::*;
use tonic::codegen::http::Uri;
/// Service for managing certificates issued by the CA.
#[derive(Debug, Clone)]
pub struct CertificateClient<T> {
inner: tonic::client::Grpc<T>,
}
impl CertificateClient<tonic::transport::Channel> {
/// Attempt to create a new client by connecting to a given endpoint.
pub async fn connect<D>(dst: D) -> Result<Self, tonic::transport::Error>
where
D: TryInto<tonic::transport::Endpoint>,
D::Error: Into<StdError>,
{
let conn = tonic::transport::Endpoint::new(dst)?.connect().await?;
Ok(Self::new(conn))
}
}
impl<T> CertificateClient<T>
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T::Error: Into<StdError>,
T::ResponseBody: Body<Data = Bytes> + std::marker::Send + 'static,
<T::ResponseBody as Body>::Error: Into<StdError> + std::marker::Send,
{
pub fn new(inner: T) -> Self {
let inner = tonic::client::Grpc::new(inner);
Self { inner }
}
pub fn with_origin(inner: T, origin: Uri) -> Self {
let inner = tonic::client::Grpc::with_origin(inner, origin);
Self { inner }
}
pub fn with_interceptor<F>(
inner: T,
interceptor: F,
) -> CertificateClient<InterceptedService<T, F>>
where
F: tonic::service::Interceptor,
T::ResponseBody: Default,
T: tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
Response = http::Response<
<T as tonic::client::GrpcService<tonic::body::BoxBody>>::ResponseBody,
>,
>,
<T as tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
>>::Error: Into<StdError> + std::marker::Send + std::marker::Sync,
{
CertificateClient::new(InterceptedService::new(inner, interceptor))
}
/// Compress requests with the given encoding.
///
/// This requires the server to support it otherwise it might respond with an
/// error.
#[must_use]
pub fn send_compressed(mut self, encoding: CompressionEncoding) -> Self {
self.inner = self.inner.send_compressed(encoding);
self
}
/// Enable decompressing responses.
#[must_use]
pub fn accept_compressed(mut self, encoding: CompressionEncoding) -> Self {
self.inner = self.inner.accept_compressed(encoding);
self
}
/// Limits the maximum size of a decoded message.
///
/// Default: `4MB`
#[must_use]
pub fn max_decoding_message_size(mut self, limit: usize) -> Self {
self.inner = self.inner.max_decoding_message_size(limit);
self
}
/// Limits the maximum size of an encoded message.
///
/// Default: `usize::MAX`
#[must_use]
pub fn max_encoding_message_size(mut self, limit: usize) -> Self {
self.inner = self.inner.max_encoding_message_size(limit);
self
}
/// Using provided CSR, returns a signed certificate.
pub async fn issue_certificate(
&mut self,
request: impl tonic::IntoRequest<super::CertificateRequest>,
) -> std::result::Result<
tonic::Response<super::CertificateResponse>,
tonic::Status,
> {
self.inner
.ready()
.await
.map_err(|e| {
tonic::Status::new(
tonic::Code::Unknown,
format!("Service was not ready: {}", e.into()),
)
})?;
let codec = tonic::codec::ProstCodec::default();
let path = http::uri::PathAndQuery::from_static(
"/security.Certificate/IssueCertificate",
);
let mut req = request.into_request();
req.extensions_mut()
.insert(GrpcMethod::new("security.Certificate", "IssueCertificate"));
self.inner.unary(req, path, codec).await
}
}
}
/// Generated server implementations.
pub mod certificate_server {
#![allow(unused_variables, dead_code, missing_docs, clippy::let_unit_value)]
use tonic::codegen::*;
/// Generated trait containing gRPC methods that should be implemented for use with CertificateServer.
#[async_trait]
pub trait Certificate: std::marker::Send + std::marker::Sync + 'static {
/// Using provided CSR, returns a signed certificate.
async fn issue_certificate(
&self,
request: tonic::Request<super::CertificateRequest>,
) -> std::result::Result<
tonic::Response<super::CertificateResponse>,
tonic::Status,
>;
}
/// Service for managing certificates issued by the CA.
#[derive(Debug)]
pub struct CertificateServer<T> {
inner: Arc<T>,
accept_compression_encodings: EnabledCompressionEncodings,
send_compression_encodings: EnabledCompressionEncodings,
max_decoding_message_size: Option<usize>,
max_encoding_message_size: Option<usize>,
}
impl<T> CertificateServer<T> {
pub fn new(inner: T) -> Self {
Self::from_arc(Arc::new(inner))
}
pub fn from_arc(inner: Arc<T>) -> Self {
Self {
inner,
accept_compression_encodings: Default::default(),
send_compression_encodings: Default::default(),
max_decoding_message_size: None,
max_encoding_message_size: None,
}
}
pub fn with_interceptor<F>(
inner: T,
interceptor: F,
) -> InterceptedService<Self, F>
where
F: tonic::service::Interceptor,
{
InterceptedService::new(Self::new(inner), interceptor)
}
/// Enable decompressing requests with the given encoding.
#[must_use]
pub fn accept_compressed(mut self, encoding: CompressionEncoding) -> Self {
self.accept_compression_encodings.enable(encoding);
self
}
/// Compress responses with the given encoding, if the client supports it.
#[must_use]
pub fn send_compressed(mut self, encoding: CompressionEncoding) -> Self {
self.send_compression_encodings.enable(encoding);
self
}
/// Limits the maximum size of a decoded message.
///
/// Default: `4MB`
#[must_use]
pub fn max_decoding_message_size(mut self, limit: usize) -> Self {
self.max_decoding_message_size = Some(limit);
self
}
/// Limits the maximum size of an encoded message.
///
/// Default: `usize::MAX`
#[must_use]
pub fn max_encoding_message_size(mut self, limit: usize) -> Self {
self.max_encoding_message_size = Some(limit);
self
}
}
impl<T, B> tonic::codegen::Service<http::Request<B>> for CertificateServer<T>
where
T: Certificate,
B: Body + std::marker::Send + 'static,
B::Error: Into<StdError> + std::marker::Send + 'static,
{
type Response = http::Response<tonic::body::BoxBody>;
type Error = std::convert::Infallible;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(
&mut self,
_cx: &mut Context<'_>,
) -> Poll<std::result::Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: http::Request<B>) -> Self::Future {
match req.uri().path() {
"/security.Certificate/IssueCertificate" => {
#[allow(non_camel_case_types)]
struct IssueCertificateSvc<T: Certificate>(pub Arc<T>);
impl<
T: Certificate,
> tonic::server::UnaryService<super::CertificateRequest>
for IssueCertificateSvc<T> {
type Response = super::CertificateResponse;
type Future = BoxFuture<
tonic::Response<Self::Response>,
tonic::Status,
>;
fn call(
&mut self,
request: tonic::Request<super::CertificateRequest>,
) -> Self::Future {
let inner = Arc::clone(&self.0);
let fut = async move {
<T as Certificate>::issue_certificate(&inner, request).await
};
Box::pin(fut)
}
}
let accept_compression_encodings = self.accept_compression_encodings;
let send_compression_encodings = self.send_compression_encodings;
let max_decoding_message_size = self.max_decoding_message_size;
let max_encoding_message_size = self.max_encoding_message_size;
let inner = self.inner.clone();
let fut = async move {
let method = IssueCertificateSvc(inner);
let codec = tonic::codec::ProstCodec::default();
let mut grpc = tonic::server::Grpc::new(codec)
.apply_compression_config(
accept_compression_encodings,
send_compression_encodings,
)
.apply_max_message_size_config(
max_decoding_message_size,
max_encoding_message_size,
);
let res = grpc.unary(method, req).await;
Ok(res)
};
Box::pin(fut)
}
_ => {
Box::pin(async move {
Ok(
http::Response::builder()
.status(200)
.header("grpc-status", tonic::Code::Unimplemented as i32)
.header(
http::header::CONTENT_TYPE,
tonic::metadata::GRPC_CONTENT_TYPE,
)
.body(empty_body())
.unwrap(),
)
})
}
}
}
}
impl<T> Clone for CertificateServer<T> {
fn clone(&self) -> Self {
let inner = self.inner.clone();
Self {
inner,
accept_compression_encodings: self.accept_compression_encodings,
send_compression_encodings: self.send_compression_encodings,
max_decoding_message_size: self.max_decoding_message_size,
max_encoding_message_size: self.max_encoding_message_size,
}
}
}
/// Generated gRPC service name
pub const SERVICE_NAME: &str = "security.Certificate";
impl<T> tonic::server::NamedService for CertificateServer<T> {
const NAME: &'static str = SERVICE_NAME;
}
}