Compare commits


80 Commits

Author SHA1 Message Date
renovate[bot] b9cabb2e09 fix(deps): update longhorn branch repo dependencies to v1.9.1 2025-07-27 06:27:34 +00:00
David Ko 979b8d559b release: update version file for v1.9.1
Signed-off-by: David Ko <dko@suse.com>
2025-07-22 22:00:40 +00:00
renovate[bot] 82510b717b fix(deps): update patch digest dependencies 2025-07-20 06:21:22 +00:00
Derek Su e4f4736028 fix: revert "fix: check backup target status before backup creation"
This reverts commit 0ac64f1d1a.

The backport should be in v1.9.2 instead.

Longhorn 10085

Signed-off-by: Derek Su <derek.su@suse.com>
2025-07-18 19:01:37 +08:00
nina zhan 0ac64f1d1a fix: check backup target status before backup creation
Check that the backup target is available before creating a backup,
a backup backing image, or a system backup.

ref #10085

Signed-off-by: nina zhan <ninazhan666@gmail.com>
(cherry picked from commit 00fb7c6e65)
2025-07-17 16:42:05 -07:00
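
A minimal sketch of the pre-creation check this commit describes, assuming the datastore exposes a GetBackupTargetRO helper and the BackupTarget CR reports Status.Available (both names are assumptions here, not the actual validator code):

package webhook

import (
	"fmt"

	"github.com/pkg/errors"

	"github.com/longhorn/longhorn-manager/datastore"
)

// checkBackupTargetAvailable is a hypothetical helper: reject creation of a
// Backup, BackupBackingImage, or SystemBackup when the referenced backup
// target is not currently reachable.
func checkBackupTargetAvailable(ds *datastore.DataStore, name string) error {
	bt, err := ds.GetBackupTargetRO(name)
	if err != nil {
		return errors.Wrapf(err, "failed to get backup target %v", name)
	}
	if !bt.Status.Available {
		return fmt.Errorf("backup target %v is not available", name)
	}
	return nil
}
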
David Ko 1477720686 release: update version file for v1.9.1-rc1
Signed-off-by: David Ko <dko@suse.com>
2025-07-16 00:23:19 +00:00
renovate[bot] 47b135df6e fix(deps): update github.com/longhorn/go-iscsi-helper digest to 69ce6f3 2025-07-13 14:42:40 +00:00
renovate[bot] 768e06189a fix(deps): update patch digest dependencies 2025-07-13 06:27:52 +00:00
Derek Su bb02bf8c48 chore(workflow): replace tibdex/github-app-token with actions/create-github-app-token
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 7234295e83)
2025-07-11 19:30:28 +08:00
Derek Su da623fa463 chore(workflow): add workflow_dispatch for create-crd-update-pr-in-longhorn-repo.yml
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 4ddb6cb733)

# Conflicts:
#	.github/workflows/create-crd-update-pr-in-longhorn-repo.yml
2025-07-11 19:30:28 +08:00
Derek Su 6ab392b0ad chore(workflow): generate longhorn-github-bot token for create-crd-update-pr-in-longhorn-repo.yml
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit e4e7f35a8e)

# Conflicts:
#	.github/workflows/create-crd-update-pr-in-longhorn-repo.yml
2025-07-11 19:30:28 +08:00
Derek Su c59345fa55 feat(crd): remove preserveUnknownFields patch
- Kubernetes enforces the rule: If `preserveUnknownFields: true`, then `conversion.Strategy`
  must be `None`, because
    - When `preserveUnknownFields: true` is set, Kubernetes skips strict
      schema validation and structural typing.
    - Webhook conversion requires a well-defined schema and structural CRs,
      so the two settings are incompatible.
  Starting from Longhorn v1.10, the v1beta1 API has been removed, and the
  conversion webhook is no longer used.

- In addition, preserveUnknownFields is deprecated in `apiextensions.k8s.io/v1`
  since Kubernetes v1.22, and the default value is false.

Longhorn 11263

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 57b1f20ff8)
2025-07-11 18:53:48 +08:00
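
The rule described above can be shown with the apiextensions/v1 Go types; this fragment is illustrative only and would be rejected by the API server:

package crdexample

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// Invalid combination: preserveUnknownFields: true disables pruning and
// structural-schema enforcement, while webhook conversion requires them,
// so the strategy must be "None" whenever the flag is true. The field is
// deprecated in apiextensions.k8s.io/v1 and defaults to false.
var invalidSpec = apiextv1.CustomResourceDefinitionSpec{
	Group:                 "longhorn.io",
	PreserveUnknownFields: true,
	Conversion: &apiextv1.CustomResourceConversion{
		Strategy: apiextv1.WebhookConverter, // rejected; only NoneConverter is allowed here
	},
}
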
Chin-Ya Huang 56c10da942 chore(cve): update bci-base image version to 15.7
longhorn/longhorn-11239

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
2025-07-09 14:20:45 +08:00
James Lu f4c9114830 fix(backup): add backup volume validator
ref: longhorn/longhorn 11154

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit 115f8c18b4)
2025-07-08 15:06:32 +08:00
James Lu eeeada5cac fix(backup): delete duplicated backup volumes
ref: longhorn/longhorn 11154

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit ec2a5a23b8)
2025-07-08 15:06:32 +08:00
Chin-Ya Huang 14a7f43322 fix(system-backup): failed when volume backup policy is if-not-present
longhorn/longhorn-11232

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 596d573021)
2025-07-07 08:49:21 +00:00
Chin-Ya Huang 2504bbb27f test(system-backup): volume backup if-not-present with invalid snapshot creationTime
longhorn/longhorn-11232

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 7c8ddac721)
2025-07-07 08:49:21 +00:00
Phan Le 99d452b80e fix: do not silence the error returned by ReconcileBackupVolumeState
Users would have no idea why the volume is stuck if we silence the
error.

longhorn-11152

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 9cbb80614f)
2025-07-04 14:18:46 +08:00
David Cheng 831664ac3a feat(scheduler): add detailed error messages for disk schedulability checks
Signed-off-by: David Cheng <davidcheng0922@gmail.com>
(cherry picked from commit 16f796214e)
2025-07-02 21:02:20 +08:00
David Cheng 79db7946c1 feat(scheduler): add detailed error messages for disk schedulability checks
- Refactor IsSchedulableToDisk to return detailed error reason alongside result
- Improve logging in node controller and replica scheduler for unschedulable conditions
- Replace ambiguous messages with actionable suggestions
- Update related unit tests to match new error message format

Signed-off-by: David Cheng <davidcheng0922@gmail.com>
(cherry picked from commit 982b21fd3a)
2025-07-02 21:02:20 +08:00
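
A sketch of the refactoring pattern this commit names, assuming the new signature returns a reason string alongside the boolean (the real function takes more parameters):

package scheduler

import "fmt"

// isSchedulableToDisk illustrates returning an actionable reason next to
// the result, so callers can log why a replica cannot be placed.
func isSchedulableToDisk(requested, schedulable int64) (bool, string) {
	if schedulable <= 0 {
		return false, "disk has no schedulable storage; check over-provisioning and minimal-available settings"
	}
	if requested > schedulable {
		return false, fmt.Sprintf("requested size %v exceeds schedulable storage %v; free space or add disks", requested, schedulable)
	}
	return true, ""
}
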
Derek Su 2d6125b1dc feat: add error in the engine image condition
To improve troubleshooting, record the error in the engine image condition.

Longhorn 9845

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit c65035e669)
2025-06-30 16:35:23 +08:00
renovate[bot] 60901be843 fix(deps): update patch digest dependencies 2025-06-29 05:21:02 +00:00
James Lu d33884d10d fix(snapshot): enqueue snapshots when a volume CR is updated
ref: longhorn/longhorn 10874

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit d75b51f3d7)
2025-06-28 12:01:18 +08:00
Chin-Ya Huang 28f3cfa233 fix(storage-network): skip restarting migratable RWX volume workloads
longhorn/longhorn-11158

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit a0ddfc27b9)
2025-06-25 06:33:10 +00:00
Chin-Ya Huang 21ae0eabf5 fix(support-bundle): NPE panic after update conflict
longhorn/longhorn-11169

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 7fd9cf808a)
2025-06-25 04:36:19 +00:00
renovate[bot] 7f5ddb7ca2 fix(deps): update patch digest dependencies 2025-06-22 06:15:03 +00:00
renovate[bot] 1089aa267c fix(deps): update patch digest dependencies 2025-06-15 15:00:28 +00:00
renovate[bot] f77b277cc4 fix(deps): update module github.com/urfave/cli to v1.22.17 2025-06-15 05:52:42 +00:00
renovate[bot] 6f07308aa7 fix(deps): update patch digest dependencies 2025-06-08 06:55:28 +00:00
Chin-Ya Huang badc1f7c39 fix(recurringjob): ensure each goroutine uses its own *logrus.Entry
longhorn/longhorn-11016

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 842a85bc8e)
2025-06-06 09:44:38 +08:00
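
A sketch of the fix pattern, assuming the jobs fan out in goroutines: logrus's WithField returns a fresh *logrus.Entry, so deriving one per goroutine avoids concurrent writes to a shared field map.

package recurringjob

import "github.com/sirupsen/logrus"

// processVolumes derives a new *logrus.Entry per goroutine instead of
// sharing (and mutating) a single one across workers.
func processVolumes(base *logrus.Entry, volumes []string) {
	for _, v := range volumes {
		go func(name string) {
			log := base.WithField("volume", name) // fresh entry with its own Data map
			log.Info("starting recurring job")
		}(v)
	}
}
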
Chin-Ya Huang 10c30c040c fix(recurring-job): concurrent write to recurringJob.Spec.Labels
longhorn/longhorn-11016

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 849f677822)
2025-06-06 09:44:38 +08:00
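
In the same vein, a sketch of avoiding the concurrent map write: give each worker its own copy of the labels rather than the shared recurringJob.Spec.Labels map (copyLabels is a hypothetical helper):

package recurringjob

// copyLabels clones the map before per-job mutation, so no two goroutines
// ever write to the same underlying map.
func copyLabels(src map[string]string) map[string]string {
	dst := make(map[string]string, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}
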
Chin-Ya Huang 1443f4ffb6 fix(recurring-job): newVolumeJob not controlled by the concurrentLimiter
longhorn/longhorn-11016

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit b0e446e623)
2025-06-06 09:44:38 +08:00
Raphanus Lo ac16968f98 fix(backingimage): prevent eviction of in-use copies
longhorn/longhorn-11053

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit c5e2988fd3)
2025-06-05 02:23:38 +00:00
Derek Su ce40c4d86f fix(workflow): fix invalid curl path
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit a79573a761)
2025-06-04 17:14:10 +08:00
Derek Su e8ae1f15eb fix(workflow): use target branch instead
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 43e36b45f5)
2025-06-04 08:36:02 +00:00
Raphanus Lo c7c414f531 fix(backingimage): reconcile backing image disk evict event by spec disks
longhorn/longhorn-11034

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit cae3910641)
2025-06-04 16:15:35 +08:00
Phan Le b70dd73d0d feat: add StartupProbe for Longhorn CSI plugin
StartupProbe is needed because the Longhorn CSI plugin might need more time
at the beginning to establish a connection to the Longhorn manager API. Without
a StartupProbe, we would have to rely on the LivenessProbe, which would not
wait long enough, unnecessarily crash the Longhorn CSI plugin container,
and cause more delay.

longhorn-9482

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 604a88fbb6)
2025-06-03 08:49:04 +00:00
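
A sketch of such a StartupProbe using the core/v1 Go types; the endpoint, port, and thresholds are assumptions, not the values shipped by Longhorn:

package csi

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// csiStartupProbe gives the plugin up to FailureThreshold*PeriodSeconds
// (here 5 minutes) to reach the manager API before liveness checks apply.
var csiStartupProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/healthz",
			Port: intstr.FromInt(9808), // assumed liveness-probe port
		},
	},
	FailureThreshold: 60,
	PeriodSeconds:    5,
}
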
Phan Le db917b65ba fix: add longhorn-csi-plugin retry logic
Retry the Longhorn client initialization instead of failing immediately after
the first try

longhorn-9482

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 9deae1ff71)
2025-06-03 08:49:04 +00:00
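
The retry idea, sketched with apimachinery's wait helpers; connect stands in for the real Longhorn client initialization:

package csi

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryInit polls the given initialization function until it succeeds or
// the timeout elapses, instead of failing on the first attempt.
func retryInit(ctx context.Context, connect func() error) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if err := connect(); err != nil {
				return false, nil // treat as transient; keep polling
			}
			return true, nil
		})
}
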
renovate[bot] 81d8919e31 fix(deps): update longhorn branch repo dependencies to v1.9.0 2025-06-01 06:37:06 +00:00
renovate[bot] bed82e9332 fix(deps): update patch digest dependencies 2025-06-01 06:16:45 +00:00
David Ko 8bb4d023b6 release: update version file for v1.9.0
Signed-off-by: David Ko <dko@suse.com>
2025-05-27 02:52:58 +00:00
David Ko 6d56ab6d30 release: update version file for v1.9.0-rc4
Signed-off-by: David Ko <dko@suse.com>
2025-05-26 04:11:40 +00:00
renovate[bot] 425cec7960 fix(deps): update module github.com/longhorn/longhorn-engine to v1.9.0-rc3 2025-05-25 06:14:13 +00:00
renovate[bot] 37c2add23a fix(deps): update patch digest dependencies 2025-05-25 05:53:20 +00:00
Derek Su 1b4e5a3953 fix(setting): orphan-resource-auto-deletion-grace-period should be in Orphan category
Longhorn 10904

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 412ede8bba)
2025-05-21 10:36:29 +09:00
David Ko cab28508e5 release: update version file for v1.9.0-rc3
Signed-off-by: David Ko <dko@suse.com>
2025-05-20 06:43:25 +00:00
Raphanus Lo dd6c53173c fix(orphan): improve orphan parameter code readability
Longhorn 10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 08801f5a47)
2025-05-19 09:44:55 +00:00
Raphanus Lo 1855ebb481 feat(orphan): make orphan CR spec immutable
Longhorn 10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 635f2a2dea)
2025-05-19 09:44:55 +00:00
Raphanus Lo 938d167182 fix(orphan): prevent retry while deleting illegal instance orphans
Longhorn 10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 69f547d951)
2025-05-19 09:44:55 +00:00
Raphanus Lo d384cd916a feat(orphan): bind orphan with instance using uuid
longhorn/longhorn-10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit fd72dff47a)
2025-05-19 09:44:55 +00:00
Derek Su bcc2356a85 feat: record instance uuid in orphan.spec.parameters["InstanceUID"]
Longhorn 10888

Signed-off-by: Derek Su <derek.su@suse.com>
Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 71fc752773)
2025-05-19 09:44:55 +00:00
Derek Su b26e52446c feat: record uuid of engine or replica instance to im.status
Longhorn 10888

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 045c08b44e)
2025-05-19 09:44:55 +00:00
Derek Su 8163bac607 feat(crd): add uuid in InstanceProcessStatus and InstanceStatus
Longhorn 10888

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 9a9bf8188d)
2025-05-19 09:44:55 +00:00
Derek Su 8723a36e64 chore(vendor): update dependencies
Longhorn 10888

Signed-off-by: Derek Su <derek.su@suse.com>
Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit b797aaac95)
2025-05-19 09:44:55 +00:00
Derek Su 205861cae6 feat(orphan): introduce orphan-resource-auto-deletion-grace-period setting
In the current implementation, with auto orphan deletion enabled, the orphan CR
and its associated process are deleted almost immediately after the orphan CR is
created. This behavior might be too abrupt. We could introduce a configurable
setting, such as a wait interval (deletion grace period), to delay this automatic
deletion. For instance, if the interval is set to 60 seconds, the orphan CR and
its process would be deleted 60 seconds after the creation timestamp.

Manual deletion of the orphan CR by the user would bypass this delay and proceed
immediately.

Longhorn 10904

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit ff4148dbeb)
2025-05-19 15:07:09 +09:00
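
The timing rule from this commit, reduced to a sketch: automatic deletion fires only once the grace period has elapsed since the orphan CR's creation timestamp, while manual deletion skips the check entirely.

package orphan

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pastGracePeriod reports whether an orphan created at the given timestamp
// is old enough for automatic deletion.
func pastGracePeriod(creation metav1.Time, gracePeriod time.Duration, now time.Time) bool {
	return now.Sub(creation.Time) >= gracePeriod
}
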
Derek Su 7a062e4714 fix(orphan): replace switch-case with if-else
switch-case works better for conditions based on a single number or string,
whereas if-else is more suitable for decisions involving multiple variables.

Longhorn 10888

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit e83d51f56e)
2025-05-19 15:07:09 +09:00
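
A toy illustration of the point: a decision over two independent variables reads more directly as if-else than as a switch on a single value.

package orphan

// orphanAction branches on two booleans; a switch would need contrived
// case expressions here, while if-else states the combinations plainly.
func orphanAction(autoDelete, instanceExists bool) string {
	if autoDelete && instanceExists {
		return "delete orphan and instance"
	}
	if autoDelete {
		return "delete orphan"
	}
	return "keep"
}
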
renovate[bot] 207eb71df4 fix(deps): update patch digest dependencies 2025-05-17 03:53:02 +00:00
Raphanus Lo c95b2e4ada fix(orphan): prevent creating orphans from a terminating instance
longhorn/longhorn-10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 0c94ce3177)
2025-05-16 21:18:27 +09:00
Raphanus Lo 2aadde18bc fix(orphan): remove owner reference from orphan CRs to prevent blocking instance manager termination
longhorn/longhorn-10888

Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 48a757ad4a)
2025-05-16 21:18:27 +09:00
Chin-Ya Huang bf84f3d0b1 revert: "fix: enqueue snapshot to ensure volumeattachment ticket can be correctly cleaned up"
This reverts commit 014fd6ea91.

longhorn/longhorn-10808

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 523a0aee5d)
2025-05-15 19:51:33 +09:00
Derek Su be9e6e4e24 chore: update log messages
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 96d7e664b3)
2025-05-14 10:30:57 +09:00
Derek Su 9542b23511 fix: enqueue snapshot to ensure volumeattachment ticket can be correctly cleaned up
Requeue the snapshot if there is an attachment ticket for it, to ensure
the VolumeAttachment can be cleaned up after the snapshot is created.

Longhorn 10874

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 014fd6ea91)
2025-05-14 10:30:57 +09:00
James Lu acdebda5ef fix(rebuild): change the code flow for offline rebuilding
Reduce the false warning logs for offline rebuilding
when deleting a volume.

ref: longhorn/longhorn 10889

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit 3468976c36)
2025-05-13 13:42:09 +09:00
renovate[bot] c0cee11eda fix(deps): update github.com/longhorn/go-iscsi-helper digest to ceffe5d 2025-05-11 14:27:53 +00:00
renovate[bot] a417db2000 fix(deps): update longhorn branch repo dependencies to v1.9.0-rc2 2025-05-11 07:46:38 +00:00
renovate[bot] 143003ff67 fix(deps): update patch digest dependencies 2025-05-11 07:27:33 +00:00
David Ko ecd00b3c24 release: update version file for v1.9.0-rc2
Signed-off-by: David Ko <dko@suse.com>
2025-05-07 08:40:17 +00:00
renovate[bot] 0187fd7d82 chore(deps): update dependency go to v1.24.3 2025-05-07 08:30:13 +00:00
Derek Su 5aa86bcfac fix(controller): remove unnecessary lasso dependency
Longhorn 10856

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit ad0cdbf3c8)
2025-05-06 16:45:49 +00:00
Chin-Ya Huang 4b08f35386 fix(snapshot): skip new snapshot handling if engine is purging
longhorn/longhorn-10808

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit a51de50092)
2025-05-06 16:45:49 +09:00
Chin-Ya Huang 437bea1d81 fix(snapshot): skip new snapshot handling if engine is purging
longhorn/longhorn-10808

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit a51de50092)
2025-05-06 07:20:20 +00:00
Chin-Ya Huang 782555bb3e fix(dr-volume): backup volume not found during up-to-date check
longhorn/longhorn-10824

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit bb52d1da76)
2025-05-05 07:53:54 +00:00
renovate[bot] 0d1dc62612 chore(deps): update docker/dockerfile docker tag to v1.15.1 2025-05-04 06:20:24 +00:00
renovate[bot] 1f4898acce fix(deps): update longhorn branch repo dependencies to v1.9.0-rc1 2025-05-04 06:03:12 +00:00
renovate[bot] fbb2ff0993 fix(deps): update k8s.io/utils digest to 0f33e8f 2025-05-04 05:44:02 +00:00
Raphanus Lo 78420e9f7d feat(setting): replace orphan-auto-deletion with orphan-resource-auto-deletion
Signed-off-by: Raphanus Lo <yunchang.lo@suse.com>
(cherry picked from commit 40fd3ab08f)
2025-05-02 08:59:39 +00:00
David Ko a6079a9394 release: update version file for v1.9.0-rc1
Signed-off-by: David Ko <dko@suse.com>
2025-04-28 05:47:40 +00:00
renovate[bot] 0f65c400dc chore(deps): update module k8s.io/legacy-cloud-providers to v0.30.12 2025-04-28 07:11:38 +08:00
renovate[bot] 1c7ab7066c fix(deps): update module github.com/longhorn/longhorn-engine to v1.9.0-dev-20250420 2025-04-28 06:55:18 +08:00
Derek Su 23b9caaca5 chore(Dockerfile): revert back to registry.suse.com/bci/bci-base:15.6
registry.suse.com/bci/bci-base:15.7 is in the TechPreview stage.

Signed-off-by: Derek Su <derek.su@suse.com>
2025-04-25 21:04:59 +08:00
648 changed files with 21601 additions and 54442 deletions

View File

@@ -82,4 +82,4 @@ jobs:
title: "chore(crd): update crds.yaml and manifests (PR longhorn/longhorn-manager#${{ steps.pr_info.outputs.PR_NUMBER }})"
body: |
This PR updates the crds.yaml and manifests.
It was triggered by longhorn/longhorn-manager#${{ steps.pr_info.outputs.PR_NUMBER }}.
It was triggered by longhorn/longhorn-manager#${{ steps.pr_info.outputs.PR_NUMBER }}.

View File

@@ -74,7 +74,7 @@ func (s *Server) EngineImageDelete(rw http.ResponseWriter, req *http.Request) error {
func (s *Server) EngineImageDelete(rw http.ResponseWriter, req *http.Request) error {
id := mux.Vars(req)["name"]
if err := s.m.DeleteEngineImage(id); err != nil {
if err := s.m.DeleteEngineImageByName(id); err != nil {
return errors.Wrap(err, "failed to delete engine image")
}

View File

@@ -2,11 +2,9 @@ package api
import (
"fmt"
"net"
"net/http"
"net/http/httputil"
"net/url"
"strconv"
"github.com/gorilla/mux"
"github.com/pkg/errors"
@@ -207,7 +205,7 @@ func UploadParametersForBackingImage(m *manager.VolumeManager) func(req *http.Re
if bids.Status.CurrentState != longhorn.BackingImageStatePending {
return nil, fmt.Errorf("upload server for backing image %s has not been initiated", name)
}
return map[string]string{ParameterKeyAddress: net.JoinHostPort(pod.Status.PodIP, strconv.Itoa(engineapi.BackingImageDataSourceDefaultPort))}, nil
return map[string]string{ParameterKeyAddress: fmt.Sprintf("%s:%d", pod.Status.PodIP, engineapi.BackingImageDataSourceDefaultPort)}, nil
}
}

View File

@@ -29,41 +29,39 @@ type Empty struct {
type Volume struct {
client.Resource
Name string `json:"name"`
Size string `json:"size"`
Frontend longhorn.VolumeFrontend `json:"frontend"`
DisableFrontend bool `json:"disableFrontend"`
FromBackup string `json:"fromBackup"`
RestoreVolumeRecurringJob longhorn.RestoreVolumeRecurringJobType `json:"restoreVolumeRecurringJob"`
DataSource longhorn.VolumeDataSource `json:"dataSource"`
DataLocality longhorn.DataLocality `json:"dataLocality"`
StaleReplicaTimeout int `json:"staleReplicaTimeout"`
State longhorn.VolumeState `json:"state"`
Robustness longhorn.VolumeRobustness `json:"robustness"`
Image string `json:"image"`
CurrentImage string `json:"currentImage"`
BackingImage string `json:"backingImage"`
Created string `json:"created"`
LastBackup string `json:"lastBackup"`
LastBackupAt string `json:"lastBackupAt"`
LastAttachedBy string `json:"lastAttachedBy"`
Standby bool `json:"standby"`
RestoreRequired bool `json:"restoreRequired"`
RestoreInitiated bool `json:"restoreInitiated"`
RevisionCounterDisabled bool `json:"revisionCounterDisabled"`
SnapshotDataIntegrity longhorn.SnapshotDataIntegrity `json:"snapshotDataIntegrity"`
UnmapMarkSnapChainRemoved longhorn.UnmapMarkSnapChainRemoved `json:"unmapMarkSnapChainRemoved"`
BackupCompressionMethod longhorn.BackupCompressionMethod `json:"backupCompressionMethod"`
BackupBlockSize string `json:"backupBlockSize"`
ReplicaSoftAntiAffinity longhorn.ReplicaSoftAntiAffinity `json:"replicaSoftAntiAffinity"`
ReplicaZoneSoftAntiAffinity longhorn.ReplicaZoneSoftAntiAffinity `json:"replicaZoneSoftAntiAffinity"`
ReplicaDiskSoftAntiAffinity longhorn.ReplicaDiskSoftAntiAffinity `json:"replicaDiskSoftAntiAffinity"`
DataEngine longhorn.DataEngineType `json:"dataEngine"`
SnapshotMaxCount int `json:"snapshotMaxCount"`
SnapshotMaxSize string `json:"snapshotMaxSize"`
ReplicaRebuildingBandwidthLimit int64 `json:"replicaRebuildingBandwidthLimit"`
FreezeFilesystemForSnapshot longhorn.FreezeFilesystemForSnapshot `json:"freezeFilesystemForSnapshot"`
BackupTargetName string `json:"backupTargetName"`
Name string `json:"name"`
Size string `json:"size"`
Frontend longhorn.VolumeFrontend `json:"frontend"`
DisableFrontend bool `json:"disableFrontend"`
FromBackup string `json:"fromBackup"`
RestoreVolumeRecurringJob longhorn.RestoreVolumeRecurringJobType `json:"restoreVolumeRecurringJob"`
DataSource longhorn.VolumeDataSource `json:"dataSource"`
DataLocality longhorn.DataLocality `json:"dataLocality"`
StaleReplicaTimeout int `json:"staleReplicaTimeout"`
State longhorn.VolumeState `json:"state"`
Robustness longhorn.VolumeRobustness `json:"robustness"`
Image string `json:"image"`
CurrentImage string `json:"currentImage"`
BackingImage string `json:"backingImage"`
Created string `json:"created"`
LastBackup string `json:"lastBackup"`
LastBackupAt string `json:"lastBackupAt"`
LastAttachedBy string `json:"lastAttachedBy"`
Standby bool `json:"standby"`
RestoreRequired bool `json:"restoreRequired"`
RestoreInitiated bool `json:"restoreInitiated"`
RevisionCounterDisabled bool `json:"revisionCounterDisabled"`
SnapshotDataIntegrity longhorn.SnapshotDataIntegrity `json:"snapshotDataIntegrity"`
UnmapMarkSnapChainRemoved longhorn.UnmapMarkSnapChainRemoved `json:"unmapMarkSnapChainRemoved"`
BackupCompressionMethod longhorn.BackupCompressionMethod `json:"backupCompressionMethod"`
ReplicaSoftAntiAffinity longhorn.ReplicaSoftAntiAffinity `json:"replicaSoftAntiAffinity"`
ReplicaZoneSoftAntiAffinity longhorn.ReplicaZoneSoftAntiAffinity `json:"replicaZoneSoftAntiAffinity"`
ReplicaDiskSoftAntiAffinity longhorn.ReplicaDiskSoftAntiAffinity `json:"replicaDiskSoftAntiAffinity"`
DataEngine longhorn.DataEngineType `json:"dataEngine"`
SnapshotMaxCount int `json:"snapshotMaxCount"`
SnapshotMaxSize string `json:"snapshotMaxSize"`
FreezeFilesystemForSnapshot longhorn.FreezeFilesystemForSnapshot `json:"freezeFilesystemForSnapshot"`
BackupTargetName string `json:"backupTargetName"`
DiskSelector []string `json:"diskSelector"`
NodeSelector []string `json:"nodeSelector"`
@@ -178,7 +176,6 @@ type Backup struct {
NewlyUploadedDataSize string `json:"newlyUploadDataSize"`
ReUploadedDataSize string `json:"reUploadedDataSize"`
BackupTargetName string `json:"backupTargetName"`
BlockSize string `json:"blockSize"`
}
type BackupBackingImage struct {
@@ -252,8 +249,6 @@ type Attachment struct {
}
type VolumeAttachment struct {
client.Resource
Attachments map[string]Attachment `json:"attachments"`
Volume string `json:"volume"`
}
@@ -373,10 +368,6 @@ type UpdateSnapshotMaxSizeInput struct {
SnapshotMaxSize string `json:"snapshotMaxSize"`
}
type UpdateReplicaRebuildingBandwidthLimitInput struct {
ReplicaRebuildingBandwidthLimit string `json:"replicaRebuildingBandwidthLimit"`
}
type UpdateBackupCompressionMethodInput struct {
BackupCompressionMethod string `json:"backupCompressionMethod"`
}
@@ -405,10 +396,6 @@ type UpdateSnapshotMaxSize struct {
SnapshotMaxSize string `json:"snapshotMaxSize"`
}
type UpdateReplicaRebuildingBandwidthLimit struct {
ReplicaRebuildingBandwidthLimit string `json:"replicaRebuildingBandwidthLimit"`
}
type UpdateFreezeFilesystemForSnapshotInput struct {
FreezeFilesystemForSnapshot string `json:"freezeFilesystemForSnapshot"`
}
@@ -675,7 +662,6 @@ func NewSchema() *client.Schemas {
schemas.AddType("UpdateSnapshotDataIntegrityInput", UpdateSnapshotDataIntegrityInput{})
schemas.AddType("UpdateSnapshotMaxCountInput", UpdateSnapshotMaxCountInput{})
schemas.AddType("UpdateSnapshotMaxSizeInput", UpdateSnapshotMaxSizeInput{})
schemas.AddType("UpdateReplicaRebuildingBandwidthLimitInput", UpdateReplicaRebuildingBandwidthLimitInput{})
schemas.AddType("UpdateBackupCompressionInput", UpdateBackupCompressionMethodInput{})
schemas.AddType("UpdateUnmapMarkSnapChainRemovedInput", UpdateUnmapMarkSnapChainRemovedInput{})
schemas.AddType("UpdateReplicaSoftAntiAffinityInput", UpdateReplicaSoftAntiAffinityInput{})
@@ -1107,10 +1093,6 @@ func volumeSchema(volume *client.Schema) {
Input: "UpdateSnapshotMaxSizeInput",
},
"updateReplicaRebuildingBandwidthLimit": {
Input: "UpdateReplicaRebuildingBandwidthLimitInput",
},
"updateBackupCompressionMethod": {
Input: "UpdateBackupCompressionMethodInput",
},
@@ -1621,32 +1603,30 @@ func toVolumeResource(v *longhorn.Volume, ves []*longhorn.Engine, vrs []*longhor
Actions: map[string]string{},
Links: map[string]string{},
},
Name: v.Name,
Size: strconv.FormatInt(v.Spec.Size, 10),
Frontend: v.Spec.Frontend,
DisableFrontend: v.Spec.DisableFrontend,
LastAttachedBy: v.Spec.LastAttachedBy,
FromBackup: v.Spec.FromBackup,
DataSource: v.Spec.DataSource,
NumberOfReplicas: v.Spec.NumberOfReplicas,
ReplicaAutoBalance: v.Spec.ReplicaAutoBalance,
DataLocality: v.Spec.DataLocality,
SnapshotDataIntegrity: v.Spec.SnapshotDataIntegrity,
SnapshotMaxCount: v.Spec.SnapshotMaxCount,
SnapshotMaxSize: strconv.FormatInt(v.Spec.SnapshotMaxSize, 10),
ReplicaRebuildingBandwidthLimit: v.Spec.ReplicaRebuildingBandwidthLimit,
BackupCompressionMethod: v.Spec.BackupCompressionMethod,
BackupBlockSize: strconv.FormatInt(v.Spec.BackupBlockSize, 10),
StaleReplicaTimeout: v.Spec.StaleReplicaTimeout,
Created: v.CreationTimestamp.String(),
Image: v.Spec.Image,
BackingImage: v.Spec.BackingImage,
Standby: v.Spec.Standby,
DiskSelector: v.Spec.DiskSelector,
NodeSelector: v.Spec.NodeSelector,
RestoreVolumeRecurringJob: v.Spec.RestoreVolumeRecurringJob,
FreezeFilesystemForSnapshot: v.Spec.FreezeFilesystemForSnapshot,
BackupTargetName: v.Spec.BackupTargetName,
Name: v.Name,
Size: strconv.FormatInt(v.Spec.Size, 10),
Frontend: v.Spec.Frontend,
DisableFrontend: v.Spec.DisableFrontend,
LastAttachedBy: v.Spec.LastAttachedBy,
FromBackup: v.Spec.FromBackup,
DataSource: v.Spec.DataSource,
NumberOfReplicas: v.Spec.NumberOfReplicas,
ReplicaAutoBalance: v.Spec.ReplicaAutoBalance,
DataLocality: v.Spec.DataLocality,
SnapshotDataIntegrity: v.Spec.SnapshotDataIntegrity,
SnapshotMaxCount: v.Spec.SnapshotMaxCount,
SnapshotMaxSize: strconv.FormatInt(v.Spec.SnapshotMaxSize, 10),
BackupCompressionMethod: v.Spec.BackupCompressionMethod,
StaleReplicaTimeout: v.Spec.StaleReplicaTimeout,
Created: v.CreationTimestamp.String(),
Image: v.Spec.Image,
BackingImage: v.Spec.BackingImage,
Standby: v.Spec.Standby,
DiskSelector: v.Spec.DiskSelector,
NodeSelector: v.Spec.NodeSelector,
RestoreVolumeRecurringJob: v.Spec.RestoreVolumeRecurringJob,
FreezeFilesystemForSnapshot: v.Spec.FreezeFilesystemForSnapshot,
BackupTargetName: v.Spec.BackupTargetName,
State: v.Status.State,
Robustness: v.Status.Robustness,
@@ -1719,7 +1699,6 @@ func toVolumeResource(v *longhorn.Volume, ves []*longhorn.Engine, vrs []*longhor
actions["updateSnapshotDataIntegrity"] = struct{}{}
actions["updateSnapshotMaxCount"] = struct{}{}
actions["updateSnapshotMaxSize"] = struct{}{}
actions["updateReplicaRebuildingBandwidthLimit"] = struct{}{}
actions["updateBackupCompressionMethod"] = struct{}{}
actions["updateReplicaSoftAntiAffinity"] = struct{}{}
actions["updateReplicaZoneSoftAntiAffinity"] = struct{}{}
@@ -1753,7 +1732,6 @@ func toVolumeResource(v *longhorn.Volume, ves []*longhorn.Engine, vrs []*longhor
actions["updateSnapshotDataIntegrity"] = struct{}{}
actions["updateSnapshotMaxCount"] = struct{}{}
actions["updateSnapshotMaxSize"] = struct{}{}
actions["updateReplicaRebuildingBandwidthLimit"] = struct{}{}
actions["updateBackupCompressionMethod"] = struct{}{}
actions["updateReplicaSoftAntiAffinity"] = struct{}{}
actions["updateReplicaZoneSoftAntiAffinity"] = struct{}{}
@@ -2045,7 +2023,6 @@ func toBackupResource(b *longhorn.Backup) *Backup {
NewlyUploadedDataSize: b.Status.NewlyUploadedDataSize,
ReUploadedDataSize: b.Status.ReUploadedDataSize,
BackupTargetName: backupTargetName,
BlockSize: strconv.FormatInt(b.Spec.BackupBlockSize, 10),
}
// Set the volume name from backup CR's label if it's empty.
// This field is empty probably because the backup state is not Ready
@@ -2437,47 +2414,6 @@ func toOrphanCollection(orphans map[string]*longhorn.Orphan) *client.GenericColl
return &client.GenericCollection{Data: data, Collection: client.Collection{ResourceType: "orphan"}}
}
func toVolumeAttachmentResource(volumeAttachment *longhorn.VolumeAttachment) *VolumeAttachment {
attachments := make(map[string]Attachment)
for ticketName, ticket := range volumeAttachment.Spec.AttachmentTickets {
status := volumeAttachment.Status.AttachmentTicketStatuses[ticketName]
attachment := Attachment{
AttachmentID: ticket.ID,
AttachmentType: string(ticket.Type),
NodeID: ticket.NodeID,
Parameters: ticket.Parameters,
Satisfied: false,
Conditions: nil,
}
if status != nil {
attachment.Satisfied = status.Satisfied
attachment.Conditions = status.Conditions
}
attachments[ticketName] = attachment
}
return &VolumeAttachment{
Resource: client.Resource{
Id: volumeAttachment.Name,
Type: "volumeAttachment",
},
Volume: volumeAttachment.Spec.Volume,
Attachments: attachments,
}
}
func toVolumeAttachmentCollection(attachments []*longhorn.VolumeAttachment, apiContext *api.ApiContext) *client.GenericCollection {
data := []interface{}{}
for _, attachment := range attachments {
data = append(data, toVolumeAttachmentResource(attachment))
}
return &client.GenericCollection{Data: data, Collection: client.Collection{ResourceType: "volumeAttachment"}}
}
func sliceToMap(conditions []longhorn.Condition) map[string]longhorn.Condition {
converted := map[string]longhorn.Condition{}
for _, c := range conditions {

View File

@@ -69,22 +69,21 @@ func NewRouter(s *Server) *mux.Router {
r.Methods("DELETE").Path("/v1/volumes/{name}").Handler(f(schemas, s.VolumeDelete))
r.Methods("POST").Path("/v1/volumes").Handler(f(schemas, s.fwd.Handler(s.fwd.HandleProxyRequestByNodeID, s.fwd.GetHTTPAddressByNodeID(NodeHasDefaultEngineImage(s.m)), s.VolumeCreate)))
volumeActions := map[string]func(http.ResponseWriter, *http.Request) error{
"attach": s.VolumeAttach,
"detach": s.VolumeDetach,
"salvage": s.VolumeSalvage,
"updateDataLocality": s.VolumeUpdateDataLocality,
"updateAccessMode": s.VolumeUpdateAccessMode,
"updateUnmapMarkSnapChainRemoved": s.VolumeUpdateUnmapMarkSnapChainRemoved,
"updateSnapshotMaxCount": s.VolumeUpdateSnapshotMaxCount,
"updateSnapshotMaxSize": s.VolumeUpdateSnapshotMaxSize,
"updateReplicaRebuildingBandwidthLimit": s.VolumeUpdateReplicaRebuildingBandwidthLimit,
"updateReplicaSoftAntiAffinity": s.VolumeUpdateReplicaSoftAntiAffinity,
"updateReplicaZoneSoftAntiAffinity": s.VolumeUpdateReplicaZoneSoftAntiAffinity,
"updateReplicaDiskSoftAntiAffinity": s.VolumeUpdateReplicaDiskSoftAntiAffinity,
"activate": s.VolumeActivate,
"expand": s.VolumeExpand,
"cancelExpansion": s.VolumeCancelExpansion,
"offlineReplicaRebuilding": s.VolumeOfflineRebuilding,
"attach": s.VolumeAttach,
"detach": s.VolumeDetach,
"salvage": s.VolumeSalvage,
"updateDataLocality": s.VolumeUpdateDataLocality,
"updateAccessMode": s.VolumeUpdateAccessMode,
"updateUnmapMarkSnapChainRemoved": s.VolumeUpdateUnmapMarkSnapChainRemoved,
"updateSnapshotMaxCount": s.VolumeUpdateSnapshotMaxCount,
"updateSnapshotMaxSize": s.VolumeUpdateSnapshotMaxSize,
"updateReplicaSoftAntiAffinity": s.VolumeUpdateReplicaSoftAntiAffinity,
"updateReplicaZoneSoftAntiAffinity": s.VolumeUpdateReplicaZoneSoftAntiAffinity,
"updateReplicaDiskSoftAntiAffinity": s.VolumeUpdateReplicaDiskSoftAntiAffinity,
"activate": s.VolumeActivate,
"expand": s.VolumeExpand,
"cancelExpansion": s.VolumeCancelExpansion,
"offlineReplicaRebuilding": s.VolumeOfflineRebuilding,
"updateReplicaCount": s.VolumeUpdateReplicaCount,
"updateReplicaAutoBalance": s.VolumeUpdateReplicaAutoBalance,
@@ -292,9 +291,5 @@ func NewRouter(s *Server) *mux.Router {
r.Path("/v1/ws/events").Handler(f(schemas, eventListStream))
r.Path("/v1/ws/{period}/events").Handler(f(schemas, eventListStream))
// VolumeAttachment routes
r.Methods("GET").Path("/v1/volumeattachments").Handler(f(schemas, s.VolumeAttachmentList))
r.Methods("GET").Path("/v1/volumeattachments/{name}").Handler(f(schemas, s.VolumeAttachmentGet))
return r
}

View File

@@ -172,43 +172,36 @@ func (s *Server) VolumeCreate(rw http.ResponseWriter, req *http.Request) error {
return errors.Wrap(err, "failed to parse snapshot max size")
}
backupBlockSize, err := util.ConvertSize(volume.BackupBlockSize)
if err != nil {
return errors.Wrapf(err, "failed to parse backup block size %v", volume.BackupBlockSize)
}
v, err := s.m.Create(volume.Name, &longhorn.VolumeSpec{
Size: size,
AccessMode: volume.AccessMode,
Migratable: volume.Migratable,
Encrypted: volume.Encrypted,
Frontend: volume.Frontend,
FromBackup: volume.FromBackup,
RestoreVolumeRecurringJob: volume.RestoreVolumeRecurringJob,
DataSource: volume.DataSource,
NumberOfReplicas: volume.NumberOfReplicas,
ReplicaAutoBalance: volume.ReplicaAutoBalance,
DataLocality: volume.DataLocality,
StaleReplicaTimeout: volume.StaleReplicaTimeout,
BackingImage: volume.BackingImage,
Standby: volume.Standby,
RevisionCounterDisabled: volume.RevisionCounterDisabled,
DiskSelector: volume.DiskSelector,
NodeSelector: volume.NodeSelector,
SnapshotDataIntegrity: volume.SnapshotDataIntegrity,
SnapshotMaxCount: volume.SnapshotMaxCount,
SnapshotMaxSize: snapshotMaxSize,
ReplicaRebuildingBandwidthLimit: volume.ReplicaRebuildingBandwidthLimit,
BackupCompressionMethod: volume.BackupCompressionMethod,
BackupBlockSize: backupBlockSize,
UnmapMarkSnapChainRemoved: volume.UnmapMarkSnapChainRemoved,
ReplicaSoftAntiAffinity: volume.ReplicaSoftAntiAffinity,
ReplicaZoneSoftAntiAffinity: volume.ReplicaZoneSoftAntiAffinity,
ReplicaDiskSoftAntiAffinity: volume.ReplicaDiskSoftAntiAffinity,
DataEngine: volume.DataEngine,
FreezeFilesystemForSnapshot: volume.FreezeFilesystemForSnapshot,
BackupTargetName: volume.BackupTargetName,
OfflineRebuilding: volume.OfflineRebuilding,
Size: size,
AccessMode: volume.AccessMode,
Migratable: volume.Migratable,
Encrypted: volume.Encrypted,
Frontend: volume.Frontend,
FromBackup: volume.FromBackup,
RestoreVolumeRecurringJob: volume.RestoreVolumeRecurringJob,
DataSource: volume.DataSource,
NumberOfReplicas: volume.NumberOfReplicas,
ReplicaAutoBalance: volume.ReplicaAutoBalance,
DataLocality: volume.DataLocality,
StaleReplicaTimeout: volume.StaleReplicaTimeout,
BackingImage: volume.BackingImage,
Standby: volume.Standby,
RevisionCounterDisabled: volume.RevisionCounterDisabled,
DiskSelector: volume.DiskSelector,
NodeSelector: volume.NodeSelector,
SnapshotDataIntegrity: volume.SnapshotDataIntegrity,
SnapshotMaxCount: volume.SnapshotMaxCount,
SnapshotMaxSize: snapshotMaxSize,
BackupCompressionMethod: volume.BackupCompressionMethod,
UnmapMarkSnapChainRemoved: volume.UnmapMarkSnapChainRemoved,
ReplicaSoftAntiAffinity: volume.ReplicaSoftAntiAffinity,
ReplicaZoneSoftAntiAffinity: volume.ReplicaZoneSoftAntiAffinity,
ReplicaDiskSoftAntiAffinity: volume.ReplicaDiskSoftAntiAffinity,
DataEngine: volume.DataEngine,
FreezeFilesystemForSnapshot: volume.FreezeFilesystemForSnapshot,
BackupTargetName: volume.BackupTargetName,
OfflineRebuilding: volume.OfflineRebuilding,
}, volume.RecurringJobSelector)
if err != nil {
return errors.Wrap(err, "failed to create volume")
@@ -846,33 +839,6 @@ func (s *Server) VolumeUpdateSnapshotMaxSize(rw http.ResponseWriter, req *http.R
return s.responseWithVolume(rw, req, "", v)
}
func (s *Server) VolumeUpdateReplicaRebuildingBandwidthLimit(rw http.ResponseWriter, req *http.Request) error {
var input UpdateReplicaRebuildingBandwidthLimit
id := mux.Vars(req)["name"]
apiContext := api.GetApiContext(req)
if err := apiContext.Read(&input); err != nil {
return errors.Wrap(err, "failed to read ReplicaRebuildingBandwidthLimit input")
}
replicaRebuildingBandwidthLimit, err := util.ConvertSize(input.ReplicaRebuildingBandwidthLimit)
if err != nil {
return fmt.Errorf("failed to parse replica rebuilding bandwidth limit %v", err)
}
obj, err := util.RetryOnConflictCause(func() (interface{}, error) {
return s.m.UpdateReplicaRebuildingBandwidthLimit(id, replicaRebuildingBandwidthLimit)
})
if err != nil {
return err
}
v, ok := obj.(*longhorn.Volume)
if !ok {
return fmt.Errorf("failed to convert to volume %v object", id)
}
return s.responseWithVolume(rw, req, "", v)
}
func (s *Server) VolumeUpdateFreezeFilesystemForSnapshot(rw http.ResponseWriter, req *http.Request) error {
var input UpdateFreezeFilesystemForSnapshotInput
id := mux.Vars(req)["name"]

View File

@@ -1,34 +0,0 @@
package api
import (
"net/http"
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/rancher/go-rancher/api"
)
func (s *Server) VolumeAttachmentGet(rw http.ResponseWriter, req *http.Request) error {
apiContext := api.GetApiContext(req)
id := mux.Vars(req)["name"]
volumeAttachment, err := s.m.GetVolumeAttachment(id)
if err != nil {
return errors.Wrapf(err, "failed to get volume attachment '%s'", id)
}
apiContext.Write(toVolumeAttachmentResource(volumeAttachment))
return nil
}
func (s *Server) VolumeAttachmentList(rw http.ResponseWriter, req *http.Request) (err error) {
apiContext := api.GetApiContext(req)
volumeAttachmentList, err := s.m.ListVolumeAttachment()
if err != nil {
return errors.Wrap(err, "failed to list volume attachments")
}
apiContext.Write(toVolumeAttachmentCollection(volumeAttachmentList, apiContext))
return nil
}

View File

@@ -59,10 +59,6 @@ const (
LeaseLockNameWebhook = "longhorn-manager-webhook-lock"
)
const (
enableConversionWebhook = false
)
func DaemonCmd() cli.Command {
return cli.Command{
Name: "daemon",
@@ -155,36 +151,34 @@ func startWebhooksByLeaderElection(ctx context.Context, kubeconfigPath, currentN
}
fnStartWebhook := func(ctx context.Context) error {
if enableConversionWebhook {
// Conversion webhook needs to be started first since we use its port 9501 as readiness port.
// longhorn-manager pod becomes ready only when conversion webhook is running.
// The services in the longhorn-manager can then start to receive the requests.
// Conversion webhook does not use datastore, since it is a prerequisite for
// datastore operation.
clientsWithoutDatastore, err := client.NewClients(kubeconfigPath, false, ctx.Done())
if err != nil {
return err
}
if err := webhook.StartWebhook(ctx, types.WebhookTypeConversion, clientsWithoutDatastore); err != nil {
return err
}
// Conversion webhook needs to be started first since we use its port 9501 as readiness port.
// longhorn-manager pod becomes ready only when conversion webhook is running.
// The services in the longhorn-manager can then start to receive the requests.
// Conversion webhook does not use datastore, since it is a prerequisite for
// datastore operation.
clientsWithoutDatastore, err := client.NewClients(kubeconfigPath, false, ctx.Done())
if err != nil {
return err
}
if err := webhook.StartWebhook(ctx, types.WebhookTypeConversion, clientsWithoutDatastore); err != nil {
return err
}
// This adds the label for the conversion webhook's selector. We do it the hard way without datastore to avoid chicken-and-egg.
pod, err := clientsWithoutDatastore.Clients.K8s.CoreV1().Pods(podNamespace).Get(context.Background(), podName, metav1.GetOptions{})
if err != nil {
return err
}
labels := types.GetConversionWebhookLabel()
for key, value := range labels {
pod.Labels[key] = value
}
_, err = clientsWithoutDatastore.Clients.K8s.CoreV1().Pods(podNamespace).Update(context.Background(), pod, metav1.UpdateOptions{})
if err != nil {
return err
}
if err := webhook.CheckWebhookServiceAvailability(types.WebhookTypeConversion); err != nil {
return err
}
// This adds the label for the conversion webhook's selector. We do it the hard way without datastore to avoid chicken-and-egg.
pod, err := clientsWithoutDatastore.Clients.K8s.CoreV1().Pods(podNamespace).Get(context.Background(), podName, metav1.GetOptions{})
if err != nil {
return err
}
labels := types.GetConversionWebhookLabel()
for key, value := range labels {
pod.Labels[key] = value
}
_, err = clientsWithoutDatastore.Clients.K8s.CoreV1().Pods(podNamespace).Update(context.Background(), pod, metav1.UpdateOptions{})
if err != nil {
return err
}
if err := webhook.CheckWebhookServiceAvailability(types.WebhookTypeConversion); err != nil {
return err
}
clients, err := client.NewClients(kubeconfigPath, true, ctx.Done())

View File

@@ -7,12 +7,8 @@ const (
type Backup struct {
Resource `yaml:"-"`
BackupMode string `json:"backupMode,omitempty" yaml:"backup_mode,omitempty"`
BackupTargetName string `json:"backupTargetName,omitempty" yaml:"backup_target_name,omitempty"`
BlockSize string `json:"blockSize,omitempty" yaml:"block_size,omitempty"`
CompressionMethod string `json:"compressionMethod,omitempty" yaml:"compression_method,omitempty"`
Created string `json:"created,omitempty" yaml:"created,omitempty"`
@@ -25,12 +21,8 @@ type Backup struct {
Name string `json:"name,omitempty" yaml:"name,omitempty"`
NewlyUploadDataSize string `json:"newlyUploadDataSize,omitempty" yaml:"newly_upload_data_size,omitempty"`
Progress int64 `json:"progress,omitempty" yaml:"progress,omitempty"`
ReUploadedDataSize string `json:"reUploadedDataSize,omitempty" yaml:"re_uploaded_data_size,omitempty"`
Size string `json:"size,omitempty" yaml:"size,omitempty"`
SnapshotCreated string `json:"snapshotCreated,omitempty" yaml:"snapshot_created,omitempty"`

View File

@@ -7,10 +7,6 @@ const (
type BackupBackingImage struct {
Resource `yaml:"-"`
BackingImageName string `json:"backingImageName,omitempty" yaml:"backing_image_name,omitempty"`
BackupTargetName string `json:"backupTargetName,omitempty" yaml:"backup_target_name,omitempty"`
CompressionMethod string `json:"compressionMethod,omitempty" yaml:"compression_method,omitempty"`
Created string `json:"created,omitempty" yaml:"created,omitempty"`
@@ -27,7 +23,7 @@ type BackupBackingImage struct {
Secret string `json:"secret,omitempty" yaml:"secret,omitempty"`
SecretNamespace string `json:"secretNamespace,omitempty" yaml:"secret_namespace,omitempty"`
SecretNamespace string `json:"secretNamespace,omitempty" yaml:"secretNamespace,omitempty"`
Size int64 `json:"size,omitempty" yaml:"size,omitempty"`

View File

@@ -36,10 +36,6 @@ type BackupTargetOperations interface {
Update(existing *BackupTarget, updates interface{}) (*BackupTarget, error)
ById(id string) (*BackupTarget, error)
Delete(container *BackupTarget) error
ActionBackupTargetSync(*BackupTarget, *SyncBackupResource) (*BackupTargetListOutput, error)
ActionBackupTargetUpdate(*BackupTarget, *BackupTarget) (*BackupTargetListOutput, error)
}
func newBackupTargetClient(rancherClient *RancherClient) *BackupTargetClient {
@@ -91,21 +87,3 @@ func (c *BackupTargetClient) ById(id string) (*BackupTarget, error) {
func (c *BackupTargetClient) Delete(container *BackupTarget) error {
return c.rancherClient.doResourceDelete(BACKUP_TARGET_TYPE, &container.Resource)
}
func (c *BackupTargetClient) ActionBackupTargetSync(resource *BackupTarget, input *SyncBackupResource) (*BackupTargetListOutput, error) {
resp := &BackupTargetListOutput{}
err := c.rancherClient.doAction(BACKUP_TARGET_TYPE, "backupTargetSync", &resource.Resource, input, resp)
return resp, err
}
func (c *BackupTargetClient) ActionBackupTargetUpdate(resource *BackupTarget, input *BackupTarget) (*BackupTargetListOutput, error) {
resp := &BackupTargetListOutput{}
err := c.rancherClient.doAction(BACKUP_TARGET_TYPE, "backupTargetUpdate", &resource.Resource, input, resp)
return resp, err
}

View File

@@ -1,79 +0,0 @@
package client
const (
BACKUP_TARGET_LIST_OUTPUT_TYPE = "backupTargetListOutput"
)
type BackupTargetListOutput struct {
Resource `yaml:"-"`
Data []BackupTarget `json:"data,omitempty" yaml:"data,omitempty"`
}
type BackupTargetListOutputCollection struct {
Collection
Data []BackupTargetListOutput `json:"data,omitempty"`
client *BackupTargetListOutputClient
}
type BackupTargetListOutputClient struct {
rancherClient *RancherClient
}
type BackupTargetListOutputOperations interface {
List(opts *ListOpts) (*BackupTargetListOutputCollection, error)
Create(opts *BackupTargetListOutput) (*BackupTargetListOutput, error)
Update(existing *BackupTargetListOutput, updates interface{}) (*BackupTargetListOutput, error)
ById(id string) (*BackupTargetListOutput, error)
Delete(container *BackupTargetListOutput) error
}
func newBackupTargetListOutputClient(rancherClient *RancherClient) *BackupTargetListOutputClient {
return &BackupTargetListOutputClient{
rancherClient: rancherClient,
}
}
func (c *BackupTargetListOutputClient) Create(container *BackupTargetListOutput) (*BackupTargetListOutput, error) {
resp := &BackupTargetListOutput{}
err := c.rancherClient.doCreate(BACKUP_TARGET_LIST_OUTPUT_TYPE, container, resp)
return resp, err
}
func (c *BackupTargetListOutputClient) Update(existing *BackupTargetListOutput, updates interface{}) (*BackupTargetListOutput, error) {
resp := &BackupTargetListOutput{}
err := c.rancherClient.doUpdate(BACKUP_TARGET_LIST_OUTPUT_TYPE, &existing.Resource, updates, resp)
return resp, err
}
func (c *BackupTargetListOutputClient) List(opts *ListOpts) (*BackupTargetListOutputCollection, error) {
resp := &BackupTargetListOutputCollection{}
err := c.rancherClient.doList(BACKUP_TARGET_LIST_OUTPUT_TYPE, opts, resp)
resp.client = c
return resp, err
}
func (cc *BackupTargetListOutputCollection) Next() (*BackupTargetListOutputCollection, error) {
if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
resp := &BackupTargetListOutputCollection{}
err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
resp.client = cc.client
return resp, err
}
return nil, nil
}
func (c *BackupTargetListOutputClient) ById(id string) (*BackupTargetListOutput, error) {
resp := &BackupTargetListOutput{}
err := c.rancherClient.doById(BACKUP_TARGET_LIST_OUTPUT_TYPE, id, resp)
if apiError, ok := err.(*ApiError); ok {
if apiError.StatusCode == 404 {
return nil, nil
}
}
return resp, err
}
func (c *BackupTargetListOutputClient) Delete(container *BackupTargetListOutput) error {
return c.rancherClient.doResourceDelete(BACKUP_TARGET_LIST_OUTPUT_TYPE, &container.Resource)
}

View File

@@ -58,8 +58,6 @@ type BackupVolumeOperations interface {
ActionBackupList(*BackupVolume) (*BackupListOutput, error)
ActionBackupListByVolume(*BackupVolume, *Volume) (*BackupListOutput, error)
ActionBackupVolumeSync(*BackupVolume, *SyncBackupResource) (*BackupVolumeListOutput, error)
}
func newBackupVolumeClient(rancherClient *RancherClient) *BackupVolumeClient {
@@ -147,12 +145,3 @@ func (c *BackupVolumeClient) ActionBackupListByVolume(resource *BackupVolume, in
return resp, err
}
func (c *BackupVolumeClient) ActionBackupVolumeSync(resource *BackupVolume, input *SyncBackupResource) (*BackupVolumeListOutput, error) {
resp := &BackupVolumeListOutput{}
err := c.rancherClient.doAction(BACKUP_VOLUME_TYPE, "backupVolumeSync", &resource.Resource, input, resp)
return resp, err
}

View File

@@ -1,79 +0,0 @@
package client
const (
BACKUP_VOLUME_LIST_OUTPUT_TYPE = "backupVolumeListOutput"
)
type BackupVolumeListOutput struct {
Resource `yaml:"-"`
Data []BackupVolume `json:"data,omitempty" yaml:"data,omitempty"`
}
type BackupVolumeListOutputCollection struct {
Collection
Data []BackupVolumeListOutput `json:"data,omitempty"`
client *BackupVolumeListOutputClient
}
type BackupVolumeListOutputClient struct {
rancherClient *RancherClient
}
type BackupVolumeListOutputOperations interface {
List(opts *ListOpts) (*BackupVolumeListOutputCollection, error)
Create(opts *BackupVolumeListOutput) (*BackupVolumeListOutput, error)
Update(existing *BackupVolumeListOutput, updates interface{}) (*BackupVolumeListOutput, error)
ById(id string) (*BackupVolumeListOutput, error)
Delete(container *BackupVolumeListOutput) error
}
func newBackupVolumeListOutputClient(rancherClient *RancherClient) *BackupVolumeListOutputClient {
return &BackupVolumeListOutputClient{
rancherClient: rancherClient,
}
}
func (c *BackupVolumeListOutputClient) Create(container *BackupVolumeListOutput) (*BackupVolumeListOutput, error) {
resp := &BackupVolumeListOutput{}
err := c.rancherClient.doCreate(BACKUP_VOLUME_LIST_OUTPUT_TYPE, container, resp)
return resp, err
}
func (c *BackupVolumeListOutputClient) Update(existing *BackupVolumeListOutput, updates interface{}) (*BackupVolumeListOutput, error) {
resp := &BackupVolumeListOutput{}
err := c.rancherClient.doUpdate(BACKUP_VOLUME_LIST_OUTPUT_TYPE, &existing.Resource, updates, resp)
return resp, err
}
func (c *BackupVolumeListOutputClient) List(opts *ListOpts) (*BackupVolumeListOutputCollection, error) {
resp := &BackupVolumeListOutputCollection{}
err := c.rancherClient.doList(BACKUP_VOLUME_LIST_OUTPUT_TYPE, opts, resp)
resp.client = c
return resp, err
}
func (cc *BackupVolumeListOutputCollection) Next() (*BackupVolumeListOutputCollection, error) {
if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
resp := &BackupVolumeListOutputCollection{}
err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
resp.client = cc.client
return resp, err
}
return nil, nil
}
func (c *BackupVolumeListOutputClient) ById(id string) (*BackupVolumeListOutput, error) {
resp := &BackupVolumeListOutput{}
err := c.rancherClient.doById(BACKUP_VOLUME_LIST_OUTPUT_TYPE, id, resp)
if apiError, ok := err.(*ApiError); ok {
if apiError.StatusCode == 404 {
return nil, nil
}
}
return resp, err
}
func (c *BackupVolumeListOutputClient) Delete(container *BackupVolumeListOutput) error {
return c.rancherClient.doResourceDelete(BACKUP_VOLUME_LIST_OUTPUT_TYPE, &container.Resource)
}

View File

@@ -9,10 +9,10 @@ type RancherClient struct {
DetachInput DetachInputOperations
SnapshotInput SnapshotInputOperations
SnapshotCRInput SnapshotCRInputOperations
BackupTarget BackupTargetOperations
Backup BackupOperations
BackupInput BackupInputOperations
BackupStatus BackupStatusOperations
SyncBackupResource SyncBackupResourceOperations
Orphan OrphanOperations
RestoreStatus RestoreStatusOperations
PurgeStatus PurgeStatusOperations
@@ -38,8 +38,6 @@ type RancherClient struct {
UpdateReplicaZoneSoftAntiAffinityInput UpdateReplicaZoneSoftAntiAffinityInputOperations
UpdateReplicaDiskSoftAntiAffinityInput UpdateReplicaDiskSoftAntiAffinityInputOperations
UpdateFreezeFSForSnapshotInput UpdateFreezeFSForSnapshotInputOperations
UpdateBackupTargetInput UpdateBackupTargetInputOperations
UpdateOfflineRebuildingInput UpdateOfflineRebuildingInputOperations
WorkloadStatus WorkloadStatusOperations
CloneStatus CloneStatusOperations
Empty EmptyOperations
@@ -58,14 +56,13 @@ type RancherClient struct {
InstanceManager InstanceManagerOperations
BackingImageDiskFileStatus BackingImageDiskFileStatusOperations
BackingImageCleanupInput BackingImageCleanupInputOperations
UpdateMinNumberOfCopiesInput UpdateMinNumberOfCopiesInputOperations
BackingImageRestoreInput BackingImageRestoreInputOperations
UpdateMinNumberOfCopiesInput UpdateMinNumberOfCopiesInputOperations
Attachment AttachmentOperations
VolumeAttachment VolumeAttachmentOperations
Volume VolumeOperations
Snapshot SnapshotOperations
SnapshotCR SnapshotCROperations
BackupTarget BackupTargetOperations
BackupVolume BackupVolumeOperations
BackupBackingImage BackupBackingImageOperations
Setting SettingOperations
@@ -76,8 +73,6 @@ type RancherClient struct {
DiskUpdateInput DiskUpdateInputOperations
DiskInfo DiskInfoOperations
KubernetesStatus KubernetesStatusOperations
BackupTargetListOutput BackupTargetListOutputOperations
BackupVolumeListOutput BackupVolumeListOutputOperations
BackupListOutput BackupListOutputOperations
SnapshotListOutput SnapshotListOutputOperations
SystemBackup SystemBackupOperations
@@ -96,10 +91,10 @@ func constructClient(rancherBaseClient *RancherBaseClientImpl) *RancherClient {
client.DetachInput = newDetachInputClient(client)
client.SnapshotInput = newSnapshotInputClient(client)
client.SnapshotCRInput = newSnapshotCRInputClient(client)
client.BackupTarget = newBackupTargetClient(client)
client.Backup = newBackupClient(client)
client.BackupInput = newBackupInputClient(client)
client.BackupStatus = newBackupStatusClient(client)
client.SyncBackupResource = newSyncBackupResourceClient(client)
client.Orphan = newOrphanClient(client)
client.RestoreStatus = newRestoreStatusClient(client)
client.PurgeStatus = newPurgeStatusClient(client)
@@ -125,8 +120,6 @@ func constructClient(rancherBaseClient *RancherBaseClientImpl) *RancherClient {
client.UpdateReplicaZoneSoftAntiAffinityInput = newUpdateReplicaZoneSoftAntiAffinityInputClient(client)
client.UpdateReplicaDiskSoftAntiAffinityInput = newUpdateReplicaDiskSoftAntiAffinityInputClient(client)
client.UpdateFreezeFSForSnapshotInput = newUpdateFreezeFSForSnapshotInputClient(client)
client.UpdateBackupTargetInput = newUpdateBackupTargetInputClient(client)
client.UpdateOfflineRebuildingInput = newUpdateOfflineRebuildingInputClient(client)
client.WorkloadStatus = newWorkloadStatusClient(client)
client.CloneStatus = newCloneStatusClient(client)
client.Empty = newEmptyClient(client)
@@ -152,7 +145,6 @@ func constructClient(rancherBaseClient *RancherBaseClientImpl) *RancherClient {
client.Volume = newVolumeClient(client)
client.Snapshot = newSnapshotClient(client)
client.SnapshotCR = newSnapshotCRClient(client)
client.BackupTarget = newBackupTargetClient(client)
client.BackupVolume = newBackupVolumeClient(client)
client.BackupBackingImage = newBackupBackingImageClient(client)
client.Setting = newSettingClient(client)
@@ -163,8 +155,6 @@ func constructClient(rancherBaseClient *RancherBaseClientImpl) *RancherClient {
client.DiskUpdateInput = newDiskUpdateInputClient(client)
client.DiskInfo = newDiskInfoClient(client)
client.KubernetesStatus = newKubernetesStatusClient(client)
client.BackupTargetListOutput = newBackupTargetListOutputClient(client)
client.BackupVolumeListOutput = newBackupVolumeListOutputClient(client)
client.BackupListOutput = newBackupListOutputClient(client)
client.SnapshotListOutput = newSnapshotListOutputClient(client)
client.SystemBackup = newSystemBackupClient(client)

View File

@@ -7,10 +7,6 @@ const (
type CloneStatus struct {
Resource `yaml:"-"`
AttemptCount int64 `json:"attemptCount,omitempty" yaml:"attempt_count,omitempty"`
NextAllowedAttemptAt string `json:"nextAllowedAttemptAt,omitempty" yaml:"next_allowed_attempt_at,omitempty"`
Snapshot string `json:"snapshot,omitempty" yaml:"snapshot,omitempty"`
SourceVolume string `json:"sourceVolume,omitempty" yaml:"source_volume,omitempty"`

View File

@@ -11,8 +11,6 @@ type DiskInfo struct {
Conditions map[string]interface{} `json:"conditions,omitempty" yaml:"conditions,omitempty"`
DiskDriver string `json:"diskDriver,omitempty" yaml:"disk_driver,omitempty"`
DiskType string `json:"diskType,omitempty" yaml:"disk_type,omitempty"`
DiskUUID string `json:"diskUUID,omitempty" yaml:"disk_uuid,omitempty"`

View File

@@ -9,8 +9,6 @@ type DiskUpdate struct {
AllowScheduling bool `json:"allowScheduling,omitempty" yaml:"allow_scheduling,omitempty"`
DiskDriver string `json:"diskDriver,omitempty" yaml:"disk_driver,omitempty"`
DiskType string `json:"diskType,omitempty" yaml:"disk_type,omitempty"`
EvictionRequested bool `json:"evictionRequested,omitempty" yaml:"eviction_requested,omitempty"`

View File

@@ -29,8 +29,6 @@ type EngineImage struct {
Image string `json:"image,omitempty" yaml:"image,omitempty"`
Incompatible bool `json:"incompatible,omitempty" yaml:"incompatible,omitempty"`
Name string `json:"name,omitempty" yaml:"name,omitempty"`
NoRefSince string `json:"noRefSince,omitempty" yaml:"no_ref_since,omitempty"`

View File

@@ -7,8 +7,6 @@ type Orphan struct {
type Orphan struct {
Resource `yaml:"-"`
DataEngine string `json:"dataEngine,omitempty" yaml:"data_engine,omitempty"`
Name string `json:"name,omitempty" yaml:"name,omitempty"`
NodeID string `json:"nodeID,omitempty" yaml:"node_id,omitempty"`

View File

@@ -19,10 +19,6 @@ type RecurringJob struct {
Name string `json:"name,omitempty" yaml:"name,omitempty"`
OwnerID string `json:"ownerID,omitempty" yaml:"owner_id,omitempty"`
Parameters map[string]string `json:"parameters,omitempty" yaml:"parameters,omitempty"`
Retain int64 `json:"retain,omitempty" yaml:"retain,omitempty"`
Task string `json:"task,omitempty" yaml:"task,omitempty"`

View File

@@ -7,8 +7,6 @@ type Setting struct {
type Setting struct {
Resource `yaml:"-"`
Applied bool `json:"applied,omitempty" yaml:"applied,omitempty"`
Definition SettingDefinition `json:"definition,omitempty" yaml:"definition,omitempty"`
Name string `json:"name,omitempty" yaml:"name,omitempty"`

View File

@@ -7,7 +7,7 @@ type SnapshotInput struct {
type SnapshotInput struct {
Resource `yaml:"-"`
BackupMode string `json:"backupMode,omitempty" yaml:"backup_mode,omitempty"`
BackupMode string `json:"backupMode,omitempty" yaml:"backupMode,omitempty"`
Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`

View File

@ -1,85 +0,0 @@
package client
const (
SYNC_BACKUP_RESOURCE_TYPE = "syncBackupResource"
)
type SyncBackupResource struct {
Resource `yaml:"-"`
SyncAllBackupTargets bool `json:"syncAllBackupTargets,omitempty" yaml:"sync_all_backup_targets,omitempty"`
SyncAllBackupVolumes bool `json:"syncAllBackupVolumes,omitempty" yaml:"sync_all_backup_volumes,omitempty"`
SyncBackupTarget bool `json:"syncBackupTarget,omitempty" yaml:"sync_backup_target,omitempty"`
SyncBackupVolume bool `json:"syncBackupVolume,omitempty" yaml:"sync_backup_volume,omitempty"`
}
type SyncBackupResourceCollection struct {
Collection
Data []SyncBackupResource `json:"data,omitempty"`
client *SyncBackupResourceClient
}
type SyncBackupResourceClient struct {
rancherClient *RancherClient
}
type SyncBackupResourceOperations interface {
List(opts *ListOpts) (*SyncBackupResourceCollection, error)
Create(opts *SyncBackupResource) (*SyncBackupResource, error)
Update(existing *SyncBackupResource, updates interface{}) (*SyncBackupResource, error)
ById(id string) (*SyncBackupResource, error)
Delete(container *SyncBackupResource) error
}
func newSyncBackupResourceClient(rancherClient *RancherClient) *SyncBackupResourceClient {
return &SyncBackupResourceClient{
rancherClient: rancherClient,
}
}
func (c *SyncBackupResourceClient) Create(container *SyncBackupResource) (*SyncBackupResource, error) {
resp := &SyncBackupResource{}
err := c.rancherClient.doCreate(SYNC_BACKUP_RESOURCE_TYPE, container, resp)
return resp, err
}
func (c *SyncBackupResourceClient) Update(existing *SyncBackupResource, updates interface{}) (*SyncBackupResource, error) {
resp := &SyncBackupResource{}
err := c.rancherClient.doUpdate(SYNC_BACKUP_RESOURCE_TYPE, &existing.Resource, updates, resp)
return resp, err
}
func (c *SyncBackupResourceClient) List(opts *ListOpts) (*SyncBackupResourceCollection, error) {
resp := &SyncBackupResourceCollection{}
err := c.rancherClient.doList(SYNC_BACKUP_RESOURCE_TYPE, opts, resp)
resp.client = c
return resp, err
}
func (cc *SyncBackupResourceCollection) Next() (*SyncBackupResourceCollection, error) {
if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
resp := &SyncBackupResourceCollection{}
err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
resp.client = cc.client
return resp, err
}
return nil, nil
}
func (c *SyncBackupResourceClient) ById(id string) (*SyncBackupResource, error) {
resp := &SyncBackupResource{}
err := c.rancherClient.doById(SYNC_BACKUP_RESOURCE_TYPE, id, resp)
if apiError, ok := err.(*ApiError); ok {
if apiError.StatusCode == 404 {
return nil, nil
}
}
return resp, err
}
func (c *SyncBackupResourceClient) Delete(container *SyncBackupResource) error {
return c.rancherClient.doResourceDelete(SYNC_BACKUP_RESOURCE_TYPE, &container.Resource)
}

View File
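
The deleted SyncBackupResource client above follows the same generated Rancher-style pattern as the other clients in this package: a typed client hanging off RancherClient, cursor pagination via Next(), and a ById() that maps HTTP 404 to (nil, nil). A minimal usage sketch of that pattern as it existed before the type was removed (the SyncBackupResource field on RancherClient and the empty ListOpts are assumptions for illustration):

// listAllSyncBackupResources walks every page of a collection.
// Assumes an already-constructed *RancherClient.
func listAllSyncBackupResources(rancherClient *RancherClient) ([]SyncBackupResource, error) {
	var all []SyncBackupResource
	col, err := rancherClient.SyncBackupResource.List(&ListOpts{})
	if err != nil {
		return nil, err
	}
	for col != nil {
		all = append(all, col.Data...)
		// Next() returns (nil, nil) once the pagination cursor is exhausted.
		col, err = col.Next()
		if err != nil {
			return nil, err
		}
	}
	return all, nil
}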

@ -1,79 +0,0 @@
package client
const (
UPDATE_BACKUP_TARGET_INPUT_TYPE = "UpdateBackupTargetInput"
)
type UpdateBackupTargetInput struct {
Resource `yaml:"-"`
BackupTargetName string `json:"backupTargetName,omitempty" yaml:"backup_target_name,omitempty"`
}
type UpdateBackupTargetInputCollection struct {
Collection
Data []UpdateBackupTargetInput `json:"data,omitempty"`
client *UpdateBackupTargetInputClient
}
type UpdateBackupTargetInputClient struct {
rancherClient *RancherClient
}
type UpdateBackupTargetInputOperations interface {
List(opts *ListOpts) (*UpdateBackupTargetInputCollection, error)
Create(opts *UpdateBackupTargetInput) (*UpdateBackupTargetInput, error)
Update(existing *UpdateBackupTargetInput, updates interface{}) (*UpdateBackupTargetInput, error)
ById(id string) (*UpdateBackupTargetInput, error)
Delete(container *UpdateBackupTargetInput) error
}
func newUpdateBackupTargetInputClient(rancherClient *RancherClient) *UpdateBackupTargetInputClient {
return &UpdateBackupTargetInputClient{
rancherClient: rancherClient,
}
}
func (c *UpdateBackupTargetInputClient) Create(container *UpdateBackupTargetInput) (*UpdateBackupTargetInput, error) {
resp := &UpdateBackupTargetInput{}
err := c.rancherClient.doCreate(UPDATE_BACKUP_TARGET_INPUT_TYPE, container, resp)
return resp, err
}
func (c *UpdateBackupTargetInputClient) Update(existing *UpdateBackupTargetInput, updates interface{}) (*UpdateBackupTargetInput, error) {
resp := &UpdateBackupTargetInput{}
err := c.rancherClient.doUpdate(UPDATE_BACKUP_TARGET_INPUT_TYPE, &existing.Resource, updates, resp)
return resp, err
}
func (c *UpdateBackupTargetInputClient) List(opts *ListOpts) (*UpdateBackupTargetInputCollection, error) {
resp := &UpdateBackupTargetInputCollection{}
err := c.rancherClient.doList(UPDATE_BACKUP_TARGET_INPUT_TYPE, opts, resp)
resp.client = c
return resp, err
}
func (cc *UpdateBackupTargetInputCollection) Next() (*UpdateBackupTargetInputCollection, error) {
if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
resp := &UpdateBackupTargetInputCollection{}
err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
resp.client = cc.client
return resp, err
}
return nil, nil
}
func (c *UpdateBackupTargetInputClient) ById(id string) (*UpdateBackupTargetInput, error) {
resp := &UpdateBackupTargetInput{}
err := c.rancherClient.doById(UPDATE_BACKUP_TARGET_INPUT_TYPE, id, resp)
if apiError, ok := err.(*ApiError); ok {
if apiError.StatusCode == 404 {
return nil, nil
}
}
return resp, err
}
func (c *UpdateBackupTargetInputClient) Delete(container *UpdateBackupTargetInput) error {
return c.rancherClient.doResourceDelete(UPDATE_BACKUP_TARGET_INPUT_TYPE, &container.Resource)
}

View File

@ -1,79 +0,0 @@
package client
const (
UPDATE_OFFLINE_REBUILDING_INPUT_TYPE = "UpdateOfflineRebuildingInput"
)
type UpdateOfflineRebuildingInput struct {
Resource `yaml:"-"`
OfflineRebuilding string `json:"offlineRebuilding,omitempty" yaml:"offline_rebuilding,omitempty"`
}
type UpdateOfflineRebuildingInputCollection struct {
Collection
Data []UpdateOfflineRebuildingInput `json:"data,omitempty"`
client *UpdateOfflineRebuildingInputClient
}
type UpdateOfflineRebuildingInputClient struct {
rancherClient *RancherClient
}
type UpdateOfflineRebuildingInputOperations interface {
List(opts *ListOpts) (*UpdateOfflineRebuildingInputCollection, error)
Create(opts *UpdateOfflineRebuildingInput) (*UpdateOfflineRebuildingInput, error)
Update(existing *UpdateOfflineRebuildingInput, updates interface{}) (*UpdateOfflineRebuildingInput, error)
ById(id string) (*UpdateOfflineRebuildingInput, error)
Delete(container *UpdateOfflineRebuildingInput) error
}
func newUpdateOfflineRebuildingInputClient(rancherClient *RancherClient) *UpdateOfflineRebuildingInputClient {
return &UpdateOfflineRebuildingInputClient{
rancherClient: rancherClient,
}
}
func (c *UpdateOfflineRebuildingInputClient) Create(container *UpdateOfflineRebuildingInput) (*UpdateOfflineRebuildingInput, error) {
resp := &UpdateOfflineRebuildingInput{}
err := c.rancherClient.doCreate(UPDATE_OFFLINE_REBUILDING_INPUT_TYPE, container, resp)
return resp, err
}
func (c *UpdateOfflineRebuildingInputClient) Update(existing *UpdateOfflineRebuildingInput, updates interface{}) (*UpdateOfflineRebuildingInput, error) {
resp := &UpdateOfflineRebuildingInput{}
err := c.rancherClient.doUpdate(UPDATE_OFFLINE_REBUILDING_INPUT_TYPE, &existing.Resource, updates, resp)
return resp, err
}
func (c *UpdateOfflineRebuildingInputClient) List(opts *ListOpts) (*UpdateOfflineRebuildingInputCollection, error) {
resp := &UpdateOfflineRebuildingInputCollection{}
err := c.rancherClient.doList(UPDATE_OFFLINE_REBUILDING_INPUT_TYPE, opts, resp)
resp.client = c
return resp, err
}
func (cc *UpdateOfflineRebuildingInputCollection) Next() (*UpdateOfflineRebuildingInputCollection, error) {
if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
resp := &UpdateOfflineRebuildingInputCollection{}
err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
resp.client = cc.client
return resp, err
}
return nil, nil
}
func (c *UpdateOfflineRebuildingInputClient) ById(id string) (*UpdateOfflineRebuildingInput, error) {
resp := &UpdateOfflineRebuildingInput{}
err := c.rancherClient.doById(UPDATE_OFFLINE_REBUILDING_INPUT_TYPE, id, resp)
if apiError, ok := err.(*ApiError); ok {
if apiError.StatusCode == 404 {
return nil, nil
}
}
return resp, err
}
func (c *UpdateOfflineRebuildingInputClient) Delete(container *UpdateOfflineRebuildingInput) error {
return c.rancherClient.doResourceDelete(UPDATE_OFFLINE_REBUILDING_INPUT_TYPE, &container.Resource)
}

View File

@ -11,8 +11,6 @@ type Volume struct {
BackingImage string `json:"backingImage,omitempty" yaml:"backing_image,omitempty"`
BackupBlockSize string `json:"backupBlockSize,omitempty" yaml:"backup_block_size,omitempty"`
BackupCompressionMethod string `json:"backupCompressionMethod,omitempty" yaml:"backup_compression_method,omitempty"`
BackupStatus []BackupStatus `json:"backupStatus,omitempty" yaml:"backup_status,omitempty"`
@ -143,12 +141,12 @@ type VolumeOperations interface {
ActionCancelExpansion(*Volume) (*Volume, error)
ActionOfflineReplicaRebuilding(*Volume) (*Volume, error)
ActionDetach(*Volume, *DetachInput) (*Volume, error)
ActionExpand(*Volume, *ExpandInput) (*Volume, error)
ActionOfflineReplicaRebuilding(*Volume, *UpdateOfflineRebuildingInput) (*Volume, error)
ActionPvCreate(*Volume, *PVCreateInput) (*Volume, error)
ActionPvcCreate(*Volume, *PVCCreateInput) (*Volume, error)
@ -267,6 +265,15 @@ func (c *VolumeClient) ActionCancelExpansion(resource *Volume) (*Volume, error)
return resp, err
}
func (c *VolumeClient) ActionOfflineReplicaRebuilding(resource *Volume) (*Volume, error) {
resp := &Volume{}
err := c.rancherClient.doAction(VOLUME_TYPE, "offlineReplicaRebuilding", &resource.Resource, nil, resp)
return resp, err
}
func (c *VolumeClient) ActionDetach(resource *Volume, input *DetachInput) (*Volume, error) {
resp := &Volume{}
@ -285,15 +292,6 @@ func (c *VolumeClient) ActionExpand(resource *Volume, input *ExpandInput) (*Volu
return resp, err
}
func (c *VolumeClient) ActionOfflineReplicaRebuilding(resource *Volume, input *UpdateOfflineRebuildingInput) (*Volume, error) {
resp := &Volume{}
err := c.rancherClient.doAction(VOLUME_TYPE, "offlineReplicaRebuilding", &resource.Resource, input, resp)
return resp, err
}
func (c *VolumeClient) ActionPvCreate(resource *Volume, input *PVCreateInput) (*Volume, error) {
resp := &Volume{}

View File

@ -3,7 +3,6 @@ package controller
import (
"encoding/json"
"fmt"
"net"
"reflect"
"strconv"
"strings"
@ -677,8 +676,8 @@ func (c *BackingImageDataSourceController) generateBackingImageDataSourcePodMani
cmd := []string{
"backing-image-manager", "--debug",
"data-source",
"--listen", fmt.Sprintf(":%d", engineapi.BackingImageDataSourceDefaultPort),
"--sync-listen", fmt.Sprintf(":%d", engineapi.BackingImageSyncServerDefaultPort),
"--listen", fmt.Sprintf("%s:%d", "0.0.0.0", engineapi.BackingImageDataSourceDefaultPort),
"--sync-listen", fmt.Sprintf("%s:%d", "0.0.0.0", engineapi.BackingImageSyncServerDefaultPort),
"--name", bids.Name,
"--uuid", bids.Spec.UUID,
"--source-type", string(bids.Spec.SourceType),
@ -943,7 +942,7 @@ func (c *BackingImageDataSourceController) prepareRunningParametersForExport(bid
continue
}
rAddress := e.Status.CurrentReplicaAddressMap[rName]
if rAddress == "" || rAddress != net.JoinHostPort(r.Status.StorageIP, strconv.Itoa(r.Status.Port)) {
if rAddress == "" || rAddress != fmt.Sprintf("%s:%d", r.Status.StorageIP, r.Status.Port) {
continue
}
bids.Status.RunningParameters[longhorn.DataSourceTypeExportFromVolumeParameterSenderAddress] = rAddress

View File
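
One side effect of the address-formatting change above: fmt.Sprintf("%s:%d", ...) does not bracket IPv6 literals, while net.JoinHostPort does. A standalone illustration (not Longhorn code, just the two formatting calls from the diff):

package main

import (
	"fmt"
	"net"
	"strconv"
)

func main() {
	ip := "fd00::1"
	port := 8002

	// Sprintf produces an address many dialers reject for IPv6 literals.
	fmt.Printf("%s:%d\n", ip, port) // fd00::1:8002

	// JoinHostPort adds the brackets IPv6 requires.
	fmt.Println(net.JoinHostPort(ip, strconv.Itoa(port))) // [fd00::1]:8002
}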

@ -1,12 +1,9 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"net"
"reflect"
"strconv"
"sync"
"time"
@ -67,8 +64,6 @@ type BackingImageManagerController struct {
replenishLock *sync.Mutex
inProgressReplenishingMap map[string]string
podRecreateBackoff *flowcontrol.Backoff
}
type BackingImageManagerMonitor struct {
@ -138,8 +133,6 @@ func NewBackingImageManagerController(
replenishLock: &sync.Mutex{},
inProgressReplenishingMap: map[string]string{},
podRecreateBackoff: newBackoff(context.TODO()),
}
var err error
@ -329,9 +322,6 @@ func (c *BackingImageManagerController) syncBackingImageManager(key string) (err
bim.Status.CurrentState = longhorn.BackingImageManagerStateUnknown
c.updateForUnknownBackingImageManager(bim)
}
if noReadyDisk {
return c.evictMissingDiskBackingImageManager(bim)
}
return nil
}
@ -391,38 +381,6 @@ func (c *BackingImageManagerController) cleanupBackingImageManager(bim *longhorn
return nil
}
// evictMissingDiskBackingImageManager trigger image manager eviction for missing disks
func (c *BackingImageManagerController) evictMissingDiskBackingImageManager(bim *longhorn.BackingImageManager) error {
isDiskExist, err := c.ds.IsNodeHasDiskUUID(bim.Spec.NodeID, bim.Spec.DiskUUID)
if err != nil {
return errors.Wrapf(err, "cannot check if backing image manager %v is serving on a existing disk %v", bim.Name, bim.Spec.DiskUUID)
} else if isDiskExist {
return nil
}
// Backing image manager is serving on a disk that no longer belongs to any node. Trigger the manager eviction.
for imageName := range bim.Spec.BackingImages {
bi, getImageErr := c.ds.GetBackingImageRO(imageName)
if getImageErr != nil {
if datastore.ErrorIsNotFound(getImageErr) {
c.logger.Warnf("No corresponding backing image %v for missing disk backing image manager %v", imageName, bim.Name)
continue
}
return errors.Wrapf(getImageErr, "failed to get backing image %v for missing disk backing image manager %v", bi.Name, bim.Name)
}
if bi.Spec.DiskFileSpecMap != nil {
if bimDiskFileSpec, exist := bi.Spec.DiskFileSpecMap[bim.Spec.DiskUUID]; exist && !bimDiskFileSpec.EvictionRequested {
c.logger.Infof("Evicting backing image manager %v because of missing disk %v", bim.Name, bim.Spec.DiskUUID)
bimDiskFileSpec.EvictionRequested = true
if _, updateErr := c.ds.UpdateBackingImage(bi); updateErr != nil {
return errors.Wrapf(updateErr, "failed to evict missing disk backing image manager %v from backing image %v", bim.Name, bi.Name)
}
}
}
}
return nil
}
func (c *BackingImageManagerController) updateForUnknownBackingImageManager(bim *longhorn.BackingImageManager) {
if bim.Status.CurrentState != longhorn.BackingImageManagerStateUnknown {
return
@ -586,16 +544,9 @@ func (c *BackingImageManagerController) syncBackingImageManagerPod(bim *longhorn
// Similar to InstanceManagerController.
// Longhorn shouldn't create the pod when users set taints with NoExecute effect on the node where the bim is preferred.
if c.controllerID == bim.Spec.NodeID {
backoffID := bim.Name
if c.podRecreateBackoff.IsInBackOffSinceUpdate(backoffID, time.Now()) {
log.Infof("Skipping pod creation for backing image manager %s, will retry after backoff of %s", bim.Name, c.podRecreateBackoff.Get(backoffID))
} else {
log.Infof("Creating pod for backing image manager %s", bim.Name)
c.podRecreateBackoff.Next(backoffID, time.Now())
if err := c.createBackingImageManagerPod(bim); err != nil {
return errors.Wrap(err, "failed to create pod for backing image manager")
}
log.Info("Creating backing image manager pod")
if err := c.createBackingImageManagerPod(bim); err != nil {
return err
}
bim.Status.CurrentState = longhorn.BackingImageManagerStateStarting
c.eventRecorder.Eventf(bim, corev1.EventTypeNormal, constant.EventReasonCreate, "Creating backing image manager pod %v for disk %v on node %v. Backing image manager state will become %v", bim.Name, bim.Spec.DiskUUID, bim.Spec.NodeID, longhorn.BackingImageManagerStateStarting)
@ -758,7 +709,7 @@ func (c *BackingImageManagerController) prepareBackingImageFiles(currentBIM *lon
continue
}
log.Infof("Starting to fetch the data source file from the backing image data source work directory %v", bimtypes.DataSourceDirectoryName)
if _, err := cli.Fetch(biRO.Name, biRO.Status.UUID, bids.Status.Checksum, net.JoinHostPort(bids.Status.StorageIP, strconv.Itoa(engineapi.BackingImageDataSourceDefaultPort)), bids.Status.Size); err != nil {
if _, err := cli.Fetch(biRO.Name, biRO.Status.UUID, bids.Status.Checksum, fmt.Sprintf("%s:%d", bids.Status.StorageIP, engineapi.BackingImageDataSourceDefaultPort), bids.Status.Size); err != nil {
if types.ErrorAlreadyExists(err) {
continue
}
@ -922,8 +873,8 @@ func (c *BackingImageManagerController) generateBackingImageManagerPodManifest(b
Command: []string{
"backing-image-manager", "--debug",
"daemon",
"--listen", fmt.Sprintf(":%d", engineapi.BackingImageManagerDefaultPort),
"--sync-listen", fmt.Sprintf(":%d", engineapi.BackingImageSyncServerDefaultPort),
"--listen", fmt.Sprintf("%s:%d", "0.0.0.0", engineapi.BackingImageManagerDefaultPort),
"--sync-listen", fmt.Sprintf("%s:%d", "0.0.0.0", engineapi.BackingImageSyncServerDefaultPort),
},
ReadinessProbe: &corev1.Probe{
ProbeHandler: corev1.ProbeHandler{

View File
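
The pod-recreate logic removed above rate-limits creation attempts per backing image manager with client-go's flowcontrol.Backoff. A minimal sketch of that pattern, assuming a one-second initial delay and a five-minute cap (illustrative values, not taken from this diff):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

// recreateWithBackoff wraps a pod-create attempt in the same per-ID backoff
// pattern the removed code used: skip while in backoff, otherwise record the
// attempt and try once.
func recreateWithBackoff(backoff *flowcontrol.Backoff, id string, create func() error) error {
	if backoff.IsInBackOffSinceUpdate(id, time.Now()) {
		// Get reports the current delay for this ID, useful for logging.
		fmt.Printf("skipping %s, will retry after backoff of %s\n", id, backoff.Get(id))
		return nil
	}
	// Record this attempt so the next failure waits longer.
	backoff.Next(id, time.Now())
	return create()
}

func main() {
	backoff := flowcontrol.NewBackOff(1*time.Second, 5*time.Minute)
	_ = recreateWithBackoff(backoff, "bim-example", func() error { return nil })
}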

@ -26,7 +26,6 @@ import (
v1core "k8s.io/client-go/kubernetes/typed/core/v1"
systembackupstore "github.com/longhorn/backupstore/systembackup"
multierr "github.com/longhorn/go-common-libs/multierr"
"github.com/longhorn/longhorn-manager/datastore"
"github.com/longhorn/longhorn-manager/engineapi"
@ -242,19 +241,28 @@ func getLoggerForBackupTarget(logger logrus.FieldLogger, backupTarget *longhorn.
)
}
func getBackupTarget(nodeID string, backupTarget *longhorn.BackupTarget, ds *datastore.DataStore, log logrus.FieldLogger, proxyConnCounter util.Counter) (engineClientProxy engineapi.EngineClientProxy, backupTargetClient *engineapi.BackupTargetClient, err error) {
var instanceManager *longhorn.InstanceManager
errs := multierr.NewMultiError()
dataEngines := ds.GetDataEngines()
for dataEngine := range dataEngines {
instanceManager, err = ds.GetRunningInstanceManagerByNodeRO(nodeID, dataEngine)
if err == nil {
break
}
errs.Append("errors", errors.Wrapf(err, "failed to get running instance manager for node %v and data engine %v", nodeID, dataEngine))
}
if instanceManager == nil {
return nil, nil, fmt.Errorf("failed to find a running instance manager for node %v: %v", nodeID, errs.Error())
}
func getAvailableDataEngine(ds *datastore.DataStore) (longhorn.DataEngineType, error) {
dataEngines := ds.GetDataEngines()
if len(dataEngines) > 0 {
for _, dataEngine := range []longhorn.DataEngineType{longhorn.DataEngineTypeV2, longhorn.DataEngineTypeV1} {
if _, ok := dataEngines[dataEngine]; ok {
return dataEngine, nil
}
}
}
return "", errors.New("no data engine available")
}
func getBackupTarget(controllerID string, backupTarget *longhorn.BackupTarget, ds *datastore.DataStore, log logrus.FieldLogger, proxyConnCounter util.Counter) (engineClientProxy engineapi.EngineClientProxy, backupTargetClient *engineapi.BackupTargetClient, err error) {
dataEngine, err := getAvailableDataEngine(ds)
if err != nil {
return nil, nil, errors.Wrap(err, "failed to get available data engine for getting backup target")
}
instanceManager, err := ds.GetRunningInstanceManagerByNodeRO(controllerID, dataEngine)
if err != nil {
return nil, nil, errors.Wrap(err, "failed to get running instance manager for proxy client")
}
engineClientProxy, err = engineapi.NewEngineClientProxy(instanceManager, log, proxyConnCounter, ds)
@ -458,19 +466,13 @@ func (btc *BackupTargetController) reconcile(name string) (err error) {
log.WithError(err).Error("Failed to get info from backup store")
return nil // Ignore error to allow status update as well as preventing enqueue
}
if !backupTarget.Status.Available {
backupTarget.Status.Available = true
backupTarget.Status.Conditions = types.SetCondition(backupTarget.Status.Conditions,
longhorn.BackupTargetConditionTypeUnavailable, longhorn.ConditionStatusFalse,
"", "")
// If the controller can communicate with the remote backup target while "backupTarget.Status.Available" is "false",
// Longhorn should update the field to "true" first rather than continuing to fetch info from the target.
// related issue: https://github.com/longhorn/longhorn/issues/11337
return nil
}
syncTimeRequired = true // Errors beyond this point are NOT backup target related.
backupTarget.Status.Available = true
backupTarget.Status.Conditions = types.SetCondition(backupTarget.Status.Conditions,
longhorn.BackupTargetConditionTypeUnavailable, longhorn.ConditionStatusFalse,
"", "")
if err = btc.syncBackupVolume(backupTarget, info.backupStoreBackupVolumeNames, clusterVolumeBVMap, syncTime, log); err != nil {
return err
}
@ -538,14 +540,6 @@ func (btc *BackupTargetController) getInfoFromBackupStore(backupTarget *longhorn
defer engineClientProxy.Close()
// Get required information using backup target client.
// Get SystemBackups first to update the backup target to `available` while minimizing requests to S3.
info.backupStoreSystemBackups, err = backupTargetClient.ListSystemBackup()
if err != nil {
return backupStoreInfo{}, errors.Wrapf(err, "failed to list system backups in %v", backupTargetClient.URL)
}
if !backupTarget.Status.Available {
return info, nil
}
info.backupStoreBackupVolumeNames, err = backupTargetClient.BackupVolumeNameList()
if err != nil {
return backupStoreInfo{}, errors.Wrapf(err, "failed to list backup volumes in %v", backupTargetClient.URL)
@ -554,6 +548,10 @@ func (btc *BackupTargetController) getInfoFromBackupStore(backupTarget *longhorn
if err != nil {
return backupStoreInfo{}, errors.Wrapf(err, "failed to list backup backing images in %v", backupTargetClient.URL)
}
info.backupStoreSystemBackups, err = backupTargetClient.ListSystemBackup()
if err != nil {
return backupStoreInfo{}, errors.Wrapf(err, "failed to list system backups in %v", backupTargetClient.URL)
}
return info, nil
}

View File
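
The new getAvailableDataEngine helper above selects one engine type for the proxy client, preferring v2 over v1 when both are enabled. A condensed sketch of just the selection logic, with simplified stand-ins for the Longhorn types:

package sketch

// DataEngineType and the constants below are simplified stand-ins for the
// Longhorn types; only the selection logic is the point here.
type DataEngineType string

const (
	DataEngineTypeV1 DataEngineType = "v1"
	DataEngineTypeV2 DataEngineType = "v2"
)

// pickDataEngine mirrors getAvailableDataEngine's preference order:
// v2 wins when both engines are enabled, otherwise whichever is present.
func pickDataEngine(enabled map[DataEngineType]struct{}) (DataEngineType, bool) {
	for _, de := range []DataEngineType{DataEngineTypeV2, DataEngineTypeV1} {
		if _, ok := enabled[de]; ok {
			return de, true
		}
	}
	return "", false
}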

@ -24,8 +24,6 @@ import (
"github.com/longhorn/backupstore"
lhbackup "github.com/longhorn/go-common-libs/backup"
"github.com/longhorn/longhorn-manager/datastore"
"github.com/longhorn/longhorn-manager/types"
"github.com/longhorn/longhorn-manager/util"
@ -323,11 +321,6 @@ func (bvc *BackupVolumeController) reconcile(backupVolumeName string) (err error
backupLabelMap := map[string]string{}
backupURL := backupstore.EncodeBackupURL(backupName, canonicalBVName, backupTargetClient.URL)
// If the block size is unavailable from a legacy remote backup, the size falls back to the legacy default value of 2 MiB.
// If the size value is invalid, a backup is still created with the invalid block size, but restoring the volume will be rejected by the volume validator.
var blockSize = types.BackupBlockSizeInvalid
if backupInfo, err := backupTargetClient.BackupGet(backupURL, backupTargetClient.Credential); err != nil && !types.ErrorIsNotFound(err) {
log.WithError(err).WithFields(logrus.Fields{
"backup": backupName,
@ -338,18 +331,6 @@ func (bvc *BackupVolumeController) reconcile(backupVolumeName string) (err error
if accessMode, exist := backupInfo.Labels[types.GetLonghornLabelKey(types.LonghornLabelVolumeAccessMode)]; exist {
backupLabelMap[types.GetLonghornLabelKey(types.LonghornLabelVolumeAccessMode)] = accessMode
}
backupBlockSizeParam := backupInfo.Parameters[lhbackup.LonghornBackupParameterBackupBlockSize]
if blockSizeBytes, convertErr := util.ConvertSize(backupBlockSizeParam); convertErr != nil {
log.WithError(convertErr).Warnf("Invalid backup block size string from the remote backup %v: %v", backupName, backupBlockSizeParam)
} else if sizeErr := types.ValidateBackupBlockSize(-1, blockSizeBytes); sizeErr != nil {
log.WithError(sizeErr).Warnf("Invalid backup block size from the remote backup %v: %v", backupName, backupBlockSizeParam)
} else {
if blockSizeBytes == 0 {
blockSize = types.BackupBlockSize2Mi
} else {
blockSize = blockSizeBytes
}
}
}
}
@ -362,8 +343,7 @@ func (bvc *BackupVolumeController) reconcile(backupVolumeName string) (err error
OwnerReferences: datastore.GetOwnerReferencesForBackupVolume(backupVolume),
},
Spec: longhorn.BackupSpec{
Labels: backupLabelMap,
BackupBlockSize: blockSize,
Labels: backupLabelMap,
},
}
if _, err = bvc.ds.CreateBackup(backup, canonicalBVName); err != nil && !apierrors.IsAlreadyExists(err) {

View File

@ -3,6 +3,7 @@ package controller
import (
"fmt"
"math"
"strconv"
"time"
"github.com/pkg/errors"
@ -258,18 +259,31 @@ func GetInstanceManagerCPURequirement(ds *datastore.DataStore, imName string) (*
cpuRequest := 0
switch im.Spec.DataEngine {
case longhorn.DataEngineTypeV1, longhorn.DataEngineTypeV2:
// TODO: Currently lhNode.Spec.InstanceManagerCPURequest is applied to both v1 and v2 data engines.
// In the future, we may want to support different CPU requests for them.
case longhorn.DataEngineTypeV1:
cpuRequest = lhNode.Spec.InstanceManagerCPURequest
if cpuRequest == 0 {
guaranteedCPUPercentage, err := ds.GetSettingAsFloatByDataEngine(types.SettingNameGuaranteedInstanceManagerCPU, im.Spec.DataEngine)
guaranteedCPUSetting, err := ds.GetSettingWithAutoFillingRO(types.SettingNameGuaranteedInstanceManagerCPU)
if err != nil {
return nil, err
}
guaranteedCPUPercentage, err := strconv.ParseFloat(guaranteedCPUSetting.Value, 64)
if err != nil {
return nil, err
}
allocatableMilliCPU := float64(kubeNode.Status.Allocatable.Cpu().MilliValue())
cpuRequest = int(math.Round(allocatableMilliCPU * guaranteedCPUPercentage / 100.0))
}
case longhorn.DataEngineTypeV2:
// TODO: Support CPU request per node for v2 volumes
guaranteedCPUSetting, err := ds.GetSettingWithAutoFillingRO(types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU)
if err != nil {
return nil, err
}
guaranteedCPURequest, err := strconv.ParseFloat(guaranteedCPUSetting.Value, 64)
if err != nil {
return nil, err
}
cpuRequest = int(guaranteedCPURequest)
default:
return nil, fmt.Errorf("unknown data engine %v", im.Spec.DataEngine)
}

View File
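
To make the v1 branch of GetInstanceManagerCPURequirement above concrete: when lhNode.Spec.InstanceManagerCPURequest is zero, the request is a percentage of the node's allocatable milli-CPU. A worked sketch with illustrative figures (the 8-core node and 12% setting are assumptions, not values from this diff):

package main

import (
	"fmt"
	"math"
)

func main() {
	// Illustrative figures: an 8-core node and a 12% guaranteed-CPU setting.
	allocatableMilliCPU := 8000.0
	guaranteedCPUPercentage := 12.0

	// Same rounding as the v1 branch of GetInstanceManagerCPURequirement.
	cpuRequest := int(math.Round(allocatableMilliCPU * guaranteedCPUPercentage / 100.0))
	fmt.Printf("instance manager CPU request: %dm\n", cpuRequest) // 960m
}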

@ -467,7 +467,7 @@ func (ec *EngineController) CreateInstance(obj interface{}) (*longhorn.InstanceP
}
}(c)
engineReplicaTimeout, err := ec.ds.GetSettingAsIntByDataEngine(types.SettingNameEngineReplicaTimeout, e.Spec.DataEngine)
engineReplicaTimeout, err := ec.ds.GetSettingAsInt(types.SettingNameEngineReplicaTimeout)
if err != nil {
return nil, err
}
@ -494,11 +494,6 @@ func (ec *EngineController) CreateInstance(obj interface{}) (*longhorn.InstanceP
instanceManagerStorageIP := ec.ds.GetStorageIPFromPod(instanceManagerPod)
e.Status.Starting = true
if e, err = ec.ds.UpdateEngineStatus(e); err != nil {
return nil, errors.Wrapf(err, "failed to update engine %v status.starting to true before sending instance create request", e.Name)
}
return c.EngineInstanceCreate(&engineapi.EngineInstanceCreateRequest{
Engine: e,
VolumeFrontend: frontend,
@ -949,10 +944,6 @@ func (m *EngineMonitor) refresh(engine *longhorn.Engine) error {
if err != nil {
return err
}
if err := m.checkAndApplyRebuildQoS(engine, engineClientProxy, rebuildStatus); err != nil {
return err
}
engine.Status.RebuildStatus = rebuildStatus
// It's meaningless to sync the trim related field for old engines or engines in old engine instance managers
@ -1138,66 +1129,6 @@ func (m *EngineMonitor) refresh(engine *longhorn.Engine) error {
return nil
}
func (m *EngineMonitor) checkAndApplyRebuildQoS(engine *longhorn.Engine, engineClientProxy engineapi.EngineClientProxy, rebuildStatus map[string]*longhorn.RebuildStatus) error {
if !types.IsDataEngineV2(engine.Spec.DataEngine) {
return nil
}
expectedQoSValue, err := m.getEffectiveRebuildQoS(engine)
if err != nil {
return err
}
for replica, newStatus := range rebuildStatus {
if newStatus == nil {
continue
}
var appliedQoS int64
if oldStatus, exists := engine.Status.RebuildStatus[replica]; exists && oldStatus != nil {
appliedQoS = oldStatus.AppliedRebuildingMBps
}
if appliedQoS == expectedQoSValue {
newStatus.AppliedRebuildingMBps = appliedQoS
continue
}
if !newStatus.IsRebuilding {
continue
}
if err := engineClientProxy.ReplicaRebuildQosSet(engine, expectedQoSValue); err != nil {
m.logger.WithError(err).Warnf("[qos] Failed to set QoS for volume %s, replica %s", engine.Spec.VolumeName, replica)
continue
}
newStatus.AppliedRebuildingMBps = expectedQoSValue
}
return nil
}
func (m *EngineMonitor) getEffectiveRebuildQoS(engine *longhorn.Engine) (int64, error) {
if types.IsDataEngineV1(engine.Spec.DataEngine) {
return 0, nil
}
globalQoS, err := m.ds.GetSettingAsIntByDataEngine(types.SettingNameReplicaRebuildingBandwidthLimit, engine.Spec.DataEngine)
if err != nil {
return 0, err
}
volume, err := m.ds.GetVolumeRO(engine.Spec.VolumeName)
if err != nil {
return 0, err
}
if volume.Spec.ReplicaRebuildingBandwidthLimit > 0 {
return volume.Spec.ReplicaRebuildingBandwidthLimit, nil
}
return globalQoS, nil
}
func isBackupRestoreFailed(rsMap map[string]*longhorn.RestoreStatus) bool {
for _, status := range rsMap {
if status.IsRestoring {
@ -1514,7 +1445,7 @@ func (m *EngineMonitor) restoreBackup(engine *longhorn.Engine, rsMap map[string]
backupTargetClient, err := newBackupTargetClientFromDefaultEngineImage(m.ds, backupTarget)
if err != nil {
return errors.Wrapf(err, "failed to get backup target client for backup restoration of engine %v", engine.Name)
return errors.Wrapf(err, "cannot get backup target config for backup restoration of engine %v", engine.Name)
}
mlog := m.logger.WithFields(logrus.Fields{
@ -1526,8 +1457,7 @@ func (m *EngineMonitor) restoreBackup(engine *longhorn.Engine, rsMap map[string]
concurrentLimit, err := m.ds.GetSettingAsInt(types.SettingNameRestoreConcurrentLimit)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting for backup restoration of engine %v",
types.SettingNameRestoreConcurrentLimit, engine.Name)
return errors.Wrapf(err, "failed to assert %v value", types.SettingNameRestoreConcurrentLimit)
}
mlog.Info("Restoring backup")
@ -1686,6 +1616,7 @@ func (ec *EngineController) ReconcileEngineState(e *longhorn.Engine) error {
if err := ec.rebuildNewReplica(e); err != nil {
return err
}
return nil
}
@ -1821,13 +1752,13 @@ func (ec *EngineController) startRebuilding(e *longhorn.Engine, replicaName, add
go func() {
autoCleanupSystemGeneratedSnapshot, err := ec.ds.GetSettingAsBool(types.SettingNameAutoCleanupSystemGeneratedSnapshot)
if err != nil {
log.WithError(err).Errorf("Failed to get %v setting", types.SettingNameAutoCleanupSystemGeneratedSnapshot)
log.WithError(err).Errorf("Failed to get %v setting", types.SettingDefinitionAutoCleanupSystemGeneratedSnapshot)
return
}
fastReplicaRebuild, err := ec.ds.GetSettingAsBoolByDataEngine(types.SettingNameFastReplicaRebuildEnabled, e.Spec.DataEngine)
fastReplicaRebuild, err := ec.ds.GetSettingAsBool(types.SettingNameFastReplicaRebuildEnabled)
if err != nil {
log.WithError(err).Errorf("Failed to get %v setting for data engine %v", types.SettingNameFastReplicaRebuildEnabled, e.Spec.DataEngine)
log.WithError(err).Errorf("Failed to get %v setting", types.SettingNameFastReplicaRebuildEnabled)
return
}
@ -2109,8 +2040,8 @@ func getReplicaRebuildFailedReasonFromError(errMsg string) (string, longhorn.Con
}
}
func (ec *EngineController) waitForV2EngineRebuild(engine *longhorn.Engine, replicaName string, timeout int64) (err error) {
if !types.IsDataEngineV2(engine.Spec.DataEngine) {
func (ec *EngineController) waitForV2EngineRebuild(e *longhorn.Engine, replicaName string, timeout int64) (err error) {
if !types.IsDataEngineV2(e.Spec.DataEngine) {
return nil
}
@ -2121,11 +2052,11 @@ func (ec *EngineController) waitForV2EngineRebuild(engine *longhorn.Engine, repl
for {
select {
case <-ticker.C:
e, err := ec.ds.GetEngineRO(engine.Name)
e, err = ec.ds.GetEngineRO(e.Name)
if err != nil {
// There is no need to continue if the engine is not found
if apierrors.IsNotFound(err) {
return errors.Wrapf(err, "engine %v not found during v2 replica %s rebuild wait", engine.Name, replicaName)
return errors.Wrapf(err, "engine %v not found during v2 replica %s rebuild wait", e.Name, replicaName)
}
// There may be something wrong with the indexer or the API server; will retry
continue
@ -2228,7 +2159,7 @@ func (ec *EngineController) UpgradeEngineInstance(e *longhorn.Engine, log *logru
}
}(c)
engineReplicaTimeout, err := ec.ds.GetSettingAsIntByDataEngine(types.SettingNameEngineReplicaTimeout, e.Spec.DataEngine)
engineReplicaTimeout, err := ec.ds.GetSettingAsInt(types.SettingNameEngineReplicaTimeout)
if err != nil {
return err
}

View File
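
The removed QoS helpers above resolve the effective rebuild bandwidth limit with a simple precedence: v1 engines are exempt, a positive per-volume ReplicaRebuildingBandwidthLimit wins, and the global setting is the fallback. A condensed sketch of that precedence with simplified parameters:

package sketch

// effectiveRebuildQoS condenses the removed getEffectiveRebuildQoS helper:
// v1 engines are exempt, a positive per-volume limit overrides the global
// setting, and zero means "no volume-level override". Values are MB/s.
func effectiveRebuildQoS(isV1Engine bool, volumeLimitMBps, globalLimitMBps int64) int64 {
	if isV1Engine {
		return 0
	}
	if volumeLimitMBps > 0 {
		return volumeLimitMBps
	}
	return globalLimitMBps
}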

@ -49,7 +49,7 @@ func NewInstanceHandler(ds *datastore.DataStore, instanceManagerHandler Instance
}
}
func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *longhorn.InstanceManager, instanceName string, spec *longhorn.InstanceSpec, status *longhorn.InstanceStatus, instances map[string]longhorn.InstanceProcess) {
func (h *InstanceHandler) syncStatusWithInstanceManager(im *longhorn.InstanceManager, instanceName string, spec *longhorn.InstanceSpec, status *longhorn.InstanceStatus, instances map[string]longhorn.InstanceProcess) {
defer func() {
if status.CurrentState == longhorn.InstanceStateStopped {
status.InstanceManagerName = ""
@ -64,7 +64,7 @@ func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *l
if im == nil || im.Status.CurrentState == longhorn.InstanceManagerStateUnknown || isDelinquent {
if status.Started {
if status.CurrentState != longhorn.InstanceStateUnknown {
log.Warnf("Marking the instance as state UNKNOWN since the related node %v of instance %v is down or deleted", spec.NodeID, instanceName)
logrus.Warnf("Marking the instance as state UNKNOWN since the related node %v of instance %v is down or deleted", spec.NodeID, instanceName)
}
status.CurrentState = longhorn.InstanceStateUnknown
} else {
@ -122,7 +122,7 @@ func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *l
if !exists {
if status.Started {
if status.CurrentState != longhorn.InstanceStateError {
log.Warnf("Marking the instance as state ERROR since failed to find the instance status in instance manager %v for the running instance %v", im.Name, instanceName)
logrus.Warnf("Marking the instance as state ERROR since failed to find the instance status in instance manager %v for the running instance %v", im.Name, instanceName)
}
status.CurrentState = longhorn.InstanceStateError
} else {
@ -139,7 +139,7 @@ func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *l
}
if status.InstanceManagerName != "" && status.InstanceManagerName != im.Name {
log.Errorf("The related process of instance %v is found in the instance manager %v, but the instance manager name in the instance status is %v. "+
logrus.Errorf("The related process of instance %v is found in the instance manager %v, but the instance manager name in the instance status is %v. "+
"The instance manager name shouldn't change except for cleanup",
instanceName, im.Name, status.InstanceManagerName)
}
@ -164,34 +164,28 @@ func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *l
imPod, err := h.ds.GetPodRO(im.Namespace, im.Name)
if err != nil {
log.WithError(err).Errorf("Failed to get instance manager pod from %v", im.Name)
logrus.WithError(err).Errorf("Failed to get instance manager pod from %v", im.Name)
return
}
if imPod == nil {
log.Warnf("Instance manager pod from %v not exist in datastore", im.Name)
logrus.Warnf("Instance manager pod from %v not exist in datastore", im.Name)
return
}
storageIP := h.ds.GetStorageIPFromPod(imPod)
if status.StorageIP != storageIP {
if status.StorageIP != "" {
log.Warnf("Instance %v is state running in instance manager %s, but its status Storage IP %s does not match the instance manager recorded Storage IP %s", instanceName, im.Name, status.StorageIP, storageIP)
}
status.StorageIP = storageIP
logrus.Warnf("Instance %v starts running, Storage IP %v", instanceName, status.StorageIP)
}
if status.IP != im.Status.IP {
if status.IP != "" {
log.Warnf("Instance %v is state running in instance manager %s, but its status IP %s does not match the instance manager recorded IP %s", instanceName, im.Name, status.IP, im.Status.IP)
}
status.IP = im.Status.IP
logrus.Warnf("Instance %v starts running, IP %v", instanceName, status.IP)
}
if status.Port != int(instance.Status.PortStart) {
if status.Port != 0 {
log.Warnf("Instance %v is state running in instance manager %s, but its status Port %d does not match the instance manager recorded Port %d", instanceName, im.Name, status.Port, instance.Status.PortStart)
}
status.Port = int(instance.Status.PortStart)
logrus.Warnf("Instance %v starts running, Port %d", instanceName, status.Port)
}
if status.UblkID != instance.Status.UblkID {
status.UblkID = instance.Status.UblkID
@ -236,7 +230,7 @@ func (h *InstanceHandler) syncStatusWithInstanceManager(log *logrus.Entry, im *l
h.resetInstanceErrorCondition(status)
default:
if status.CurrentState != longhorn.InstanceStateError {
log.Warnf("Instance %v is state %v, error message: %v", instanceName, instance.Status.State, instance.Status.ErrorMsg)
logrus.Warnf("Instance %v is state %v, error message: %v", instanceName, instance.Status.State, instance.Status.ErrorMsg)
}
status.CurrentState = longhorn.InstanceStateError
status.CurrentImage = ""
@ -284,14 +278,7 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
return err
}
log := logrus.WithFields(logrus.Fields{"instance": instanceName, "volumeName": spec.VolumeName, "dataEngine": spec.DataEngine, "specNodeID": spec.NodeID})
stateBeforeReconcile := status.CurrentState
defer func() {
if stateBeforeReconcile != status.CurrentState {
log.Infof("Instance %v state is updated from %v to %v", instanceName, stateBeforeReconcile, status.CurrentState)
}
}()
log := logrus.WithField("instance", instanceName)
var im *longhorn.InstanceManager
if status.InstanceManagerName != "" {
@ -323,9 +310,6 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
}
}
}
if im != nil {
log = log.WithFields(logrus.Fields{"instanceManager": im.Name})
}
if spec.LogRequested {
if !status.LogFetched {
@ -368,7 +352,6 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
if i, exists := instances[instanceName]; exists && i.Status.State == longhorn.InstanceStateRunning {
status.Started = true
status.Starting = false
break
}
@ -389,29 +372,23 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
}
case longhorn.InstanceStateStopped:
shouldDelete := false
if im != nil && im.DeletionTimestamp == nil {
if _, exists := instances[instanceName]; exists {
shouldDelete = true
}
}
if status.Starting {
shouldDelete = true
}
if shouldDelete {
// There is a delay between the deleteInstance() invocation and the state/InstanceManager update,
// so deleteInstance() may be called multiple times.
if err := h.deleteInstance(instanceName, runtimeObj); err != nil {
return err
if instance, exists := instances[instanceName]; exists {
if shouldDeleteInstance(&instance) {
if err := h.deleteInstance(instanceName, runtimeObj); err != nil {
return err
}
}
}
}
status.Started = false
status.Starting = false
default:
return fmt.Errorf("unknown instance desire state: desire %v", spec.DesireState)
}
h.syncStatusWithInstanceManager(log, im, instanceName, spec, status, instances)
h.syncStatusWithInstanceManager(im, instanceName, spec, status, instances)
switch status.CurrentState {
case longhorn.InstanceStateRunning:
@ -440,10 +417,10 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
}
if types.IsDataEngineV1(instance.Spec.DataEngine) {
log.Warnf("Instance %v crashed on Instance Manager %v at %v, getting log",
logrus.Warnf("Instance %v crashed on Instance Manager %v at %v, getting log",
instanceName, im.Name, im.Spec.NodeID)
if err := h.printInstanceLogs(instanceName, runtimeObj); err != nil {
log.WithError(err).Warnf("failed to get crash log for instance %v on Instance Manager %v at %v",
logrus.WithError(err).Warnf("failed to get crash log for instance %v on Instance Manager %v at %v",
instanceName, im.Name, im.Spec.NodeID)
}
}
@ -452,6 +429,17 @@ func (h *InstanceHandler) ReconcileInstanceState(obj interface{}, spec *longhorn
return nil
}
func shouldDeleteInstance(instance *longhorn.InstanceProcess) bool {
// For a replica of an SPDK volume, the stopped state means the lvol is not exposed,
// but the lvol still exists, so there is no need to delete it.
if types.IsDataEngineV2(instance.Spec.DataEngine) {
if instance.Status.State == longhorn.InstanceStateStopped {
return false
}
}
return true
}
func (h *InstanceHandler) getInstancesFromInstanceManager(obj runtime.Object, instanceManager *longhorn.InstanceManager) (map[string]longhorn.InstanceProcess, error) {
switch obj.(type) {
case *longhorn.Engine:

View File

@ -19,7 +19,6 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/flowcontrol"
"k8s.io/kubernetes/pkg/controller"
corev1 "k8s.io/api/core/v1"
@ -63,8 +62,6 @@ type InstanceManagerController struct {
proxyConnCounter util.Counter
backoff *flowcontrol.Backoff
// for unit test
versionUpdater func(*longhorn.InstanceManager) error
}
@ -143,8 +140,6 @@ func NewInstanceManagerController(
proxyConnCounter: proxyConnCounter,
versionUpdater: updateInstanceManagerVersion,
backoff: newBackoff(context.TODO()),
}
var err error
@ -214,7 +209,7 @@ func (imc *InstanceManagerController) isResponsibleForSetting(obj interface{}) b
}
return types.SettingName(setting.Name) == types.SettingNameKubernetesClusterAutoscalerEnabled ||
types.SettingName(setting.Name) == types.SettingNameDataEngineCPUMask ||
types.SettingName(setting.Name) == types.SettingNameV2DataEngineCPUMask ||
types.SettingName(setting.Name) == types.SettingNameOrphanResourceAutoDeletion
}
@ -465,14 +460,12 @@ func (imc *InstanceManagerController) syncStatusWithPod(im *longhorn.InstanceMan
im.Status.CurrentState = longhorn.InstanceManagerStateStopped
return nil
}
imc.logger.Warnf("Instance manager pod %v is not found, updating the instance manager state from %s to error", im.Name, im.Status.CurrentState)
im.Status.CurrentState = longhorn.InstanceManagerStateError
return nil
}
// By design instance manager pods should not be terminated.
if pod.DeletionTimestamp != nil {
imc.logger.Warnf("Instance manager pod %v is being deleted, updating the instance manager state from %s to error", im.Name, im.Status.CurrentState)
im.Status.CurrentState = longhorn.InstanceManagerStateError
return nil
}
@ -495,7 +488,6 @@ func (imc *InstanceManagerController) syncStatusWithPod(im *longhorn.InstanceMan
im.Status.CurrentState = longhorn.InstanceManagerStateStarting
}
default:
imc.logger.Warnf("Instance manager pod %v is in phase %s, updating the instance manager state from %s to error", im.Name, pod.Status.Phase, im.Status.CurrentState)
im.Status.CurrentState = longhorn.InstanceManagerStateError
}
@ -549,12 +541,12 @@ func (imc *InstanceManagerController) isDateEngineCPUMaskApplied(im *longhorn.In
return im.Spec.DataEngineSpec.V2.CPUMask == im.Status.DataEngineStatus.V2.CPUMask, nil
}
value, err := imc.ds.GetSettingValueExistedByDataEngine(types.SettingNameDataEngineCPUMask, im.Spec.DataEngine)
setting, err := imc.ds.GetSettingWithAutoFillingRO(types.SettingNameV2DataEngineCPUMask)
if err != nil {
return true, errors.Wrapf(err, "failed to get %v setting for updating data engine CPU mask", types.SettingNameDataEngineCPUMask)
return true, errors.Wrapf(err, "failed to get %v setting for updating data engine CPU mask", types.SettingNameV2DataEngineCPUMask)
}
return value == im.Status.DataEngineStatus.V2.CPUMask, nil
return setting.Value == im.Status.DataEngineStatus.V2.CPUMask, nil
}
func (imc *InstanceManagerController) syncLogSettingsToInstanceManagerPod(im *longhorn.InstanceManager) error {
@ -574,41 +566,36 @@ func (imc *InstanceManagerController) syncLogSettingsToInstanceManagerPod(im *lo
settingNames := []types.SettingName{
types.SettingNameLogLevel,
types.SettingNameDataEngineLogLevel,
types.SettingNameDataEngineLogFlags,
types.SettingNameV2DataEngineLogLevel,
types.SettingNameV2DataEngineLogFlags,
}
for _, settingName := range settingNames {
setting, err := imc.ds.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return err
}
switch settingName {
case types.SettingNameLogLevel:
value, err := imc.ds.GetSettingValueExisted(settingName)
if err != nil {
return err
}
// We use this to set the instance-manager log level for either engine type.
err = client.LogSetLevel("", "", value)
err = client.LogSetLevel("", "", setting.Value)
if err != nil {
return errors.Wrapf(err, "failed to set instance-manager log level to setting %v value: %v", settingName, value)
return errors.Wrapf(err, "failed to set instance-manager log level to setting %v value: %v", settingName, setting.Value)
}
case types.SettingNameDataEngineLogLevel:
// We use this to set the data engine (such as spdk_tgt for v2 data engine) log level independently of the instance-manager's.
case types.SettingNameV2DataEngineLogLevel:
// We use this to set the spdk_tgt log level independently of the instance-manager's.
if types.IsDataEngineV2(im.Spec.DataEngine) {
value, err := imc.ds.GetSettingValueExistedByDataEngine(settingName, im.Spec.DataEngine)
err = client.LogSetLevel(longhorn.DataEngineTypeV2, "", setting.Value)
if err != nil {
return err
}
if err := client.LogSetLevel(longhorn.DataEngineTypeV2, "", value); err != nil {
return errors.Wrapf(err, "failed to set data engine log level to setting %v value: %v", settingName, value)
return errors.Wrapf(err, "failed to set spdk_tgt log level to setting %v value: %v", settingName, setting.Value)
}
}
case types.SettingNameDataEngineLogFlags:
case types.SettingNameV2DataEngineLogFlags:
if types.IsDataEngineV2(im.Spec.DataEngine) {
value, err := imc.ds.GetSettingValueExistedByDataEngine(settingName, im.Spec.DataEngine)
err = client.LogSetFlags(longhorn.DataEngineTypeV2, "spdk_tgt", setting.Value)
if err != nil {
return err
}
if err := client.LogSetFlags(longhorn.DataEngineTypeV2, "spdk_tgt", value); err != nil {
return errors.Wrapf(err, "failed to set data engine log flags to setting %v value: %v", settingName, value)
return errors.Wrapf(err, "failed to set spdk_tgt log flags to setting %v value: %v", settingName, setting.Value)
}
}
}
@ -665,16 +652,8 @@ func (imc *InstanceManagerController) handlePod(im *longhorn.InstanceManager) er
return err
}
backoffID := im.Name
if imc.backoff.IsInBackOffSinceUpdate(backoffID, time.Now()) {
log.Infof("Skipping pod creation for instance manager %s, will retry after backoff of %s", im.Name, imc.backoff.Get(backoffID))
} else {
log.Infof("Creating pod for instance manager %s", im.Name)
imc.backoff.Next(backoffID, time.Now())
if err := imc.createInstanceManagerPod(im); err != nil {
return errors.Wrap(err, "failed to create pod for instance manager")
}
if err := imc.createInstanceManagerPod(im); err != nil {
return err
}
return nil
@ -747,7 +726,7 @@ func (imc *InstanceManagerController) areDangerZoneSettingsSyncedToIMPod(im *lon
isSettingSynced, err = imc.isSettingTaintTolerationSynced(setting, pod)
case types.SettingNameSystemManagedComponentsNodeSelector:
isSettingSynced, err = imc.isSettingNodeSelectorSynced(setting, pod)
case types.SettingNameGuaranteedInstanceManagerCPU:
case types.SettingNameGuaranteedInstanceManagerCPU, types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU:
isSettingSynced, err = imc.isSettingGuaranteedInstanceManagerCPUSynced(setting, pod)
case types.SettingNamePriorityClass:
isSettingSynced, err = imc.isSettingPriorityClassSynced(setting, pod)
@ -823,7 +802,7 @@ func (imc *InstanceManagerController) isSettingStorageNetworkSynced(setting *lon
func (imc *InstanceManagerController) isSettingDataEngineSynced(settingName types.SettingName, im *longhorn.InstanceManager) (bool, error) {
enabled, err := imc.ds.GetSettingAsBool(settingName)
if err != nil {
return false, errors.Wrapf(err, "failed to get %v setting for checking data engine sync", settingName)
return false, errors.Wrapf(err, "failed to get %v setting for updating data engine", settingName)
}
var dataEngine longhorn.DataEngineType
switch settingName {
@ -835,7 +814,6 @@ func (imc *InstanceManagerController) isSettingDataEngineSynced(settingName type
if !enabled && im.Spec.DataEngine == dataEngine {
return false, nil
}
return true, nil
}
@ -1493,27 +1471,24 @@ func (imc *InstanceManagerController) createInstanceManagerPodSpec(im *longhorn.
if types.IsDataEngineV2(dataEngine) {
// spdk_tgt doesn't support a log level option, so we don't need to pass the log level to the instance manager.
// The log level will be applied in the reconciliation of instance manager controller.
logFlagsSetting, err := imc.ds.GetSettingValueExistedByDataEngine(types.SettingNameDataEngineLogFlags, dataEngine)
logFlagsSetting, err := imc.ds.GetSettingWithAutoFillingRO(types.SettingNameV2DataEngineLogFlags)
if err != nil {
return nil, err
}
logFlags := "all"
if logFlagsSetting != "" {
logFlags = strings.ToLower(logFlagsSetting)
if logFlagsSetting.Value != "" {
logFlags = strings.ToLower(logFlagsSetting.Value)
}
cpuMask := im.Spec.DataEngineSpec.V2.CPUMask
if cpuMask == "" {
value, err := imc.ds.GetSettingValueExistedByDataEngine(types.SettingNameDataEngineCPUMask, dataEngine)
value, err := imc.ds.GetSettingWithAutoFillingRO(types.SettingNameV2DataEngineCPUMask)
if err != nil {
return nil, err
}
cpuMask = value
if cpuMask == "" {
return nil, fmt.Errorf("failed to get CPU mask setting for data engine %v", dataEngine)
}
cpuMask = value.Value
}
im.Status.DataEngineStatus.V2.CPUMask = cpuMask
@ -1531,7 +1506,7 @@ func (imc *InstanceManagerController) createInstanceManagerPodSpec(im *longhorn.
podSpec.Spec.Containers[0].Args = args
hugepage, err := imc.ds.GetSettingAsIntByDataEngine(types.SettingNameDataEngineHugepageLimit, im.Spec.DataEngine)
hugepage, err := imc.ds.GetSettingAsInt(types.SettingNameV2DataEngineHugepageLimit)
if err != nil {
return nil, err
}
@ -1554,7 +1529,7 @@ func (imc *InstanceManagerController) createInstanceManagerPodSpec(im *longhorn.
}
} else {
podSpec.Spec.Containers[0].Args = []string{
"instance-manager", "--debug", "daemon", "--listen", fmt.Sprintf(":%d", engineapi.InstanceManagerProcessManagerServiceDefaultPort),
"instance-manager", "--debug", "daemon", "--listen", fmt.Sprintf("0.0.0.0:%d", engineapi.InstanceManagerProcessManagerServiceDefaultPort),
}
}
@ -1704,7 +1679,7 @@ func (imc *InstanceManagerController) deleteOrphans(im *longhorn.InstanceManager
autoDeleteGracePeriod, err := imc.ds.GetSettingAsInt(types.SettingNameOrphanResourceAutoDeletionGracePeriod)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting", types.SettingNameOrphanResourceAutoDeletionGracePeriod)
return errors.Wrapf(err, "failed to get setting %v", types.SettingNameOrphanResourceAutoDeletionGracePeriod)
}
orphanList, err := imc.ds.ListInstanceOrphansByInstanceManagerRO(im.Name)

View File
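
In the log-setting sync above, the controller pushes the instance-manager log level for either engine type, and the spdk_tgt level and flags only for v2 instance managers. A rough sketch of that dispatch, assuming a client interface shaped like the LogSetLevel/LogSetFlags calls in the diff (the real calls take longhorn.DataEngineType values rather than plain strings):

package sketch

// imClient abstracts the two instance-manager client calls used in the diff.
type imClient interface {
	LogSetLevel(dataEngine, component, level string) error
	LogSetFlags(dataEngine, component, flags string) error
}

// syncLogSettings is a condensed sketch of syncLogSettingsToInstanceManagerPod:
// the instance-manager log level is applied for both engine types, while the
// spdk_tgt level and flags are pushed only for v2 instance managers.
func syncLogSettings(c imClient, isV2 bool, imLevel, v2Level, v2Flags string) error {
	if err := c.LogSetLevel("", "", imLevel); err != nil {
		return err
	}
	if !isV2 {
		return nil
	}
	if err := c.LogSetLevel("v2", "", v2Level); err != nil {
		return err
	}
	return c.LogSetFlags("v2", "spdk_tgt", v2Flags)
}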

@ -32,9 +32,7 @@ import (
)
const (
environmentCheckMonitorSyncPeriod = 1800 * time.Second
defaultHugePageLimitInMiB = 2048
EnvironmentCheckMonitorSyncPeriod = 1800 * time.Second
kernelConfigDir = "/host/boot/"
systemConfigDir = "/host/etc/"
@ -66,7 +64,7 @@ func NewEnvironmentCheckMonitor(logger logrus.FieldLogger, ds *datastore.DataSto
ctx, quit := context.WithCancel(context.Background())
m := &EnvironmentCheckMonitor{
baseMonitor: newBaseMonitor(ctx, quit, logger, ds, environmentCheckMonitorSyncPeriod),
baseMonitor: newBaseMonitor(ctx, quit, logger, ds, EnvironmentCheckMonitorSyncPeriod),
nodeName: nodeName,
@ -362,11 +360,10 @@ func (m *EnvironmentCheckMonitor) checkPackageInstalled(packageProbeExecutables
}
func (m *EnvironmentCheckMonitor) checkHugePages(kubeNode *corev1.Node, collectedData *CollectedEnvironmentCheckInfo) {
hugePageLimitInMiB, err := m.ds.GetSettingAsIntByDataEngine(types.SettingNameDataEngineHugepageLimit, longhorn.DataEngineTypeV2)
hugePageLimitInMiB, err := m.ds.GetSettingAsInt(types.SettingNameV2DataEngineHugepageLimit)
if err != nil {
m.logger.Warnf("Failed to get setting %v for data engine %v, using default value %d",
types.SettingNameDataEngineHugepageLimit, longhorn.DataEngineTypeV2, defaultHugePageLimitInMiB)
hugePageLimitInMiB = defaultHugePageLimitInMiB
m.logger.Debugf("Failed to fetch v2-data-engine-hugepage-limit setting, using default value: %d", 2048)
hugePageLimitInMiB = 2048
}
capacity := kubeNode.Status.Capacity

View File

@ -30,7 +30,7 @@ func NewFakeEnvironmentCheckMonitor(logger logrus.FieldLogger, ds *datastore.Dat
ctx, quit := context.WithCancel(context.Background())
m := &FakeEnvironmentCheckMonitor{
baseMonitor: newBaseMonitor(ctx, quit, logger, ds, environmentCheckMonitorSyncPeriod),
baseMonitor: newBaseMonitor(ctx, quit, logger, ds, EnvironmentCheckMonitorSyncPeriod),
nodeName: nodeName,

View File

@ -238,9 +238,9 @@ func (nc *NodeController) isResponsibleForSnapshot(obj interface{}) bool {
}
func (nc *NodeController) snapshotHashRequired(volume *longhorn.Volume) bool {
dataIntegrityImmediateChecking, err := nc.ds.GetSettingAsBoolByDataEngine(types.SettingNameSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation, volume.Spec.DataEngine)
dataIntegrityImmediateChecking, err := nc.ds.GetSettingAsBool(types.SettingNameSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation)
if err != nil {
nc.logger.WithError(err).Warnf("Failed to get %v setting for data engine %v", types.SettingNameSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation, volume.Spec.DataEngine)
nc.logger.WithError(err).Warnf("Failed to get %v setting", types.SettingNameSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation)
return false
}
if !dataIntegrityImmediateChecking {
@ -767,7 +767,6 @@ func (nc *NodeController) findNotReadyAndReadyDiskMaps(node *longhorn.Node, coll
node.Status.DiskStatus[diskName].DiskDriver = diskInfo.DiskDriver
node.Status.DiskStatus[diskName].DiskName = diskInfo.DiskName
node.Status.DiskStatus[diskName].DiskPath = diskInfo.Path
readyDiskInfoMap[diskID][diskName] = diskInfo
}
}
@ -902,7 +901,6 @@ func (nc *NodeController) updateDiskStatusSchedulableCondition(node *longhorn.No
diskStatus.StorageScheduled = storageScheduled
diskStatus.ScheduledReplica = scheduledReplica
diskStatus.ScheduledBackingImage = scheduledBackingImage
// check disk pressure
info, err := nc.scheduler.GetDiskSchedulingInfo(disk, diskStatus)
if err != nil {
@ -1151,7 +1149,7 @@ func (nc *NodeController) cleanUpBackingImagesInDisks(node *longhorn.Node) error
settingValue, err := nc.ds.GetSettingAsInt(types.SettingNameBackingImageCleanupWaitInterval)
if err != nil {
log.WithError(err).Warnf("Failed to get %v setting, won't do cleanup for backing images", types.SettingNameBackingImageCleanupWaitInterval)
log.WithError(err).Warnf("Failed to get setting %v, won't do cleanup for backing images", types.SettingNameBackingImageCleanupWaitInterval)
return nil
}
waitInterval := time.Duration(settingValue) * time.Minute
@ -1301,7 +1299,7 @@ func (nc *NodeController) enqueueNodeForMonitor(key string) {
func (nc *NodeController) syncOrphans(node *longhorn.Node, collectedDataInfo map[string]*monitor.CollectedDiskInfo) error {
autoDeleteGracePeriod, err := nc.ds.GetSettingAsInt(types.SettingNameOrphanResourceAutoDeletionGracePeriod)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting", types.SettingNameOrphanResourceAutoDeletionGracePeriod)
return errors.Wrapf(err, "failed to get setting %v", types.SettingNameOrphanResourceAutoDeletionGracePeriod)
}
for diskName, diskInfo := range collectedDataInfo {
@ -1846,7 +1844,8 @@ func (nc *NodeController) setReadyAndSchedulableConditions(node *longhorn.Node,
nc.eventRecorder, node, corev1.EventTypeNormal)
}
disableSchedulingOnCordonedNode, err := nc.ds.GetSettingAsBool(types.SettingNameDisableSchedulingOnCordonedNode)
disableSchedulingOnCordonedNode, err :=
nc.ds.GetSettingAsBool(types.SettingNameDisableSchedulingOnCordonedNode)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting", types.SettingNameDisableSchedulingOnCordonedNode)
}
@ -1955,7 +1954,7 @@ func (nc *NodeController) SetSchedulableCondition(node *longhorn.Node, kubeNode
func (nc *NodeController) clearDelinquentLeasesIfNodeNotReady(node *longhorn.Node) error {
enabled, err := nc.ds.GetSettingAsBool(types.SettingNameRWXVolumeFastFailover)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting", types.SettingNameRWXVolumeFastFailover)
return errors.Wrapf(err, "failed to get setting %v", types.SettingNameRWXVolumeFastFailover)
}
if !enabled {
return nil

View File

@ -380,11 +380,6 @@ func (rc *ReplicaController) CreateInstance(obj interface{}) (*longhorn.Instance
return nil, err
}
r.Status.Starting = true
if r, err = rc.ds.UpdateReplicaStatus(r); err != nil {
return nil, errors.Wrapf(err, "failed to update replica %v status.starting to true before sending instance create request", r.Name)
}
return c.ReplicaInstanceCreate(&engineapi.ReplicaInstanceCreateRequest{
Replica: r,
DiskName: diskName,

View File

@ -350,6 +350,7 @@ func (sc *SettingController) syncDangerZoneSettingsForManagedComponents(settingN
types.SettingNameV1DataEngine,
types.SettingNameV2DataEngine,
types.SettingNameGuaranteedInstanceManagerCPU,
types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU,
}
if slices.Contains(dangerSettingsRequiringSpecificDataEngineVolumesDetached, settingName) {
@ -359,11 +360,14 @@ func (sc *SettingController) syncDangerZoneSettingsForManagedComponents(settingN
return errors.Wrapf(err, "failed to apply %v setting to Longhorn instance managers when there are attached volumes. "+
"It will be eventually applied", settingName)
}
case types.SettingNameGuaranteedInstanceManagerCPU:
for _, dataEngine := range []longhorn.DataEngineType{longhorn.DataEngineTypeV1, longhorn.DataEngineTypeV2} {
if err := sc.updateInstanceManagerCPURequest(dataEngine); err != nil {
return err
}
case types.SettingNameGuaranteedInstanceManagerCPU, types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU:
dataEngine := longhorn.DataEngineTypeV1
if settingName == types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU {
dataEngine = longhorn.DataEngineTypeV2
}
if err := sc.updateInstanceManagerCPURequest(dataEngine); err != nil {
return err
}
}
}
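Note on the hunk above: the earlier loop that refreshed the CPU request for both data engines on every change is replaced by a per-setting dispatch, so each CPU-guarantee setting now maps to exactly one engine. A minimal standalone sketch of that mapping, with plain strings standing in for the longhorn-manager setting and engine types (the constant values are illustrative assumptions, not the real definitions):

package main

import "fmt"

// Illustrative stand-ins for types.SettingNameGuaranteedInstanceManagerCPU
// and types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU.
const (
    settingGuaranteedIMCPU   = "guaranteed-instance-manager-cpu"
    settingV2GuaranteedIMCPU = "v2-data-engine-guaranteed-instance-manager-cpu"
)

// dataEngineFor mirrors the dispatch above: default to the v1 engine and
// pick v2 only when the v2-specific setting changed.
func dataEngineFor(settingName string) string {
    if settingName == settingV2GuaranteedIMCPU {
        return "v2"
    }
    return "v1"
}

func main() {
    for _, s := range []string{settingGuaranteedIMCPU, settingV2GuaranteedIMCPU} {
        fmt.Printf("%s -> refresh CPU request for data engine %s\n", s, dataEngineFor(s))
    }
}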
@@ -1221,6 +1225,10 @@ func (sc *SettingController) enqueueSettingForNode(obj interface{}) {
// updateInstanceManagerCPURequest deletes all instance manager pods immediately with the updated CPU request.
func (sc *SettingController) updateInstanceManagerCPURequest(dataEngine longhorn.DataEngineType) error {
settingName := types.SettingNameGuaranteedInstanceManagerCPU
if types.IsDataEngineV2(dataEngine) {
settingName = types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU
}
imPodList, err := sc.ds.ListInstanceManagerPodsBy("", "", longhorn.InstanceManagerTypeAllInOne, dataEngine)
if err != nil {
return errors.Wrap(err, "failed to list instance manager pods for toleration update")
@@ -1261,10 +1269,10 @@ func (sc *SettingController) updateInstanceManagerCPURequest(dataEngine longhorn
stopped, _, err := sc.ds.AreAllEngineInstancesStopped(dataEngine)
if err != nil {
return errors.Wrapf(err, "failed to check engine instances for %v setting update for data engine %v", types.SettingNameGuaranteedInstanceManagerCPU, dataEngine)
return errors.Wrapf(err, "failed to check engine instances for %v setting update", settingName)
}
if !stopped {
return &types.ErrorInvalidState{Reason: fmt.Sprintf("failed to apply %v setting for data engine %v to Longhorn components when there are running engine instances. It will be eventually applied", types.SettingNameGuaranteedInstanceManagerCPU, dataEngine)}
return &types.ErrorInvalidState{Reason: fmt.Sprintf("failed to apply %v setting to Longhorn components when there are running engine instances. It will be eventually applied", settingName)}
}
for _, pod := range notUpdatedPods {
@@ -1385,22 +1393,22 @@ const (
ClusterInfoVolumeNumOfReplicas = util.StructName("LonghornVolumeNumberOfReplicas")
ClusterInfoVolumeNumOfSnapshots = util.StructName("LonghornVolumeNumberOfSnapshots")
ClusterInfoPodAvgCPUUsageFmt = "Longhorn%sAverageCpuUsageMilliCores"
ClusterInfoPodAvgMemoryUsageFmt = "Longhorn%sAverageMemoryUsageBytes"
ClusterInfoSettingFmt = "LonghornSetting%s"
ClusterInfoVolumeAccessModeCountFmt = "LonghornVolumeAccessMode%sCount"
ClusterInfoVolumeDataEngineCountFmt = "LonghornVolumeDataEngine%sCount"
ClusterInfoVolumeDataLocalityCountFmt = "LonghornVolumeDataLocality%sCount"
ClusterInfoVolumeEncryptedCountFmt = "LonghornVolumeEncrypted%sCount"
ClusterInfoVolumeFrontendCountFmt = "LonghornVolumeFrontend%sCount"
ClusterInfoVolumeReplicaAutoBalanceCountFmt = "LonghornVolumeReplicaAutoBalance%sCount"
ClusterInfoVolumeReplicaSoftAntiAffinityCountFmt = "LonghornVolumeReplicaSoftAntiAffinity%sCount"
ClusterInfoVolumeReplicaZoneSoftAntiAffinityCountFmt = "LonghornVolumeReplicaZoneSoftAntiAffinity%sCount"
ClusterInfoVolumeReplicaDiskSoftAntiAffinityCountFmt = "LonghornVolumeReplicaDiskSoftAntiAffinity%sCount"
ClusterInfoVolumeRestoreVolumeRecurringJobCountFmt = "LonghornVolumeRestoreVolumeRecurringJob%sCount"
ClusterInfoVolumeSnapshotDataIntegrityCountFmt = "LonghornVolumeSnapshotDataIntegrity%sCount"
ClusterInfoVolumeUnmapMarkSnapChainRemovedCountFmt = "LonghornVolumeUnmapMarkSnapChainRemoved%sCount"
ClusterInfoVolumeFreezeFilesystemForV1DataEngineSnapshotCountFmt = "LonghornVolumeFreezeFilesystemForV1DataEngineSnapshot%sCount"
ClusterInfoPodAvgCPUUsageFmt = "Longhorn%sAverageCpuUsageMilliCores"
ClusterInfoPodAvgMemoryUsageFmt = "Longhorn%sAverageMemoryUsageBytes"
ClusterInfoSettingFmt = "LonghornSetting%s"
ClusterInfoVolumeAccessModeCountFmt = "LonghornVolumeAccessMode%sCount"
ClusterInfoVolumeDataEngineCountFmt = "LonghornVolumeDataEngine%sCount"
ClusterInfoVolumeDataLocalityCountFmt = "LonghornVolumeDataLocality%sCount"
ClusterInfoVolumeEncryptedCountFmt = "LonghornVolumeEncrypted%sCount"
ClusterInfoVolumeFrontendCountFmt = "LonghornVolumeFrontend%sCount"
ClusterInfoVolumeReplicaAutoBalanceCountFmt = "LonghornVolumeReplicaAutoBalance%sCount"
ClusterInfoVolumeReplicaSoftAntiAffinityCountFmt = "LonghornVolumeReplicaSoftAntiAffinity%sCount"
ClusterInfoVolumeReplicaZoneSoftAntiAffinityCountFmt = "LonghornVolumeReplicaZoneSoftAntiAffinity%sCount"
ClusterInfoVolumeReplicaDiskSoftAntiAffinityCountFmt = "LonghornVolumeReplicaDiskSoftAntiAffinity%sCount"
ClusterInfoVolumeRestoreVolumeRecurringJobCountFmt = "LonghornVolumeRestoreVolumeRecurringJob%sCount"
ClusterInfoVolumeSnapshotDataIntegrityCountFmt = "LonghornVolumeSnapshotDataIntegrity%sCount"
ClusterInfoVolumeUnmapMarkSnapChainRemovedCountFmt = "LonghornVolumeUnmapMarkSnapChainRemoved%sCount"
ClusterInfoVolumeFreezeFilesystemForSnapshotCountFmt = "LonghornVolumeFreezeFilesystemForSnapshot%sCount"
)
// Node Scope Info: will be sent from all Longhorn cluster nodes
@@ -1601,6 +1609,7 @@ func (info *ClusterInfo) collectSettings() error {
types.SettingNameSystemManagedPodsImagePullPolicy: true,
types.SettingNameV1DataEngine: true,
types.SettingNameV2DataEngine: true,
types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU: true,
}
settings, err := info.ds.ListSettings()
@@ -1658,15 +1667,12 @@ func (info *ClusterInfo) convertSettingValueType(setting *longhorn.Setting) (con
switch definition.Type {
case types.SettingTypeInt:
if !definition.DataEngineSpecific {
return strconv.ParseInt(setting.Value, 10, 64)
}
return strconv.ParseInt(setting.Value, 10, 64)
case types.SettingTypeBool:
if !definition.DataEngineSpecific {
return strconv.ParseBool(setting.Value)
}
return strconv.ParseBool(setting.Value)
default:
return setting.Value, nil
}
return setting.Value, nil
}
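Note: with the DataEngineSpecific branches collapsed, convertSettingValueType parses the stored string purely by the setting's declared type. A self-contained sketch of the same behavior, with string literals standing in for types.SettingTypeInt and types.SettingTypeBool:

package main

import (
    "fmt"
    "strconv"
)

// parseByType mimics the simplified convertSettingValueType: the stored
// value is converted based only on the setting's declared type.
func parseByType(settingType, value string) (any, error) {
    switch settingType {
    case "int":
        return strconv.ParseInt(value, 10, 64)
    case "bool":
        return strconv.ParseBool(value)
    default:
        return value, nil
    }
}

func main() {
    n, _ := parseByType("int", "42")
    b, _ := parseByType("bool", "true")
    s, _ := parseByType("string", "best-effort")
    fmt.Println(n, b, s) // 42 true best-effort
}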
func (info *ClusterInfo) collectVolumesInfo() error {
@@ -1735,31 +1741,29 @@ func (info *ClusterInfo) collectVolumesInfo() error {
frontendCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeFrontendCountFmt, frontend))]++
}
replicaAutoBalance := info.collectSettingInVolume(string(volume.Spec.ReplicaAutoBalance), string(longhorn.ReplicaAutoBalanceIgnored), volume.Spec.DataEngine, types.SettingNameReplicaAutoBalance)
replicaAutoBalance := info.collectSettingInVolume(string(volume.Spec.ReplicaAutoBalance), string(longhorn.ReplicaAutoBalanceIgnored), types.SettingNameReplicaAutoBalance)
replicaAutoBalanceCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeReplicaAutoBalanceCountFmt, util.ConvertToCamel(string(replicaAutoBalance), "-")))]++
replicaSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaSoftAntiAffinity), string(longhorn.ReplicaSoftAntiAffinityDefault), volume.Spec.DataEngine, types.SettingNameReplicaSoftAntiAffinity)
replicaSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaSoftAntiAffinity), string(longhorn.ReplicaSoftAntiAffinityDefault), types.SettingNameReplicaSoftAntiAffinity)
replicaSoftAntiAffinityCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeReplicaSoftAntiAffinityCountFmt, util.ConvertToCamel(string(replicaSoftAntiAffinity), "-")))]++
replicaZoneSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaZoneSoftAntiAffinity), string(longhorn.ReplicaZoneSoftAntiAffinityDefault), volume.Spec.DataEngine, types.SettingNameReplicaZoneSoftAntiAffinity)
replicaZoneSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaZoneSoftAntiAffinity), string(longhorn.ReplicaZoneSoftAntiAffinityDefault), types.SettingNameReplicaZoneSoftAntiAffinity)
replicaZoneSoftAntiAffinityCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeReplicaZoneSoftAntiAffinityCountFmt, util.ConvertToCamel(string(replicaZoneSoftAntiAffinity), "-")))]++
replicaDiskSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaDiskSoftAntiAffinity), string(longhorn.ReplicaDiskSoftAntiAffinityDefault), volume.Spec.DataEngine, types.SettingNameReplicaDiskSoftAntiAffinity)
replicaDiskSoftAntiAffinity := info.collectSettingInVolume(string(volume.Spec.ReplicaDiskSoftAntiAffinity), string(longhorn.ReplicaDiskSoftAntiAffinityDefault), types.SettingNameReplicaDiskSoftAntiAffinity)
replicaDiskSoftAntiAffinityCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeReplicaDiskSoftAntiAffinityCountFmt, util.ConvertToCamel(string(replicaDiskSoftAntiAffinity), "-")))]++
restoreVolumeRecurringJob := info.collectSettingInVolume(string(volume.Spec.RestoreVolumeRecurringJob), string(longhorn.RestoreVolumeRecurringJobDefault), volume.Spec.DataEngine, types.SettingNameRestoreVolumeRecurringJobs)
restoreVolumeRecurringJob := info.collectSettingInVolume(string(volume.Spec.RestoreVolumeRecurringJob), string(longhorn.RestoreVolumeRecurringJobDefault), types.SettingNameRestoreVolumeRecurringJobs)
restoreVolumeRecurringJobCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeRestoreVolumeRecurringJobCountFmt, util.ConvertToCamel(string(restoreVolumeRecurringJob), "-")))]++
snapshotDataIntegrity := info.collectSettingInVolume(string(volume.Spec.SnapshotDataIntegrity), string(longhorn.SnapshotDataIntegrityIgnored), volume.Spec.DataEngine, types.SettingNameSnapshotDataIntegrity)
snapshotDataIntegrity := info.collectSettingInVolume(string(volume.Spec.SnapshotDataIntegrity), string(longhorn.SnapshotDataIntegrityIgnored), types.SettingNameSnapshotDataIntegrity)
snapshotDataIntegrityCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeSnapshotDataIntegrityCountFmt, util.ConvertToCamel(string(snapshotDataIntegrity), "-")))]++
unmapMarkSnapChainRemoved := info.collectSettingInVolume(string(volume.Spec.UnmapMarkSnapChainRemoved), string(longhorn.UnmapMarkSnapChainRemovedIgnored), volume.Spec.DataEngine, types.SettingNameRemoveSnapshotsDuringFilesystemTrim)
unmapMarkSnapChainRemoved := info.collectSettingInVolume(string(volume.Spec.UnmapMarkSnapChainRemoved), string(longhorn.UnmapMarkSnapChainRemovedIgnored), types.SettingNameRemoveSnapshotsDuringFilesystemTrim)
unmapMarkSnapChainRemovedCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeUnmapMarkSnapChainRemovedCountFmt, util.ConvertToCamel(string(unmapMarkSnapChainRemoved), "-")))]++
if types.IsDataEngineV1(volume.Spec.DataEngine) {
freezeFilesystemForSnapshot := info.collectSettingInVolume(string(volume.Spec.FreezeFilesystemForSnapshot), string(longhorn.FreezeFilesystemForSnapshotDefault), volume.Spec.DataEngine, types.SettingNameFreezeFilesystemForSnapshot)
freezeFilesystemForSnapshotCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeFreezeFilesystemForV1DataEngineSnapshotCountFmt, util.ConvertToCamel(string(freezeFilesystemForSnapshot), "-")))]++
}
freezeFilesystemForSnapshot := info.collectSettingInVolume(string(volume.Spec.FreezeFilesystemForSnapshot), string(longhorn.FreezeFilesystemForSnapshotDefault), types.SettingNameFreezeFilesystemForSnapshot)
freezeFilesystemForSnapshotCountStruct[util.StructName(fmt.Sprintf(ClusterInfoVolumeFreezeFilesystemForSnapshotCountFmt, util.ConvertToCamel(string(freezeFilesystemForSnapshot), "-")))]++
}
info.structFields.fields.Append(ClusterInfoVolumeNumOfReplicas, totalVolumeNumOfReplicas)
info.structFields.fields.AppendCounted(accessModeCountStruct)
@@ -1810,14 +1814,13 @@ func (info *ClusterInfo) collectVolumesInfo() error {
return nil
}
func (info *ClusterInfo) collectSettingInVolume(volumeSpecValue, ignoredValue string, dataEngine longhorn.DataEngineType, settingName types.SettingName) string {
func (info *ClusterInfo) collectSettingInVolume(volumeSpecValue, ignoredValue string, settingName types.SettingName) string {
if volumeSpecValue == ignoredValue {
globalSettingValue, err := info.ds.GetSettingValueExistedByDataEngine(settingName, dataEngine)
globalSetting, err := info.ds.GetSettingWithAutoFillingRO(settingName)
if err != nil {
info.logger.WithError(err).Warnf("Failed to get Longhorn Setting %v", settingName)
}
return globalSettingValue
return globalSetting.Value
}
return volumeSpecValue
}
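Note: collectSettingInVolume now resolves a volume's effective value with a single global lookup; the per-volume spec value wins unless it equals the ignored/default sentinel, in which case the cluster-wide setting is reported. A minimal sketch with a map standing in for the datastore:

package main

import "fmt"

// effectiveValue mirrors the fallback above: return the volume-level value
// unless it is the ignored sentinel, otherwise fall back to the global
// setting. The globals map is a stand-in for the datastore lookup.
func effectiveValue(volumeSpecValue, ignoredValue, settingName string, globals map[string]string) string {
    if volumeSpecValue == ignoredValue {
        return globals[settingName]
    }
    return volumeSpecValue
}

func main() {
    globals := map[string]string{"replica-auto-balance": "least-effort"}
    fmt.Println(effectiveValue("ignored", "ignored", "replica-auto-balance", globals))     // least-effort
    fmt.Println(effectiveValue("best-effort", "ignored", "replica-auto-balance", globals)) // best-effort
}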

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"io"
"reflect"
"regexp"
"strings"
"time"
@@ -17,7 +16,6 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/flowcontrol"
"k8s.io/kubernetes/pkg/apis/core"
"k8s.io/kubernetes/pkg/controller"
@@ -60,8 +58,6 @@ type ShareManagerController struct {
ds *datastore.DataStore
cacheSyncs []cache.InformerSynced
backoff *flowcontrol.Backoff
}
func NewShareManagerController(
@@ -91,8 +87,6 @@ func NewShareManagerController(
eventRecorder: eventBroadcaster.NewRecorder(scheme, corev1.EventSource{Component: "longhorn-share-manager-controller"}),
ds: ds,
backoff: newBackoff(context.TODO()),
}
var err error
@@ -881,16 +875,8 @@ func (c *ShareManagerController) syncShareManagerPod(sm *longhorn.ShareManager)
return nil
}
backoffID := sm.Name
if c.backoff.IsInBackOffSinceUpdate(backoffID, time.Now()) {
log.Infof("Skipping pod creation for share manager %s, will retry after backoff of %s", sm.Name, c.backoff.Get(backoffID))
} else {
log.Infof("Creating pod for share manager %s", sm.Name)
c.backoff.Next(backoffID, time.Now())
if pod, err = c.createShareManagerPod(sm); err != nil {
return errors.Wrap(err, "failed to create pod for share manager")
}
if pod, err = c.createShareManagerPod(sm); err != nil {
return errors.Wrap(err, "failed to create pod for share manager")
}
}
@@ -1263,8 +1249,6 @@ func (c *ShareManagerController) createShareManagerPod(sm *longhorn.ShareManager
var affinity *corev1.Affinity
var formatOptions []string
if pv.Spec.StorageClassName != "" {
sc, err := c.ds.GetStorageClass(pv.Spec.StorageClassName)
if err != nil {
@@ -1287,9 +1271,6 @@ func (c *ShareManagerController) createShareManagerPod(sm *longhorn.ShareManager
}
tolerationsFromStorageClass := c.getShareManagerTolerationsFromStorageClass(sc)
tolerations = append(tolerations, tolerationsFromStorageClass...)
// A storage class can override mkfs parameters which need to be passed to the share manager
formatOptions = c.splitFormatOptions(sc)
}
}
@@ -1327,7 +1308,7 @@ func (c *ShareManagerController) createShareManagerPod(sm *longhorn.ShareManager
}
manifest := c.createPodManifest(sm, volume.Spec.DataEngine, annotations, tolerations, affinity, imagePullPolicy, nil, registrySecret,
priorityClass, nodeSelector, fsType, formatOptions, mountOptions, cryptoKey, cryptoParams, nfsConfig)
priorityClass, nodeSelector, fsType, mountOptions, cryptoKey, cryptoParams, nfsConfig)
storageNetwork, err := c.ds.GetSettingWithAutoFillingRO(types.SettingNameStorageNetwork)
if err != nil {
@@ -1354,29 +1335,6 @@ func (c *ShareManagerController) createShareManagerPod(sm *longhorn.ShareManager
return pod, nil
}
func (c *ShareManagerController) splitFormatOptions(sc *storagev1.StorageClass) []string {
if mkfsParams, ok := sc.Parameters["mkfsParams"]; ok {
regex, err := regexp.Compile("-[a-zA-Z_]+(?:\\s*=?\\s*(?:\"[^\"]*\"|'[^']*'|[^\\r\\n\\t\\f\\v -]+))?")
if err != nil {
c.logger.WithError(err).Warnf("Failed to compile regex for mkfsParams %v, will continue the share manager pod creation", mkfsParams)
return nil
}
matches := regex.FindAllString(mkfsParams, -1)
if matches == nil {
c.logger.Warnf("No valid mkfs parameters found in \"%v\", will continue the share manager pod creation", mkfsParams)
return nil
}
return matches
}
c.logger.Debug("No mkfs parameters found, will continue the share manager pod creation")
return nil
}
func (c *ShareManagerController) createServiceManifest(sm *longhorn.ShareManager) *corev1.Service {
service := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
@@ -1464,7 +1422,7 @@ func (c *ShareManagerController) createLeaseManifest(sm *longhorn.ShareManager)
func (c *ShareManagerController) createPodManifest(sm *longhorn.ShareManager, dataEngine longhorn.DataEngineType, annotations map[string]string, tolerations []corev1.Toleration,
affinity *corev1.Affinity, pullPolicy corev1.PullPolicy, resourceReq *corev1.ResourceRequirements, registrySecret, priorityClass string,
nodeSelector map[string]string, fsType string, formatOptions []string, mountOptions []string, cryptoKey string, cryptoParams *crypto.EncryptParams,
nodeSelector map[string]string, fsType string, mountOptions []string, cryptoKey string, cryptoParams *crypto.EncryptParams,
nfsConfig *nfsServerConfig) *corev1.Pod {
// command args for the share-manager
@@ -1535,15 +1493,6 @@ func (c *ShareManagerController) createPodManifest(sm *longhorn.ShareManager, da
},
}
if len(formatOptions) > 0 {
podSpec.Spec.Containers[0].Env = append(podSpec.Spec.Containers[0].Env, []corev1.EnvVar{
{
Name: "FS_FORMAT_OPTIONS",
Value: fmt.Sprint(strings.Join(formatOptions, ":")),
},
}...)
}
// this is an encrypted volume the cryptoKey is base64 encoded
if len(cryptoKey) > 0 {
podSpec.Spec.Containers[0].Env = append(podSpec.Spec.Containers[0].Env, []corev1.EnvVar{

View File

@@ -1,148 +0,0 @@
package controller
import (
"github.com/sirupsen/logrus"
storagev1 "k8s.io/api/storage/v1"
"reflect"
"testing"
)
func TestShareManagerController_splitFormatOptions(t *testing.T) {
type args struct {
sc *storagev1.StorageClass
}
tests := []struct {
name string
args args
want []string
}{
{
name: "mkfsParams with no mkfsParams",
args: args{
sc: &storagev1.StorageClass{},
},
want: nil,
},
{
name: "mkfsParams with empty options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "",
},
},
},
want: nil,
},
{
name: "mkfsParams with multiple options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-O someopt -L label -n",
},
},
},
want: []string{"-O someopt", "-L label", "-n"},
},
{
name: "mkfsParams with underscore options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-O someopt -label_value test -L label -n",
},
},
},
want: []string{"-O someopt", "-label_value test", "-L label", "-n"},
},
{
name: "mkfsParams with quoted options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-O someopt -label_value \"test\" -L label -n",
},
},
},
want: []string{"-O someopt", "-label_value \"test\"", "-L label", "-n"},
},
{
name: "mkfsParams with equal sign quoted options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-O someopt -label_value=\"test\" -L label -n",
},
},
},
want: []string{"-O someopt", "-label_value=\"test\"", "-L label", "-n"},
},
{
name: "mkfsParams with equal sign quoted options with spaces",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-O someopt -label_value=\"test label \" -L label -n",
},
},
},
want: []string{"-O someopt", "-label_value=\"test label \"", "-L label", "-n"},
},
{
name: "mkfsParams with equal sign quoted options and different spacing",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-n -O someopt -label_value=\"test\" -Llabel",
},
},
},
want: []string{"-n", "-O someopt", "-label_value=\"test\"", "-Llabel"},
},
{
name: "mkfsParams with special characters in options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-I 256 -b 4096 -O ^metadata_csum,^64bit",
},
},
},
want: []string{"-I 256", "-b 4096", "-O ^metadata_csum,^64bit"},
},
{
name: "mkfsParams with no spacing in options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-Osomeopt -Llabel",
},
},
},
want: []string{"-Osomeopt", "-Llabel"},
},
{
name: "mkfsParams with different spacing between options",
args: args{
sc: &storagev1.StorageClass{
Parameters: map[string]string{
"mkfsParams": "-Osomeopt -L label",
},
},
},
want: []string{"-Osomeopt", "-L label"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
c := &ShareManagerController{
baseController: newBaseController("test-controller", logrus.StandardLogger()),
}
if got := c.splitFormatOptions(tt.args.sc); !reflect.DeepEqual(got, tt.want) {
t.Errorf("splitFormatOptions() = %v (len %d), want %v (len %d)",
got, len(got), tt.want, len(tt.want))
}
})
}
}
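Note: the table tests above covered splitFormatOptions, which is deleted from the share manager controller earlier in this diff along with the mkfsParams plumbing. For reference, the removed regex can still be exercised standalone; a sketch reusing that exact pattern:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // The pattern from the deleted splitFormatOptions: a dash-prefixed flag,
    // optionally followed by '=' and a quoted or unquoted value.
    re := regexp.MustCompile(`-[a-zA-Z_]+(?:\s*=?\s*(?:"[^"]*"|'[^']*'|[^\r\n\t\f\v -]+))?`)
    fmt.Printf("%q\n", re.FindAllString("-O someopt -L label -n", -1))
    // Prints ["-O someopt" "-L label" "-n"], matching the deleted test table.
}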

View File

@@ -845,21 +845,8 @@ func (c *UninstallController) deleteEngineImages(engineImages map[string]*longho
for _, ei := range engineImages {
log := getLoggerForEngineImage(c.logger, ei)
if ei.Annotations == nil {
ei.Annotations = make(map[string]string)
}
timeout := metav1.NewTime(time.Now().Add(-gracePeriod))
if ei.DeletionTimestamp == nil {
if defaultImage, errGetSetting := c.ds.GetSettingValueExisted(types.SettingNameDefaultEngineImage); errGetSetting != nil {
return errors.Wrap(errGetSetting, "failed to get default engine image setting")
} else if ei.Spec.Image == defaultImage {
log.Infof("Adding annotation %v to engine image %s to mark for deletion", types.GetLonghornLabelKey(types.DeleteEngineImageFromLonghorn), ei.Name)
ei.Annotations[types.GetLonghornLabelKey(types.DeleteEngineImageFromLonghorn)] = ""
if _, err := c.ds.UpdateEngineImage(ei); err != nil {
return errors.Wrap(err, "failed to update engine image annotations to mark for deletion")
}
}
if errDelete := c.ds.DeleteEngineImage(ei.Name); errDelete != nil {
if datastore.ErrorIsNotFound(errDelete) {
log.Info("EngineImage is not found")
@@ -904,16 +891,7 @@ func (c *UninstallController) deleteNodes(nodes map[string]*longhorn.Node) (err
for _, node := range nodes {
log := getLoggerForNode(c.logger, node)
if node.Annotations == nil {
node.Annotations = make(map[string]string)
}
if node.DeletionTimestamp == nil {
log.Infof("Adding annotation %v to node %s to mark for deletion", types.GetLonghornLabelKey(types.DeleteNodeFromLonghorn), node.Name)
node.Annotations[types.GetLonghornLabelKey(types.DeleteNodeFromLonghorn)] = ""
if _, err := c.ds.UpdateNode(node); err != nil {
return errors.Wrap(err, "failed to update node annotations to mark for deletion")
}
if errDelete := c.ds.DeleteNode(node.Name); errDelete != nil {
if datastore.ErrorIsNotFound(errDelete) {
log.Info("Node is not found")

View File

@@ -1,13 +1,9 @@
package controller
import (
"context"
"time"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/util/flowcontrol"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -17,32 +13,6 @@ import (
longhorn "github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
const (
podRecreateInitBackoff = 1 * time.Second
podRecreateMaxBackoff = 120 * time.Second
backoffGCPeriod = 12 * time.Hour
)
// newBackoff returns a flowcontrol.Backoff and starts a background GC loop.
func newBackoff(ctx context.Context) *flowcontrol.Backoff {
backoff := flowcontrol.NewBackOff(podRecreateInitBackoff, podRecreateMaxBackoff)
go func() {
ticker := time.NewTicker(backoffGCPeriod)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
backoff.GC()
}
}
}()
return backoff
}
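Note: this range removes the shared newBackoff helper and its constants now that share manager pod recreation is no longer rate-limited. As a reference for the client-go pattern the deleted code wired together, a runnable sketch (assuming k8s.io/client-go is on the module path; the ID string is hypothetical):

package main

import (
    "fmt"
    "time"

    "k8s.io/client-go/util/flowcontrol"
)

func main() {
    // Per-ID exponential backoff: 1s initial, 120s cap, as in the removed constants.
    backoff := flowcontrol.NewBackOff(1*time.Second, 120*time.Second)
    id := "share-manager-example"
    for i := 0; i < 3; i++ {
        if backoff.IsInBackOffSinceUpdate(id, time.Now()) {
            fmt.Printf("attempt %d: skip, retry after %s\n", i, backoff.Get(id))
            continue
        }
        backoff.Next(id, time.Now()) // record the attempt and grow the delay
        fmt.Printf("attempt %d: proceed\n", i)
    }
    backoff.GC() // the removed helper ran this periodically on a 12h ticker
}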
func hasReplicaEvictionRequested(rs map[string]*longhorn.Replica) bool {
for _, r := range rs {
if r.Spec.EvictionRequested {

View File

@@ -345,7 +345,7 @@ func (vac *VolumeAttachmentController) handleNodeCordoned(va *longhorn.VolumeAtt
detachManuallyAttachedVolumesWhenCordoned, err := vac.ds.GetSettingAsBool(types.SettingNameDetachManuallyAttachedVolumesWhenCordoned)
if err != nil {
log.WithError(err).Warnf("Failed to get %v setting", types.SettingNameDetachManuallyAttachedVolumesWhenCordoned)
log.WithError(err).Warnf("Failed to get setting %v", types.SettingNameDetachManuallyAttachedVolumesWhenCordoned)
return
}

View File

@@ -890,8 +890,7 @@ func isAutoSalvageNeeded(rs map[string]*longhorn.Replica) bool {
if isFirstAttachment(rs) {
return areAllReplicasFailed(rs)
}
// We need to auto-salvage if there are no healthy and active replicas including those marked for deletion,
return getHealthyAndActiveReplicaCount(rs, true) == 0 && getFailedReplicaCount(rs) > 0
return getHealthyAndActiveReplicaCount(rs) == 0 && getFailedReplicaCount(rs) > 0
}
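Note: with the includeMarkedForDeletion flag gone, auto-salvage triggers whenever no replica is healthy and active but at least one has failed. A condensed sketch of that decision, ignoring the first-attachment branch and reducing replica state to two booleans for illustration:

package main

import "fmt"

// replica is a stripped-down stand-in for longhorn.Replica.
type replica struct {
    healthyAndActive bool
    failed           bool
}

// needsAutoSalvage mirrors the simplified predicate above: salvage when
// nothing is healthy and active but at least one replica has failed.
func needsAutoSalvage(rs []replica) bool {
    healthy, failed := 0, 0
    for _, r := range rs {
        if r.healthyAndActive {
            healthy++
        }
        if r.failed {
            failed++
        }
    }
    return healthy == 0 && failed > 0
}

func main() {
    fmt.Println(needsAutoSalvage([]replica{{failed: true}, {failed: true}}))           // true
    fmt.Println(needsAutoSalvage([]replica{{healthyAndActive: true}, {failed: true}})) // false
}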
func areAllReplicasFailed(rs map[string]*longhorn.Replica) bool {
@@ -914,7 +913,7 @@ func isFirstAttachment(rs map[string]*longhorn.Replica) bool {
return true
}
func isHealthyAndActiveReplica(r *longhorn.Replica, includeMarkedForDeletion bool) bool {
func isHealthyAndActiveReplica(r *longhorn.Replica) bool {
if r.Spec.FailedAt != "" {
return false
}
@@ -924,11 +923,6 @@ func isHealthyAndActiveReplica(r *longhorn.Replica, includeMarkedForDeletion boo
if !r.Spec.Active {
return false
}
if !includeMarkedForDeletion {
if !r.DeletionTimestamp.IsZero() {
return false
}
}
return true
}
@@ -943,7 +937,7 @@ func isHealthyAndActiveReplica(r *longhorn.Replica, includeMarkedForDeletion boo
// it successfully became read/write in an engine) after spec.LastFailedAt. If the replica does not meet this condition,
// it is not "safe as last replica", and we should not clean up the other replicas for its volume.
func isSafeAsLastReplica(r *longhorn.Replica) bool {
if !isHealthyAndActiveReplica(r, false) {
if !isHealthyAndActiveReplica(r) {
return false
}
// We know r.Spec.LastHealthyAt != "" because r.Spec.HealthyAt != "" from isHealthyAndActiveReplica.
@@ -959,10 +953,10 @@ func isSafeAsLastReplica(r *longhorn.Replica) bool {
return true
}
func getHealthyAndActiveReplicaCount(rs map[string]*longhorn.Replica, includeMarkedForDeletion bool) int {
func getHealthyAndActiveReplicaCount(rs map[string]*longhorn.Replica) int {
count := 0
for _, r := range rs {
if isHealthyAndActiveReplica(r, includeMarkedForDeletion) {
if isHealthyAndActiveReplica(r) {
count++
}
}
@@ -1068,7 +1062,7 @@ func (c *VolumeController) cleanupCorruptedOrStaleReplicas(v *longhorn.Volume, r
}
func (c *VolumeController) cleanupFailedToScheduleReplicas(v *longhorn.Volume, rs map[string]*longhorn.Replica) (err error) {
healthyCount := getHealthyAndActiveReplicaCount(rs, false)
healthyCount := getHealthyAndActiveReplicaCount(rs)
var replicasToCleanUp []*longhorn.Replica
if hasReplicaEvictionRequested(rs) {
@@ -1101,7 +1095,7 @@ func (c *VolumeController) cleanupFailedToScheduleReplicas(v *longhorn.Volume, r
}
func (c *VolumeController) cleanupExtraHealthyReplicas(v *longhorn.Volume, e *longhorn.Engine, rs map[string]*longhorn.Replica) (err error) {
healthyCount := getHealthyAndActiveReplicaCount(rs, false)
healthyCount := getHealthyAndActiveReplicaCount(rs)
if healthyCount <= v.Spec.NumberOfReplicas {
return nil
}
@@ -1461,10 +1455,13 @@ func (c *VolumeController) ReconcileVolumeState(v *longhorn.Volume, es map[strin
return nil
}
// Reattach volume if
// - volume is detached unexpectedly and there are still healthy replicas
// - engine dead unexpectedly and there are still healthy replicas when the volume is not attached
if e.Status.CurrentState == longhorn.InstanceStateError {
if v.Status.CurrentNodeID != "" || (v.Spec.NodeID != "" && v.Status.CurrentNodeID == "" && v.Status.State != longhorn.VolumeStateAttached) {
log.Warn("Engine of volume dead unexpectedly, setting v.Status.Robustness to faulted")
msg := fmt.Sprintf("Engine of volume %v dead unexpectedly, setting v.Status.Robustness to faulted", v.Name)
log.Warn("Reattaching the volume since engine of volume dead unexpectedly")
msg := fmt.Sprintf("Engine of volume %v dead unexpectedly, reattach the volume", v.Name)
c.eventRecorder.Event(v, corev1.EventTypeWarning, constant.EventReasonDetachedUnexpectedly, msg)
e.Spec.LogRequested = true
for _, r := range rs {
@@ -1748,17 +1745,12 @@ func (c *VolumeController) reconcileVolumeCondition(v *longhorn.Volume, e *longh
if r.Spec.NodeID != "" {
continue
}
switch v.Spec.DataLocality {
case longhorn.DataLocalityStrictLocal:
if v.Spec.DataLocality == longhorn.DataLocalityStrictLocal {
if v.Spec.NodeID == "" {
continue
}
r.Spec.HardNodeAffinity = v.Spec.NodeID
case longhorn.DataLocalityBestEffort:
// For best-effort locality, wait until the volume is attached to a node before scheduling the replica.
if v.Spec.NodeID == "" {
continue
}
}
scheduledReplica, multiError, err := c.scheduler.ScheduleReplica(r, rs, v)
if err != nil {
@@ -1768,12 +1760,12 @@ func (c *VolumeController) reconcileVolumeCondition(v *longhorn.Volume, e *longh
if scheduledReplica == nil {
if r.Spec.HardNodeAffinity == "" {
log.WithField("replica", r.Name).Debug("Failed to schedule replica")
log.WithField("replica", r.Name).Warn("Failed to schedule replica")
v.Status.Conditions = types.SetCondition(v.Status.Conditions,
longhorn.VolumeConditionTypeScheduled, longhorn.ConditionStatusFalse,
longhorn.VolumeConditionReasonReplicaSchedulingFailure, "")
} else {
log.WithField("replica", r.Name).Debugf("Failed to schedule replica of volume with HardNodeAffinity = %v", r.Spec.HardNodeAffinity)
log.WithField("replica", r.Name).Warnf("Failed to schedule replica of volume with HardNodeAffinity = %v", r.Spec.HardNodeAffinity)
v.Status.Conditions = types.SetCondition(v.Status.Conditions,
longhorn.VolumeConditionTypeScheduled, longhorn.ConditionStatusFalse,
longhorn.VolumeConditionReasonLocalReplicaSchedulingFailure, "")
@@ -5006,6 +4998,17 @@ func (c *VolumeController) shouldCleanUpFailedReplica(v *longhorn.Volume, r *lon
return true
}
if types.IsDataEngineV2(v.Spec.DataEngine) {
V2DataEngineFastReplicaRebuilding, err := c.ds.GetSettingAsBool(types.SettingNameV2DataEngineFastReplicaRebuilding)
if err != nil {
log.WithError(err).Warnf("Failed to get the setting %v, will consider it as false", types.SettingDefinitionV2DataEngineFastReplicaRebuilding)
V2DataEngineFastReplicaRebuilding = false
}
if !V2DataEngineFastReplicaRebuilding {
log.Infof("Failed replica %v should be cleaned up blindly since setting %v is not enabled", r.Name, types.SettingNameV2DataEngineFastReplicaRebuilding)
return true
}
}
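Note: the added block gates reuse of failed v2 replicas on the fast-replica-rebuilding setting and deliberately treats a failed lookup as false, so the replica is cleaned up immediately. A small sketch of that fail-safe default (the lookup stub and setting name are stand-ins):

package main

import (
    "errors"
    "fmt"
)

// getBoolSetting stands in for ds.GetSettingAsBool; here it always fails,
// simulating an unreadable setting.
func getBoolSetting(name string) (bool, error) {
    return false, errors.New("setting not found")
}

// shouldCleanUpFailedV2Replica mirrors the added gate: on lookup error the
// setting is treated as false, and without fast rebuild the failed replica
// is cleaned up blindly instead of being kept for reuse.
func shouldCleanUpFailedV2Replica() bool {
    fastRebuild, err := getBoolSetting("v2-data-engine-fast-replica-rebuilding")
    if err != nil {
        fastRebuild = false
    }
    return !fastRebuild
}

func main() {
    fmt.Println(shouldCleanUpFailedV2Replica()) // true
}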
// Failed too long ago to be useful during a rebuild.
if v.Spec.StaleReplicaTimeout > 0 &&
util.TimestampAfterTimeout(r.Spec.FailedAt, time.Duration(v.Spec.StaleReplicaTimeout)*time.Minute) {

View File

@@ -286,7 +286,7 @@ func (vbc *VolumeRebuildingController) reconcile(volName string) (err error) {
}
}()
isOfflineRebuildEnabled, err := vbc.isVolumeOfflineRebuildEnabled(vol)
isOfflineRebuildEnabled, err := vbc.isVolumeOfflineRebuildEnabled(vol.Spec.OfflineRebuilding)
if err != nil {
return err
}
@@ -343,16 +343,16 @@ func (vbc *VolumeRebuildingController) reconcile(volName string) (err error) {
return nil
}
func (vbc *VolumeRebuildingController) isVolumeOfflineRebuildEnabled(vol *longhorn.Volume) (bool, error) {
if vol.Spec.OfflineRebuilding == longhorn.VolumeOfflineRebuildingEnabled {
func (vbc *VolumeRebuildingController) isVolumeOfflineRebuildEnabled(offlineRebuilding longhorn.VolumeOfflineRebuilding) (bool, error) {
if offlineRebuilding == longhorn.VolumeOfflineRebuildingEnabled {
return true, nil
}
globalOfflineRebuildingEnabled, err := vbc.ds.GetSettingAsBoolByDataEngine(types.SettingNameOfflineReplicaRebuilding, vol.Spec.DataEngine)
globalOfflineRebuildingEnabled, err := vbc.ds.GetSettingAsBool(types.SettingNameOfflineReplicaRebuilding)
if err != nil {
return false, err
}
return globalOfflineRebuildingEnabled && vol.Spec.OfflineRebuilding != longhorn.VolumeOfflineRebuildingDisabled, nil
return globalOfflineRebuildingEnabled && offlineRebuilding != longhorn.VolumeOfflineRebuildingDisabled, nil
}
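Note: isVolumeOfflineRebuildEnabled now receives the per-volume field directly and consults the global setting only when the volume neither explicitly enables nor explicitly disables offline rebuilding. A compact sketch of that tri-state resolution (type and constant names are illustrative):

package main

import "fmt"

type offlineRebuilding string

const (
    rebuildIgnored  offlineRebuilding = "ignored"
    rebuildEnabled  offlineRebuilding = "enabled"
    rebuildDisabled offlineRebuilding = "disabled"
)

// offlineRebuildEnabled mirrors the refactored helper: an explicit
// per-volume "enabled" always wins, an explicit "disabled" always loses,
// and otherwise the global setting decides.
func offlineRebuildEnabled(v offlineRebuilding, global bool) bool {
    if v == rebuildEnabled {
        return true
    }
    return global && v != rebuildDisabled
}

func main() {
    fmt.Println(offlineRebuildEnabled(rebuildEnabled, false)) // true
    fmt.Println(offlineRebuildEnabled(rebuildIgnored, true))  // true
    fmt.Println(offlineRebuildEnabled(rebuildDisabled, true)) // false
}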
func (vbc *VolumeRebuildingController) syncLHVolumeAttachmentForOfflineRebuild(vol *longhorn.Volume, va *longhorn.VolumeAttachment, attachmentID string) (*longhorn.VolumeAttachment, error) {

View File

@@ -6,7 +6,6 @@ import (
"encoding/json"
"fmt"
"net/url"
"os"
"reflect"
"regexp"
"strconv"
@@ -20,18 +19,12 @@ import (
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/timestamppb"
"k8s.io/client-go/rest"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/longhorn/longhorn-manager/datastore"
"github.com/longhorn/longhorn-manager/types"
"github.com/longhorn/longhorn-manager/util"
longhornclient "github.com/longhorn/longhorn-manager/client"
longhorn "github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
lhclientset "github.com/longhorn/longhorn-manager/k8s/pkg/client/clientset/versioned"
)
const (
@@ -59,26 +52,9 @@ type ControllerServer struct {
caps []*csi.ControllerServiceCapability
accessModes []*csi.VolumeCapability_AccessMode
log *logrus.Entry
lhClient lhclientset.Interface
lhNamespace string
}
func NewControllerServer(apiClient *longhornclient.RancherClient, nodeID string) (*ControllerServer, error) {
lhNamespace := os.Getenv(types.EnvPodNamespace)
if lhNamespace == "" {
return nil, fmt.Errorf("failed to detect pod namespace, environment variable %v is missing", types.EnvPodNamespace)
}
config, err := rest.InClusterConfig()
if err != nil {
return nil, errors.Wrap(err, "failed to get client config")
}
lhClient, err := lhclientset.NewForConfig(config)
if err != nil {
return nil, errors.Wrap(err, "failed to get longhorn clientset")
}
func NewControllerServer(apiClient *longhornclient.RancherClient, nodeID string) *ControllerServer {
return &ControllerServer{
apiClient: apiClient,
nodeID: nodeID,
@@ -89,17 +65,14 @@ func NewControllerServer(apiClient *longhornclient.RancherClient, nodeID string)
csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
csi.ControllerServiceCapability_RPC_CLONE_VOLUME,
csi.ControllerServiceCapability_RPC_GET_CAPACITY,
}),
accessModes: getVolumeCapabilityAccessModes(
[]csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
}),
log: logrus.StandardLogger().WithField("component", "csi-controller-server"),
lhClient: lhClient,
lhNamespace: lhNamespace,
}, nil
log: logrus.StandardLogger().WithField("component", "csi-controller-server"),
}
}
func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
@@ -669,118 +642,8 @@ func (cs *ControllerServer) ListVolumes(context.Context, *csi.ListVolumesRequest
return nil, status.Error(codes.Unimplemented, "")
}
func (cs *ControllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
log := cs.log.WithFields(logrus.Fields{"function": "GetCapacity"})
log.Infof("GetCapacity is called with req %+v", req)
var err error
defer func() {
if err != nil {
log.WithError(err).Errorf("Failed to get capacity")
}
}()
scParameters := req.GetParameters()
if scParameters == nil {
scParameters = map[string]string{}
}
nodeID, err := parseNodeID(req.GetAccessibleTopology())
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "failed to parse node id: %v", err)
}
node, err := cs.lhClient.LonghornV1beta2().Nodes(cs.lhNamespace).Get(ctx, nodeID, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
return nil, status.Errorf(codes.NotFound, "node %s not found", nodeID)
}
return nil, status.Errorf(codes.Internal, "unexpected error: %v", err)
}
if types.GetCondition(node.Status.Conditions, longhorn.NodeConditionTypeReady).Status != longhorn.ConditionStatusTrue {
return &csi.GetCapacityResponse{}, nil
}
if types.GetCondition(node.Status.Conditions, longhorn.NodeConditionTypeSchedulable).Status != longhorn.ConditionStatusTrue {
return &csi.GetCapacityResponse{}, nil
}
if !node.Spec.AllowScheduling || node.Spec.EvictionRequested {
return &csi.GetCapacityResponse{}, nil
}
allowEmptyNodeSelectorVolume, err := cs.getSettingAsBoolean(types.SettingNameAllowEmptyNodeSelectorVolume)
if err != nil {
return nil, status.Errorf(codes.Internal, "failed to get setting %v: %v", types.SettingNameAllowEmptyNodeSelectorVolume, err)
}
var nodeSelector []string
if nodeSelectorRaw, ok := scParameters["nodeSelector"]; ok && len(nodeSelectorRaw) > 0 {
nodeSelector = strings.Split(nodeSelectorRaw, ",")
}
if !types.IsSelectorsInTags(node.Spec.Tags, nodeSelector, allowEmptyNodeSelectorVolume) {
return &csi.GetCapacityResponse{}, nil
}
var diskSelector []string
if diskSelectorRaw, ok := scParameters["diskSelector"]; ok && len(diskSelectorRaw) > 0 {
diskSelector = strings.Split(diskSelectorRaw, ",")
}
allowEmptyDiskSelectorVolume, err := cs.getSettingAsBoolean(types.SettingNameAllowEmptyDiskSelectorVolume)
if err != nil {
return nil, status.Errorf(codes.Internal, "failed to get setting %v: %v", types.SettingNameAllowEmptyDiskSelectorVolume, err)
}
var v1AvailableCapacity int64 = 0
var v2AvailableCapacity int64 = 0
for diskName, diskStatus := range node.Status.DiskStatus {
diskSpec, exists := node.Spec.Disks[diskName]
if !exists {
continue
}
if !diskSpec.AllowScheduling || diskSpec.EvictionRequested {
continue
}
if types.GetCondition(diskStatus.Conditions, longhorn.DiskConditionTypeSchedulable).Status != longhorn.ConditionStatusTrue {
continue
}
if !types.IsSelectorsInTags(diskSpec.Tags, diskSelector, allowEmptyDiskSelectorVolume) {
continue
}
storageSchedulable := diskStatus.StorageAvailable - diskSpec.StorageReserved
if diskStatus.Type == longhorn.DiskTypeFilesystem {
v1AvailableCapacity = max(v1AvailableCapacity, storageSchedulable)
}
if diskStatus.Type == longhorn.DiskTypeBlock {
v2AvailableCapacity = max(v2AvailableCapacity, storageSchedulable)
}
}
dataEngine, ok := scParameters["dataEngine"]
if !ok {
return nil, status.Error(codes.InvalidArgument, "storage class parameters missing 'dataEngine' key")
}
rsp := &csi.GetCapacityResponse{}
switch longhorn.DataEngineType(dataEngine) {
case longhorn.DataEngineTypeV1:
rsp.AvailableCapacity = v1AvailableCapacity
case longhorn.DataEngineTypeV2:
rsp.AvailableCapacity = v2AvailableCapacity
default:
return nil, status.Errorf(codes.InvalidArgument, "unknown data engine type %v", dataEngine)
}
log.Infof("Node: %s, DataEngine: %s, v1AvailableCapacity: %d, v2AvailableCapacity: %d", nodeID, dataEngine, v1AvailableCapacity, v2AvailableCapacity)
return rsp, nil
}
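Note: the whole topology-aware GetCapacity implementation above is removed in favor of the Unimplemented stub below. Its core arithmetic, for reference: a disk's schedulable space is StorageAvailable minus StorageReserved, and the capacity reported per data engine is the maximum over the node's disks of the matching type. A standalone sketch (requires Go 1.21 for the max builtin):

package main

import "fmt"

// disk reduces the Longhorn disk spec/status to the fields the removed
// GetCapacity used.
type disk struct {
    available, reserved int64
    block               bool // true: v2 (block) disk; false: v1 (filesystem) disk
}

// availableCapacity reports, per data engine, the largest schedulable space
// on any single disk, as the removed implementation did.
func availableCapacity(disks []disk) (v1, v2 int64) {
    for _, d := range disks {
        schedulable := d.available - d.reserved
        if d.block {
            v2 = max(v2, schedulable)
        } else {
            v1 = max(v1, schedulable)
        }
    }
    return v1, v2
}

func main() {
    v1, v2 := availableCapacity([]disk{
        {available: 1450, reserved: 300},              // filesystem disk
        {available: 1000, reserved: 500},              // filesystem disk
        {available: 2000, reserved: 100, block: true}, // block disk
    })
    fmt.Println(v1, v2) // 1150 1900
}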
func (cs *ControllerServer) getSettingAsBoolean(name types.SettingName) (bool, error) {
obj, err := cs.lhClient.LonghornV1beta2().Settings(cs.lhNamespace).Get(context.TODO(), string(name), metav1.GetOptions{})
if err != nil {
return false, err
}
value, err := strconv.ParseBool(obj.Value)
if err != nil {
return false, err
}
return value, nil
func (cs *ControllerServer) GetCapacity(context.Context, *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
return nil, status.Error(codes.Unimplemented, "")
}
func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {

View File

@@ -1,341 +0,0 @@
package csi
import (
"context"
"fmt"
"strings"
"testing"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/longhorn/longhorn-manager/types"
longhorn "github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
lhfake "github.com/longhorn/longhorn-manager/k8s/pkg/client/clientset/versioned/fake"
)
type disk struct {
spec longhorn.DiskSpec
status longhorn.DiskStatus
}
func TestGetCapacity(t *testing.T) {
cs := &ControllerServer{
lhNamespace: "longhorn-system-test",
log: logrus.StandardLogger().WithField("component", "test-get-capacity"),
}
for _, test := range []struct {
testName string
node *longhorn.Node
skipNodeCreation bool
skipNodeSettingCreation bool
skipDiskSettingCreation bool
dataEngine string
diskSelector string
nodeSelector string
availableCapacity int64
disks []*disk
err error
}{
{
testName: "Node not found",
skipNodeCreation: true,
node: newNode("node-0", "storage", true, true, true, false),
err: status.Errorf(codes.NotFound, "node node-0 not found"),
},
{
testName: "Node setting not found",
skipNodeSettingCreation: true,
node: newNode("node-0", "storage", true, true, true, false),
err: status.Errorf(codes.Internal, "failed to get setting allow-empty-node-selector-volume: settings.longhorn.io \"allow-empty-node-selector-volume\" not found"),
},
{
testName: "Disk setting not found",
skipDiskSettingCreation: true,
node: newNode("node-0", "storage", true, true, true, false),
err: status.Errorf(codes.Internal, "failed to get setting allow-empty-disk-selector-volume: settings.longhorn.io \"allow-empty-disk-selector-volume\" not found"),
},
{
testName: "Missing data engine type",
node: newNode("node-0", "storage", true, true, true, false),
err: status.Errorf(codes.InvalidArgument, "storage class parameters missing 'dataEngine' key"),
},
{
testName: "Unknown data engine type",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v5",
err: status.Errorf(codes.InvalidArgument, "unknown data engine type v5"),
},
{
testName: "v1 engine with no disks",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v1",
availableCapacity: 0,
},
{
testName: "v2 engine with no disks",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
availableCapacity: 0,
},
{
testName: "Node condition is not ready",
node: newNode("node-0", "storage", false, true, true, false),
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 0,
},
{
testName: "Node condition is not schedulable",
node: newNode("node-0", "storage", true, false, true, false),
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 0,
},
{
testName: "Scheduling not allowed on a node",
node: newNode("node-0", "storage", true, true, false, false),
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 0,
},
{
testName: "Node eviction is requested",
node: newNode("node-0", "storage", true, true, true, true),
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 0,
},
{
testName: "Node tags don't match node selector",
node: newNode("node-0", "large,fast,linux", true, true, true, false),
nodeSelector: "fast,storage",
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 0,
},
{
testName: "v1 engine with two valid disks",
node: newNode("node-0", "storage,large,fast,linux", true, true, true, false),
nodeSelector: "fast,storage",
dataEngine: "v1",
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false)},
availableCapacity: 1150,
},
{
testName: "v1 engine with two valid disks and one with mismatched engine type",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v1",
availableCapacity: 1150,
disks: []*disk{newDisk(1450, 300, "ssd", false, true, true, false), newDisk(1000, 500, "", false, true, true, false), newDisk(2000, 100, "", true, true, true, false)},
},
{
testName: "v2 engine with two valid disks and one with mismatched engine type",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
availableCapacity: 1650,
disks: []*disk{newDisk(1950, 300, "", true, true, true, false), newDisk(1500, 500, "", true, true, true, false), newDisk(2000, 100, "", false, true, true, false)},
},
{
testName: "v2 engine with one valid disk and two with unmatched tags",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
diskSelector: "ssd,fast",
availableCapacity: 1000,
disks: []*disk{newDisk(1100, 100, "fast,nvmf,ssd,hot", true, true, true, false), newDisk(2500, 500, "ssd,slow,green", true, true, true, false), newDisk(2000, 100, "hdd,fast", true, true, true, false)},
},
{
testName: "v2 engine with one valid disk and one with unhealthy condition",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
availableCapacity: 400,
disks: []*disk{newDisk(1100, 100, "ssd", true, false, true, false), newDisk(500, 100, "hdd", true, true, true, false)},
},
{
testName: "v2 engine with one valid disk and one with scheduling disabled",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
availableCapacity: 400,
disks: []*disk{newDisk(1100, 100, "ssd", true, true, false, false), newDisk(500, 100, "hdd", true, true, true, false)},
},
{
testName: "v2 engine with one valid disk and one marked for eviction",
node: newNode("node-0", "storage", true, true, true, false),
dataEngine: "v2",
availableCapacity: 400,
disks: []*disk{newDisk(1100, 100, "ssd", true, true, true, true), newDisk(500, 100, "hdd", true, true, true, false)},
},
} {
t.Run(test.testName, func(t *testing.T) {
cs.lhClient = lhfake.NewSimpleClientset()
if !test.skipNodeCreation {
addDisksToNode(test.node, test.disks)
_, err := cs.lhClient.LonghornV1beta2().Nodes(cs.lhNamespace).Create(context.TODO(), test.node, metav1.CreateOptions{})
if err != nil {
t.Error("failed to create node")
}
}
if !test.skipNodeSettingCreation {
_, err := cs.lhClient.LonghornV1beta2().Settings(cs.lhNamespace).Create(context.TODO(), newSetting(string(types.SettingNameAllowEmptyNodeSelectorVolume), "true"), metav1.CreateOptions{})
if err != nil {
t.Errorf("failed to create setting %v", types.SettingNameAllowEmptyNodeSelectorVolume)
}
}
if !test.skipDiskSettingCreation {
_, err := cs.lhClient.LonghornV1beta2().Settings(cs.lhNamespace).Create(context.TODO(), newSetting(string(types.SettingNameAllowEmptyDiskSelectorVolume), "true"), metav1.CreateOptions{})
if err != nil {
t.Errorf("failed to create setting %v", types.SettingNameAllowEmptyDiskSelectorVolume)
}
}
req := &csi.GetCapacityRequest{
AccessibleTopology: &csi.Topology{
Segments: map[string]string{
nodeTopologyKey: test.node.Name,
},
},
Parameters: map[string]string{},
}
if test.dataEngine != "" {
req.Parameters["dataEngine"] = test.dataEngine
}
req.Parameters["diskSelector"] = test.diskSelector
req.Parameters["nodeSelector"] = test.nodeSelector
res, err := cs.GetCapacity(context.TODO(), req)
expectedStatus := status.Convert(test.err)
actualStatus := status.Convert(err)
if expectedStatus.Code() != actualStatus.Code() {
t.Errorf("expected error code: %v, but got: %v", expectedStatus.Code(), actualStatus.Code())
} else if expectedStatus.Message() != actualStatus.Message() {
t.Errorf("expected error message: '%s', but got: '%s'", expectedStatus.Message(), actualStatus.Message())
}
if res != nil && res.AvailableCapacity != test.availableCapacity {
t.Errorf("expected available capacity: %d, but got: %d", test.availableCapacity, res.AvailableCapacity)
}
})
}
}
func TestParseNodeID(t *testing.T) {
for _, test := range []struct {
topology *csi.Topology
err error
nodeID string
}{
{
err: fmt.Errorf("missing accessible topology request parameter"),
},
{
topology: &csi.Topology{
Segments: nil,
},
err: fmt.Errorf("missing accessible topology request parameter"),
},
{
topology: &csi.Topology{
Segments: map[string]string{
"some-key": "some-value",
},
},
err: fmt.Errorf("accessible topology request parameter is missing kubernetes.io/hostname key"),
},
{
topology: &csi.Topology{
Segments: map[string]string{
nodeTopologyKey: "node-0",
},
},
nodeID: "node-0",
},
} {
nodeID, err := parseNodeID(test.topology)
checkError(t, test.err, err)
if test.nodeID != nodeID {
t.Errorf("expected nodeID: %s, but got: %s", test.nodeID, nodeID)
}
}
}
func checkError(t *testing.T, expected, actual error) {
if expected == nil {
if actual != nil {
t.Errorf("expected no error but got: %v", actual)
}
} else {
if actual == nil {
t.Errorf("expected error: %v, but got no error", expected)
}
if expected.Error() != actual.Error() {
t.Errorf("expected error: %v, but got: %v", expected, actual)
}
}
}
func newDisk(storageAvailable, storageReserved int64, tags string, isBlockType, isCondOk, allowScheduling, evictionRequested bool) *disk {
disk := &disk{
spec: longhorn.DiskSpec{
StorageReserved: storageReserved,
Tags: strings.Split(tags, ","),
AllowScheduling: allowScheduling,
EvictionRequested: evictionRequested,
},
status: longhorn.DiskStatus{
StorageAvailable: storageAvailable,
Type: longhorn.DiskTypeFilesystem,
},
}
if isBlockType {
disk.status.Type = longhorn.DiskTypeBlock
}
if isCondOk {
disk.status.Conditions = []longhorn.Condition{{Type: longhorn.DiskConditionTypeSchedulable, Status: longhorn.ConditionStatusTrue}}
}
return disk
}
func newNode(name, tags string, isCondReady, isCondSchedulable, allowScheduling, evictionRequested bool) *longhorn.Node {
node := &longhorn.Node{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
Spec: longhorn.NodeSpec{
Disks: map[string]longhorn.DiskSpec{},
Tags: strings.Split(tags, ","),
AllowScheduling: allowScheduling,
EvictionRequested: evictionRequested,
},
Status: longhorn.NodeStatus{
DiskStatus: map[string]*longhorn.DiskStatus{},
},
}
if isCondReady {
node.Status.Conditions = append(node.Status.Conditions, longhorn.Condition{Type: longhorn.NodeConditionTypeReady, Status: longhorn.ConditionStatusTrue})
}
if isCondSchedulable {
node.Status.Conditions = append(node.Status.Conditions, longhorn.Condition{Type: longhorn.NodeConditionTypeSchedulable, Status: longhorn.ConditionStatusTrue})
}
return node
}
func addDisksToNode(node *longhorn.Node, disks []*disk) {
for i, disk := range disks {
name := fmt.Sprintf("disk-%d", i)
node.Spec.Disks[name] = disk.spec
node.Status.DiskStatus[name] = &disk.status
}
}
func newSetting(name, value string) *longhorn.Setting {
return &longhorn.Setting{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
Value: value,
}
}

View File

@@ -117,8 +117,6 @@ func NewProvisionerDeployment(namespace, serviceAccount, provisionerImage, rootD
"--leader-election",
"--leader-election-namespace=$(POD_NAMESPACE)",
"--default-fstype=ext4",
"--enable-capacity",
"--capacity-ownerref-level=2",
fmt.Sprintf("--kube-api-qps=%v", types.KubeAPIQPS),
fmt.Sprintf("--kube-api-burst=%v", types.KubeAPIBurst),
fmt.Sprintf("--http-endpoint=:%v", types.CSISidecarMetricsPort),
@@ -593,13 +591,13 @@ type DriverObjectDeployment struct {
}
func NewCSIDriverObject() *DriverObjectDeployment {
falseFlag := true
obj := &storagev1.CSIDriver{
ObjectMeta: metav1.ObjectMeta{
Name: types.LonghornDriverName,
},
Spec: storagev1.CSIDriverSpec{
PodInfoOnMount: ptr.To(true),
StorageCapacity: ptr.To(true),
PodInfoOnMount: &falseFlag,
},
}
return &DriverObjectDeployment{

View File

@@ -105,24 +105,6 @@ func getCommonDeployment(commonName, namespace, serviceAccount, image, rootDir s
},
},
},
{
// required by external-provisioner to set owner references for CSIStorageCapacity objects
Name: "NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
{
// required by external-provisioner to set owner references for CSIStorageCapacity objects
Name: "POD_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
},
VolumeMounts: []corev1.VolumeMount{
{

View File

@@ -41,13 +41,6 @@ func (ids *IdentityServer) GetPluginCapabilities(ctx context.Context, req *csi.G
},
},
},
{
Type: &csi.PluginCapability_Service_{
Service: &csi.PluginCapability_Service{
Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
},
},
},
{
Type: &csi.PluginCapability_VolumeExpansion_{
VolumeExpansion: &csi.PluginCapability_VolumeExpansion{

View File

@@ -41,11 +41,7 @@ func (m *Manager) Run(driverName, nodeID, endpoint, identityVersion, managerURL
return errors.Wrap(err, "Failed to create CSI node server ")
}
m.cs, err = NewControllerServer(apiClient, nodeID)
if err != nil {
return errors.Wrap(err, "failed to create CSI controller server")
}
m.cs = NewControllerServer(apiClient, nodeID)
s := NewNonBlockingGRPCServer()
s.Start(endpoint, m.ids, m.cs, m.ns)
s.Wait()

View File

@@ -893,11 +893,6 @@ func (ns *NodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoReque
return &csi.NodeGetInfoResponse{
NodeId: ns.nodeID,
MaxVolumesPerNode: 0, // technically the scsi kernel limit is the max limit of volumes
AccessibleTopology: &csi.Topology{
Segments: map[string]string{
nodeTopologyKey: ns.nodeID,
},
},
}, nil
}

View File

@@ -22,7 +22,6 @@ import (
utilexec "k8s.io/utils/exec"
"github.com/longhorn/longhorn-manager/types"
"github.com/longhorn/longhorn-manager/util"
longhornclient "github.com/longhorn/longhorn-manager/client"
longhorn "github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
@@ -36,8 +35,6 @@ const (
defaultForceUmountTimeout = 30 * time.Second
tempTestMountPointValidStatusFile = ".longhorn-volume-mount-point-test.tmp"
nodeTopologyKey = "kubernetes.io/hostname"
)
// NewForcedParamsExec creates a osExecutor that allows for adding additional params to later occurring Run calls
@@ -215,14 +212,6 @@ func getVolumeOptions(volumeID string, volOptions map[string]string) (*longhornc
vol.BackupTargetName = backupTargetName
}
if backupBlockSize, ok := volOptions["backupBlockSize"]; ok {
blockSize, err := util.ConvertSize(backupBlockSize)
if err != nil {
return nil, errors.Wrap(err, "invalid parameter backupBlockSize")
}
vol.BackupBlockSize = strconv.FormatInt(blockSize, 10)
}
if dataSource, ok := volOptions["dataSource"]; ok {
vol.DataSource = dataSource
}
@@ -477,14 +466,3 @@ func requiresSharedAccess(vol *longhornclient.Volume, cap *csi.VolumeCapability)
func getStageBlockVolumePath(stagingTargetPath, volumeID string) string {
return filepath.Join(stagingTargetPath, volumeID)
}
func parseNodeID(topology *csi.Topology) (string, error) {
if topology == nil || topology.Segments == nil {
return "", fmt.Errorf("missing accessible topology request parameter")
}
nodeId, ok := topology.Segments[nodeTopologyKey]
if !ok {
return "", fmt.Errorf("accessible topology request parameter is missing %s key", nodeTopologyKey)
}
return nodeId, nil
}
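Note: parseNodeID is deleted together with the capacity feature; it pulled the node name out of the CSI accessible-topology segments, as the removed tests earlier in this diff show. A minimal stand-in using a plain map in place of csi.Topology:

package main

import (
    "errors"
    "fmt"
)

const nodeTopologyKey = "kubernetes.io/hostname"

// parseNodeID mirrors the removed helper, with a plain map standing in for
// the csi.Topology segments.
func parseNodeID(segments map[string]string) (string, error) {
    if segments == nil {
        return "", errors.New("missing accessible topology request parameter")
    }
    nodeID, ok := segments[nodeTopologyKey]
    if !ok {
        return "", fmt.Errorf("accessible topology request parameter is missing %s key", nodeTopologyKey)
    }
    return nodeID, nil
}

func main() {
    id, err := parseNodeID(map[string]string{nodeTopologyKey: "node-0"})
    fmt.Println(id, err) // node-0 <nil>
}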

View File

@@ -2,14 +2,13 @@ package datastore
import (
"context"
"encoding/json"
"fmt"
"math/bits"
"math/rand"
"net"
"net/url"
"reflect"
"regexp"
"runtime"
"strconv"
"strings"
"time"
@@ -23,7 +22,6 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/validation"
"k8s.io/client-go/util/retry"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -77,10 +75,6 @@ func (s *DataStore) UpdateCustomizedSettings(defaultImages map[types.SettingName
return err
}
if err := s.syncConsolidatedV2DataEngineSettings(); err != nil {
return err
}
if err := s.createNonExistingSettingCRsWithDefaultSetting(defaultSettingCM.ResourceVersion); err != nil {
return err
}
@@ -171,11 +165,11 @@ func (s *DataStore) syncSettingsWithDefaultImages(defaultImages map[types.Settin
func (s *DataStore) syncSettingOrphanResourceAutoDeletionSettings() error {
oldOrphanReplicaDataAutoDeletionSettingRO, err := s.getSettingRO(string(types.SettingNameOrphanAutoDeletion))
if err != nil {
if ErrorIsNotFound(err) {
logrus.Debugf("No old setting %v to be replaced.", types.SettingNameOrphanAutoDeletion)
return nil
}
switch {
case ErrorIsNotFound(err):
logrus.Infof("No old setting %v to be replaced.", types.SettingNameOrphanAutoDeletion)
return nil
case err != nil:
return errors.Wrapf(err, "failed to get replaced setting %v", types.SettingNameOrphanAutoDeletion)
}
@@ -194,36 +188,6 @@ func (s *DataStore) syncSettingOrphanResourceAutoDeletionSettings() error {
return s.createOrUpdateSetting(types.SettingNameOrphanResourceAutoDeletion, value, "")
}
func (s *DataStore) syncConsolidatedV2DataEngineSetting(oldSettingName, newSettingName types.SettingName) error {
oldSetting, err := s.getSettingRO(string(oldSettingName))
if err != nil {
if ErrorIsNotFound(err) {
logrus.Debugf("No old setting %v to be replaced.", oldSettingName)
return nil
}
return errors.Wrapf(err, "failed to get old setting %v", oldSettingName)
}
return s.createOrUpdateSetting(newSettingName, oldSetting.Value, "")
}
func (s *DataStore) syncConsolidatedV2DataEngineSettings() error {
settings := map[types.SettingName]types.SettingName{
types.SettingNameV2DataEngineHugepageLimit: types.SettingNameDataEngineHugepageLimit,
types.SettingNameV2DataEngineCPUMask: types.SettingNameDataEngineCPUMask,
types.SettingNameV2DataEngineLogLevel: types.SettingNameDataEngineLogLevel,
types.SettingNameV2DataEngineLogFlags: types.SettingNameDataEngineLogFlags,
}
for oldSettingName, newSettingName := range settings {
if err := s.syncConsolidatedV2DataEngineSetting(oldSettingName, newSettingName); err != nil {
return errors.Wrapf(err, "failed to sync consolidated v2 data engine setting %v to %v", oldSettingName, newSettingName)
}
}
return nil
}
func (s *DataStore) createOrUpdateSetting(name types.SettingName, value, defaultSettingCMResourceVersion string) error {
setting, err := s.GetSettingExact(name)
if err != nil {
@ -272,12 +236,7 @@ func (s *DataStore) applyCustomizedDefaultSettingsToDefinitions(customizedDefaul
continue
}
if raw, ok := customizedDefaultSettings[string(sName)]; ok {
value, err := GetSettingValidValue(definition, raw)
if err != nil {
return err
}
logrus.Infof("Setting %v default value is updated to a customized value %v (raw value %v)", sName, value, raw)
if value, ok := customizedDefaultSettings[string(sName)]; ok {
definition.Default = value
types.SetSettingDefinition(sName, definition)
}
@ -285,75 +244,6 @@ func (s *DataStore) applyCustomizedDefaultSettingsToDefinitions(customizedDefaul
return nil
}
func GetSettingValidValue(definition types.SettingDefinition, value string) (string, error) {
if !definition.DataEngineSpecific {
return value, nil
}
if !types.IsJSONFormat(definition.Default) {
return "", fmt.Errorf("setting %v is data engine specific but default value %v is not in JSON-formatted string", definition.DisplayName, definition.Default)
}
var values map[longhorn.DataEngineType]any
var err error
// Get default values from definition
defaultValues, err := types.ParseDataEngineSpecificSetting(definition, definition.Default)
if err != nil {
return "", err
}
// Get values from customized value
if types.IsJSONFormat(value) {
values, err = types.ParseDataEngineSpecificSetting(definition, value)
} else {
values, err = types.ParseSettingSingleValue(definition, value)
}
if err != nil {
return "", err
}
// Remove any data engine types that are not in the default values
for dataEngine := range values {
if _, ok := defaultValues[dataEngine]; !ok {
delete(values, dataEngine)
}
}
return convertDataEngineValuesToJSONString(values)
}
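
To make the filtering step above concrete, here is a minimal, self-contained sketch; plain string-keyed maps stand in for map[longhorn.DataEngineType]any, and the values are hypothetical:

defaultValues := map[string]any{"v1": "1024", "v2": "2048"}
values := map[string]any{"v1": "4096", "v3": "99"}
// Drop any data engine type that is not present in the defaults.
for dataEngine := range values {
	if _, ok := defaultValues[dataEngine]; !ok {
		delete(values, dataEngine)
	}
}
// values == map["v1":"4096"]; "v3" is dropped because the defaults define no such engine.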
func convertDataEngineValuesToJSONString(values map[longhorn.DataEngineType]any) (string, error) {
converted := make(map[longhorn.DataEngineType]string)
for dataEngine, raw := range values {
var value string
switch v := raw.(type) {
case string:
value = v
case bool:
value = strconv.FormatBool(v)
case int:
value = strconv.Itoa(v)
case int64:
value = strconv.FormatInt(v, 10)
case float64:
value = strconv.FormatFloat(v, 'f', -1, 64)
default:
return "", fmt.Errorf("unsupported value type: %T", v)
}
converted[dataEngine] = value
}
jsonBytes, err := json.Marshal(converted)
if err != nil {
return "", err
}
return string(jsonBytes), nil
}
func (s *DataStore) syncSettingCRsWithCustomizedDefaultSettings(customizedDefaultSettings map[string]string, defaultSettingCMResourceVersion string) error {
for _, sName := range types.SettingNameList {
definition, ok := types.GetSettingDefinition(sName)
@ -413,16 +303,8 @@ func (s *DataStore) UpdateSetting(setting *longhorn.Setting) (*longhorn.Setting,
return nil, err
}
err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
latest, getErr := s.lhClient.LonghornV1beta2().Settings(s.namespace).Get(context.TODO(), setting.Name, metav1.GetOptions{})
if getErr != nil {
return getErr
}
delete(latest.Annotations, types.GetLonghornLabelKey(types.UpdateSettingFromLonghorn))
obj, err = s.lhClient.LonghornV1beta2().Settings(s.namespace).Update(context.TODO(), latest, metav1.UpdateOptions{})
return err
})
delete(obj.Annotations, types.GetLonghornLabelKey(types.UpdateSettingFromLonghorn))
obj, err = s.lhClient.LonghornV1beta2().Settings(s.namespace).Update(context.TODO(), obj, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
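
The retry wrapper above follows the standard client-go pattern: re-fetch the latest object inside the closure so each attempt mutates a fresh ResourceVersion. A generic sketch of the same pattern against a hypothetical ConfigMap, using only client-go APIs:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateWithRetry(cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-fetch inside the closure: each attempt must mutate a fresh
		// copy, since a conflict means the ResourceVersion went stale.
		latest, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if latest.Data == nil {
			latest.Data = map[string]string{}
		}
		latest.Data["touched"] = "true"
		_, err = cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), latest, metav1.UpdateOptions{})
		return err
	})
}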
@ -452,21 +334,30 @@ func (s *DataStore) deleteSetting(name string) error {
// ValidateSetting checks the given setting value types and condition
func (s *DataStore) ValidateSetting(name, value string) (err error) {
defer func() {
err = errors.Wrapf(err, "failed to validate setting %v with invalid value %v", name, value)
err = errors.Wrapf(err, "failed to set the setting %v with invalid value %v", name, value)
}()
sName := types.SettingName(name)
if err := types.ValidateSetting(name, value); err != nil {
return err
}
switch types.SettingName(name) {
switch sName {
case types.SettingNamePriorityClass:
if value != "" {
if _, err := s.GetPriorityClass(value); err != nil {
return errors.Wrapf(err, "failed to get priority class %v before modifying priority class setting", value)
}
}
case types.SettingNameGuaranteedInstanceManagerCPU, types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU:
guaranteedInstanceManagerCPU, err := s.GetSettingWithAutoFillingRO(sName)
if err != nil {
return err
}
guaranteedInstanceManagerCPU.Value = value
if err := types.ValidateCPUReservationValues(sName, guaranteedInstanceManagerCPU.Value); err != nil {
return err
}
case types.SettingNameV1DataEngine:
old, err := s.GetSettingWithAutoFillingRO(types.SettingNameV1DataEngine)
if err != nil {
@ -502,49 +393,13 @@ func (s *DataStore) ValidateSetting(name, value string) (err error) {
return err
}
}
case types.SettingNameDataEngineCPUMask:
definition, ok := types.GetSettingDefinition(types.SettingNameDataEngineCPUMask)
if !ok {
return fmt.Errorf("setting %v is not found", types.SettingNameDataEngineCPUMask)
case types.SettingNameV2DataEngineCPUMask:
if value == "" {
return errors.Errorf("cannot set %v setting to empty value", name)
}
var values map[longhorn.DataEngineType]any
if types.IsJSONFormat(value) {
values, err = types.ParseDataEngineSpecificSetting(definition, value)
} else {
values, err = types.ParseSettingSingleValue(definition, value)
if err := s.ValidateCPUMask(value); err != nil {
return err
}
if err != nil {
return errors.Wrapf(err, "failed to parse value %v for setting %v", value, types.SettingNameDataEngineCPUMask)
}
for dataEngine, raw := range values {
cpuMask, ok := raw.(string)
if !ok {
return fmt.Errorf("setting %v value %v is not a string for data engine %v", types.SettingNameDataEngineCPUMask, raw, dataEngine)
}
lhNodes, err := s.ListNodesRO()
if err != nil {
return errors.Wrapf(err, "failed to list nodes for %v setting validation for data engine %v", types.SettingNameDataEngineCPUMask, dataEngine)
}
// Ensure the CPU mask can be satisfied on each node
for _, lhnode := range lhNodes {
kubeNode, err := s.GetKubernetesNodeRO(lhnode.Name)
if err != nil {
if apierrors.IsNotFound(err) {
logrus.Warnf("Kubernetes node %s not found, skipping CPU mask validation for this node for data engine %v", lhnode.Name, dataEngine)
continue
}
return errors.Wrapf(err, "failed to get Kubernetes node %s for %v setting validation for data engine %v", lhnode.Name, types.SettingNameDataEngineCPUMask, dataEngine)
}
if err := s.ValidateCPUMask(kubeNode, cpuMask); err != nil {
return err
}
}
}
case types.SettingNameAutoCleanupSystemGeneratedSnapshot:
disablePurgeValue, err := s.GetSettingAsBool(types.SettingNameDisableSnapshotPurge)
if err != nil {
@ -553,7 +408,6 @@ func (s *DataStore) ValidateSetting(name, value string) (err error) {
if value == "true" && disablePurgeValue {
return errors.Errorf("cannot set %v setting to true when %v setting is true", name, types.SettingNameDisableSnapshotPurge)
}
case types.SettingNameDisableSnapshotPurge:
autoCleanupValue, err := s.GetSettingAsBool(types.SettingNameAutoCleanupSystemGeneratedSnapshot)
if err != nil {
@ -562,7 +416,6 @@ func (s *DataStore) ValidateSetting(name, value string) (err error) {
if value == "true" && autoCleanupValue {
return errors.Errorf("cannot set %v setting to true when %v setting is true", name, types.SettingNameAutoCleanupSystemGeneratedSnapshot)
}
case types.SettingNameSnapshotMaxCount:
v, err := strconv.Atoi(value)
if err != nil {
@ -571,7 +424,6 @@ func (s *DataStore) ValidateSetting(name, value string) (err error) {
if v < 2 || v > 250 {
return fmt.Errorf("%s should be between 2 and 250", name)
}
case types.SettingNameDefaultLonghornStaticStorageClass:
definition, ok := types.GetSettingDefinition(types.SettingNameDefaultLonghornStaticStorageClass)
if !ok {
@ -623,53 +475,45 @@ func (s *DataStore) ValidateV2DataEngineEnabled(dataEngineEnabled bool) (ims []*
}
// Check if there is enough hugepages-2Mi capacity for all nodes
hugepageRequestedInMiB, err := s.GetSettingAsIntByDataEngine(types.SettingNameDataEngineHugepageLimit, longhorn.DataEngineTypeV2)
hugepageRequestedInMiB, err := s.GetSettingWithAutoFillingRO(types.SettingNameV2DataEngineHugepageLimit)
if err != nil {
return nil, err
}
// hugepageRequestedInMiB is an integer
hugepageRequested, err := resource.ParseQuantity(fmt.Sprintf("%dMi", hugepageRequestedInMiB))
if err != nil {
return nil, errors.Wrapf(err, "failed to parse hugepage value %qMi", hugepageRequestedInMiB)
}
{
hugepageRequested := resource.MustParse(hugepageRequestedInMiB.Value + "Mi")
_ims, err := s.ListInstanceManagersRO()
if err != nil {
return nil, errors.Wrapf(err, "failed to list instance managers for %v setting update", types.SettingNameV2DataEngine)
}
for _, im := range _ims {
if types.IsDataEngineV1(im.Spec.DataEngine) {
continue
}
node, err := s.GetKubernetesNodeRO(im.Spec.NodeID)
_ims, err := s.ListInstanceManagersRO()
if err != nil {
if !apierrors.IsNotFound(err) {
return nil, errors.Wrapf(err, "failed to get Kubernetes node %v for %v setting update", im.Spec.NodeID, types.SettingNameV2DataEngine)
}
continue
return nil, errors.Wrapf(err, "failed to list instance managers for %v setting update", types.SettingNameV2DataEngine)
}
if val, ok := node.Labels[types.NodeDisableV2DataEngineLabelKey]; ok && val == types.NodeDisableV2DataEngineLabelKeyTrue {
// V2 data engine is disabled on this node, don't worry about hugepages
continue
}
if dataEngineEnabled {
capacity, ok := node.Status.Capacity["hugepages-2Mi"]
if !ok {
return nil, errors.Errorf("failed to get hugepages-2Mi capacity for node %v", node.Name)
}
hugepageCapacity, err := resource.ParseQuantity(capacity.String())
for _, im := range _ims {
node, err := s.GetKubernetesNodeRO(im.Spec.NodeID)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse hugepage value %qMi", hugepageRequestedInMiB)
if !apierrors.IsNotFound(err) {
return nil, errors.Wrapf(err, "failed to get Kubernetes node %v for %v setting update", im.Spec.NodeID, types.SettingNameV2DataEngine)
}
continue
}
if hugepageCapacity.Cmp(hugepageRequested) < 0 {
return nil, errors.Errorf("not enough hugepages-2Mi capacity for node %v, requested %v, capacity %v", node.Name, hugepageRequested.String(), hugepageCapacity.String())
if val, ok := node.Labels[types.NodeDisableV2DataEngineLabelKey]; ok && val == types.NodeDisableV2DataEngineLabelKeyTrue {
// V2 data engine is disabled on this node, don't worry about hugepages
continue
}
if dataEngineEnabled {
capacity, ok := node.Status.Capacity["hugepages-2Mi"]
if !ok {
return nil, errors.Errorf("failed to get hugepages-2Mi capacity for node %v", node.Name)
}
hugepageCapacity := resource.MustParse(capacity.String())
if hugepageCapacity.Cmp(hugepageRequested) < 0 {
return nil, errors.Errorf("not enough hugepages-2Mi capacity for node %v, requested %v, capacity %v", node.Name, hugepageRequested.String(), hugepageCapacity.String())
}
}
}
}
@ -677,11 +521,7 @@ func (s *DataStore) ValidateV2DataEngineEnabled(dataEngineEnabled bool) (ims []*
return
}
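
The capacity check above relies on resource.Quantity comparison from k8s.io/apimachinery/pkg/api/resource; a standalone illustration with hypothetical values:

requested := resource.MustParse("2048Mi")
capacity := resource.MustParse("1Gi") // 1Gi == 1024Mi
if capacity.Cmp(requested) < 0 {
	// 1024Mi < 2048Mi: this node cannot satisfy the hugepage request.
}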
func (s *DataStore) ValidateCPUMask(kubeNode *corev1.Node, value string) error {
if value == "" {
return fmt.Errorf("failed to validate CPU mask: cannot be empty")
}
func (s *DataStore) ValidateCPUMask(value string) error {
// CPU mask must start with 0x
cpuMaskRegex := regexp.MustCompile(`^0x[1-9a-fA-F][0-9a-fA-F]*$`)
if !cpuMaskRegex.MatchString(value) {
@ -690,76 +530,30 @@ func (s *DataStore) ValidateCPUMask(kubeNode *corev1.Node, value string) error {
maskValue, err := strconv.ParseUint(value[2:], 16, 64) // skip 0x prefix
if err != nil {
return errors.Wrapf(err, "failed to parse CPU mask %v", value)
return fmt.Errorf("failed to parse CPU mask: %s", value)
}
// Validate the mask value is not larger than the number of available CPUs
numCPUs, err := s.getMinNumCPUsFromAvailableNodes()
if err != nil {
return errors.Wrap(err, "failed to get minimum number of CPUs for CPU mask validation")
}
numCPUs := runtime.NumCPU()
maxCPUMaskValue := (1 << numCPUs) - 1
if maskValue > uint64(maxCPUMaskValue) {
return fmt.Errorf("CPU mask exceeds the maximum allowed value %v for the current system: %s", maxCPUMaskValue, value)
}
// CPU mask currently only supports v2 data engine
guaranteedInstanceManagerCPUInPercentage, err := s.GetSettingAsFloatByDataEngine(types.SettingNameGuaranteedInstanceManagerCPU, longhorn.DataEngineTypeV2)
guaranteedInstanceManagerCPU, err := s.GetSettingAsInt(types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU)
if err != nil {
return errors.Wrapf(err, "failed to get %v setting for guaranteed instance manager CPU validation for data engine %v",
types.SettingNameGuaranteedInstanceManagerCPU, longhorn.DataEngineTypeV2)
return errors.Wrapf(err, "failed to get %v setting for CPU mask validation", types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU)
}
guaranteedInstanceManagerCPU := float64(kubeNode.Status.Allocatable.Cpu().MilliValue()) * guaranteedInstanceManagerCPUInPercentage
numMilliCPUsRequestedByMaskValue := calculateMilliCPUs(maskValue)
if numMilliCPUsRequestedByMaskValue > int(guaranteedInstanceManagerCPU) {
return fmt.Errorf("number of CPUs (%v) requested by CPU mask (%v) is larger than the %v setting value (%v)",
numMilliCPUsRequestedByMaskValue, value, types.SettingNameGuaranteedInstanceManagerCPU, guaranteedInstanceManagerCPU)
numMilliCPUsRequestedByMaskValue, value, types.SettingNameV2DataEngineGuaranteedInstanceManagerCPU, guaranteedInstanceManagerCPU)
}
return nil
}
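
A worked example of the mask arithmetic above on a hypothetical 4-CPU node; calculateMilliCPUs (truncated further down in this diff) is assumed to scale the popcount to milli-CPUs:

maskValue, _ := strconv.ParseUint("5", 16, 64) // "0x5" with the 0x prefix stripped
setBits := bits.OnesCount64(maskValue)         // 0b0101 -> CPUs 0 and 2 -> 2 cores
milliCPUs := setBits * 1000                    // 2000m requested by the mask
maxCPUMaskValue := (1 << 4) - 1                // 0xF: the largest valid mask on 4 CPUs
// milliCPUs (2000) is then compared against the guaranteed instance manager
// CPU budget, also expressed in milli-CPUs.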
func (s *DataStore) getMinNumCPUsFromAvailableNodes() (int64, error) {
kubeNodes, err := s.ListKubeNodesRO()
if err != nil {
return -1, errors.Wrapf(err, "failed to list Kubernetes nodes")
}
// Initialize minNumCPUs to the maximum value of int64
minNumCPUs := int64(^uint64(0) >> 1)
for _, kubeNode := range kubeNodes {
lhNode, err := s.GetNodeRO(kubeNode.Name)
if err != nil {
if apierrors.IsNotFound(err) {
continue
}
return -1, errors.Wrapf(err, "failed to get Longhorn node %v", kubeNode.Name)
}
// Skip node that is down, deleted, or missing manager
if isUnavailable, err := s.IsNodeDownOrDeletedOrMissingManager(lhNode.Name); err != nil {
return -1, errors.Wrapf(err, "failed to check if node %v is down or deleted", lhNode.Name)
} else if isUnavailable {
continue
}
// Skip node that disables v2 data engine
if val, ok := kubeNode.Labels[types.NodeDisableV2DataEngineLabelKey]; ok {
if val == types.NodeDisableV2DataEngineLabelKeyTrue {
continue
}
}
numCPUs := kubeNode.Status.Allocatable.Cpu().Value()
if numCPUs < minNumCPUs {
minNumCPUs = numCPUs
}
}
return minNumCPUs, nil
}
func calculateMilliCPUs(mask uint64) int {
// Count the number of set bits in the mask
setBits := bits.OnesCount64(mask)
@ -859,11 +653,6 @@ func (s *DataStore) getSettingRO(name string) (*longhorn.Setting, error) {
return s.settingLister.Settings(s.namespace).Get(name)
}
// GetSettingWithAutoFillingRO retrieves a read-only setting from the datastore by its name.
// If the setting does not exist, it automatically constructs and returns a default setting
// object using the predefined default value from the setting's definition. If the setting
// name is not recognized or an unexpected error occurs during retrieval, the function
// returns an error.
func (s *DataStore) GetSettingWithAutoFillingRO(sName types.SettingName) (*longhorn.Setting, error) {
definition, ok := types.GetSettingDefinition(sName)
if !ok {
@ -937,44 +726,6 @@ func (s *DataStore) GetSettingValueExisted(sName types.SettingName) (string, err
return setting.Value, nil
}
// GetSettingValueExistedByDataEngine returns the value of the given setting name for a specific data engine.
// Returns error if the setting does not have a value for the given data engine.
func (s *DataStore) GetSettingValueExistedByDataEngine(settingName types.SettingName, dataEngine longhorn.DataEngineType) (string, error) {
definition, ok := types.GetSettingDefinition(settingName)
if !ok {
return "", fmt.Errorf("setting %v is not supported", settingName)
}
if !definition.DataEngineSpecific {
return s.GetSettingValueExisted(settingName)
}
if !types.IsJSONFormat(definition.Default) {
return "", fmt.Errorf("setting %v does not have a JSON-formatted default value", settingName)
}
setting, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return "", err
}
values, err := types.ParseDataEngineSpecificSetting(definition, setting.Value)
if err != nil {
return "", err
}
value, ok := values[dataEngine]
if ok {
if strValue, ok := value.(string); ok {
return strValue, nil
} else {
return fmt.Sprintf("%v", value), nil
}
}
return "", fmt.Errorf("setting %v does not have a value for data engine %v", settingName, dataEngine)
}
// ListSettings lists all Settings in the namespace, and fill with default
// values of any missing entry
func (s *DataStore) ListSettings() (map[types.SettingName]*longhorn.Setting, error) {
@ -1033,7 +784,7 @@ func (s *DataStore) GetAutoBalancedReplicasSetting(volume *longhorn.Volume, logg
var err error
if setting == "" {
globalSetting, _ := s.GetSettingValueExistedByDataEngine(types.SettingNameReplicaAutoBalance, volume.Spec.DataEngine)
globalSetting, _ := s.GetSettingValueExisted(types.SettingNameReplicaAutoBalance)
if globalSetting == string(longhorn.ReplicaAutoBalanceIgnored) {
globalSetting = string(longhorn.ReplicaAutoBalanceDisabled)
@ -1060,9 +811,20 @@ func (s *DataStore) GetVolumeSnapshotDataIntegrity(volumeName string) (longhorn.
return volume.Spec.SnapshotDataIntegrity, nil
}
dataIntegrity, err := s.GetSettingValueExistedByDataEngine(types.SettingNameSnapshotDataIntegrity, volume.Spec.DataEngine)
if err != nil {
return "", errors.Wrapf(err, "failed to assert %v value for data engine %v", types.SettingNameSnapshotDataIntegrity, volume.Spec.DataEngine)
var dataIntegrity string
switch volume.Spec.DataEngine {
case longhorn.DataEngineTypeV1:
dataIntegrity, err = s.GetSettingValueExisted(types.SettingNameSnapshotDataIntegrity)
if err != nil {
return "", errors.Wrapf(err, "failed to assert %v value", types.SettingNameSnapshotDataIntegrity)
}
case longhorn.DataEngineTypeV2:
dataIntegrity, err = s.GetSettingValueExisted(types.SettingNameV2DataEngineSnapshotDataIntegrity)
if err != nil {
return "", errors.Wrapf(err, "failed to assert %v value", types.SettingNameV2DataEngineSnapshotDataIntegrity)
}
default:
return "", fmt.Errorf("unknown data engine type %v for snapshot data integrity get", volume.Spec.DataEngine)
}
return longhorn.SnapshotDataIntegrity(dataIntegrity), nil
@ -2021,21 +1783,13 @@ func (s *DataStore) ListVolumeReplicasROMapByNode(volumeName string) (map[string
// ReplicaAddressToReplicaName will directly return the address if the format
// is invalid or the replica is not found.
func ReplicaAddressToReplicaName(address string, rs []*longhorn.Replica) string {
// Remove the "tcp://" prefix if it exists
addr := strings.TrimPrefix(address, "tcp://")
var host, port string
var err error
// Handle both IPv4 and IPv6 formats
host, port, err = net.SplitHostPort(addr)
if err != nil {
// If parsing fails, return the original address
addressComponents := strings.Split(strings.TrimPrefix(address, "tcp://"), ":")
// The address format should be `<IP>:<Port>` after removing the prefix "tcp://".
if len(addressComponents) != 2 {
return address
}
for _, r := range rs {
if host == r.Status.StorageIP && port == strconv.Itoa(r.Status.Port) {
if addressComponents[0] == r.Status.StorageIP && addressComponents[1] == strconv.Itoa(r.Status.Port) {
return r.Name
}
}
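
The switch to net.SplitHostPort matters for IPv6: a bare strings.Split on ":" mis-parses bracketed IPv6 addresses. A minimal illustration:

addr := strings.TrimPrefix("tcp://[fd00::1]:10000", "tcp://")
host, port, err := net.SplitHostPort(addr)
// host == "fd00::1", port == "10000", err == nil.
// strings.Split(addr, ":") would yield four fragments here, so the old
// len(addressComponents) != 2 check rejected IPv6 addresses outright.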
@ -2337,22 +2091,17 @@ func (s *DataStore) CheckDataEngineImageReadiness(image string, dataEngine longh
return s.CheckEngineImageReadiness(image, nodes...)
}
// IsDataEngineImageReady checks if the IMAGE is deployed on the NODEID and, if data locality is disabled, also on at least one replica node of the volume.
func (s *DataStore) IsDataEngineImageReady(image, volumeName, nodeID string, dataLocality longhorn.DataLocality, dataEngine longhorn.DataEngineType) (bool, error) {
// CheckDataEngineImageReadyOnAtLeastOneVolumeReplica checks if the IMAGE is deployed on the NODEID and on at least one of the volume's replicas
func (s *DataStore) CheckDataEngineImageReadyOnAtLeastOneVolumeReplica(image, volumeName, nodeID string, dataLocality longhorn.DataLocality, dataEngine longhorn.DataEngineType) (bool, error) {
isReady, err := s.CheckDataEngineImageReadiness(image, dataEngine, nodeID)
if err != nil {
return false, errors.Wrapf(err, "failed to check data engine image readiness of node %v", nodeID)
}
if !isReady || dataLocality == longhorn.DataLocalityStrictLocal || dataLocality == longhorn.DataLocalityBestEffort {
if !isReady || dataLocality == longhorn.DataLocalityStrictLocal {
return isReady, nil
}
return s.checkDataEngineImageReadyOnAtLeastOneVolumeReplica(image, volumeName)
}
// checkDataEngineImageReadyOnAtLeastOneVolumeReplica checks if the IMAGE is deployed on at least one replica node of the volume.
func (s *DataStore) checkDataEngineImageReadyOnAtLeastOneVolumeReplica(image, volumeName string) (bool, error) {
replicas, err := s.ListVolumeReplicas(volumeName)
if err != nil {
return false, errors.Wrapf(err, "failed to get replicas for volume %v", volumeName)
@ -2371,7 +2120,6 @@ func (s *DataStore) checkDataEngineImageReadyOnAtLeastOneVolumeReplica(image, vo
if !hasScheduledReplica {
return false, errors.Errorf("volume %v has no scheduled replicas", volumeName)
}
return false, nil
}
@ -3404,22 +3152,6 @@ func (s *DataStore) IsNodeSchedulable(name string) bool {
return nodeSchedulableCondition.Status == longhorn.ConditionStatusTrue
}
func (s *DataStore) IsNodeHasDiskUUID(nodeName, diskUUID string) (bool, error) {
node, err := s.GetNodeRO(nodeName)
if err != nil {
if ErrorIsNotFound(err) {
return false, nil
}
return false, err
}
for _, diskStatus := range node.Status.DiskStatus {
if diskStatus.DiskUUID == diskUUID {
return true, nil
}
}
return false, nil
}
func getNodeSelector(nodeName string) (labels.Selector, error) {
return metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
MatchLabels: map[string]string{
@ -3768,81 +3500,6 @@ func GetOwnerReferencesForNode(node *longhorn.Node) []metav1.OwnerReference {
}
}
// GetSettingAsFloat gets the setting for the given name, returns as float
// Returns error if the definition type is not float
func (s *DataStore) GetSettingAsFloat(settingName types.SettingName) (float64, error) {
definition, ok := types.GetSettingDefinition(settingName)
if !ok {
return -1, fmt.Errorf("setting %v is not supported", settingName)
}
settings, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return -1, err
}
value := settings.Value
if definition.Type == types.SettingTypeFloat {
result, err := strconv.ParseFloat(value, 64)
if err != nil {
return -1, err
}
return result, nil
}
return -1, fmt.Errorf("the %v setting value couldn't change to float, value is %v ", string(settingName), value)
}
// GetSettingAsFloatByDataEngine retrieves the float64 value of the given setting for the specified
// DataEngineType. If the setting is not data-engine-specific, it falls back to GetSettingAsFloat.
// For data-engine-specific settings, it expects the setting value to be in JSON format mapping
// data engine types to float values.
//
// If the setting is not defined, not in the expected format, or the value for the given data engine
// is missing or not a float, an error is returned.
//
// Example JSON format for a data-engine-specific setting:
//
// {"v1": 50.0, "v2": 100.0}
//
// Returns the float value for the provided data engine type, or an error if validation or parsing fails.
func (s *DataStore) GetSettingAsFloatByDataEngine(settingName types.SettingName, dataEngine longhorn.DataEngineType) (float64, error) {
definition, ok := types.GetSettingDefinition(settingName)
if !ok {
return -1, fmt.Errorf("setting %v is not supported", settingName)
}
if !definition.DataEngineSpecific {
return s.GetSettingAsFloat(settingName)
}
if !types.IsJSONFormat(definition.Default) {
return -1, fmt.Errorf("setting %v does not have a JSON-formatted default value", settingName)
}
// Get the setting value, which may be auto-filled
setting, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return -1, err
}
// Parse the setting value as a map of floats map[dataEngine]float64{...}
values, err := types.ParseDataEngineSpecificSetting(definition, setting.Value)
if err != nil {
return -1, err
}
value, ok := values[dataEngine]
if !ok {
return -1, fmt.Errorf("the %v setting value for data engine %v is not defined, value is %v", string(settingName), dataEngine, values)
}
floatValue, ok := value.(float64)
if !ok {
return -1, fmt.Errorf("the %v setting value for data engine %v is not a float, value is %v", string(settingName), dataEngine, value)
}
return floatValue, nil
}
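
A sketch of typical usage, reusing the doc comment's example value {"v1": 50.0, "v2": 100.0} (ValidateCPUMask earlier in this diff calls it the same way):

pct, err := s.GetSettingAsFloatByDataEngine(types.SettingNameGuaranteedInstanceManagerCPU, longhorn.DataEngineTypeV2)
if err != nil {
	return err
}
// With a stored value like the example above, pct == 100.0 for the v2 data engine.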
// GetSettingAsInt gets the setting for the given name, returns as integer
// Returns error if the definition type is not integer
func (s *DataStore) GetSettingAsInt(settingName types.SettingName) (int64, error) {
@ -3867,55 +3524,6 @@ func (s *DataStore) GetSettingAsInt(settingName types.SettingName) (int64, error
return -1, fmt.Errorf("the %v setting value couldn't change to integer, value is %v ", string(settingName), value)
}
// GetSettingAsIntByDataEngine retrieves the int64 value of the given setting for the specified
// DataEngineType. If the setting is not data-engine-specific, it falls back to GetSettingAsInt.
// For data-engine-specific settings, it expects the setting value to be in JSON format mapping
// data engine types to integer values.
//
// If the setting is not defined, not in the expected format, or the value for the given data engine
// is missing or not an integer, an error is returned.
//
// Example JSON format for a data-engine-specific setting:
//
// {"v1": 1024, "v2": 2048}
//
// Returns the int64 value for the provided data engine type, or an error if validation or parsing fails.
func (s *DataStore) GetSettingAsIntByDataEngine(settingName types.SettingName, dataEngine longhorn.DataEngineType) (int64, error) {
definition, ok := types.GetSettingDefinition(settingName)
if !ok {
return -1, fmt.Errorf("setting %v is not supported", settingName)
}
if !definition.DataEngineSpecific {
return s.GetSettingAsInt(settingName)
}
if !types.IsJSONFormat(definition.Default) {
return -1, fmt.Errorf("setting %v does not have a JSON-formatted default value", settingName)
}
setting, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return -1, err
}
values, err := types.ParseDataEngineSpecificSetting(definition, setting.Value)
if err != nil {
return -1, err
}
value, ok := values[dataEngine]
if !ok {
return -1, fmt.Errorf("the %v setting value for data engine %v is not defined, value is %v", string(settingName), dataEngine, values)
}
intValue, ok := value.(int64)
if !ok {
return -1, fmt.Errorf("the %v setting value for data engine %v is not an integer, value is %v", string(settingName), dataEngine, value)
}
return intValue, nil
}
// GetSettingAsBool gets the setting for the given name, returns as boolean
// Returns error if the definition type is not boolean
func (s *DataStore) GetSettingAsBool(settingName types.SettingName) (bool, error) {
@ -3923,11 +3531,11 @@ func (s *DataStore) GetSettingAsBool(settingName types.SettingName) (bool, error
if !ok {
return false, fmt.Errorf("setting %v is not supported", settingName)
}
setting, err := s.GetSettingWithAutoFillingRO(settingName)
settings, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return false, err
}
value := setting.Value
value := settings.Value
if definition.Type == types.SettingTypeBool {
result, err := strconv.ParseBool(value)
@ -3940,55 +3548,6 @@ func (s *DataStore) GetSettingAsBool(settingName types.SettingName) (bool, error
return false, fmt.Errorf("the %v setting value couldn't be converted to bool, value is %v ", string(settingName), value)
}
// GetSettingAsBoolByDataEngine retrieves the bool value of the given setting for the specified
// DataEngineType. If the setting is not data-engine-specific, it falls back to GetSettingAsBool.
// For data-engine-specific settings, it expects the setting value to be in JSON format mapping
// data engine types to boolean values.
//
// If the setting is not defined, not in the expected format, or the value for the given data engine
// is missing or not a boolean, an error is returned.
//
// Example JSON format for a data-engine-specific setting:
//
// {"v1": true, "v2": false}
//
// Returns the boolean value for the provided data engine type, or an error if validation or parsing fails.
func (s *DataStore) GetSettingAsBoolByDataEngine(settingName types.SettingName, dataEngine longhorn.DataEngineType) (bool, error) {
definition, ok := types.GetSettingDefinition(settingName)
if !ok {
return false, fmt.Errorf("setting %v is not supported", settingName)
}
if !definition.DataEngineSpecific {
return s.GetSettingAsBool(settingName)
}
if !types.IsJSONFormat(definition.Default) {
return false, fmt.Errorf("setting %v does not have a JSON-formatted default value", settingName)
}
setting, err := s.GetSettingWithAutoFillingRO(settingName)
if err != nil {
return false, err
}
values, err := types.ParseDataEngineSpecificSetting(definition, setting.Value)
if err != nil {
return false, err
}
value, ok := values[dataEngine]
if !ok {
return false, fmt.Errorf("the %v setting value for data engine %v is not defined, value is %v", string(settingName), dataEngine, values)
}
boolValue, ok := value.(bool)
if !ok {
return false, fmt.Errorf("the %v setting value for data engine %v is not a boolean, value is %v", string(settingName), dataEngine, value)
}
return boolValue, nil
}
// GetSettingImagePullPolicy get the setting and return one of Kubernetes ImagePullPolicy definition
// Returns error if the ImagePullPolicy is invalid
func (s *DataStore) GetSettingImagePullPolicy() (corev1.PullPolicy, error) {
@ -5872,21 +5431,6 @@ func (s *DataStore) GetLHVolumeAttachmentByVolumeName(volName string) (*longhorn
return s.GetLHVolumeAttachment(vaName)
}
// ListLHVolumeAttachments returns all VolumeAttachments in the cluster
func (s *DataStore) ListLHVolumeAttachments() ([]*longhorn.VolumeAttachment, error) {
vaList, err := s.lhVolumeAttachmentLister.VolumeAttachments(s.namespace).List(labels.Everything())
if err != nil {
return nil, err
}
result := make([]*longhorn.VolumeAttachment, 0, len(vaList))
for _, va := range vaList {
result = append(result, va.DeepCopy())
}
return result, nil
}
// ListSupportBundles returns an object contains all SupportBundles
func (s *DataStore) ListSupportBundles() (map[string]*longhorn.SupportBundle, error) {
itemMap := make(map[string]*longhorn.SupportBundle)
@ -6631,10 +6175,6 @@ func (s *DataStore) GetRunningInstanceManagerByNodeRO(node string, dataEngine lo
}
func (s *DataStore) GetFreezeFilesystemForSnapshotSetting(e *longhorn.Engine) (bool, error) {
if types.IsDataEngineV2(e.Spec.DataEngine) {
return false, nil
}
volume, err := s.GetVolumeRO(e.Spec.VolumeName)
if err != nil {
return false, err
@ -6644,7 +6184,7 @@ func (s *DataStore) GetFreezeFilesystemForSnapshotSetting(e *longhorn.Engine) (b
return volume.Spec.FreezeFilesystemForSnapshot == longhorn.FreezeFilesystemForSnapshotEnabled, nil
}
return s.GetSettingAsBoolByDataEngine(types.SettingNameFreezeFilesystemForSnapshot, e.Spec.DataEngine)
return s.GetSettingAsBool(types.SettingNameFreezeFilesystemForSnapshot)
}
func (s *DataStore) CanPutBackingImageOnDisk(backingImage *longhorn.BackingImage, diskUUID string) (bool, error) {


@ -1,8 +1,7 @@
package engineapi
import (
"net"
"strconv"
"fmt"
bimapi "github.com/longhorn/backing-image-manager/api"
bimclient "github.com/longhorn/backing-image-manager/pkg/client"
@ -31,7 +30,7 @@ type BackingImageDataSourceClient struct {
func NewBackingImageDataSourceClient(ip string) *BackingImageDataSourceClient {
return &BackingImageDataSourceClient{
bimclient.DataSourceClient{
Remote: net.JoinHostPort(ip, strconv.Itoa(BackingImageDataSourceDefaultPort)),
Remote: fmt.Sprintf("%s:%d", ip, BackingImageDataSourceDefaultPort),
},
}
}


@ -2,8 +2,6 @@ package engineapi
import (
"fmt"
"net"
"strconv"
bimapi "github.com/longhorn/backing-image-manager/api"
bimclient "github.com/longhorn/backing-image-manager/pkg/client"
@ -47,7 +45,7 @@ func NewBackingImageManagerClient(bim *longhorn.BackingImageManager) (*BackingIm
ip: bim.Status.IP,
apiMinVersion: bim.Status.APIMinVersion,
apiVersion: bim.Status.APIVersion,
grpcClient: bimclient.NewBackingImageManagerClient(net.JoinHostPort(bim.Status.IP, strconv.Itoa(BackingImageManagerDefaultPort))),
grpcClient: bimclient.NewBackingImageManagerClient(fmt.Sprintf("%s:%d", bim.Status.IP, BackingImageManagerDefaultPort)),
}, nil
}
@ -86,7 +84,7 @@ func (c *BackingImageManagerClient) Sync(name, uuid, checksum, fromHost string,
if err := CheckBackingImageManagerCompatibility(c.apiMinVersion, c.apiVersion); err != nil {
return nil, err
}
resp, err := c.grpcClient.Sync(name, uuid, checksum, net.JoinHostPort(fromHost, strconv.Itoa(BackingImageManagerDefaultPort)), size)
resp, err := c.grpcClient.Sync(name, uuid, checksum, fmt.Sprintf("%s:%d", fromHost, BackingImageManagerDefaultPort), size)
if err != nil {
return nil, err
}


@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"reflect"
"strconv"
"strings"
"sync"
"time"
@ -335,6 +334,5 @@ func (m *BackupMonitor) Close() {
func getBackupParameters(backup *longhorn.Backup) map[string]string {
parameters := map[string]string{}
parameters[lhbackup.LonghornBackupParameterBackupMode] = string(backup.Spec.BackupMode)
parameters[lhbackup.LonghornBackupParameterBackupBlockSize] = strconv.FormatInt(backup.Spec.BackupBlockSize, 10)
return parameters
}


@ -84,7 +84,3 @@ func (s *DiskService) DiskReplicaInstanceDelete(diskType, diskName, diskUUID, di
func (s *DiskService) GetInstanceManagerName() string {
return s.instanceManagerName
}
func (s *DiskService) MetricsGet(diskType, diskName, diskPath, diskDriver string) (*imapi.DiskMetrics, error) {
return s.grpcClient.MetricsGet(diskType, diskName, diskPath, diskDriver)
}


@ -250,11 +250,6 @@ func (e *EngineBinary) ReplicaRebuildStatus(*longhorn.Engine) (map[string]*longh
return data, nil
}
func (e *EngineBinary) ReplicaRebuildQosSet(engine *longhorn.Engine, qosLimitMbps int64) error {
// NOTE: Not implemented for EngineBinary (deprecated path)
return nil
}
// VolumeFrontendStart calls engine binary
// TODO: Deprecated, replaced by gRPC proxy
func (e *EngineBinary) VolumeFrontendStart(engine *longhorn.Engine) error {


@ -240,10 +240,6 @@ func (e *EngineSimulator) ReplicaRebuildStatus(*longhorn.Engine) (map[string]*lo
return nil, errors.New(ErrNotImplement)
}
func (e *EngineSimulator) ReplicaRebuildQosSet(engine *longhorn.Engine, qosLimitMbps int64) error {
return errors.New(ErrNotImplement)
}
func (e *EngineSimulator) VolumeFrontendStart(*longhorn.Engine) error {
return errors.New(ErrNotImplement)
}


@ -35,7 +35,7 @@ const (
DefaultReplicaPortCountV1 = 10
DefaultReplicaPortCountV2 = 5
DefaultPortArg = "--listen,:"
DefaultPortArg = "--listen,0.0.0.0:"
DefaultTerminateSignal = "SIGHUP"
// IncompatibleInstanceManagerAPIVersion means the instance manager version in v0.7.0


@ -51,11 +51,6 @@ func (p *Proxy) ReplicaRebuildStatus(e *longhorn.Engine) (status map[string]*lon
return status, nil
}
func (p *Proxy) ReplicaRebuildQosSet(e *longhorn.Engine, qosLimitMbps int64) error {
return p.grpcClient.ReplicaRebuildingQosSet(string(e.Spec.DataEngine), e.Name, e.Spec.VolumeName,
p.DirectToURL(e), qosLimitMbps)
}
func (p *Proxy) ReplicaRebuildVerify(e *longhorn.Engine, replicaName, url string) (err error) {
if err := ValidateReplicaURL(url); err != nil {
return err


@ -1,8 +1,7 @@
package engineapi
import (
"net"
"strconv"
"fmt"
"github.com/pkg/errors"
@ -18,7 +17,7 @@ type ShareManagerClient struct {
}
func NewShareManagerClient(sm *longhorn.ShareManager, pod *corev1.Pod) (*ShareManagerClient, error) {
client, err := smclient.NewShareManagerClient(net.JoinHostPort(pod.Status.PodIP, strconv.Itoa(ShareManagerDefaultPort)))
client, err := smclient.NewShareManagerClient(fmt.Sprintf("%s:%d", pod.Status.PodIP, ShareManagerDefaultPort))
if err != nil {
return nil, errors.Wrapf(err, "failed to create Share Manager client for %v", sm.Name)
}


@ -3,7 +3,6 @@ package engineapi
import (
"context"
"fmt"
"net"
"strings"
"time"
@ -92,7 +91,6 @@ type EngineClient interface {
ReplicaAdd(engine *longhorn.Engine, replicaName, url string, isRestoreVolume, fastSync bool, localSync *etypes.FileLocalSync, replicaFileSyncHTTPClientTimeout, grpcTimeoutSeconds int64) error
ReplicaRemove(engine *longhorn.Engine, url, replicaName string) error
ReplicaRebuildStatus(*longhorn.Engine) (map[string]*longhorn.RebuildStatus, error)
ReplicaRebuildQosSet(engine *longhorn.Engine, qosLimitMbps int64) error
ReplicaRebuildVerify(engine *longhorn.Engine, replicaName, url string) error
ReplicaModeUpdate(engine *longhorn.Engine, url string, mode string) error
@ -312,8 +310,7 @@ func GetEngineEndpoint(volume *Volume, ip string) (string, error) {
// it will look like this in the end
// iscsi://10.42.0.12:3260/iqn.2014-09.com.rancher:vol-name/1
formattedIPPort := net.JoinHostPort(ip, DefaultISCSIPort)
return EndpointISCSIPrefix + formattedIPPort + "/" + volume.Endpoint + "/" + DefaultISCSILUN, nil
return EndpointISCSIPrefix + ip + ":" + DefaultISCSIPort + "/" + volume.Endpoint + "/" + DefaultISCSILUN, nil
case spdkdevtypes.FrontendSPDKTCPNvmf, spdkdevtypes.FrontendSPDKUblk:
return volume.Endpoint, nil
}
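
net.JoinHostPort brackets IPv6 hosts automatically, which the plain string concatenation it replaces did not; a small illustration of the endpoint assembly with hypothetical values:

ep := "iscsi://" + net.JoinHostPort("10.42.0.12", "3260") + "/iqn.2014-09.com.rancher:vol-name/1"
// ep == "iscsi://10.42.0.12:3260/iqn.2014-09.com.rancher:vol-name/1"
// With an IPv6 IP: net.JoinHostPort("fd00::12", "3260") == "[fd00::12]:3260"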

go.mod

@ -2,7 +2,7 @@ module github.com/longhorn/longhorn-manager
go 1.24.0
toolchain go1.24.6
toolchain go1.24.5
// Replace directives are required for dependencies in this section because:
// - This module imports k8s.io/kubernetes.
@ -55,38 +55,37 @@ replace (
)
require (
github.com/cockroachdb/errors v1.12.0
github.com/container-storage-interface/spec v1.11.0
github.com/docker/go-connections v0.6.0
github.com/docker/go-connections v0.5.0
github.com/go-co-op/gocron v1.37.0
github.com/google/uuid v1.6.0
github.com/gorilla/handlers v1.5.2
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674
github.com/jinzhu/copier v0.4.0
github.com/kubernetes-csi/csi-lib-utils v0.22.0
github.com/kubernetes-csi/csi-lib-utils v0.21.0
github.com/longhorn/backing-image-manager v1.9.1
github.com/longhorn/backupstore v0.0.0-20250804022317-794abf817297
github.com/longhorn/go-common-libs v0.0.0-20250812101836-470cb7301942
github.com/longhorn/backupstore v0.0.0-20250716050439-d920cc13cf0f
github.com/longhorn/go-common-libs v0.0.0-20250712065607-11215ac4de96
github.com/longhorn/go-iscsi-helper v0.0.0-20250713130221-69ce6f3960fa
github.com/longhorn/go-spdk-helper v0.0.3-0.20250809103353-695fd752a98b
github.com/longhorn/longhorn-engine v1.10.0-dev-20250713.0.20250728071833-3932ded2f139
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250629.0.20250711075830-f3729b840178
github.com/longhorn/go-spdk-helper v0.0.2
github.com/longhorn/longhorn-engine v1.9.1
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250518.0.20250519060809-955e286a739c
github.com/longhorn/longhorn-share-manager v1.9.1
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.23.0
github.com/rancher/dynamiclistener v0.7.0
github.com/prometheus/client_golang v1.22.0
github.com/rancher/dynamiclistener v0.6.2
github.com/rancher/go-rancher v0.1.1-0.20220412083059-ff12399dd57b
github.com/rancher/wrangler/v3 v3.2.2
github.com/robfig/cron v1.2.0
github.com/sirupsen/logrus v1.9.3
github.com/stretchr/testify v1.10.0
github.com/urfave/cli v1.22.17
golang.org/x/mod v0.27.0
golang.org/x/net v0.43.0
golang.org/x/sys v0.35.0
golang.org/x/time v0.12.0
google.golang.org/grpc v1.74.2
golang.org/x/mod v0.26.0
golang.org/x/net v0.40.0
golang.org/x/sys v0.33.0
golang.org/x/time v0.11.0
google.golang.org/grpc v1.73.0
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.33.3
@ -98,28 +97,24 @@ require (
k8s.io/metrics v0.33.3
k8s.io/mount-utils v0.33.3
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397
sigs.k8s.io/controller-runtime v0.21.0
sigs.k8s.io/controller-runtime v0.20.4
)
require (
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b // indirect
github.com/cockroachdb/redact v1.1.5 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/getsentry/sentry-go v0.27.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/longhorn/types v0.0.0-20250810143617-8a478c078cb8 // indirect
github.com/longhorn/types v0.0.0-20250710112743-e3a1e9e2a9c1 // indirect
github.com/mitchellh/go-ps v1.0.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/shirou/gopsutil/v3 v3.24.5 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/exp v0.0.0-20250808145144-a408d31f581a // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a // indirect
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
@ -139,11 +134,12 @@ require (
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/gammazero/deque v1.0.0 // indirect
github.com/gammazero/workerpool v1.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/gorilla/context v1.1.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
@ -160,25 +156,25 @@ require (
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.65.0 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rancher/lasso v0.2.3 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/slok/goresilience v0.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
go.opentelemetry.io/otel v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/crypto v0.38.0 // indirect
golang.org/x/oauth2 v0.28.0 // indirect
golang.org/x/sync v0.16.0
golang.org/x/term v0.34.0 // indirect
golang.org/x/text v0.28.0
google.golang.org/protobuf v1.36.7
golang.org/x/term v0.32.0 // indirect
golang.org/x/text v0.25.0
google.golang.org/protobuf v1.36.6
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/apiserver v0.33.3 // indirect
k8s.io/component-base v0.33.3 // indirect

go.sum

@ -14,12 +14,6 @@ github.com/c9s/goprocinfo v0.0.0-20210130143923-c95fcf8c64a8 h1:SjZ2GvvOononHOpK
github.com/c9s/goprocinfo v0.0.0-20210130143923-c95fcf8c64a8/go.mod h1:uEyr4WpAH4hio6LFriaPkL938XnrvLpNPmQHBdrmbIE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cockroachdb/errors v1.12.0 h1:d7oCs6vuIMUQRVbi6jWWWEJZahLCfJpnJSVobd1/sUo=
github.com/cockroachdb/errors v1.12.0/go.mod h1:SvzfYNNBshAVbZ8wzNc/UPK3w1vf0dKDUP41ucAIf7g=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=
github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/container-storage-interface/spec v1.11.0 h1:H/YKTOeUZwHtyPOr9raR+HgFmGluGCklulxDYxSdVNM=
github.com/container-storage-interface/spec v1.11.0/go.mod h1:DtUvaQszPml1YJfIK7c00mlv6/g4wNMLanLgiUbKFRI=
github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
@ -33,8 +27,8 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU=
github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v5.9.11+incompatible h1:ixHHqfcGvxhWkniF1tWxBHA0yb4Z+d1UQi45df52xW8=
@ -47,14 +41,10 @@ github.com/gammazero/deque v1.0.0 h1:LTmimT8H7bXkkCy6gZX7zNLtkbz4NdS2z8LZuor3j34
github.com/gammazero/deque v1.0.0/go.mod h1:iflpYvtGfM3U8S8j+sZEKIak3SAKYpA5/SQewgfXDKo=
github.com/gammazero/workerpool v1.1.3 h1:WixN4xzukFoN0XSeXF6puqEqFTl2mECI9S6W44HWy9Q=
github.com/gammazero/workerpool v1.1.3/go.mod h1:wPjyBLDbyKnUn2XwwyD3EEwo9dHutia9/fwNmSHWACc=
github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps=
github.com/getsentry/sentry-go v0.27.0/go.mod h1:lc76E2QywIyW8WuBnwl8Lc4bkmQH4+w1gwTf25trprY=
github.com/go-co-op/gocron v1.37.0 h1:ZYDJGtQ4OMhTLKOKMIch+/CY70Brbb1dGdooLEhh7b0=
github.com/go-co-op/gocron v1.37.0/go.mod h1:3L/n6BkO7ABj+TrfSVXLRzsP26zmikL4ISkLQ0O8iNY=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
@ -113,30 +103,30 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kubernetes-csi/csi-lib-utils v0.22.0 h1:EUAs1+uHGps3OtVj4XVx16urhpI02eu+Z8Vps6plpHY=
github.com/kubernetes-csi/csi-lib-utils v0.22.0/go.mod h1:f+PalKyS4Ujsjb9+m6Rj0W6c28y3nfea3paQ/VqjI28=
github.com/kubernetes-csi/csi-lib-utils v0.21.0 h1:dUN/iIgXLucAxyML2iPyhniIlACQumIeAJmIzsMBddc=
github.com/kubernetes-csi/csi-lib-utils v0.21.0/go.mod h1:ZCVRTYuup+bwX9tOeE5Q3LDw64QvltSwMUQ3M3g2T+Q=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/longhorn/backing-image-manager v1.9.1 h1:amT5BDkBJnnmlJYfPfA2m0o3zdvArf7e/DSsbgOquX0=
github.com/longhorn/backing-image-manager v1.9.1/go.mod h1:a9UGK3bsd1Gj0kbN5tKev5/uaSwjOvoHqZzLqMMqnU0=
github.com/longhorn/backupstore v0.0.0-20250804022317-794abf817297 h1:KVnOHFT3wuwgyhV7/Rue8NMt13NkpIkZ3B7eVR0C8yM=
github.com/longhorn/backupstore v0.0.0-20250804022317-794abf817297/go.mod h1:j5TiUyvRBYaSaPY/p6GIFOk1orfWcngk9hIWxDDJ5mg=
github.com/longhorn/go-common-libs v0.0.0-20250812101836-470cb7301942 h1:H9hPMP02ZJSzXa7/0TOG3HQAhieDAGQuqnePSlj+BbQ=
github.com/longhorn/go-common-libs v0.0.0-20250812101836-470cb7301942/go.mod h1:fuYzrb6idZgLrh8yePy6fA+LVB+z5fl4zZbBAU09+0g=
github.com/longhorn/backupstore v0.0.0-20250716050439-d920cc13cf0f h1:fxgi/MLL2RjMUgaodx6pxPsqRRbDOTjd/0MabqbowrE=
github.com/longhorn/backupstore v0.0.0-20250716050439-d920cc13cf0f/go.mod h1:zVJtOEHBCFmboACEjy8rtbUVYrrtknN4DIVJ9gd1TJQ=
github.com/longhorn/go-common-libs v0.0.0-20250712065607-11215ac4de96 h1:+SN5T/B6WvJjlzKWDqswziq9k11XOxK27KlCTrbalW0=
github.com/longhorn/go-common-libs v0.0.0-20250712065607-11215ac4de96/go.mod h1:WJowu2xRMEZ2B9K+SPQCUQpFoiC6yZiAHLZx2cR34QE=
github.com/longhorn/go-iscsi-helper v0.0.0-20250713130221-69ce6f3960fa h1:J0DyOSate7Vf+zlHYB5WrCTWJfshEsSJDp161GjBmhI=
github.com/longhorn/go-iscsi-helper v0.0.0-20250713130221-69ce6f3960fa/go.mod h1:fN9H878mLjAqSbPxEXpOCwvTlt43h+/CZxXrQlX/iMQ=
github.com/longhorn/go-spdk-helper v0.0.3-0.20250809103353-695fd752a98b h1:IzKSLNxFgDA/5ZtVJnt5CkgAfjyendEJsr2+fRMAa18=
github.com/longhorn/go-spdk-helper v0.0.3-0.20250809103353-695fd752a98b/go.mod h1:ypwTG96myWDEkea5PxNzzii9CPm/TI8duZwPGxUNsvo=
github.com/longhorn/longhorn-engine v1.10.0-dev-20250713.0.20250728071833-3932ded2f139 h1:qeR/Rt/Mmahgzf2Df2m00BLZibYZYs5+iTTvbHFfAXA=
github.com/longhorn/longhorn-engine v1.10.0-dev-20250713.0.20250728071833-3932ded2f139/go.mod h1:kl2QVpLZeMoYYSAVOd2IDiP7JeLQFX/fujLA9MdyK6o=
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250629.0.20250711075830-f3729b840178 h1:JO7uffDjHufJZZxvXLdoLpIWUl1/QszoZlx9dzCRNKY=
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250629.0.20250711075830-f3729b840178/go.mod h1:dLZTouISlm8sUpSDDb4xbnSEbZOBnKCVFMf46Ybpr44=
github.com/longhorn/go-spdk-helper v0.0.2 h1:cK7obTyCI1ytm0SMaUEjwsHeX6hK+82kPjuAQkf+Tvg=
github.com/longhorn/go-spdk-helper v0.0.2/go.mod h1:lZYWKf8YNOV4TSf57u8Tj1ilDQLQlW2M/HgFlecoRno=
github.com/longhorn/longhorn-engine v1.9.1 h1:DlkcXhwmR2b6ATwZeaQr8hG4i8Mf4SLcXcIzgnl6jaI=
github.com/longhorn/longhorn-engine v1.9.1/go.mod h1:40+Fw+/PV78DDFYWXUfJHmrZ8QGfFaisC9m9YRnw4xg=
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250518.0.20250519060809-955e286a739c h1:W9/fwmx/uhCzZfE9g7Lf6i4VaD6fl20IgeQF1cFblrU=
github.com/longhorn/longhorn-instance-manager v1.10.0-dev-20250518.0.20250519060809-955e286a739c/go.mod h1:8gfwbZRPzNazr3eLPm3/JS2pQIdflj0yFn1J4E4vLy8=
github.com/longhorn/longhorn-share-manager v1.9.1 h1:ObRP8lnNOncRg9podwrPrqObBXJsQDlPfNwslxkBRhM=
github.com/longhorn/longhorn-share-manager v1.9.1/go.mod h1:vYqc2o+6xTlgdlweIeED4Do/n+0/4I3AbD6jQ5OHfcg=
github.com/longhorn/types v0.0.0-20250810143617-8a478c078cb8 h1:NkYbz5Bs+zNW7l3lS9xG9ktUPcNCgmG1tEYzOCk7rdM=
github.com/longhorn/types v0.0.0-20250810143617-8a478c078cb8/go.mod h1:jbvGQ66V//M9Jp2DC6k+BR74QxSK0Hp/L2FRJ/SBxFA=
github.com/longhorn/types v0.0.0-20250710112743-e3a1e9e2a9c1 h1:Lox/NlebN9jOc9JXokB270iyeMlyUw9gRePBy5LKwz0=
github.com/longhorn/types v0.0.0-20250710112743-e3a1e9e2a9c1/go.mod h1:3bhH8iUZGZT3kA/B1DYMGzpdzfacqeexOt4SHo4/C2I=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
@ -165,8 +155,6 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@ -176,19 +164,19 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v1.23.0 h1:ust4zpdl9r4trLY/gSjlm07PuiBq2ynaXXlptpfy8Uc=
github.com/prometheus/client_golang v1.23.0/go.mod h1:i/o0R9ByOnHX0McrTMTyhYvKE4haaf2mW08I+jGAjEE=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/rancher/dynamiclistener v0.7.0 h1:+jyfZ4lVamc1UbKWo8V8dhSPtCgRZYaY8nm7wiHeko4=
github.com/rancher/dynamiclistener v0.7.0/go.mod h1:Q2YA42xp7Xc69JiSlJ8GpvLvze261T0iQ/TP4RdMCYk=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rancher/dynamiclistener v0.6.2 h1:F0SEJhvO2aFe0eTvKGlQoy5x7HtwK8oJbyITVfBSb90=
github.com/rancher/dynamiclistener v0.6.2/go.mod h1:ncmVR7qR8kR1o6xNkTcVS2mZ9WtlljimBilIlNjdyzc=
github.com/rancher/go-rancher v0.1.1-0.20220412083059-ff12399dd57b h1:so40GMVZOZkQeIbAzaZRq6wDrMErvRLuXNsGTRZUpg8=
github.com/rancher/go-rancher v0.1.1-0.20220412083059-ff12399dd57b/go.mod h1:7oQvGNiJsGvrUgB+7AH8bmdzuR0uhULfwKb43Ht0hUk=
github.com/rancher/lasso v0.2.3 h1:74/z/C/O3ykhyMrRuEgc9kVyYiSoS7kp5BAijlcyXDg=
@ -243,16 +231,16 @@ github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY=
go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg=
go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o=
go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
@@ -265,23 +253,23 @@ go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN8
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90=
golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc h1:TS73t7x3KarrNd5qAipmspBDS1rkMcgVG/fS1aRb4Rc=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc/go.mod h1:A+z0yzpGtvnG90cToK5n2tu8UJVP2XUATh+r+sfOOOc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ=
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -296,32 +284,32 @@ golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a h1:v2PbRU4K3llS09c7zodFpNePeamkAwG3mPrAery9VeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.74.2 h1:WoosgB65DlWVC9FqI82dGsZhWFNBSLjQ84bjROOpMu4=
google.golang.org/grpc v1.74.2/go.mod h1:CtQ+BGjaAIXHs/5YS3i473GqwBBa1zGQNevxdeBEXrM=
google.golang.org/protobuf v1.36.7 h1:IgrO7UwFQGJdRNXH/sQux4R1Dj1WAKcLElzeeRaXV2A=
google.golang.org/protobuf v1.36.7/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 h1:e0AIkUUhxyBKh6ssZNrAMeqhA7RKUj42346d1y02i2g=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
@@ -370,8 +358,8 @@ k8s.io/mount-utils v0.33.3 h1:Q1jsnqdS4LdtJSYSXgiQv/XNrRHQncLk3gMYjKNSZrE=
k8s.io/mount-utils v0.33.3/go.mod h1:1JR4rKymg8B8bCPo618hpSAdrpO6XLh0Acqok/xVwPE=
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 h1:hwvWFiBzdWw1FhfY1FooPn3kzWuJ8tmbZBHi4zVsl1Y=
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/controller-runtime v0.21.0 h1:CYfjpEuicjUecRk+KAeyYh+ouUBn4llGyDYytIGcJS8=
sigs.k8s.io/controller-runtime v0.21.0/go.mod h1:OSg14+F65eWqIu4DceX7k/+QRAbTTvxeQSNSOQpukWM=
sigs.k8s.io/controller-runtime v0.20.4 h1:X3c+Odnxz+iPTRobG4tp092+CvBU9UK0t/bRf+n0DGU=
sigs.k8s.io/controller-runtime v0.20.4/go.mod h1:xg2XB0K5ShQzAgsoujxuKN4LNXR2LfwwHsPj7Iaw+XY=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=

File diff suppressed because it is too large

View File

@@ -11,7 +11,7 @@ LH_MANAGER_PKG="github.com/longhorn/longhorn-manager"
OUTPUT_PKG="${LH_MANAGER_PKG}/k8s/pkg/client"
APIS_PATH="k8s/pkg/apis"
APIS_DIR="${SCRIPT_ROOT}/${APIS_PATH}"
GROUP_VERSION="longhorn:v1beta2"
GROUP_VERSION="longhorn:v1beta1,v1beta2"
CODE_GENERATOR_VERSION="v0.32.1"
CRDS_DIR="crds"
CONTROLLER_TOOLS_VERSION="v0.17.1"

View File

@@ -0,0 +1,15 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: backingimages.longhorn.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1beta2","v1beta1"]
clientConfig:
service:
namespace: longhorn-system
name: longhorn-conversion-webhook
path: /v1/webhook/conversion
port: 9501

View File

@@ -0,0 +1,15 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: backuptargets.longhorn.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1beta2","v1beta1"]
clientConfig:
service:
namespace: longhorn-system
name: longhorn-conversion-webhook
path: /v1/webhook/conversion
port: 9501

View File

@@ -0,0 +1,16 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: engineimages.longhorn.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1beta2","v1beta1"]
clientConfig:
service:
namespace: longhorn-system
name: longhorn-conversion-webhook
path: /v1/webhook/conversion
port: 9501

View File

@@ -0,0 +1,16 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: nodes.longhorn.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1beta2","v1beta1"]
clientConfig:
service:
namespace: longhorn-system
name: longhorn-conversion-webhook
path: /v1/webhook/conversion
port: 9501

View File

@@ -0,0 +1,16 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: volumes.longhorn.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1beta2","v1beta1"]
clientConfig:
service:
namespace: longhorn-system
name: longhorn-conversion-webhook
path: /v1/webhook/conversion
port: 9501
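
The five CRD patches above all register the same conversion webhook (the longhorn-conversion-webhook service on port 9501) for the resources whose schema changed between v1beta1 and v1beta2. On the Go side this corresponds to controller-runtime's hub-and-spoke model: v1beta2 acts as the hub, and the v1beta1 files added below implement ConvertTo/ConvertFrom against it. The following is a minimal, self-contained sketch of that model using hypothetical stand-in types (v1Image/v2Image); the real interfaces live in sigs.k8s.io/controller-runtime/pkg/conversion and additionally embed runtime.Object, which is elided here.

package main

import "fmt"

// hub marks the storage version; spokes convert to and from it.
type hub interface{ Hub() }

// convertible is what each spoke version implements.
type convertible interface {
	ConvertTo(dst hub) error
	ConvertFrom(src hub) error
}

// v2Image stands in for a v1beta2 type (the hub).
type v2Image struct{ Disks map[string]string }

func (*v2Image) Hub() {}

// v1Image stands in for a v1beta1 type (a spoke).
type v1Image struct{ Disks map[string]struct{} }

func (s *v1Image) ConvertTo(dst hub) error {
	d, ok := dst.(*v2Image)
	if !ok {
		return fmt.Errorf("unsupported type %T", dst)
	}
	// map[string]struct{} -> map[string]string, as in the files below.
	d.Disks = make(map[string]string, len(s.Disks))
	for name := range s.Disks {
		d.Disks[name] = ""
	}
	return nil
}

func (s *v1Image) ConvertFrom(src hub) error {
	d, ok := src.(*v2Image)
	if !ok {
		return fmt.Errorf("unsupported type %T", src)
	}
	s.Disks = make(map[string]struct{}, len(d.Disks))
	for name := range d.Disks {
		s.Disks[name] = struct{}{}
	}
	return nil
}

func main() {
	var _ convertible = (*v1Image)(nil) // compile-time interface check
	src := &v1Image{Disks: map[string]struct{}{"disk-1": {}}}
	dst := &v2Image{}
	if err := src.ConvertTo(dst); err != nil {
		panic(err)
	}
	fmt.Println(dst.Disks) // map[disk-1:]
}

Running the sketch prints map[disk-1:], mirroring how the real conversion keeps the disk names while switching the map's value type.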

View File

@@ -0,0 +1,141 @@
package v1beta1
import (
"fmt"
"github.com/jinzhu/copier"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
// BackingImageDownloadState is replaced by BackingImageState.
type BackingImageDownloadState string
type BackingImageState string
const (
BackingImageStatePending = BackingImageState("pending")
BackingImageStateStarting = BackingImageState("starting")
BackingImageStateReadyForTransfer = BackingImageState("ready-for-transfer")
BackingImageStateReady = BackingImageState("ready")
BackingImageStateInProgress = BackingImageState("in-progress")
BackingImageStateFailed = BackingImageState("failed")
BackingImageStateUnknown = BackingImageState("unknown")
)
type BackingImageDiskFileStatus struct {
State BackingImageState `json:"state"`
Progress int `json:"progress"`
Message string `json:"message"`
LastStateTransitionTime string `json:"lastStateTransitionTime"`
}
// BackingImageSpec defines the desired state of the Longhorn backing image
type BackingImageSpec struct {
Disks map[string]struct{} `json:"disks"`
Checksum string `json:"checksum"`
SourceType BackingImageDataSourceType `json:"sourceType"`
SourceParameters map[string]string `json:"sourceParameters"`
// Deprecated: This kind of info will be included in the related BackingImageDataSource.
ImageURL string `json:"imageURL"`
}
// BackingImageStatus defines the observed state of the Longhorn backing image
type BackingImageStatus struct {
OwnerID string `json:"ownerID"`
UUID string `json:"uuid"`
Size int64 `json:"size"`
Checksum string `json:"checksum"`
DiskFileStatusMap map[string]*BackingImageDiskFileStatus `json:"diskFileStatusMap"`
DiskLastRefAtMap map[string]string `json:"diskLastRefAtMap"`
// Deprecated: Replaced by field `State` in `DiskFileStatusMap`.
DiskDownloadStateMap map[string]BackingImageDownloadState `json:"diskDownloadStateMap"`
// Deprecated: Replaced by field `Progress` in `DiskFileStatusMap`.
DiskDownloadProgressMap map[string]int `json:"diskDownloadProgressMap"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhbi
// +kubebuilder:unservedversion
// +kubebuilder:subresource:status
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 BackingImage is deprecated; use longhorn.io/v1beta2 BackingImage instead"
// +kubebuilder:printcolumn:name="Image",type=string,JSONPath=`.spec.image`,description="The backing image name"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// BackingImage is where Longhorn stores the backing image object.
type BackingImage struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackingImageSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackingImageStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackingImageList is a list of BackingImages.
type BackingImageList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackingImage `json:"items"`
}
// ConvertTo converts from spoke version (v1beta1) to hub version (v1beta2)
func (bi *BackingImage) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta2.BackingImage:
biV1beta2 := dst.(*v1beta2.BackingImage)
biV1beta2.ObjectMeta = bi.ObjectMeta
if err := copier.Copy(&biV1beta2.Spec, &bi.Spec); err != nil {
return err
}
if err := copier.Copy(&biV1beta2.Status, &bi.Status); err != nil {
return err
}
// Copy spec.disks from map[string]struct{} to map[string]string,
// initializing DiskFileSpecMap alongside so the assignment below
// never writes to a nil map.
biV1beta2.Spec.Disks = make(map[string]string)
biV1beta2.Spec.DiskFileSpecMap = make(map[string]*v1beta2.BackingImageDiskFileSpec)
for name := range bi.Spec.Disks {
biV1beta2.Spec.Disks[name] = ""
biV1beta2.Spec.DiskFileSpecMap[name] = &v1beta2.BackingImageDiskFileSpec{}
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
// ConvertFrom converts from hub version (v1beta2) to spoke version (v1beta1)
func (bi *BackingImage) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta2.BackingImage:
biV1beta2 := src.(*v1beta2.BackingImage)
bi.ObjectMeta = biV1beta2.ObjectMeta
if err := copier.Copy(&bi.Spec, &biV1beta2.Spec); err != nil {
return err
}
if err := copier.Copy(&bi.Status, &biV1beta2.Status); err != nil {
return err
}
// Copy spec.disks from map[string]string to map[string]struct{}
bi.Spec.Disks = make(map[string]struct{})
for name := range biV1beta2.Spec.Disks {
bi.Spec.Disks[name] = struct{}{}
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
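
Note why the manual loops follow the copier.Copy calls in ConvertTo/ConvertFrom above: copier matches fields by name, but spec.Disks changes its value type between versions (map[string]struct{} in v1beta1, map[string]string in v1beta2), so that field has to be rebuilt by hand. A runnable sketch with hypothetical specV1/specV2 types, assuming, as the conversion code above does, that copier tolerates the mismatched map field rather than returning an error:

package main

import (
	"fmt"

	"github.com/jinzhu/copier"
)

// specV1/specV2 are hypothetical, trimmed-down versions of the two specs.
type specV1 struct {
	Checksum string
	Disks    map[string]struct{}
}

type specV2 struct {
	Checksum string
	Disks    map[string]string
}

func main() {
	src := specV1{Checksum: "sha256:abc", Disks: map[string]struct{}{"disk-1": {}}}
	dst := specV2{}
	// Fields whose names and types line up (Checksum) are copied;
	// Disks is rebuilt manually because its value type differs.
	if err := copier.Copy(&dst, &src); err != nil {
		panic(err)
	}
	dst.Disks = make(map[string]string, len(src.Disks))
	for name := range src.Disks {
		dst.Disks[name] = ""
	}
	fmt.Println(dst.Checksum, dst.Disks) // sha256:abc map[disk-1:]
}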

View File

@@ -0,0 +1,72 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
const (
DataSourceTypeDownloadParameterURL = "url"
)
type BackingImageDataSourceType string
const (
BackingImageDataSourceTypeDownload = BackingImageDataSourceType("download")
BackingImageDataSourceTypeUpload = BackingImageDataSourceType("upload")
BackingImageDataSourceTypeExportFromVolume = BackingImageDataSourceType("export-from-volume")
)
// BackingImageDataSourceSpec defines the desired state of the Longhorn backing image data source
type BackingImageDataSourceSpec struct {
NodeID string `json:"nodeID"`
DiskUUID string `json:"diskUUID"`
DiskPath string `json:"diskPath"`
Checksum string `json:"checksum"`
SourceType BackingImageDataSourceType `json:"sourceType"`
Parameters map[string]string `json:"parameters"`
FileTransferred bool `json:"fileTransferred"`
}
// BackingImageDataSourceStatus defines the observed state of the Longhorn backing image data source
type BackingImageDataSourceStatus struct {
OwnerID string `json:"ownerID"`
RunningParameters map[string]string `json:"runningParameters"`
CurrentState BackingImageState `json:"currentState"`
Size int64 `json:"size"`
Progress int `json:"progress"`
Checksum string `json:"checksum"`
Message string `json:"message"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhbids
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 BackingImageDataSource is deprecated; use longhorn.io/v1beta2 BackingImageDataSource instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.currentState`,description="The current state of the pod used to provision the backing image file from source"
// +kubebuilder:printcolumn:name="SourceType",type=string,JSONPath=`.spec.sourceType`,description="The data source type"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.nodeID`,description="The node the backing image file will be prepared on"
// +kubebuilder:printcolumn:name="DiskUUID",type=string,JSONPath=`.spec.diskUUID`,description="The disk the backing image file will be prepared on"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// BackingImageDataSource is where Longhorn stores the backing image data source object.
type BackingImageDataSource struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackingImageDataSourceSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackingImageDataSourceStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackingImageDataSourceList is a list of BackingImageDataSources.
type BackingImageDataSourceList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackingImageDataSource `json:"items"`
}

View File

@@ -0,0 +1,87 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type BackingImageManagerState string
const (
BackingImageManagerStateError = BackingImageManagerState("error")
BackingImageManagerStateRunning = BackingImageManagerState("running")
BackingImageManagerStateStopped = BackingImageManagerState("stopped")
BackingImageManagerStateStarting = BackingImageManagerState("starting")
BackingImageManagerStateUnknown = BackingImageManagerState("unknown")
)
type BackingImageFileInfo struct {
Name string `json:"name"`
UUID string `json:"uuid"`
Size int64 `json:"size"`
State BackingImageState `json:"state"`
CurrentChecksum string `json:"currentChecksum"`
Message string `json:"message"`
SendingReference int `json:"sendingReference"`
SenderManagerAddress string `json:"senderManagerAddress"`
Progress int `json:"progress"`
// Deprecated: This field is useless now. The manager of backing image files doesn't care whether or how a file was downloaded.
URL string `json:"url"`
// Deprecated: This field is useless.
Directory string `json:"directory"`
// Deprecated: This field is renamed to `Progress`.
DownloadProgress int `json:"downloadProgress"`
}
// BackingImageManagerSpec defines the desired state of the Longhorn backing image manager
type BackingImageManagerSpec struct {
Image string `json:"image"`
NodeID string `json:"nodeID"`
DiskUUID string `json:"diskUUID"`
DiskPath string `json:"diskPath"`
BackingImages map[string]string `json:"backingImages"`
}
// BackingImageManagerStatus defines the observed state of the Longhorn backing image manager
type BackingImageManagerStatus struct {
OwnerID string `json:"ownerID"`
CurrentState BackingImageManagerState `json:"currentState"`
BackingImageFileMap map[string]BackingImageFileInfo `json:"backingImageFileMap"`
IP string `json:"ip"`
APIMinVersion int `json:"apiMinVersion"`
APIVersion int `json:"apiVersion"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhbim
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 BackingImageManager is deprecated; use longhorn.io/v1beta2 BackingImageManager instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.currentState`,description="The current state of the manager"
// +kubebuilder:printcolumn:name="Image",type=string,JSONPath=`.spec.image`,description="The image the manager pod will use"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.nodeID`,description="The node the manager is on"
// +kubebuilder:printcolumn:name="DiskUUID",type=string,JSONPath=`.spec.diskUUID`,description="The disk the manager is responsible for"
// +kubebuilder:printcolumn:name="DiskPath",type=string,JSONPath=`.spec.diskPath`,description="The disk path the manager is using"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// BackingImageManager is where Longhorn stores the backing image manager object.
type BackingImageManager struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackingImageManagerSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackingImageManagerStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackingImageManagerList is a list of BackingImageManagers.
type BackingImageManagerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackingImageManager `json:"items"`
}

View File

@@ -0,0 +1,98 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type BackupState string
const (
BackupStateNew = BackupState("")
BackupStatePending = BackupState("Pending")
BackupStateInProgress = BackupState("InProgress")
BackupStateCompleted = BackupState("Completed")
BackupStateError = BackupState("Error")
BackupStateUnknown = BackupState("Unknown")
)
// BackupSpec defines the desired state of the Longhorn backup
type BackupSpec struct {
// The time a sync of the remote backup was requested.
SyncRequestedAt metav1.Time `json:"syncRequestedAt"`
// The snapshot name.
SnapshotName string `json:"snapshotName"`
// The labels of snapshot backup.
Labels map[string]string `json:"labels"`
}
// BackupStatus defines the observed state of the Longhorn backup
type BackupStatus struct {
// The node ID on which the controller is responsible for reconciling this backup CR.
OwnerID string `json:"ownerID"`
// The backup creation state.
// Can be "", "Pending", "InProgress", "Completed", "Error", "Unknown".
State BackupState `json:"state"`
// The snapshot backup progress.
Progress int `json:"progress"`
// The address of the replica that runs snapshot backup.
ReplicaAddress string `json:"replicaAddress"`
// The error message when taking the snapshot backup.
Error string `json:"error,omitempty"`
// The snapshot backup URL.
URL string `json:"url"`
// The snapshot name.
SnapshotName string `json:"snapshotName"`
// The snapshot creation time.
SnapshotCreatedAt string `json:"snapshotCreatedAt"`
// The snapshot backup upload finished time.
BackupCreatedAt string `json:"backupCreatedAt"`
// The snapshot size.
Size string `json:"size"`
// The labels of snapshot backup.
Labels map[string]string `json:"labels"`
// The error messages when calling longhorn engine on listing or inspecting backups.
Messages map[string]string `json:"messages"`
// The volume name.
VolumeName string `json:"volumeName"`
// The volume size.
VolumeSize string `json:"volumeSize"`
// The volume creation time.
VolumeCreated string `json:"volumeCreated"`
// The volume's backing image name.
VolumeBackingImageName string `json:"volumeBackingImageName"`
// The last time that the backup was synced with the remote backup target.
LastSyncedAt metav1.Time `json:"lastSyncedAt"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhb
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Backup is deprecated; use longhorn.io/v1beta2 Backup instead"
// +kubebuilder:printcolumn:name="SnapshotName",type=string,JSONPath=`.status.snapshotName`,description="The snapshot name"
// +kubebuilder:printcolumn:name="SnapshotSize",type=string,JSONPath=`.status.size`,description="The snapshot size"
// +kubebuilder:printcolumn:name="SnapshotCreatedAt",type=string,JSONPath=`.status.snapshotCreatedAt`,description="The snapshot creation time"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.state`,description="The backup state"
// +kubebuilder:printcolumn:name="LastSyncedAt",type=string,JSONPath=`.status.lastSyncedAt`,description="The backup last synced time"
// Backup is where Longhorn stores the backup object.
type Backup struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackupSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackupStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackupList is a list of Backups.
type BackupList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Backup `json:"items"`
}

View File

@@ -0,0 +1,127 @@
package v1beta1
import (
"fmt"
"github.com/jinzhu/copier"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
const (
BackupTargetConditionTypeUnavailable = "Unavailable"
BackupTargetConditionReasonUnavailable = "Unavailable"
)
// BackupTargetSpec defines the desired state of the Longhorn backup target
type BackupTargetSpec struct {
// The backup target URL.
BackupTargetURL string `json:"backupTargetURL"`
// The backup target credential secret.
CredentialSecret string `json:"credentialSecret"`
// The interval at which the cluster syncs with the backup target.
PollInterval metav1.Duration `json:"pollInterval"`
// The time a sync with the remote backup target was requested.
SyncRequestedAt metav1.Time `json:"syncRequestedAt"`
}
// BackupTargetStatus defines the observed state of the Longhorn backup target
type BackupTargetStatus struct {
// The node ID on which the controller is responsible for reconciling this backup target CR.
OwnerID string `json:"ownerID"`
// Available indicates if the remote backup target is available or not.
Available bool `json:"available"`
// Records the reason why the backup target is unavailable.
Conditions map[string]Condition `json:"conditions"`
// The last time that the controller synced with the remote backup target.
LastSyncedAt metav1.Time `json:"lastSyncedAt"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhbt
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 BackupTarget is deprecated; use longhorn.io/v1beta2 BackupTarget instead"
// +kubebuilder:printcolumn:name="URL",type=string,JSONPath=`.spec.backupTargetURL`,description="The backup target URL"
// +kubebuilder:printcolumn:name="Credential",type=string,JSONPath=`.spec.credentialSecret`,description="The backup target credential secret"
// +kubebuilder:printcolumn:name="LastBackupAt",type=string,JSONPath=`.spec.pollInterval`,description="The backup target poll interval"
// +kubebuilder:printcolumn:name="Available",type=boolean,JSONPath=`.status.available`,description="Indicate whether the backup target is available or not"
// +kubebuilder:printcolumn:name="LastSyncedAt",type=string,JSONPath=`.status.lastSyncedAt`,description="The backup target last synced time"
// BackupTarget is where Longhorn stores the backup target object.
type BackupTarget struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackupTargetSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackupTargetStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackupTargetList is a list of BackupTargets.
type BackupTargetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackupTarget `json:"items"`
}
// ConvertTo converts from spoke version (v1beta1) to hub version (v1beta2)
func (bt *BackupTarget) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta2.BackupTarget:
btV1beta2 := dst.(*v1beta2.BackupTarget)
btV1beta2.ObjectMeta = bt.ObjectMeta
if err := copier.Copy(&btV1beta2.Spec, &bt.Spec); err != nil {
return err
}
if err := copier.Copy(&btV1beta2.Status, &bt.Status); err != nil {
return err
}
// Copy status.conditions from map to slice
dstConditions, err := copyConditionsFromMapToSlice(bt.Status.Conditions)
if err != nil {
return err
}
btV1beta2.Status.Conditions = dstConditions
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
// ConvertFrom converts from hub version (v1beta2) to spoke version (v1beta1)
func (bt *BackupTarget) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta2.BackupTarget:
btV1beta2 := src.(*v1beta2.BackupTarget)
bt.ObjectMeta = btV1beta2.ObjectMeta
if err := copier.Copy(&bt.Spec, &btV1beta2.Spec); err != nil {
return err
}
if err := copier.Copy(&bt.Status, &btV1beta2.Status); err != nil {
return err
}
// Copy status.conditions from slice to map
dstConditions, err := copyConditionFromSliceToMap(btV1beta2.Status.Conditions)
if err != nil {
return err
}
bt.Status.Conditions = dstConditions
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}

View File

@@ -0,0 +1,71 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// BackupVolumeSpec defines the desired state of the Longhorn backup volume
type BackupVolumeSpec struct {
// The time a sync of the remote backup volume was requested.
SyncRequestedAt metav1.Time `json:"syncRequestedAt"`
}
// BackupVolumeStatus defines the observed state of the Longhorn backup volume
type BackupVolumeStatus struct {
// The node ID on which the controller is responsible for reconciling this backup volume CR.
OwnerID string `json:"ownerID"`
// The backup volume config last modification time.
LastModificationTime metav1.Time `json:"lastModificationTime"`
// The backup volume size.
Size string `json:"size"`
// The backup volume labels.
Labels map[string]string `json:"labels"`
// The backup volume creation time.
CreatedAt string `json:"createdAt"`
// The latest volume backup name.
LastBackupName string `json:"lastBackupName"`
// The latest volume backup time.
LastBackupAt string `json:"lastBackupAt"`
// The backup volume block count.
DataStored string `json:"dataStored"`
// The error messages when calling longhorn engine on listing or inspecting backup volumes.
Messages map[string]string `json:"messages"`
// The backing image name.
BackingImageName string `json:"backingImageName"`
// The backing image checksum.
BackingImageChecksum string `json:"backingImageChecksum"`
// The last time that the backup volume was synced into the cluster.
LastSyncedAt metav1.Time `json:"lastSyncedAt"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhbv
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 BackupVolume is deprecated; use longhorn.io/v1beta2 BackupVolume instead"
// +kubebuilder:printcolumn:name="CreatedAt",type=string,JSONPath=`.status.createdAt`,description="The backup volume creation time"
// +kubebuilder:printcolumn:name="LastBackupName",type=string,JSONPath=`.status.lastBackupName`,description="The backup volume last backup name"
// +kubebuilder:printcolumn:name="LastBackupAt",type=string,JSONPath=`.status.lastBackupAt`,description="The backup volume last backup time"
// +kubebuilder:printcolumn:name="LastSyncedAt",type=string,JSONPath=`.status.lastSyncedAt`,description="The backup volume last synced time"
// BackupVolume is where Longhorn stores the backup volume object.
type BackupVolume struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec BackupVolumeSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status BackupVolumeStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// BackupVolumeList is a list of BackupVolumes.
type BackupVolumeList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BackupVolume `json:"items"`
}

View File

@@ -0,0 +1,31 @@
package v1beta1
import (
"github.com/jinzhu/copier"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
func copyConditionsFromMapToSlice(srcConditions map[string]Condition) ([]v1beta2.Condition, error) {
dstConditions := []v1beta2.Condition{}
for _, src := range srcConditions {
dst := v1beta2.Condition{}
if err := copier.Copy(&dst, &src); err != nil {
return nil, err
}
dstConditions = append(dstConditions, dst)
}
return dstConditions, nil
}
func copyConditionFromSliceToMap(srcConditions []v1beta2.Condition) (map[string]Condition, error) {
dstConditions := make(map[string]Condition, 0)
for _, src := range srcConditions {
dst := Condition{}
if err := copier.Copy(&dst, &src); err != nil {
return nil, err
}
dstConditions[dst.Type] = dst
}
return dstConditions, nil
}
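
A small usage sketch of the two helpers above, with the Condition types abbreviated to hypothetical local stand-ins (condV1/condV2). Converting map to slice discards the map key, which is recoverable because the key is Condition.Type; converting slice to map collapses duplicate types and drops ordering, so the round trip is lossless only when each condition type appears once.

package main

import "fmt"

// condV1/condV2 abbreviate the v1beta1 and v1beta2 Condition types.
type condV1 struct {
	Type   string
	Status string
}

type condV2 struct {
	Type   string
	Status string
}

// mapToSlice mirrors copyConditionsFromMapToSlice: the map key is dropped,
// and map iteration order (hence slice order) is nondeterministic.
func mapToSlice(src map[string]condV1) []condV2 {
	dst := make([]condV2, 0, len(src))
	for _, c := range src {
		dst = append(dst, condV2{Type: c.Type, Status: c.Status})
	}
	return dst
}

// sliceToMap mirrors copyConditionFromSliceToMap: the condition type becomes
// the key, so duplicate types collapse to a single entry.
func sliceToMap(src []condV2) map[string]condV1 {
	dst := make(map[string]condV1, len(src))
	for _, c := range src {
		dst[c.Type] = condV1{Type: c.Type, Status: c.Status}
	}
	return dst
}

func main() {
	src := map[string]condV1{
		"Ready":       {Type: "Ready", Status: "True"},
		"Schedulable": {Type: "Schedulable", Status: "False"},
	}
	roundTrip := sliceToMap(mapToSlice(src))
	fmt.Println(len(roundTrip), roundTrip["Ready"].Status) // 2 True
}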

View File

@@ -0,0 +1,5 @@
// +k8s:deepcopy-gen=package
// Package v1beta1 is the v1beta1 version of the API.
// +groupName=longhorn.io
package v1beta1
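
The +k8s:deepcopy-gen=package marker in doc.go tells code-generator (pinned to v0.32.1 in the script near the top of this diff) to emit DeepCopyInto/DeepCopy/DeepCopyObject methods for every type in the package, which is how these structs end up satisfying runtime.Object. Below is a hand-written sketch of the shape of that generated code, for a hypothetical Example type; the real output would land in zz_generated.deepcopy.go.

package main

import "fmt"

// Example is a hypothetical API type with a reference-typed field.
type Example struct {
	Labels map[string]string
}

// DeepCopyInto copies the receiver into out, allocating new maps so the
// copy shares no mutable state with the original.
func (in *Example) DeepCopyInto(out *Example) {
	*out = *in
	if in.Labels != nil {
		out.Labels = make(map[string]string, len(in.Labels))
		for k, v := range in.Labels {
			out.Labels[k] = v
		}
	}
}

// DeepCopy allocates a new Example and deep-copies the receiver into it.
func (in *Example) DeepCopy() *Example {
	if in == nil {
		return nil
	}
	out := new(Example)
	in.DeepCopyInto(out)
	return out
}

func main() {
	a := &Example{Labels: map[string]string{"app": "longhorn"}}
	b := a.DeepCopy()
	b.Labels["app"] = "changed"
	fmt.Println(a.Labels["app"], b.Labels["app"]) // longhorn changed
}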

View File

@@ -0,0 +1,135 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type ReplicaMode string
const (
ReplicaModeRW = ReplicaMode("RW")
ReplicaModeWO = ReplicaMode("WO")
ReplicaModeERR = ReplicaMode("ERR")
)
type EngineBackupStatus struct {
Progress int `json:"progress"`
BackupURL string `json:"backupURL,omitempty"`
Error string `json:"error,omitempty"`
SnapshotName string `json:"snapshotName"`
State string `json:"state"`
ReplicaAddress string `json:"replicaAddress"`
}
type RestoreStatus struct {
IsRestoring bool `json:"isRestoring"`
LastRestored string `json:"lastRestored"`
CurrentRestoringBackup string `json:"currentRestoringBackup"`
Progress int `json:"progress,omitempty"`
Error string `json:"error,omitempty"`
Filename string `json:"filename,omitempty"`
State string `json:"state"`
BackupURL string `json:"backupURL"`
}
type PurgeStatus struct {
Error string `json:"error"`
IsPurging bool `json:"isPurging"`
Progress int `json:"progress"`
State string `json:"state"`
}
type RebuildStatus struct {
Error string `json:"error"`
IsRebuilding bool `json:"isRebuilding"`
Progress int `json:"progress"`
State string `json:"state"`
FromReplicaAddress string `json:"fromReplicaAddress"`
}
type SnapshotCloneStatus struct {
IsCloning bool `json:"isCloning"`
Error string `json:"error"`
Progress int `json:"progress"`
State string `json:"state"`
FromReplicaAddress string `json:"fromReplicaAddress"`
SnapshotName string `json:"snapshotName"`
}
type SnapshotInfo struct {
Name string `json:"name"`
Parent string `json:"parent"`
Children map[string]bool `json:"children"`
Removed bool `json:"removed"`
UserCreated bool `json:"usercreated"`
Created string `json:"created"`
Size string `json:"size"`
Labels map[string]string `json:"labels"`
}
// EngineSpec defines the desired state of the Longhorn engine
type EngineSpec struct {
InstanceSpec `json:""`
Frontend VolumeFrontend `json:"frontend"`
ReplicaAddressMap map[string]string `json:"replicaAddressMap"`
UpgradedReplicaAddressMap map[string]string `json:"upgradedReplicaAddressMap"`
BackupVolume string `json:"backupVolume"`
RequestedBackupRestore string `json:"requestedBackupRestore"`
RequestedDataSource VolumeDataSource `json:"requestedDataSource"`
DisableFrontend bool `json:"disableFrontend"`
RevisionCounterDisabled bool `json:"revisionCounterDisabled"`
Active bool `json:"active"`
}
// EngineStatus defines the observed state of the Longhorn engine
type EngineStatus struct {
InstanceStatus `json:""`
CurrentSize int64 `json:"currentSize,string"`
CurrentReplicaAddressMap map[string]string `json:"currentReplicaAddressMap"`
ReplicaModeMap map[string]ReplicaMode `json:"replicaModeMap"`
Endpoint string `json:"endpoint"`
LastRestoredBackup string `json:"lastRestoredBackup"`
BackupStatus map[string]*EngineBackupStatus `json:"backupStatus"`
RestoreStatus map[string]*RestoreStatus `json:"restoreStatus"`
PurgeStatus map[string]*PurgeStatus `json:"purgeStatus"`
RebuildStatus map[string]*RebuildStatus `json:"rebuildStatus"`
CloneStatus map[string]*SnapshotCloneStatus `json:"cloneStatus"`
Snapshots map[string]*SnapshotInfo `json:"snapshots"`
SnapshotsError string `json:"snapshotsError"`
IsExpanding bool `json:"isExpanding"`
LastExpansionError string `json:"lastExpansionError"`
LastExpansionFailedAt string `json:"lastExpansionFailedAt"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhe
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Engine is deprecated; use longhorn.io/v1beta2 Engine instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.currentState`,description="The current state of the engine"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.nodeID`,description="The node that the engine is on"
// +kubebuilder:printcolumn:name="InstanceManager",type=string,JSONPath=`.status.instanceManagerName`,description="The instance manager of the engine"
// +kubebuilder:printcolumn:name="Image",type=string,JSONPath=`.status.currentImage`,description="The current image of the engine"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// Engine is where Longhorn stores the engine object.
type Engine struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec EngineSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status EngineStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// EngineList is a list of Engines.
type EngineList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Engine `json:"items"`
}

View File

@@ -0,0 +1,147 @@
package v1beta1
import (
"fmt"
"github.com/jinzhu/copier"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
type EngineImageState string
const (
EngineImageStateDeploying = EngineImageState("deploying")
EngineImageStateDeployed = EngineImageState("deployed")
EngineImageStateIncompatible = EngineImageState("incompatible")
EngineImageStateError = EngineImageState("error")
)
const (
EngineImageConditionTypeReady = "ready"
EngineImageConditionTypeReadyReasonDaemonSet = "daemonSet"
EngineImageConditionTypeReadyReasonBinary = "binary"
)
type EngineVersionDetails struct {
Version string `json:"version"`
GitCommit string `json:"gitCommit"`
BuildDate string `json:"buildDate"`
CLIAPIVersion int `json:"cliAPIVersion"`
CLIAPIMinVersion int `json:"cliAPIMinVersion"`
ControllerAPIVersion int `json:"controllerAPIVersion"`
ControllerAPIMinVersion int `json:"controllerAPIMinVersion"`
DataFormatVersion int `json:"dataFormatVersion"`
DataFormatMinVersion int `json:"dataFormatMinVersion"`
}
// EngineImageSpec defines the desired state of the Longhorn engine image
type EngineImageSpec struct {
Image string `json:"image"`
}
// EngineImageStatus defines the observed state of the Longhorn engine image
type EngineImageStatus struct {
OwnerID string `json:"ownerID"`
State EngineImageState `json:"state"`
RefCount int `json:"refCount"`
NoRefSince string `json:"noRefSince"`
Conditions map[string]Condition `json:"conditions"`
NodeDeploymentMap map[string]bool `json:"nodeDeploymentMap"`
EngineVersionDetails `json:""`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhei
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 EngineImage is deprecated; use longhorn.io/v1beta2 EngineImage instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.state`,description="State of the engine image"
// +kubebuilder:printcolumn:name="Image",type=string,JSONPath=`.spec.image`,description="The Longhorn engine image"
// +kubebuilder:printcolumn:name="RefCount",type=integer,JSONPath=`.status.refCount`,description="Number of resources using the engine image"
// +kubebuilder:printcolumn:name="BuildDate",type=date,JSONPath=`.status.buildDate`,description="The build date of the engine image"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// EngineImage is where Longhorn stores the engine image object.
type EngineImage struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec EngineImageSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status EngineImageStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// EngineImageList is a list of EngineImages.
type EngineImageList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EngineImage `json:"items"`
}
// ConvertTo converts from spoke version (v1beta1) to hub version (v1beta2)
func (ei *EngineImage) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta2.EngineImage:
eiV1beta2 := dst.(*v1beta2.EngineImage)
eiV1beta2.ObjectMeta = ei.ObjectMeta
if err := copier.Copy(&eiV1beta2.Spec, &ei.Spec); err != nil {
return err
}
if err := copier.Copy(&eiV1beta2.Status, &ei.Status); err != nil {
return err
}
// Copy status.conditions from map to slice
dstConditions, err := copyConditionsFromMapToSlice(ei.Status.Conditions)
if err != nil {
return err
}
eiV1beta2.Status.Conditions = dstConditions
// Copy status.EngineVersionDetails
return copier.Copy(&eiV1beta2.Status.EngineVersionDetails, &ei.Status.EngineVersionDetails)
default:
return fmt.Errorf("unsupported type %v", t)
}
}
// ConvertFrom converts from hub version (v1beta2) to spoke version (v1beta1)
func (ei *EngineImage) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta2.EngineImage:
eiV1beta2 := src.(*v1beta2.EngineImage)
ei.ObjectMeta = eiV1beta2.ObjectMeta
if err := copier.Copy(&ei.Spec, &eiV1beta2.Spec); err != nil {
return err
}
if err := copier.Copy(&ei.Status, &eiV1beta2.Status); err != nil {
return err
}
// Copy status.conditions from slice to map
dstConditions, err := copyConditionFromSliceToMap(eiV1beta2.Status.Conditions)
if err != nil {
return err
}
ei.Status.Conditions = dstConditions
// Copy status.EngineVersionDetails
return copier.Copy(&ei.Status.EngineVersionDetails, &eiV1beta2.Status.EngineVersionDetails)
default:
return fmt.Errorf("unsupported type %v", t)
}
}

View File

@@ -0,0 +1,135 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type InstanceType string
const (
InstanceTypeEngine = InstanceType("engine")
InstanceTypeReplica = InstanceType("replica")
)
type InstanceManagerState string
const (
InstanceManagerStateError = InstanceManagerState("error")
InstanceManagerStateRunning = InstanceManagerState("running")
InstanceManagerStateStopped = InstanceManagerState("stopped")
InstanceManagerStateStarting = InstanceManagerState("starting")
InstanceManagerStateUnknown = InstanceManagerState("unknown")
)
type InstanceManagerType string
const (
InstanceManagerTypeEngine = InstanceManagerType("engine")
InstanceManagerTypeReplica = InstanceManagerType("replica")
)
type InstanceProcess struct {
Spec InstanceProcessSpec `json:"spec"`
Status InstanceProcessStatus `json:"status"`
}
type InstanceProcessSpec struct {
Name string `json:"name"`
}
type InstanceState string
const (
InstanceStateRunning = InstanceState("running")
InstanceStateStopped = InstanceState("stopped")
InstanceStateError = InstanceState("error")
InstanceStateStarting = InstanceState("starting")
InstanceStateStopping = InstanceState("stopping")
InstanceStateUnknown = InstanceState("unknown")
)
type InstanceSpec struct {
VolumeName string `json:"volumeName"`
VolumeSize int64 `json:"volumeSize,string"`
NodeID string `json:"nodeID"`
EngineImage string `json:"engineImage"`
DesireState InstanceState `json:"desireState"`
LogRequested bool `json:"logRequested"`
SalvageRequested bool `json:"salvageRequested"`
}
type InstanceStatus struct {
OwnerID string `json:"ownerID"`
InstanceManagerName string `json:"instanceManagerName"`
CurrentState InstanceState `json:"currentState"`
CurrentImage string `json:"currentImage"`
IP string `json:"ip"`
Port int `json:"port"`
Started bool `json:"started"`
LogFetched bool `json:"logFetched"`
SalvageExecuted bool `json:"salvageExecuted"`
}
type InstanceProcessStatus struct {
Endpoint string `json:"endpoint"`
ErrorMsg string `json:"errorMsg"`
Listen string `json:"listen"`
PortEnd int32 `json:"portEnd"`
PortStart int32 `json:"portStart"`
State InstanceState `json:"state"`
Type InstanceType `json:"type"`
ResourceVersion int64 `json:"resourceVersion"`
}
// InstanceManagerSpec defines the desired state of the Longhorn instance manager
type InstanceManagerSpec struct {
Image string `json:"image"`
NodeID string `json:"nodeID"`
Type InstanceManagerType `json:"type"`
// Deprecated: This field is useless.
// +optional
EngineImage string `json:"engineImage"`
}
// InstanceManagerStatus defines the observed state of the Longhorn instance manager
type InstanceManagerStatus struct {
OwnerID string `json:"ownerID"`
CurrentState InstanceManagerState `json:"currentState"`
Instances map[string]InstanceProcess `json:"instances"`
IP string `json:"ip"`
APIMinVersion int `json:"apiMinVersion"`
APIVersion int `json:"apiVersion"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhim
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 InstanceManager is deprecated; use longhorn.io/v1beta2 InstanceManager instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.currentState`,description="The state of the instance manager"
// +kubebuilder:printcolumn:name="Type",type=string,JSONPath=`.spec.type`,description="The type of the instance manager (engine or replica)"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.nodeID`,description="The node that the instance manager is running on"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// InstanceManager is where Longhorn stores the instance manager object.
type InstanceManager struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec InstanceManagerSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status InstanceManagerStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// InstanceManagerList is a list of InstanceManagers.
type InstanceManagerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []InstanceManager `json:"items"`
}

View File

@@ -0,0 +1,25 @@
package v1beta1
type ConditionStatus string
const (
ConditionStatusTrue ConditionStatus = "True"
ConditionStatusFalse ConditionStatus = "False"
ConditionStatusUnknown ConditionStatus = "Unknown"
)
type Condition struct {
// Type is the type of the condition.
Type string `json:"type"`
// Status is the status of the condition.
// Can be True, False, Unknown.
Status ConditionStatus `json:"status"`
// Last time we probed the condition.
LastProbeTime string `json:"lastProbeTime"`
// Last time the condition transitioned from one status to another.
LastTransitionTime string `json:"lastTransitionTime"`
// Unique, one-word, CamelCase reason for the condition's last transition.
Reason string `json:"reason"`
// Human-readable message indicating details about last transition.
Message string `json:"message"`
}

View File

@@ -0,0 +1,193 @@
package v1beta1
import (
"fmt"
"github.com/jinzhu/copier"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
const (
NodeConditionTypeReady = "Ready"
NodeConditionTypeMountPropagation = "MountPropagation"
NodeConditionTypeSchedulable = "Schedulable"
)
const (
NodeConditionReasonManagerPodDown = "ManagerPodDown"
NodeConditionReasonManagerPodMissing = "ManagerPodMissing"
NodeConditionReasonKubernetesNodeGone = "KubernetesNodeGone"
NodeConditionReasonKubernetesNodeNotReady = "KubernetesNodeNotReady"
NodeConditionReasonKubernetesNodePressure = "KubernetesNodePressure"
NodeConditionReasonUnknownNodeConditionTrue = "UnknownNodeConditionTrue"
NodeConditionReasonNoMountPropagationSupport = "NoMountPropagationSupport"
NodeConditionReasonKubernetesNodeCordoned = "KubernetesNodeCordoned"
)
const (
DiskConditionTypeSchedulable = "Schedulable"
DiskConditionTypeReady = "Ready"
)
const (
DiskConditionReasonDiskPressure = "DiskPressure"
DiskConditionReasonDiskFilesystemChanged = "DiskFilesystemChanged"
DiskConditionReasonNoDiskInfo = "NoDiskInfo"
DiskConditionReasonDiskNotReady = "DiskNotReady"
)
type DiskSpec struct {
Path string `json:"path"`
AllowScheduling bool `json:"allowScheduling"`
EvictionRequested bool `json:"evictionRequested"`
StorageReserved int64 `json:"storageReserved"`
Tags []string `json:"tags"`
}
type DiskStatus struct {
Conditions map[string]Condition `json:"conditions"`
StorageAvailable int64 `json:"storageAvailable"`
StorageScheduled int64 `json:"storageScheduled"`
StorageMaximum int64 `json:"storageMaximum"`
ScheduledReplica map[string]int64 `json:"scheduledReplica"`
DiskUUID string `json:"diskUUID"`
}
// NodeSpec defines the desired state of the Longhorn node
type NodeSpec struct {
Name string `json:"name"`
Disks map[string]DiskSpec `json:"disks"`
AllowScheduling bool `json:"allowScheduling"`
EvictionRequested bool `json:"evictionRequested"`
Tags []string `json:"tags"`
EngineManagerCPURequest int `json:"engineManagerCPURequest"`
ReplicaManagerCPURequest int `json:"replicaManagerCPURequest"`
}
// NodeStatus defines the observed state of the Longhorn node
type NodeStatus struct {
Conditions map[string]Condition `json:"conditions"`
DiskStatus map[string]*DiskStatus `json:"diskStatus"`
Region string `json:"region"`
Zone string `json:"zone"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhn
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Node is deprecated; use longhorn.io/v1beta2 Node instead"
// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=`.status.conditions['Ready']['status']`,description="Indicate whether the node is ready"
// +kubebuilder:printcolumn:name="AllowScheduling",type=boolean,JSONPath=`.spec.allowScheduling`,description="Indicate whether the user disabled/enabled replica scheduling for the node"
// +kubebuilder:printcolumn:name="Schedulable",type=string,JSONPath=`.status.conditions['Schedulable']['status']`,description="Indicate whether Longhorn can schedule replicas on the node"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// Node is where Longhorn stores the Longhorn node object.
type Node struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec NodeSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status NodeStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// NodeList is a list of Nodes.
type NodeList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Node `json:"items"`
}
// ConvertTo converts from spoke version (v1beta1) to hub version (v1beta2)
func (n *Node) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta2.Node:
nV1beta2 := dst.(*v1beta2.Node)
nV1beta2.ObjectMeta = n.ObjectMeta
if err := copier.Copy(&nV1beta2.Spec, &n.Spec); err != nil {
return err
}
if err := copier.Copy(&nV1beta2.Status, &n.Status); err != nil {
return err
}
// Copy status.conditions from map to slice
dstConditions, err := copyConditionsFromMapToSlice(n.Status.Conditions)
if err != nil {
return err
}
nV1beta2.Status.Conditions = dstConditions
// Copy status.diskStatus.conditioions from map to slice
dstDiskStatus := make(map[string]*v1beta2.DiskStatus)
for name, from := range n.Status.DiskStatus {
to := &v1beta2.DiskStatus{}
if err := copier.Copy(to, from); err != nil {
return err
}
conditions, err := copyConditionsFromMapToSlice(from.Conditions)
if err != nil {
return err
}
to.Conditions = conditions
dstDiskStatus[name] = to
}
nV1beta2.Status.DiskStatus = dstDiskStatus
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
// ConvertFrom converts from hub version (v1beta2) to spoke version (v1beta1)
func (n *Node) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta2.Node:
nV1beta2 := src.(*v1beta2.Node)
n.ObjectMeta = nV1beta2.ObjectMeta
if err := copier.Copy(&n.Spec, &nV1beta2.Spec); err != nil {
return err
}
if err := copier.Copy(&n.Status, &nV1beta2.Status); err != nil {
return err
}
// Copy status.conditions from slice to map
dstConditions, err := copyConditionFromSliceToMap(nV1beta2.Status.Conditions)
if err != nil {
return err
}
n.Status.Conditions = dstConditions
// Copy status.diskStatus.conditions from slice to map
dstDiskStatus := make(map[string]*DiskStatus)
for name, from := range nV1beta2.Status.DiskStatus {
to := &DiskStatus{}
if err := copier.Copy(to, from); err != nil {
return err
}
conditions, err := copyConditionFromSliceToMap(from.Conditions)
if err != nil {
return err
}
to.Conditions = conditions
dstDiskStatus[name] = to
}
n.Status.DiskStatus = dstDiskStatus
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
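
Both converters above delegate the condition handling to shared helpers. The sketch below is a plausible shape for copyConditionsFromMapToSlice; the real implementation lives elsewhere in this package, and the sketch assumes the map key is the condition type and that v1beta2.Condition mirrors the remaining v1beta1 Condition fields (the slice-to-map counterpart simply inverts the same mapping):

// Hypothetical sketch of copyConditionsFromMapToSlice; not part of this diff.
// It assumes the map key is the condition type and relies on copier for the
// remaining, structurally identical fields.
func copyConditionsFromMapToSlice(conditions map[string]Condition) ([]v1beta2.Condition, error) {
	dstConditions := []v1beta2.Condition{}
	for conditionType, condition := range conditions {
		dstCondition := v1beta2.Condition{}
		if err := copier.Copy(&dstCondition, &condition); err != nil {
			return nil, err
		}
		// The map key becomes the Type field of the slice entry.
		dstCondition.Type = conditionType
		dstConditions = append(dstConditions, dstCondition)
	}
	return dstConditions, nil
}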


@ -0,0 +1,79 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type RecurringJobType string
const (
RecurringJobTypeSnapshot = RecurringJobType("snapshot")
RecurringJobTypeBackup = RecurringJobType("backup")
RecurringJobGroupDefault = "default"
)
type VolumeRecurringJob struct {
Name string `json:"name"`
IsGroup bool `json:"isGroup"`
}
// RecurringJobSpec defines the desired state of the Longhorn recurring job
type RecurringJobSpec struct {
// The recurring job name.
Name string `json:"name"`
// The recurring job group.
Groups []string `json:"groups,omitempty"`
// The recurring job type.
// Can be "snapshot" or "backup".
Task RecurringJobType `json:"task"`
// The cron setting.
Cron string `json:"cron"`
// The retain count of the snapshot/backup.
Retain int `json:"retain"`
// The concurrency of taking the snapshot/backup.
Concurrency int `json:"concurrency"`
// The label of the snapshot/backup.
Labels map[string]string `json:"labels,omitempty"`
}
// RecurringJobStatus defines the observed state of the Longhorn recurring job
type RecurringJobStatus struct {
// The owner ID that is responsible for reconciling this recurring job CR.
OwnerID string `json:"ownerID"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhrj
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 RecurringJob is deprecated; use longhorn.io/v1beta2 RecurringJob instead"
// +kubebuilder:printcolumn:name="Groups",type=string,JSONPath=`.spec.groups`,description="Sets groupings to the jobs. When set to \"default\" group will be added to the volume label when no other job label exist in volume"
// +kubebuilder:printcolumn:name="Task",type=string,JSONPath=`.spec.task`,description="Should be one of \"backup\" or \"snapshot\""
// +kubebuilder:printcolumn:name="Cron",type=string,JSONPath=`.spec.cron`,description="The cron expression represents recurring job scheduling"
// +kubebuilder:printcolumn:name="Retain",type=integer,JSONPath=`.spec.retain`,description="The number of snapshots/backups to keep for the volume"
// +kubebuilder:printcolumn:name="Concurrency",type=integer,JSONPath=`.spec.concurrency`,description="The concurrent job to run by each cron job"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// +kubebuilder:printcolumn:name="Labels",type=string,JSONPath=`.spec.labels`,description="Specify the labels"
// RecurringJob is where Longhorn stores the recurring job object.
type RecurringJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec RecurringJobSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status RecurringJobStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// RecurringJobList is a list of RecurringJobs.
type RecurringJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RecurringJob `json:"items"`
}
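
Putting these fields together, a populated RecurringJob might look like the following sketch; every value here (name, namespace, schedule, counts, labels) is illustrative rather than taken from this diff:

// Sketch of a v1beta1 RecurringJob; all values are illustrative.
job := &RecurringJob{
	ObjectMeta: metav1.ObjectMeta{Name: "nightly-backup", Namespace: "longhorn-system"},
	Spec: RecurringJobSpec{
		Name:        "nightly-backup",
		Groups:      []string{RecurringJobGroupDefault},
		Task:        RecurringJobTypeBackup,
		Cron:        "0 2 * * *", // every day at 02:00
		Retain:      7,           // keep at most 7 backups
		Concurrency: 2,           // run up to 2 backup jobs in parallel
		Labels:      map[string]string{"interval": "daily"},
	},
}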


@ -0,0 +1,57 @@
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn"
)
var (
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
AddToScheme = SchemeBuilder.AddToScheme
)
var SchemeGroupVersion = schema.GroupVersion{Group: longhorn.GroupName, Version: "v1beta1"}
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&Volume{},
&VolumeList{},
&Engine{},
&EngineList{},
&Replica{},
&ReplicaList{},
&Setting{},
&SettingList{},
&EngineImage{},
&EngineImageList{},
&Node{},
&NodeList{},
&InstanceManager{},
&InstanceManagerList{},
&ShareManager{},
&ShareManagerList{},
&BackingImage{},
&BackingImageList{},
&BackingImageManager{},
&BackingImageManagerList{},
&BackingImageDataSource{},
&BackingImageDataSourceList{},
&BackupTarget{},
&BackupTargetList{},
&BackupVolume{},
&BackupVolumeList{},
&Backup{},
&BackupList{},
&RecurringJob{},
&RecurringJobList{},
)
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil
}
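
Callers consume this registration through AddToScheme. The function below is a hypothetical usage sketch (newLonghornClient is not part of this diff) and assumes the extra controller-runtime imports ctrl "sigs.k8s.io/controller-runtime" and "sigs.k8s.io/controller-runtime/pkg/client":

// Hypothetical sketch: build a client whose scheme knows the v1beta1 types.
func newLonghornClient() (client.Client, error) {
	scheme := runtime.NewScheme()
	if err := AddToScheme(scheme); err != nil {
		return nil, err
	}
	// ctrl.GetConfigOrDie resolves the kubeconfig or in-cluster config.
	return client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
}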


@ -0,0 +1,66 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// ReplicaSpec defines the desired state of the Longhorn replica
type ReplicaSpec struct {
InstanceSpec `json:""`
EngineName string `json:"engineName"`
HealthyAt string `json:"healthyAt"`
FailedAt string `json:"failedAt"`
DiskID string `json:"diskID"`
DiskPath string `json:"diskPath"`
DataDirectoryName string `json:"dataDirectoryName"`
BackingImage string `json:"backingImage"`
Active bool `json:"active"`
HardNodeAffinity string `json:"hardNodeAffinity"`
RevisionCounterDisabled bool `json:"revisionCounterDisabled"`
RebuildRetryCount int `json:"rebuildRetryCount"`
// Deprecated
DataPath string `json:"dataPath"`
// Deprecated. Renamed to BackingImage.
BaseImage string `json:"baseImage"`
}
// ReplicaStatus defines the observed state of the Longhorn replica
type ReplicaStatus struct {
InstanceStatus `json:""`
EvictionRequested bool `json:"evictionRequested"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhr
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Replica is deprecated; use longhorn.io/v1beta2 Replica instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.currentState`,description="The current state of the replica"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.spec.nodeID`,description="The node that the replica is on"
// +kubebuilder:printcolumn:name="Disk",type=string,JSONPath=`.spec.diskID`,description="The disk that the replica is on"
// +kubebuilder:printcolumn:name="InstanceManager",type=string,JSONPath=`.status.instanceManagerName`,description="The instance manager of the replica"
// +kubebuilder:printcolumn:name="Image",type=string,JSONPath=`.status.currentImage`,description="The current image of the replica"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// Replica is where Longhorn stores the replica object.
type Replica struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec ReplicaSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status ReplicaStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ReplicaList is a list of Replicas.
type ReplicaList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Replica `json:"items"`
}
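
HealthyAt and FailedAt are timestamp strings in which an empty value means "never". A predicate built on that convention might look like the following; the helper is a hypothetical illustration, not part of this diff:

// Hypothetical helper: a replica that has been marked healthy and has not
// failed since is considered usable.
func replicaHealthy(r *Replica) bool {
	return r.Spec.HealthyAt != "" && r.Spec.FailedAt == ""
}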


@ -0,0 +1,30 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhs
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Setting is deprecated; use longhorn.io/v1beta2 Setting instead"
// +kubebuilder:printcolumn:name="Value",type=string,JSONPath=`.value`,description="The value of the setting"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// Setting is where Longhorn stores the setting object.
type Setting struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Value string `json:"value"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// SettingList is a list of Settings.
type SettingList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Setting `json:"items"`
}
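
Unlike the other resources in this package, Setting has no spec/status split; the value sits directly on the object. A minimal sketch, with an illustrative setting name and value:

// Sketch: a Setting stores its value at the top level. The name and value
// below are illustrative.
s := &Setting{
	ObjectMeta: metav1.ObjectMeta{Name: "backup-target"},
	Value:      "s3://backup-bucket@us-east-1/",
}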


@ -0,0 +1,59 @@
package v1beta1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
type ShareManagerState string
const (
ShareManagerStateUnknown = ShareManagerState("unknown")
ShareManagerStateStarting = ShareManagerState("starting")
ShareManagerStateRunning = ShareManagerState("running")
ShareManagerStateStopping = ShareManagerState("stopping")
ShareManagerStateStopped = ShareManagerState("stopped")
ShareManagerStateError = ShareManagerState("error")
)
// ShareManagerSpec defines the desired state of the Longhorn share manager
type ShareManagerSpec struct {
Image string `json:"image"`
}
// ShareManagerStatus defines the observed state of the Longhorn share manager
type ShareManagerStatus struct {
OwnerID string `json:"ownerID"`
State ShareManagerState `json:"state"`
Endpoint string `json:"endpoint"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhsm
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 ShareManager is deprecated; use longhorn.io/v1beta2 ShareManager instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.state`,description="The state of the share manager"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.status.ownerID`,description="The node that the share manager is owned by"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// ShareManager is where Longhorn stores the share manager object.
type ShareManager struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec ShareManagerSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status ShareManagerStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ShareManagerList is a list of ShareManagers.
type ShareManagerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ShareManager `json:"items"`
}
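
Given the state constants above, a consumer would typically treat a share manager's export as usable only in the running state with a published endpoint. The following predicate is a hypothetical illustration, not part of this diff:

// Hypothetical readiness check for a share manager.
func shareManagerReady(sm *ShareManager) bool {
	return sm.Status.State == ShareManagerStateRunning && sm.Status.Endpoint != ""
}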


@ -0,0 +1,296 @@
package v1beta1
import (
"fmt"
"github.com/jinzhu/copier"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/longhorn/longhorn-manager/k8s/pkg/apis/longhorn/v1beta2"
)
type VolumeState string
const (
VolumeStateCreating = VolumeState("creating")
VolumeStateAttached = VolumeState("attached")
VolumeStateDetached = VolumeState("detached")
VolumeStateAttaching = VolumeState("attaching")
VolumeStateDetaching = VolumeState("detaching")
VolumeStateDeleting = VolumeState("deleting")
)
type VolumeRobustness string
const (
VolumeRobustnessHealthy = VolumeRobustness("healthy") // during attached
VolumeRobustnessDegraded = VolumeRobustness("degraded") // during attached
VolumeRobustnessFaulted = VolumeRobustness("faulted") // during detached
VolumeRobustnessUnknown = VolumeRobustness("unknown")
)
type VolumeFrontend string
const (
VolumeFrontendBlockDev = VolumeFrontend("blockdev")
VolumeFrontendISCSI = VolumeFrontend("iscsi")
VolumeFrontendEmpty = VolumeFrontend("")
)
type VolumeDataSource string
type VolumeDataSourceType string
const (
VolumeDataSourceTypeBackup = VolumeDataSourceType("backup") // Planning to move the FromBackup field into the DataSource field
VolumeDataSourceTypeSnapshot = VolumeDataSourceType("snapshot")
VolumeDataSourceTypeVolume = VolumeDataSourceType("volume")
)
type DataLocality string
const (
DataLocalityDisabled = DataLocality("disabled")
DataLocalityBestEffort = DataLocality("best-effort")
)
type AccessMode string
const (
AccessModeReadWriteOnce = AccessMode("rwo")
AccessModeReadWriteMany = AccessMode("rwx")
)
type ReplicaAutoBalance string
const (
ReplicaAutoBalanceIgnored = ReplicaAutoBalance("ignored")
ReplicaAutoBalanceDisabled = ReplicaAutoBalance("disabled")
ReplicaAutoBalanceLeastEffort = ReplicaAutoBalance("least-effort")
ReplicaAutoBalanceBestEffort = ReplicaAutoBalance("best-effort")
)
type VolumeCloneState string
const (
VolumeCloneStateEmpty = VolumeCloneState("")
VolumeCloneStateInitiated = VolumeCloneState("initiated")
VolumeCloneStateCompleted = VolumeCloneState("completed")
VolumeCloneStateFailed = VolumeCloneState("failed")
)
type VolumeCloneStatus struct {
SourceVolume string `json:"sourceVolume"`
Snapshot string `json:"snapshot"`
State VolumeCloneState `json:"state"`
}
const (
VolumeConditionTypeScheduled = "scheduled"
VolumeConditionTypeRestore = "restore"
VolumeConditionTypeTooManySnapshots = "toomanysnapshots"
)
const (
VolumeConditionReasonReplicaSchedulingFailure = "ReplicaSchedulingFailure"
VolumeConditionReasonLocalReplicaSchedulingFailure = "LocalReplicaSchedulingFailure"
VolumeConditionReasonRestoreInProgress = "RestoreInProgress"
VolumeConditionReasonRestoreFailure = "RestoreFailure"
VolumeConditionReasonTooManySnapshots = "TooManySnapshots"
)
// Deprecated: This field is no longer used.
type VolumeRecurringJobSpec struct {
Name string `json:"name"`
Groups []string `json:"groups,omitempty"`
Task RecurringJobType `json:"task"`
Cron string `json:"cron"`
Retain int `json:"retain"`
Concurrency int `json:"concurrency"`
Labels map[string]string `json:"labels,omitempty"`
}
type KubernetesStatus struct {
PVName string `json:"pvName"`
PVStatus string `json:"pvStatus"`
// determines whether the PVC/Namespace is historical
Namespace string `json:"namespace"`
PVCName string `json:"pvcName"`
LastPVCRefAt string `json:"lastPVCRefAt"`
// determines whether the Pod/Workload is historical
WorkloadsStatus []WorkloadStatus `json:"workloadsStatus"`
LastPodRefAt string `json:"lastPodRefAt"`
}
type WorkloadStatus struct {
PodName string `json:"podName"`
PodStatus string `json:"podStatus"`
WorkloadName string `json:"workloadName"`
WorkloadType string `json:"workloadType"`
}
// VolumeSpec defines the desired state of the Longhorn volume
type VolumeSpec struct {
Size int64 `json:"size,string"`
Frontend VolumeFrontend `json:"frontend"`
FromBackup string `json:"fromBackup"`
DataSource VolumeDataSource `json:"dataSource"`
DataLocality DataLocality `json:"dataLocality"`
StaleReplicaTimeout int `json:"staleReplicaTimeout"`
NodeID string `json:"nodeID"`
MigrationNodeID string `json:"migrationNodeID"`
EngineImage string `json:"engineImage"`
BackingImage string `json:"backingImage"`
Standby bool `json:"Standby"`
DiskSelector []string `json:"diskSelector"`
NodeSelector []string `json:"nodeSelector"`
DisableFrontend bool `json:"disableFrontend"`
RevisionCounterDisabled bool `json:"revisionCounterDisabled"`
LastAttachedBy string `json:"lastAttachedBy"`
AccessMode AccessMode `json:"accessMode"`
Migratable bool `json:"migratable"`
Encrypted bool `json:"encrypted"`
NumberOfReplicas int `json:"numberOfReplicas"`
ReplicaAutoBalance ReplicaAutoBalance `json:"replicaAutoBalance"`
// Deprecated. Renamed to BackingImage.
BaseImage string `json:"baseImage"`
// Deprecated. Replaced by a separate resource named "RecurringJob"
RecurringJobs []VolumeRecurringJobSpec `json:"recurringJobs,omitempty"`
}
// VolumeStatus defines the observed state of the Longhorn volume
type VolumeStatus struct {
OwnerID string `json:"ownerID"`
State VolumeState `json:"state"`
Robustness VolumeRobustness `json:"robustness"`
CurrentNodeID string `json:"currentNodeID"`
CurrentImage string `json:"currentImage"`
KubernetesStatus KubernetesStatus `json:"kubernetesStatus"`
Conditions map[string]Condition `json:"conditions"`
LastBackup string `json:"lastBackup"`
LastBackupAt string `json:"lastBackupAt"`
PendingNodeID string `json:"pendingNodeID"`
FrontendDisabled bool `json:"frontendDisabled"`
RestoreRequired bool `json:"restoreRequired"`
RestoreInitiated bool `json:"restoreInitiated"`
CloneStatus VolumeCloneStatus `json:"cloneStatus"`
RemountRequestedAt string `json:"remountRequestedAt"`
ExpansionRequired bool `json:"expansionRequired"`
IsStandby bool `json:"isStandby"`
ActualSize int64 `json:"actualSize"`
LastDegradedAt string `json:"lastDegradedAt"`
ShareEndpoint string `json:"shareEndpoint"`
ShareState ShareManagerState `json:"shareState"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=lhv
// +kubebuilder:subresource:status
// +kubebuilder:unservedversion
// +kubebuilder:deprecatedversion
// +kubebuilder:deprecatedversion:warning="longhorn.io/v1beta1 Volume is deprecated; use longhorn.io/v1beta2 Volume instead"
// +kubebuilder:printcolumn:name="State",type=string,JSONPath=`.status.state`,description="The state of the volume"
// +kubebuilder:printcolumn:name="Robustness",type=string,JSONPath=`.status.robustness`,description="The robustness of the volume"
// +kubebuilder:printcolumn:name="Scheduled",type=string,JSONPath=`.status.conditions['scheduled']['status']`,description="The scheduled condition of the volume"
// +kubebuilder:printcolumn:name="Size",type=string,JSONPath=`.spec.size`,description="The size of the volume"
// +kubebuilder:printcolumn:name="Node",type=string,JSONPath=`.status.currentNodeID`,description="The node that the volume is currently attaching to"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// Volume is where Longhorn stores the volume object.
type Volume struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Spec VolumeSpec `json:"spec,omitempty"`
// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
Status VolumeStatus `json:"status,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// VolumeList is a list of Volumes.
type VolumeList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Volume `json:"items"`
}
// ConvertTo converts from spoke version (v1beta1) to hub version (v1beta2)
func (v *Volume) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta2.Volume:
vV1beta2 := dst.(*v1beta2.Volume)
vV1beta2.ObjectMeta = v.ObjectMeta
if err := copier.Copy(&vV1beta2.Spec, &v.Spec); err != nil {
return err
}
if err := copier.Copy(&vV1beta2.Status, &v.Status); err != nil {
return err
}
if v.Spec.DataLocality == "" {
vV1beta2.Spec.DataLocality = v1beta2.DataLocality(DataLocalityDisabled)
}
// Copy status.conditions from map to slice
dstConditions, err := copyConditionsFromMapToSlice(v.Status.Conditions)
if err != nil {
return err
}
vV1beta2.Status.Conditions = dstConditions
// Copy status.KubernetesStatus
if err := copier.Copy(&vV1beta2.Status.KubernetesStatus, &v.Status.KubernetesStatus); err != nil {
return err
}
// Copy status.CloneStatus
return copier.Copy(&vV1beta2.Status.CloneStatus, &v.Status.CloneStatus)
default:
return fmt.Errorf("unsupported type %v", t)
}
}
// ConvertFrom converts from hub version (v1beta2) to spoke version (v1beta1)
func (v *Volume) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta2.Volume:
vV1beta2 := src.(*v1beta2.Volume)
v.ObjectMeta = vV1beta2.ObjectMeta
if err := copier.Copy(&v.Spec, &vV1beta2.Spec); err != nil {
return err
}
if err := copier.Copy(&v.Status, &vV1beta2.Status); err != nil {
return err
}
// Copy status.conditions from slice to map
dstConditions, err := copyConditionFromSliceToMap(vV1beta2.Status.Conditions)
if err != nil {
return err
}
v.Status.Conditions = dstConditions
// Copy status.KubernetesStatus
if err := copier.Copy(&v.Status.KubernetesStatus, &vV1beta2.Status.KubernetesStatus); err != nil {
return err
}
// Copy status.CloneStatus
return copier.Copy(&v.Status.CloneStatus, &vV1beta2.Status.CloneStatus)
default:
return fmt.Errorf("unsupported type %v", t)
}
}
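
One behavioral detail worth noting in ConvertTo: an empty v1beta1 dataLocality is normalized to "disabled" in the hub version. A minimal round-trip sketch (the function name and size are illustrative, not part of this diff):

// Hypothetical sketch: converting a legacy volume to the hub version.
func exampleConvert() {
	old := &Volume{Spec: VolumeSpec{Size: 10 << 30}} // 10 GiB, illustrative
	hub := &v1beta2.Volume{}
	if err := old.ConvertTo(hub); err != nil {
		panic(err)
	}
	fmt.Println(hub.Spec.DataLocality) // "disabled": the empty value was defaulted
}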

File diff suppressed because it is too large


@ -51,11 +51,6 @@ type BackupSpec struct {
// Can be "full" or "incremental"
// +optional
BackupMode BackupMode `json:"backupMode"`
// The backup block size. 0 means the legacy default size of 2 MiB, and -1 indicates the block size is invalid.
// +kubebuilder:validation:Type=string
// +kubebuilder:validation:Enum="-1";"2097152";"16777216"
// +optional
BackupBlockSize int64 `json:"backupBlockSize,string"`
}
// BackupStatus defines the observed state of the Longhorn backup


@ -77,8 +77,6 @@ type RebuildStatus struct {
State string `json:"state"`
// +optional
FromReplicaAddress string `json:"fromReplicaAddress"`
// +optional
AppliedRebuildingMBps int64 `json:"appliedRebuildingMBps"`
}
type SnapshotCloneStatus struct {


@ -102,8 +102,6 @@ type InstanceStatus struct {
// +optional
Port int `json:"port"`
// +optional
Starting bool `json:"starting"`
// +optional
Started bool `json:"started"`
// +optional
LogFetched bool `json:"logFetched"`


@ -78,7 +78,6 @@ const (
DiskDriverNone = DiskDriver("")
DiskDriverAuto = DiskDriver("auto")
DiskDriverAio = DiskDriver("aio")
DiskDriverNvme = DiskDriver("nvme")
)
type SnapshotCheckStatus struct {
@ -92,7 +91,7 @@ type DiskSpec struct {
Type DiskType `json:"diskType"`
// +optional
Path string `json:"path"`
// +kubebuilder:validation:Enum="";auto;aio;nvme
// +kubebuilder:validation:Enum="";auto;aio
// +optional
DiskDriver DiskDriver `json:"diskDriver"`
// +optional


@ -89,6 +89,9 @@ type ReplicaSpec struct {
// ReplicaStatus defines the observed state of the Longhorn replica
type ReplicaStatus struct {
InstanceStatus `json:""`
// Deprecated: Replaced by field `spec.evictionRequested`.
// +optional
EvictionRequested bool `json:"evictionRequested"`
}
// +genclient

Some files were not shown because too many files have changed in this diff.