Improve cluster internal custom volume migration

Project LXD
Status Draft
Author(s) @monstermunchkin
Approver(s)
Release LXD 3.19
Internal ID

Abstract

LXD should support copying and moving custom storage volumes between two cluster members with a single API call, as is already possible for instances.

Rationale

When copying a custom volume from one cluster member to another, two separate API calls are currently needed, one to the source and one to the target:

  • POST /1.0/storage-pools/pool1/volumes/custom/vol1 (on source)
  • POST /1.0/storage-pools/pool1/volumes/custom (on target)

In contrast, when copying an instance from one cluster member to another, a single API call suffices:

  • POST /1.0/instances?target=node2

In this case, the target defines the cluster member the instance should be created on.

For moving an instance, the following single API call can be used:

  • POST /1.0/instances/c1?target=node2

Here, again, target defines the cluster member the instance should be moved to.

For moving a custom storage volume between cluster members, three API calls are currently needed. Two copy the volume from the source to the target, and one removes the volume from the source:

  • POST /1.0/storage-pools/pool1/volumes/custom/vol1 (on source)
  • POST /1.0/storage-pools/pool1/volumes/custom (on target)
  • DELETE /1.0/storage-pools/pool1/volumes/custom/vol1 (on source)

The storage volume API should be aligned with that of instances, and support copying and moving storage volumes with a single API call.

Specification

Design

Copy custom storage volume

Copying a custom volume can be done using a POST request to /1.0/storage-pools/<pool>/volumes/custom?target=<target-member>. LXD will then forward the request to the source cluster member, which is provided in the Source.Location field of StorageVolumesPost. If no location is provided, the volume is likely located on a remote storage pool (e.g. Ceph); in this case, the request is not explicitly forwarded to any cluster member.
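
To make this concrete, the request could look roughly as follows. This is a minimal sketch using trimmed local mirrors of StorageVolumesPost and StorageVolumeSource (only the fields relevant to cluster targeting are shown); the pool, volume, and member names are placeholders.

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirrors of StorageVolumesPost and StorageVolumeSource; only the
// fields relevant to this sketch are included.
type storageVolumeSource struct {
	Name     string `json:"name"`
	Pool     string `json:"pool"`
	Location string `json:"location"` // cluster member holding the source volume
}

type storageVolumesPost struct {
	Name   string              `json:"name"`
	Source storageVolumeSource `json:"source"`
}

func main() {
	// Copy vol1, currently on node1, to a new volume vol1-copy on node2.
	// The destination member is selected with the ?target= query
	// parameter; the source member goes into source.location so that LXD
	// knows where to forward the request.
	body := storageVolumesPost{
		Name: "vol1-copy",
		Source: storageVolumeSource{
			Name:     "vol1",
			Pool:     "pool1",
			Location: "node1",
		},
	}

	data, err := json.Marshal(body)
	if err != nil {
		panic(err)
	}

	fmt.Println("POST /1.0/storage-pools/pool1/volumes/custom?target=node2")
	fmt.Println(string(data))
}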

Move custom storage volume

Moving a custom volume can be done using a POST request to /1.0/storage-pools/<pool>/volumes/custom/<volume>?target=<target-member>. As with copying, LXD will forward the request to the source cluster member, perform a copy, and afterwards delete the custom volume on the source.
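
For example, a single-call move of vol1 from node1 to node2 could then look roughly like this. The request body is the JSON form of the extended StorageVolumePost described below, with all other fields omitted; the names are placeholders.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Move vol1, currently on node1, to node2 in a single call. As in the
	// copy case, ?target= names the destination member and
	// source.location names the member holding the volume.
	body := map[string]interface{}{
		"name": "vol1",
		"source": map[string]interface{}{
			"location": "node1",
		},
	}

	data, err := json.Marshal(body)
	if err != nil {
		panic(err)
	}

	fmt.Println("POST /1.0/storage-pools/pool1/volumes/custom/vol1?target=node2")
	fmt.Println(string(data))
}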

API changes

The StorageVolumeSource struct will need an additional Location field. This is because custom storage volumes with the same name can exist on multiple cluster members, unlike instances, whose names are unique across the cluster. Therefore, when copying or moving with a single API call, the location of the source volume needs to be provided.

// StorageVolumeSource represents the creation source for a new storage volume
//
// swagger:model
//
// API extension: storage_api_local_volume_handling.
type StorageVolumeSource struct {
	// ...

	// What cluster member this record was found on
	// Example: lxd01
	//
	// API extension: cluster_internal_custom_volume_copy
	Location string `json:"location" yaml:"location"`
}
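
The new field drives the forwarding behaviour described in the design section above. A hedged sketch of that decision follows; the function and parameter names (memberForCopy, sourceLocation, localMember) are hypothetical and stand in for the existing request-handling code.

package main

import "fmt"

// memberForCopy sketches how the source location could determine where a
// copy or move request is handled; the names used here are hypothetical.
func memberForCopy(sourceLocation string, localMember string) string {
	if sourceLocation == "" {
		// No location given: the source volume lives on a remote storage
		// pool (e.g. Ceph), so the request is handled wherever it arrived
		// and is not explicitly forwarded to any cluster member.
		return localMember
	}

	// Otherwise the operation is driven by the member holding the source
	// volume, so the request is forwarded there.
	return sourceLocation
}

func main() {
	fmt.Println(memberForCopy("node1", "node2")) // forwarded to node1
	fmt.Println(memberForCopy("", "node2"))      // remote pool: handled locally
}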

StorageVolumesPost, used for copying custom storage volumes, already contains the Source field, so there is no need to change anything in that struct.

StorageVolumePost, used for renaming custom storage volumes, doesn't contain a Source field. To support moving with a single API call, this struct needs to know the source as well.

// StorageVolumePost represents the fields required to rename a LXD storage pool volume
//
// swagger:model
//
// API extension: storage_api_volume_rename.
type StorageVolumePost struct {
	// ...

	// Migration source
	//
	// API extension: cluster_internal_custom_volume_copy
	Source StorageVolumeSource `json:"source" yaml:"source"`
}
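
Since the new field is purely additive, existing rename requests that omit source should keep working unchanged. A small sketch, using a trimmed local mirror of the struct, illustrating that assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirror of StorageVolumePost with the proposed source field.
type storageVolumePost struct {
	Name   string `json:"name"`
	Source struct {
		Location string `json:"location"`
	} `json:"source"`
}

func main() {
	// A pre-existing rename request body that does not carry a source.
	old := []byte(`{"name": "vol1-renamed"}`)

	var req storageVolumePost
	if err := json.Unmarshal(old, &req); err != nil {
		panic(err)
	}

	// Source stays zero-valued, so plain renames are unaffected by the
	// new field.
	fmt.Printf("name=%q source.location=%q\n", req.Name, req.Source.Location)
}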

CLI changes

There will be no CLI changes. Copying and moving custom storage volumes will work as before.

Database changes

There will be no database changes.

Upgrade handling

No manual intervention will be needed for this to work.

Further information

TBD