# lxc config set storage.backups_volume blah/blah
Error: Failed validation of "storage.backups_volume": Storage volume "blah/blah" isn't empty

Why does it have to be empty? It's a bummer that I have to reserve backup space twice: once for the backup volume so it can be created, and once for the destination where the export has to be written.

Could you please elaborate on your workflow?

If you set storage.backups_volume, LXD takes full control of that volume, which could cause conflicts with files already existing on it.

As a note, I have moved the post into the support category.

My workflow is that I run lxc export on the same host running LXD (or in a container running on the same host), so the reserved space only needs to be the size of the largest expected backup, not twice that, because, if I understood correctly, when you do lxc export the backup file only ever exists in one of the two locations, never both.

Right now I cheat by mounting a ZFS dataset on the backup directory, like this:

# zfs list YYY107-scratch/backup
NAME                    USED  AVAIL     REFER  MOUNTPOINT
YYY107-scratch/backup  15.6G  84.4G     15.6G  /var/snap/lxd/common/lxd/backups/

But that's rather hacky and might break any time the LXD internals change, so I would very much appreciate a cleaner solution, i.e. let me take responsibility for ensuring that the files on the storage volume do not clash.
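For reference, a workaround along those lines can be set up with a single zfs command (the dataset name and mountpoint are taken from the listing above; the quota value is an arbitrary example):

```shell
# Create a quota-limited dataset and mount it over the snap's backups
# directory, so exports land on it instead of the root filesystem.
zfs create -o quota=100G \
    -o mountpoint=/var/snap/lxd/common/lxd/backups \
    YYY107-scratch/backup
```

This is exactly what makes it fragile: the path under /var/snap/lxd/common/lxd is an internal detail of the snap packaging and can change between releases.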

The storage.backups_volume indicates a custom filesystem volume on the LXD server which it uses as a temporary scratch space for generating the compressed tarball before streaming it to the client.

The scratch space is required because the compressed file cannot always be produced and streamed to the client at the same time.

However, depending on the storage pool type used for the storage.backups_volume setting, the space on disk isn't reserved up front but only consumed when creating the backup file. For example, when using a dir, btrfs, thin LVM, Ceph or ZFS pool, the volume takes up no space even if a quota is set on it.

Once the backup has been sent to the client it is then removed.
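As a concrete sketch of that setup (the pool name "default" and volume name "backups" are example names, not from this thread):

```shell
# Create a dedicated custom volume for backups on an existing pool.
lxc storage volume create default backups

# Optionally cap its size; on thin-provisioned pool types (zfs, btrfs,
# thin LVM, ceph) this quota consumes no space until a backup is written.
lxc storage volume set default backups size 50GiB

# Point LXD at it as the scratch space for generating export tarballs.
lxc config set storage.backups_volume default/backups
```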

Looking at the lxc source code we can see it uses the Go SDK client to create a backup on the server into the backups location here:

It then downloads the file from the server here:

And finally at the end of the export command after it has downloaded the file, it deletes the backup file on the server:

My understanding in your case is that you want the backup file to persist on the server in the storage.backups_volume, and don't necessarily need the lxc command to download it into the same location. Is that correct?

If so, then perhaps we can extend lxc export with a mode that skips the download and just leaves the file on the server. We would need to evaluate whether there are any problems with this approach, though.

Actually, yes please. Now that I think about it some more, this is the only viable course of action: in most of my use cases (lxc export --instance-only --compress) the export has to exist in both places at the same time, at least for a moment, from when the streaming starts until it ends. So this trick doesn't actually help me; I just got lucky that none of my backups exceeded half of the allocated space.

I can already simulate this behaviour by starting lxc export and immediately killing it, but something more official would still be very useful. More predictable and configurable file names, instead of instances/xxx/backupNN, come to mind immediately, and of course a reliable way to tell that the export has finished (other than grepping for processes hanging off of lxd) would be nice too :wink:
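A slightly less brutal stopgap than killing lxc export is to create the server-side backup directly through the REST API, which never triggers a download. A sketch using lxc query (the instance name c1 and the backup name are mine; the request fields assume LXD's instance backups API):

```shell
# Create a named, instance-only backup on the server, without downloading it.
lxc query --wait -X POST \
    -d '{"name": "c1-backup0", "instance_only": true, "compression_algorithm": "gzip"}' \
    /1.0/instances/c1/backups

# List the backups that now exist on the server for that instance.
lxc query /1.0/instances/c1/backups
```

This also addresses the predictable-names point, since the backup name is chosen in the request rather than auto-generated as backupNN.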

Cool thanks.

Please can you log a feature request issue over at


oki, opened
