When I start my Windows VM I get the following error:
Error: open /var/snap/lxd/common/lxd/virtual-machines/win11d/config/server.crt: no space left on device
To create my VM I followed this video: https://www.youtube.com/watch?v=3PDMGwbbk48 . I can start Windows a few times and things work well.
I install some software inside Windows, which again works well. After a few sessions I run into this error when trying lxc start win11d.
How can I resolve this please? How can I diagnose what is wrong with my setup?
tomp
October 7, 2025, 7:41am
2
Hi Chris,
Please can you provide some more info:
The output of snap list lxd
The output of lxc storage list, along with lxc storage show <pool> for the pool that your VM is stored on.
Thanks!
Thank you in advance:
$ snap list lxd
Name Version Rev Tracking Publisher Notes
lxd 6.5-22da890 35616 latest/stable canonical✓ -
$ lxc storage list
+---------+--------+--------------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+--------------------------------------------+-------------+---------+---------+
| default | zfs | /var/snap/lxd/common/lxd/disks/default.img | | 6 | CREATED |
+---------+--------+--------------------------------------------+-------------+---------+---------+
$ lxc storage show default
name: default
description: ""
driver: zfs
status: Created
config:
size: 30GiB
source: /var/snap/lxd/common/lxd/disks/default.img
zfs.pool_name: default
used_by:
- /1.0/instances/win11d
- /1.0/instances/win11d/snapshots/clean_install
- /1.0/instances/win11d/snapshots/dymo_installed
- /1.0/instances/win11d/snapshots/iso_media_really_removed
- /1.0/instances/win11d/snapshots/iso_media_removed
- /1.0/profiles/default
locations:
- none
tomp
October 7, 2025, 8:43am
4
Please can you show the output of:
sudo zfs list -t all
tomp
October 7, 2025, 8:56am
5
Please can you also try doing this:
lxc config device set win11d root size.state=200MiB
You may need to change set to override if the root disk is coming from a profile.
You may also need to delete some of the snapshots to free up space.
And then see if you can start the VM.
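For reference, individual snapshots can be inspected and removed with the instance/snapshot syntax (the snapshot name below is one of the ones from this thread):

```shell
# Show the instance details, including its Snapshots section:
lxc info win11d

# Delete a single snapshot using the <instance>/<snapshot> form:
lxc delete win11d/iso_media_removed
```

Deleting a snapshot only removes the snapshot, not the running instance, but double-check the name before running lxc delete.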
I don’t appear to have zfs installed! which zfs produces no output.
tomp
October 7, 2025, 9:34am
7
Can you do:
sudo apt install zfsutils-linux
zfs now installed:
$ sudo zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
default 28.6G 0B 24K legacy
default/buckets 24K 0B 24K legacy
default/containers 24K 0B 24K legacy
default/custom 24K 0B 24K legacy
default/deleted 144K 0B 24K legacy
default/deleted/buckets 24K 0B 24K legacy
default/deleted/containers 24K 0B 24K legacy
default/deleted/custom 24K 0B 24K legacy
default/deleted/images 24K 0B 24K legacy
default/deleted/virtual-machines 24K 0B 24K legacy
default/images 24K 0B 24K legacy
default/virtual-machines 28.6G 0B 24K legacy
default/virtual-machines/win11d 7.87M 0B 7.68M legacy
default/virtual-machines/win11d@snapshot-clean_install 47.5K - 7.68M -
default/virtual-machines/win11d@snapshot-iso_media_removed 47.5K - 7.68M -
default/virtual-machines/win11d@snapshot-iso_media_really_removed 47.5K - 7.68M -
default/virtual-machines/win11d@snapshot-dymo_installed 48K - 7.68M -
default/virtual-machines/win11d.block 28.6G 0B 22.3G -
default/virtual-machines/win11d.block@snapshot-clean_install 1.94G - 16.0G -
default/virtual-machines/win11d.block@snapshot-iso_media_removed 350M - 15.6G -
default/virtual-machines/win11d.block@snapshot-iso_media_really_removed 283M - 15.8G -
default/virtual-machines/win11d.block@snapshot-dymo_installed 1.06G - 18.8G -
This produces an error also:
$ lxc config device set win11d root size.state=200MiB
Error: Failed to write backup file: Failed to create file "/var/snap/lxd/common/lxd/virtual-machines/win11d/backup.yaml": open /var/snap/lxd/common/lxd/virtual-machines/win11d/backup.yaml: no space left on device
tomp
October 7, 2025, 3:00pm
10
This is the issue: the 0B in the AVAIL column.
What does lxc storage info default show?
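As a side note (an editorial sketch, not part of the thread): the space pinned by snapshots can be roughly totted up from the zfs list output above with a little awk. The dataset lines below are copied verbatim from that output:

```shell
# Snapshot lines (names containing '@') copied from the zfs list output above.
zfs_out='default/virtual-machines/win11d.block@snapshot-clean_install 1.94G - 16.0G -
default/virtual-machines/win11d.block@snapshot-iso_media_removed 350M - 15.6G -
default/virtual-machines/win11d.block@snapshot-iso_media_really_removed 283M - 15.8G -
default/virtual-machines/win11d.block@snapshot-dymo_installed 1.06G - 18.8G -'

# Sum the USED column, converting M to G, to estimate reclaimable space.
echo "$zfs_out" | awk '/@/ {
  val = substr($2, 1, length($2) - 1); unit = substr($2, length($2));
  total += (unit == "G" ? val : val / 1024)
}
END { printf "snapshots hold ~%.2f GiB\n", total }'
```

Note that a snapshot's USED figure only counts blocks unique to that snapshot, so the sum is a lower bound on what deleting them all would free.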
tomp
October 7, 2025, 3:56pm
11
Please can you also show lxc config show <instance> --expanded?
tomp
October 7, 2025, 3:58pm
12
Would you mind trying to delete one of the snapshots (that should free up some space), and then setting size.state?
Thank you for your time. I have deleted all but the 1st snapshot and the VM does now start. Output from 3 commands provided below. When I created the VM I allowed 80GiB.
$ sudo zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
default 25.8G 2.81G 24K legacy
default/buckets 24K 2.81G 24K legacy
default/containers 24K 2.81G 24K legacy
default/custom 24K 2.81G 24K legacy
default/deleted 144K 2.81G 24K legacy
default/deleted/buckets 24K 2.81G 24K legacy
default/deleted/containers 24K 2.81G 24K legacy
default/deleted/custom 24K 2.81G 24K legacy
default/deleted/images 24K 2.81G 24K legacy
default/deleted/virtual-machines 24K 2.81G 24K legacy
default/images 24K 2.81G 24K legacy
default/virtual-machines 25.7G 2.81G 24K legacy
default/virtual-machines/win11d 7.73M 192M 63.5K legacy
default/virtual-machines/win11d@snapshot-clean_install 7.67M - 7.68M -
default/virtual-machines/win11d.block 25.7G 2.81G 22.3G -
default/virtual-machines/win11d.block@snapshot-clean_install 3.47G - 16.0G -
$ lxc storage info default
info:
description: ""
driver: zfs
name: default
space used: 25.77GiB
total space: 28.58GiB
used by:
instances:
- win11d
profiles:
- default
$ lxc config show win11d --expanded
architecture: x86_64
config:
limits.cpu: "4"
limits.memory: 8GiB
volatile.cloud-init.instance-id: 800ceedf-7c56-409d-b03a-f0a1fcece579
volatile.eth0.host_name: tapea63755d
volatile.eth0.hwaddr: 00:16:3e:67:83:8d
volatile.last_state.power: RUNNING
volatile.uuid: 9fb7e78f-3746-42c1-ac14-d4975f9f05c0
volatile.uuid.generation: 8f7a7c84-9c30-4847-b17b-3d0f05862fad
volatile.vsock_id: "2881689457"
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: default
size: 80GiB
size.state: 200MiB
type: disk
vtpm:
path: /dev/tpm0
type: tpm
ephemeral: false
profiles:
- default
stateful: false
description: ""
tomp
October 8, 2025, 7:18am
14
OK good to hear, and the size.state seems to have been applied now too.
I believe this is caused by the interplay between two things. LXD copies the lxd-agent binary into the VM’s config drive (the default/virtual-machines/win11d volume) each time the VM is started, if the binary has changed (i.e. after a snap refresh update). Taking snapshots then means the old version of the lxd-agent is retained in the snapshot, which increases the CoW storage accounted to the volume. Eventually, after enough snap refreshes, snapshots and VM restarts, the config drive fills up.
Setting size.state increases the size of the VM’s config drive, allowing this pattern to continue for longer.
I’ve created LXD copying lxd-agent binary into VM's config drive eventually causes the config drive quota to be reached · Issue #16694 · canonical/lxd · GitHub to investigate a fix for this.
Thank you for looking at this issue. What’s my best course of action / workaround in the meantime?
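An interim workaround along the lines already discussed in this thread (a sketch only; the 500MiB value is an arbitrary example, and use override instead of set if the root device still comes from the profile):

```shell
# Give the config drive more state headroom than the 200MiB set earlier:
lxc config device set win11d root size.state=500MiB

# Prune snapshots that are no longer needed, to reclaim CoW space:
lxc delete win11d/clean_install

# Then start the VM again:
lxc start win11d
```

Keeping the number of retained snapshots low should delay the config drive filling up again until the linked issue is fixed.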