OK, tried that and got a new error. TL;DR: I wonder if I'm going to have to do surgery and delete the records instead?
What I did…
-
Was stumped until I realized I needed to be on the actual server (missed/didn't catch that the 'lxd' command was there). So I ssh'd to hydra1 (habit; I'm assuming the database is shared across the cluster, so I think it's OK).
-
$ lxd sql global 'select * from storage_volumes' returned way too much output, so…
$ lxc list --project hydradev
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-177ca778-2438-4bdf-6c42-f6c809d0614b | RUNNING | 10.0.4.104 (eth0) | | VIRTUAL-MACHINE | 0 | hydra1 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-499dd539-ac4d-4f98-5051-1aaf1f855e16 | RUNNING | 10.0.5.4 (eth0) | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-dfaca061-1f86-4dea-48bb-83c750181681 | STOPPED | | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-e1ab555b-0aec-4167-4f48-a4d425014f72 | STOPPED | | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
… and …
$ lxd sql global 'select * from storage_volumes where name in ("vm-177ca778-2438-4bdf-6c42-f6c809d0614b", "vm-499dd539-ac4d-4f98-5051-1aaf1f855e16", "vm-dfaca061-1f86-4dea-48bb-83c750181681", "vm-e1ab555b-0aec-4167-4f48-a4d425014f72")'
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
| id | name | storage_pool_id | node_id | type | description | project_id | content_type | creation_date |
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
| 2116 | vm-177ca778-2438-4bdf-6c42-f6c809d0614b | 1 | 1 | 3 | | 6 | 1 | 2025-05-08T21:55:10.063126096Z |
| 2596 | vm-499dd539-ac4d-4f98-5051-1aaf1f855e16 | 1 | 2 | 3 | | 6 | 1 | 2026-01-17T21:08:17.484696575Z |
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
which narrowed the list down, and matched what I expected from your comments.
-
Had to figure out the meaning of node_id (etc.). Found it refers to the cluster nodes: 1 = hydra1, 2 = hydra2 (very convenient).
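(For anyone else needing to check this mapping on their own cluster: I believe the cluster members live in a `nodes` table in the global database — the table name is my assumption from poking around, so verify against your schema before relying on it.)

```shell
# Assumption: cluster members are stored in the 'nodes' table of the global DB.
# Verify the table/column names against your own schema before relying on this.
lxd sql global 'SELECT id, name FROM nodes'
```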
-
Inserted rows…
$ lxd sql global 'insert into storage_volumes (name, storage_pool_id, node_id, type, description, project_id, content_type) values ("vm-dfaca061-1f86-4dea-48bb-83c750181681", 1, 2, 3, "delete me", 6, 1)'
Rows affected: 1
$ lxd sql global 'insert into storage_volumes (name, storage_pool_id, node_id, type, description, project_id, content_type) values ("vm-e1ab555b-0aec-4167-4f48-a4d425014f72", 1, 2, 3, "delete me", 6, 1)'
Rows affected: 1
$ lxd sql global 'select * from storage_volumes where name in ("vm-177ca778-2438-4bdf-6c42-f6c809d0614b", "vm-499dd539-ac4d-4f98-5051-1aaf1f855e16", "vm-dfaca061-1f86-4dea-48bb-83c750181681", "vm-e1ab555b-0aec-4167-4f48-a4d425014f72")'
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
| id | name | storage_pool_id | node_id | type | description | project_id | content_type | creation_date |
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
| 2116 | vm-177ca778-2438-4bdf-6c42-f6c809d0614b | 1 | 1 | 3 | | 6 | 1 | 2025-05-08T21:55:10.063126096Z |
| 2596 | vm-499dd539-ac4d-4f98-5051-1aaf1f855e16 | 1 | 2 | 3 | | 6 | 1 | 2026-01-17T21:08:17.484696575Z |
| 2902 | vm-dfaca061-1f86-4dea-48bb-83c750181681 | 1 | 2 | 3 | delete me | 6 | 1 | 0001-01-01T00:00:00Z |
| 2903 | vm-e1ab555b-0aec-4167-4f48-a4d425014f72 | 1 | 2 | 3 | delete me | 6 | 1 | 0001-01-01T00:00:00Z |
+------+-----------------------------------------+-----------------+---------+------+-------------+------------+--------------+--------------------------------+
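Side note: the two rows I inserted got the zero creation_date (0001-01-01…). If that ends up mattering, something like this might backfill it — assuming the dqlite layer accepts SQLite's strftime(); untested on my end, so treat it as a sketch:

```shell
# Sketch only: backfill creation_date on the two rows I inserted (ids 2902/2903).
# Assumes dqlite supports SQLite's strftime(); verify before running.
lxd sql global "update storage_volumes set creation_date = strftime('%Y-%m-%dT%H:%M:%SZ', 'now') where id in (2902, 2903)"
```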
-
… and yuck!
$ lxc list --project hydradev
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-177ca778-2438-4bdf-6c42-f6c809d0614b | RUNNING | 10.0.4.104 (eth0) | | VIRTUAL-MACHINE | 0 | hydra1 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-499dd539-ac4d-4f98-5051-1aaf1f855e16 | RUNNING | 10.0.5.4 (eth0) | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-dfaca061-1f86-4dea-48bb-83c750181681 | STOPPED | | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
| vm-e1ab555b-0aec-4167-4f48-a4d425014f72 | STOPPED | | | VIRTUAL-MACHINE | 0 | hydra2 |
+-----------------------------------------+---------+-------------------+------+-----------------+-----------+----------+
$ lxc rm -f vm-dfaca061-1f86-4dea-48bb-83c750181681
Error: Failed checking instance exists "local:vm-dfaca061-1f86-4dea-48bb-83c750181681": Instance not found
$ lxc rm -f vm-e1ab555b-0aec-4167-4f48-a4d425014f72
Error: Failed checking instance exists "local:vm-e1ab555b-0aec-4167-4f48-a4d425014f72": Instance not found
I assume "Instance not found" just means the virtual machine isn't running — which we knew! So that's where I'm starting to wonder if I need to delete the rows instead. With any luck, the relationships cascade. If not, I'll likely need a little bit of guidance as to what to clean up.
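If it does come to surgery, here's my working sketch — with assumptions flagged: I'm guessing the related config lives in a `storage_volumes_config` table (both the table and column names are guesses on my part), and I have no idea whether deletes cascade, so please correct me before I run anything:

```shell
# Sketch, not a recommendation. First check for dependent rows; I'm assuming
# a 'storage_volumes_config' table keyed by storage_volume_id (names are guesses).
lxd sql global 'select * from storage_volumes_config where storage_volume_id in (2902, 2903)'

# Then remove the two rows I inserted earlier (ids from the select above).
lxd sql global 'delete from storage_volumes where id in (2902, 2903)'
```

If the foreign keys don't cascade, the config rows would presumably need deleting first.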