How to import Proxmox VMs into LXD

Hi,

I am planning to move all VMs running on a bare-metal server with Proxmox VE to an LXD-based solution. The only concern I have is the ability to properly run Proxmox VMs saved in QCOW2 format. Because I don't have spare hardware to test this properly, I am not sure about the procedure. Do I need to export the QCOW2 images to RAW format?

qemu-img convert -f qcow2 -O raw /path/to/disk.qcow2 /path/to/disk.raw

And then create an LXD VM, create a new disk in the storage pool, and configure the new VM to use the converted disk as its root (/) disk? Or is the procedure different?
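In other words, I imagine something roughly like this (the instance name and paths are just placeholders):

# Create an empty VM without any image
lxc init proxmox-vm --empty --vm
# Attach the converted RAW file as a disk
lxc config device add proxmox-vm migrated disk source=/path/to/disk.raw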

Any concrete example?

Many thanks for any help.

Lumir

Hi Lumir, please have a look at the lxd-migrate tool here https://documentation.ubuntu.com/lxd/en/latest/howto/import_machines_to_instances/. Hope this helps!
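The tool is interactive, so a basic run on the source machine looks roughly like this (no flags needed):

# lxd-migrate ships with the LXD snap
lxd-migrate
# It then prompts for the target LXD server, a trust token,
# whether to create a container or a virtual machine,
# and the path to the disk image to import.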

Hello Maria,

thanks for the help. Unfortunately, the documentation is not accurate. For LXD, I have to convert the QCOW2 image to RAW first, using this command:

qemu-img convert -f qcow2 -O raw /path/to/disk.qcow2 /path/to/disk.raw

If I don’t do that, the migration tool complains about the format of the image.
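For reference, the detected format can be checked before importing, e.g.:

# Shows the image format (qcow2, raw, ...) and its virtual size
qemu-img info /path/to/disk.qcow2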

Also, what is not mentioned is that after the import into LXD, the newly created VM is not able to boot.

When I had a similar problem with Proxmox, I had to attach the disk to the instance and set the boot priority, like this (on Proxmox):

# Attach disk
qm set <VM ID> --scsi0 local-lvm:vm-<VM ID>-disk-0
# Configure boot
qm set <VM ID> --boot order=scsi0

What would be the LXD equivalent of those commands? Something like:

lxc config device add <instance-name> root disk source=/path/to/target.raw path=/
lxc config set <instance-name> raw.qemu "-drive file=/path/to/target.raw,if=none,id=disk0,format=raw -device virtio-blk-pci,drive=disk0,bootindex=1"

Is this correct, or what is the right way to make newly imported images bootable?
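Or is the supported route simply a disk device with a boot priority, roughly like this (device name is arbitrary, path is a placeholder)?

# Attach the converted RAW image as a disk device and prefer it at boot
# (boot.priority is a disk device option for VMs)
lxc config device add <instance-name> migrated disk source=/path/to/target.raw boot.priority=10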

Simply upload the QCOW2 image in the web UI and everything will be taken care of.

Hi, thanks for your help, but unfortunately it doesn't work.

The instance won't find the disk. Please see the attached screenshot and the instance's YAML config below.

name: pypi
description: pypi
status: Running
status_code: 103
created_at: '2025-02-01T20:51:24.761976908Z'
last_used_at: '2025-02-01T20:57:31.439143437Z'
location: none
type: virtual-machine
project: default
architecture: x86_64
ephemeral: false
stateful: false
profiles:
  - default
config:
  volatile.cloud-init.instance-id: <id>
  volatile.eth0.host_name: tapb592cf56
  volatile.eth0.hwaddr: 00:12:34:05:54:aa
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: 'false'
  volatile.uuid: <id>
  volatile.uuid.generation: <id>
  volatile.vsock_id: '<id>'
devices:
  iso-volume:
    boot.priority: '10'
    pool: default
    source: vm-104-disk-1.qcow2
    type: disk
  root:
    path: /
    pool: default
    type: disk

I have tried to boot from a live installer, mount the root and /boot filesystems, and fix booting manually by running GRUB, but without success.
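Roughly what I tried from the live environment, in case it matters (device names are just examples):

# Mount the VM's root and /boot filesystems
mount /dev/sda2 /mnt
mount /dev/sda1 /mnt/boot
# Bind-mount the pseudo-filesystems and chroot in
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# Reinstall GRUB and regenerate its configuration
grub-install /dev/sda
update-grub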