Unable to boot VM

Hi,

I migrated a legacy machine from Proxmox to LXD 6.3 with:

lxd-migrate --name <vm name> --type vm --source vm-<id>-disk-1.qcow2 --non-interactive --conversion=format --config security.csm=true --config security.secureboot=false

The disk is attached:

lxc config show <vm-name> --expanded

architecture: x86_64
config:
  security.csm: "true"
  security.secureboot: "false"
  volatile.cloud-init.instance-id: 5e2ebeb0-57f9-4985-9d1d-1227d5deed48
  volatile.eth0.hwaddr: 00:16:3e:90:0c:fa
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 52de7861-777c-476d-be05-aaa968ea6b24
  volatile.uuid.generation: 52de7861-777c-476d-be05-aaa968ea6b24
  volatile.vsock_id: "488877321"
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

But when I start the instance, I get this error:

Gave up waiting for root device.

Common problems:

* Boot args (cat /proc/cmdline)
* Check rootdelay= (did the system wait long enough?)
* Check root= (did the system wait for the right device?)
* Missing modules (cat /proc/modules; ls /dev)

ALERT! /dev/mapper/vm-root does not exist. Dropping to a shell!

BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)

Enter 'help' for a list of built-in commands.
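
For reference, the hints in that message can be checked directly at the (initramfs) prompt. A minimal sketch, assuming a virtio guest (the module names are assumptions, not something confirmed by the log):

cat /proc/cmdline
grep -i virtio /proc/modules
ls /dev/vd* /dev/mapper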

Could you please help me fix this? It's becoming urgent, as I have been struggling with it for quite some time. I would like to migrate all VMs from Proxmox to LXD, but I am unable to, as this error happens with every VM migrated to LXD.

What guest OS is it?

Sounds like you may need to use virt-v2v installed on the LXD host with the --format=conversion option.

See https://documentation.ubuntu.com/lxd/en/latest/howto/import_machines_to_instances/

Hi, thanks for the help.

This particular image is a very old Ubuntu 10.10, but it's just a test image (smaller size); I am more interested in migrating Ubuntu 20.04 LTS and 22.04 LTS.

I have tried to convert the image with the virt-v2v package installed and run lxd-migrate with the option:

--conversion=format,virtio

There is no such option as

--format=conversion

in lxd-migrate 5.21.3 (nor in 6.3).

I also tried the following:

virt-v2v --block-driver virtio-scsi -o local -of raw -os ./os -i disk -if qcow2 vm-104-disk-1.qcow2
lxd-migrate --name <vm-name> --type vm --source os/vm-104-disk-1-sda --non-interactive --config security.csm=true --config security.secureboot=false

But the result is the same: the VM won't boot.

Support for virt-v2v was added to lxd-migrate in LXD 5.21.3.

@dinmusic please can you assist with this?


Where are you getting your lxd-migrate binary from?

Hi,

If you have a recent enough lxd-migrate (as per @tomp's answer), you can use --conversion=format,virtio (the format option converts the image if it is not in raw format, and the virtio option enables virtio-scsi). The virtio option calls virt-v2v in the background, which enables the virtio-scsi modules if they are present in the guest but disabled; otherwise it has no effect.
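
For reference, a full invocation with both conversion options might look like this (a sketch based on the command earlier in this thread; the name, id, and paths are placeholders):

lxd-migrate --name <vm-name> --type vm --source vm-<id>-disk-1.qcow2 --non-interactive --conversion=format,virtio --config security.csm=true --config security.secureboot=false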

Could you try setting io.bus=virtio-blk on the root device to avoid using virtio-scsi?
This should confirm whether a missing module is the issue.

lxc config device set <instance> root io.bus=virtio-blk
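
If the root disk is inherited from a profile rather than defined locally, you may need to create a local override instead (a sketch; <instance> is a placeholder):

lxc config device override <instance> root io.bus=virtio-blk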

Also, what is the size of the raw image?
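
For example, qemu-img can report both the virtual and on-disk size, assuming it is installed on the host:

qemu-img info vm-<id>-disk-1.qcow2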

Hi,

From the official site: https://github.com/canonical/lxd/releases/latest/download/bin.linux.lxd-migrate.x86_64.

I downloaded the latest release, 6.3.

Hi,

I tried to use io.bus=virtio-blk with the command:

lxc config device add <instance> root disk path=/ pool=default io.bus=virtio-blk

I checked the lxc instance config:

architecture: x86_64
config:
  security.csm: "true"
  security.secureboot: "false"
  volatile.cloud-init.instance-id: 04ccc115-6904-4684-b816-b783773ceae9
  volatile.eth0.host_name: tap7d7617c9
  volatile.eth0.hwaddr: 00:16:3e:7f:e6:65
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 9952f484-233d-4415-9eed-73ad311306b4
  volatile.uuid.generation: 9952f484-233d-4415-9eed-73ad311306b4
  volatile.vsock_id: "3961537211"
devices:
  root:
    io.bus: virtio-blk
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

No change, still not working.

I can see that in the default pool there is one volume with a size of 1.5 GiB. The original qcow2 image is 4.3 GiB. When I look at the running instance on Proxmox, I can see real disk usage of about 1.5 GiB.
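
For reference, the volume can also be inspected on the LXD side (a sketch; the pool and instance names here are placeholders):

lxc storage volume show default virtual-machine/<vm-name>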

Are you trying an Ubuntu 20.04 or 22.04 LTS image?

If you can make that available, perhaps @dinmusic can give it a whirl.

I suspect the older Ubuntu 10.10 image will be too old to run (LXD doesn't support legacy guest devices), but if you could make that available too, we could give it a try.
