@equator8848 that is correct. Ubuntu server/cloud images are available from the various built-in remotes like ubuntu:, ubuntu-daily:, ubuntu-minimal: and ubuntu-minimal-daily:.
The exception is Ubuntu desktop images, which are available through images:.
Unfortunately, I'm not aware of a good way to quickly find the fingerprints.
You can go to the product catalog (JSON file) of the given remote (here is ubuntu-daily). The fingerprint in the combined_squashfs_sha256 field is for container images, and the one in combined_disk1-img_sha256 is for VM images.
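For example, something like this with curl and jq (the catalog URL and product key are assumptions for ubuntu-daily 22.04 amd64; adjust for your release and architecture):

# Sketch: list the container/VM fingerprints for 22.04 amd64 from the
# ubuntu-daily simplestreams catalog (URL and product key are assumptions).
curl -s https://cloud-images.ubuntu.com/daily/streams/v1/com.ubuntu.cloud:daily:download.json \
  | jq -r '.products["com.ubuntu.cloud:server:22.04:amd64"].versions
           | to_entries | sort_by(.key) | last | .value.items["lxd.tar.xz"]
           | {container: .combined_squashfs_sha256, vm: .["combined_disk1-img_sha256"]}'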
I'm using this new images server to try to launch Rocky VMs (rockylinux/8/cloud), but they seem to be missing the LXD agent:
$ lxc launch images:rockylinux/8/cloud --vm
Creating the instance
Instance name is: cosmic-wasp
Starting cosmic-wasp
(wait a few minutes)
$ lxc exec cosmic-wasp bash
Error: LXD VM agent isn't currently running
I then tried injecting my SSH key via an LXD profile, setting both cloud-init.user-data and user.user-data for the 'rocky' and 'root' users. But neither of those seems to work either.
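For reference, the user-data I set (for both keys) was roughly this, with a placeholder key:

#cloud-config
users:
  - name: rocky
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... me@laptop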
Anyone else trying Rocky images and having trouble accessing the newly launched (VM) instances?
lxc launch images:rockylinux/8/cloud --vm --console rocky8 works for me. The VM reboots as part of installing the lxd-agent and when it comes back, it works.
I copied and pasted your launch command. I see some failures in the console… but no way of knowing what is causing them. Any ideas?
[FAILED] Failed to start LXD - agent.
See 'systemctl status lxd-agent.service' for details.
[ OK ] Reached target Cloud-init target.
Rocky Linux 8.10 (Green Obsidian)
Kernel 4.18.0-553.8.1.el8_10.x86_64 on an x86_64
rocky8 login:
I exported and mounted the image to inspect /var/log/messages.
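Roughly how, in case it helps anyone (the exported tarball layout and partition number here are assumptions; adjust as needed):

lxc stop rocky8
lxc export rocky8 rocky8.tar.gz
tar -xzf rocky8.tar.gz
sudo modprobe nbd
sudo qemu-nbd --connect=/dev/nbd0 backup/virtual-machine/root.img   # path inside the tarball may differ
sudo mount /dev/nbd0p2 /mnt                                         # partition number may differ
sudo less /mnt/var/log/messages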
Jul 9 04:52:07 rocky8 kernel: virtio-fs: tag <config> not found
Jul 9 04:52:07 rocky8 lxd-agent-setup[721]: Couldn't mount 9p or cdrom, failing
Jul 9 04:52:07 rocky8 systemd[1]: lxd-agent.service: Control process exited, code=exited status=1
Jul 9 04:52:07 rocky8 systemd[1]: lxd-agent.service: Failed with result 'exit-code'.
Jul 9 04:52:07 rocky8 systemd[1]: Failed to start LXD - agent.
Some posts online suggest this might be to do with the VM's kernel… but then I wouldn't be the only one experiencing the issue. Is there something strange in my config or LXD version?
This is the issue: LXD should be exporting the config drive that contains the keys the lxd-agent needs to communicate over vsock with the LXD host.
CentOS clones don't have the 9p kernel module and so cannot use that.
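You can check from inside the guest; on RHEL/Rocky kernels these modules are absent, while Ubuntu kernels ship them:

modinfo 9p 9pnet_virtio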
Can you see any virtiofsd processes running on the host?
What does snap list | grep lxd show?
You could also try refreshing to latest/candidate as that contains the upcoming LXD 6.1 release:
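snap refresh lxd --channel=latest/candidate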
I don't see any virtiofsd processes running on the host where the Rocky VM is hosted, though there are other (Ubuntu) VMs there that function as expected with the lxd-agent running.
Another host in the cluster does have virtiofsd processes.
WARNING[2024-07-09T11:24:02Z] Unable to use virtio-fs for config drive, using 9p as a fallback err="Stateful migration unsupported" instance=rocky8 instanceType=virtual-machine project=random
FYI, I have set migration.stateful in the cluster configuration.
Those messages also occur for Ubuntu VMs, but presumably the fallback to 9p succeeds on the Ubuntu kernels?
Ah, so in that case you can't use Rocky images with the lxd-agent, because they don't have the 9p kernel module and virtio-fs sadly doesn't support live migration.
Yes, thanks for confirming. Just to close the loop: if you have set migration.stateful by default in your LXD configuration, this works in terms of the lxd-agent starting and getting a shell:
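lxc launch images:rockylinux/8/cloud rocky8 --vm -c migration.stateful=false   # per-instance override (reconstructed from context)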
Thank you for addressing the image server replacement issue! I thought this was long resolved, but ran into it just now. Here's some feedback on a user story that just happened to me and seems suboptimal.
On a fairly recently created Ubuntu 20.04 VM (official cloud image, serial 20240710), the lxd snap is installed at 4.0.9-a29c6f1 (24061), tracking 4.0/stable/ubuntu-20.04. This gave me the old images server by default, which (as you probably know) now silently returns zero results.
I expected the snaps to have been updated to use the new image server by now. Since the old one will not return any results at all, I think this could happen without regressing users?
I then found the Remote image servers documentation page, which makes no mention of the extra steps required for older, still-supported lxd snap tracks.
Finally, this post also didn't contain precise instructions: I had to remove the old images: remote first, which involved guessing the command (or looking it up) rather than it being presented here.
Suggestions:
Older lxd snap tracks should be updated to provide the new images: server on fresh installs and when upgrading to the latest stable, since the old one doesn't work at all now.
Until the above is done, the Remote image servers documentation page should link to this post (or an equivalent).
The post should contain exact instructions on what the user should do, including the lxc remote remove images step, as sketched below.
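For example, something like this (the new server URL taken from the announcement):

lxc remote remove images
lxc remote add images https://images.lxd.canonical.com --protocol simplestreams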