I’m currently having issues using Noble containers with LXD 5.21.1 LTS. Whenever I start a Noble container from an image pulled from either the ubuntu or ubuntu-minimal remote, the container never gets assigned an IPv4 address. I’ve run the following ufw commands recommended in this thread, but I’m still seeing the same issue:
sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
I’ve been able to reproduce this issue on both my Mantic workstation and my Noble laptop. I only see this problem with Noble LXD containers, not with the VM images.
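For reference, this is roughly how I’ve been checking after each attempt (the instance name matches the noble-test container I launch below; the ufw check is just to confirm the rules actually landed):
$ lxc list noble-test
# The IPv4 column stays empty even after waiting a minute or two
$ sudo ufw status verbose
# The three lxdbr0 rules above show up here, so they were applied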
There seem to be quite a few failing services in my containers… It looks like systemd-networkd is failing, which would likely explain why I’m not getting any network connectivity. Here’s the output:
$ lxc launch ubuntu:noble noble-test
# A few seconds later, after still not seeing any IPv4 address assigned
$ lxc exec noble-test -- systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● tpm-udev.path loaded failed failed Handle dynamically added tpm devices
● console-getty.service loaded failed failed Console Getty
● polkit.service loaded failed failed Authorization Manager
● systemd-binfmt.service loaded failed failed Set Up Additional Binary Formats
● systemd-logind.service loaded failed failed User Login Management
● systemd-networkd.service loaded failed failed Network Configuration
● systemd-resolved.service loaded failed failed Network Name Resolution
● systemd-sysctl.service loaded failed failed Apply Kernel Variables
● systemd-sysusers.service loaded failed failed Create System Users
● systemd-timedated.service loaded failed failed Time & Date Service
● systemd-tmpfiles-setup-dev-early.service loaded failed failed Create Static Device Nodes in /dev gracefully
● systemd-tmpfiles-setup-dev.service loaded failed failed Create Static Device Nodes in /dev
● systemd-tmpfiles-setup.service loaded failed failed Create Volatile Files and Directories
● tpm-udev.service loaded failed failed Handle dynamically added tpm devices
● systemd-networkd.socket loaded failed failed Network Service Netlink Socket
Legend: LOAD → Reflects whether the unit definition was properly loaded.
ACTIVE → The high-level unit activation state, i.e. generalization of SUB.
SUB → The low-level unit activation state, values depend on unit type.
15 loaded units listed.
Any ideas for why services might be failing in a new container? I made sure to pull a fresh image from cloud-images before testing.
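If the per-unit logs would be useful, I can pull them from inside the container, e.g.:
$ lxc exec noble-test -- journalctl -u systemd-networkd --no-pager
$ lxc exec noble-test -- journalctl -u systemd-networkd.socket --no-pager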
I don’t have a Mantic host handy here to reproduce this, but I’d check journalctl -fk while starting a Noble container. I suspect you’ll find AppArmor denials, which would be useful to see.
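Something along these lines (the container name is just whatever you launched; AppArmor denials normally show up in the kernel log as apparmor="DENIED" audit lines):
# In one terminal, follow the kernel log:
$ sudo journalctl -fk
# In another terminal, start or restart the container:
$ lxc restart noble-test
# Or search after the fact:
$ sudo journalctl -k | grep -i 'apparmor="DENIED"'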
I do have the same issue with Noble containers on my Noble machine (I own a library of computers), and I have collected the output of journalctl -fk:
Huh - so oddly enough I fixed the issue by enabling security nesting for the container instance!
lxc config set noble-test security.nesting true
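Restarting the instance afterwards is the simplest way to be sure the new setting has taken effect and to check the address, roughly:
$ lxc restart noble-test
$ lxc list noble-test
# IPv4 column is now populated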
I guess the issue was with how I had the profile for the instance configured:
raw.apparmor: mount fstype=nfs*, mount fstype=rpc_pipefs,
security.privileged: "true"
I have the above options set to enable NFS mounts within LXD containers so that I can deploy test HPC clusters. They must conflict with Noble containers in some way such that I also need to enable security nesting. Good to get this fixed! Thanks for your help!
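For anyone wanting to reproduce the setup, here’s a rough sketch of a profile combining the NFS-related options with the nesting fix (the hpc-nfs profile name is just an example, not what I actually use):
$ lxc profile create hpc-nfs
$ lxc profile set hpc-nfs security.privileged=true
$ lxc profile set hpc-nfs security.nesting=true
$ lxc profile set hpc-nfs raw.apparmor='mount fstype=nfs*, mount fstype=rpc_pipefs,'
$ lxc launch ubuntu:noble noble-test --profile default --profile hpc-nfs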
Just as an FYI, combining security.nesting and security.privileged makes container breakouts much easier. As long as you trust your workloads it should be fine, though (which is similar to just using security.privileged on its own).
Yes, I only use security.privileged and security.nesting for running small-scale local tests before scaling up to larger machine-based clouds. I’m the only one running private workloads on my LXD cloud.
@tomp For LXD on 24.04, has this problem been fixed yet?
On my 24.04 systems it’s happening with both VMs and containers.
$ lxd --version
5.21.1 LTS
I’ve set:
sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
and…
set security.nesting=true
and none of that fixes this on my system.
I notice you have multiple bridges. Are the problem instances definitely connected to lxdbr0, or to one of the other bridges? The other bridges don’t appear to have ufw rules added.
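Something like this would confirm which bridge each instance is actually attached to, and the same ufw rules would then be needed for that bridge too (lxdbr1 below is just a placeholder for whichever other bridge it turns out to be):
$ lxc config show <instance> --expanded
# Look at the network/parent of the nic device in the devices section
$ lxc network list
# If an instance is on e.g. lxdbr1 rather than lxdbr0:
$ sudo ufw allow in on lxdbr1
$ sudo ufw route allow in on lxdbr1
$ sudo ufw route allow out on lxdbr1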