Noble containers not getting assigned an IPv4 address but VMs are

Hi there :wave:

I’m currently having issues using Noble containers with LXD 5.21.1 LTS. Whenever I start a Noble container from an image pulled from either the ubuntu or ubuntu-minimal remote, the container never gets assigned an IPv4 address. I’ve run the following ufw commands recommended in this thread, but I’m still seeing the same issue:

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
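
(For reference, whether those rules are actually active can be double-checked with something like this; lxdbr0 is just the default LXD bridge name:)

$ sudo ufw status verbose | grep lxdbr0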

I’ve been able to reproduce this issue on both my Mantic workstation and my Noble laptop. I only have this problem with Noble LXD containers, not the VM images.

Hey Jason, sorry for the delay.

Have you checked the other firewall-related potential issues at https://documentation.ubuntu.com/lxd/en/latest/howto/network_bridge_firewalld/#use-another-firewall ?
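
One common culprit is another tool on the host (Docker is the classic one) flipping the FORWARD chain policy to DROP, which then drops forwarded traffic from lxdbr0. A rough sketch of how to check, assuming the iptables-nft backend:

$ sudo iptables -S FORWARD | head -n1     # "-P FORWARD DROP" would point at this
$ sudo nft list ruleset | grep -iA2 'chain forward'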

You said it only affected containers, so could you maybe run lxc exec <container> -- systemctl --failed and see if anything failed for you?

Hi Simon! Thank you for responding :smiley:

There seem to be quite a few failing services in my containers… It looks like systemd-networkd is failing, which would likely explain why I’m not getting a network connection. Here’s the output:

$ lxc launch ubuntu:noble noble-test
# A few seconds later, still no IPv4 address assigned

$ lxc exec noble-test -- systemctl --failed
  UNIT                                     LOAD   ACTIVE SUB    DESCRIPTION
● tpm-udev.path                            loaded failed failed Handle dynamically added tpm devices
● console-getty.service                    loaded failed failed Console Getty
● polkit.service                           loaded failed failed Authorization Manager
● systemd-binfmt.service                   loaded failed failed Set Up Additional Binary Formats
● systemd-logind.service                   loaded failed failed User Login Management
● systemd-networkd.service                 loaded failed failed Network Configuration
● systemd-resolved.service                 loaded failed failed Network Name Resolution
● systemd-sysctl.service                   loaded failed failed Apply Kernel Variables
● systemd-sysusers.service                 loaded failed failed Create System Users
● systemd-timedated.service                loaded failed failed Time & Date Service
● systemd-tmpfiles-setup-dev-early.service loaded failed failed Create Static Device Nodes in /dev gracefully
● systemd-tmpfiles-setup-dev.service       loaded failed failed Create Static Device Nodes in /dev
● systemd-tmpfiles-setup.service           loaded failed failed Create Volatile Files and Directories
● tpm-udev.service                         loaded failed failed Handle dynamically added tpm devices
● systemd-networkd.socket                  loaded failed failed Network Service Netlink Socket

Legend: LOAD   → Reflects whether the unit definition was properly loaded.
        ACTIVE → The high-level unit activation state, i.e. generalization of SUB.
        SUB    → The low-level unit activation state, values depend on unit type.

15 loaded units listed.

Any ideas for why services might be failing in a new container? I made sure to pull a fresh image from cloud-images before testing.
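
In case it helps, here’s roughly how I’d pull more detail out of one of the failed units from inside the container (noble-test being the instance launched above):

$ lxc exec noble-test -- systemctl status systemd-networkd --no-pager
$ lxc exec noble-test -- journalctl -u systemd-networkd --no-pager -n 20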

I don’t have any Mantic host handy here to reproduce, but I’d check journalctl -fk while starting a Noble container. I think you’ll find AppArmor denials, which would be nice to see :wink:
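
Something along these lines, with the kernel log following in a second terminal while the container starts (the instance name is just an example):

# terminal 1
$ sudo journalctl -fk

# terminal 2
$ lxc launch ubuntu:noble noble-test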

I do have the same issue with Noble containers on my Noble machine - I own a library of computers :sweat_smile: - so here is the output of journalctl -fk:

Jun 12 13:10:58 godzilla kernel: [UFW BLOCK] IN=cali0e3574ff091 OUT= MAC=ee:ee:ee:ee:ee:ee:32:4c:a7:59:bb:95:08:00 SRC=10.1.119.132 DST=10.0.0.96 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=15264 DF PROTO=TCP SPT=40262 DPT=16443 WINDOW=64860 RES=0x00 SYN URGP=0 MARK=0x50000
Jun 12 13:11:00 godzilla kernel: audit: type=1400 audit(1718212260.096:71360): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxd-noble-test_</var/snap/lxd/common/lxd>" name="/run/systemd/mount-rootfs/" pid=463870 comm="(d-logind)" srcname="/" flags="rw, rbind"
Jun 12 13:11:00 godzilla kernel: audit: type=1400 audit(1718212260.100:71361): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxd-noble-test_</var/snap/lxd/common/lxd>" name="/run/systemd/mount-rootfs/" pid=463874 comm="(d-logind)" srcname="/" flags="rw, rbind"
Jun 12 13:11:00 godzilla kernel: audit: type=1400 audit(1718212260.108:71362): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxd-noble-test_</var/snap/lxd/common/lxd>" name="/run/systemd/mount-rootfs/" pid=463878 comm="(d-logind)" srcname="/" flags="rw, rbind"
Jun 12 13:11:00 godzilla kernel: audit: type=1400 audit(1718212260.116:71363): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxd-noble-test_</var/snap/lxd/common/lxd>" name="/run/systemd/mount-rootfs/" pid=463882 comm="(d-logind)" srcname="/" flags="rw, rbind"
Jun 12 13:11:00 godzilla kernel: audit: type=1400 audit(1718212260.120:71364): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxd-noble-test_</var/snap/lxd/common/lxd>" name="/run/systemd/mount-rootfs/" pid=463886 comm="(d-logind)" srcname="/" flags="rw, rbind"

Seems like I’m being denied access to /run/systemd/mount-rootfs. Doesn’t seem like the firewall is the issue here :slightly_frowning_face:

Those apparmor="DENIED" entries do match the known regression we have with Noble containers, so yeah, not a firewall issue.
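
If you want to see what the container is actually confined by, the generated profile can be inspected on the host. On a snap install it should be somewhere like the path below (from memory, so treat the exact location and file name as a hint rather than gospel):

$ sudo ls /var/snap/lxd/common/lxd/security/apparmor/profiles/
$ sudo cat /var/snap/lxd/common/lxd/security/apparmor/profiles/lxd-noble-test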


@sdeziel1

Huh - so oddly enough I fixed the issue by enabling security nesting for the container instance!

lxc config set noble-test security.nesting true

I guess the issue was with how I had the profile for the instance configured:

raw.apparmor: mount fstype=nfs*, mount fstype=rpc_pipefs,
security.privileged: "true"

I have the above options set to enable NFS mounts within LXD containers so that I can deploy test HPC clusters. They must conflict with Noble containers in some way such that I also need to enable security nesting. Good to get this fixed! Thanks for your help!
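
For anyone else who lands here with a profile-based setup, the equivalent change at the profile level (so new containers inherit it too) would look roughly like this; hpc is just a placeholder profile name, and restarting the instance afterwards is the safe way to make sure it applies:

$ lxc profile set hpc security.nesting=true
$ lxc restart noble-test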


Ah that explains it :sweat_smile:

Indeed, this loosens the restrictions on mounts so AppArmor stops getting in the way.

Just as an FYI, my understanding is that combining security.nesting and security.privileged makes container breakouts much easier. But as long as you trust your workloads it should be fine (which is similar to just using security.privileged).

Yes, I only use security.privileged and security.nesting for running small-scale local tests before scaling up to larger machine-based clouds. I’m the only one running private workloads on my LXD cloud :sweat_smile:
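
(If it’s useful to anyone reading later, the effective security settings of an instance, including anything inherited from profiles, can be checked with something like:)

$ lxc config show noble-test --expanded | grep -E 'security\.(nesting|privileged)'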



@tomp For LXD on 24.04, has this problem been fixed yet?

On my 24.04 systems it’s happening with both VMs and containers.

$ lxd --version
5.21.1 LTS

I’ve set:
sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
and…
set security.nesting=true
and none of that fixes this on my system.

$ lxc ls results (screenshot):

Disable UFW on Host

$ lxc ls results (screenshot):

The only thing that fixes this problem on my systems is to turn off UFW on the host.

If I turn off UFW on the host and start the LXD containers and VMs, they do get IPv4 addresses.

But I do not want to turn off the host firewall (i.e. UFW) for obvious reasons.
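
I can also follow the kernel log for [UFW BLOCK] entries (like the ones earlier in this thread) while starting an instance, if that would help pin down exactly what is being dropped; something like:

$ sudo journalctl -fk | grep 'UFW BLOCK'
$ lxc restart <instance>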

Please can you show your ufw config?

Are you running docker on the host?

Docker is not installed.

ubuntu 24.04 ufw rules

Please show sudo nft list ruleset and sudo iptables-save, thanks.

BTW, I am assuming this is a Desktop install of Ubuntu, as the server version from CPC when using ubuntu:24.04 doesn’t come with any active rules.

Yes, it’s an Ubuntu 24.04 Desktop.

iptables-save

nft list ruleset

I notice you have multiple bridges. Are these problem instances definitely connected to lxdbr0, or to one of the other bridges? The other bridges don’t appear to have ufw rules added.
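
If it’s easier, something like this should confirm which bridge each instance is actually attached to (the expanded config shows the NIC’s parent network):

$ lxc ls -c ns4t
$ lxc config show <instance> --expanded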

Ignore the extra bridges… I am going to use them as part of a testbed for something, but have not done so yet.