u18 & u20 containers not getting v4 IPs, but u22 & u24 containers are. Any ideas?

I just ran into an issue. Until now, all my containers have been getting private IPs on various bridge networks as expected, but those containers were all Ubuntu 24.04.

Ubuntu 20.04 containers, however, are not getting a private IPv4 from LXD, although they do get a private IPv6. I tested this with several private IPs on different lxc bridge networks and cannot get an IPv4 on these u20 containers.

Same happens with Ubuntu 18.04.

Containers with Ubuntu 22.04 & 24.04 have no problem at all getting auto assigned private IPs and no problem setting custom private IPs.

This is on an Ubuntu 22.04 host using LXD 5.21.2.

Any ideas?

ETA: Just tested on my laptop (Ubuntu 22.04 host with LXD 6.1). With that, no problem getting v4 IPs for either 20.04 or 18.04.

ETA: I updated from 5.21.2 to 6.1 via "sudo snap refresh lxd --channel=6.1/stable" and that appears to have made no difference with u18 & u20 getting v4 IPs.

Hi! Let’s see if we can crack this.
If you create new containers after refreshing, do they have ipv4 assigned?
If not, what is the difference between the setup that works and the one that doesn’t?

Hi, I appreciate the help.

I was just able to replicate the issue on my laptop. Also, it turns out the default profile using lxdbr0 does give u18 & u20 containers private IPv4 addresses, but my custom profiles+bridges do not. On both my laptop & server. I just wasn’t using the default profile for those containers on the server previously.

On my server I have 5 bridge networks, one for each of 5 public IPs. This enables me to use the appropriate outbound egress SNAT IP on containers. I have multiple u24 containers on each of those 5 bridge networks, all running fine with IPv4 addresses. And a few u22 containers.

u18 & u20 containers launched with "--profile br-#" (1~5, each using bridge-1 through bridge-5) do not get an IPv4. But I can add them to the default profile (lxdbr0), remove them from the custom profile, and the container gets the IPv4.

I can also create u18 or u20 containers using the default profile and they get IPv4. But if I move them to other profiles they lose the IPv4… and get it back when on the default profile again.
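For reference, the moving around is just something like this (container & profile names here are only examples):

lxc profile assign u20 default   # back on the default profile/lxdbr0, IPv4 comes back
lxc profile assign u20 br-1      # back on the custom profile/bridge, IPv4 is gone again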

My laptop only had the single default lxdbr0 network and only used the default profile, so it was also able to give u18 & u20 containers a private IPv4. However, I added a custom bridge & a custom profile which uses that bridge… and the same thing happens as on the server: no IPv4 for u18 or u20 containers, but u22 & u24 containers are fine.

Maybe it's something to do with how I set up the bridges/profiles.

lxdbr0 has 10.0.0.1/24 on my laptop. On the server lxdbr0 has 10.1.0.1/24.

Bridges were created on my server similar to:

lxc network create bridge-100 --type=bridge ipv4.address=10.1.100.1/24 ipv4.firewall="false" ipv4.nat="true" ipv6.firewall="false" ipv6.nat="true" ipv4.nat.address=[public IP]
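After creating each one, the resulting config can be checked with:

lxc network show bridge-100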

The bridge on the laptop, created to test this issue, was set up with:

lxc network create bridge-100 --type=bridge ipv4.address=10.0.100.1/24 ipv4.firewall="false" ipv4.nat="true" ipv6.firewall="false" ipv6.nat="true"

And the profiles, one for each bridge, were created all similar to:

lxc profile create ip.100
lxc profile device add ip.100 root disk path=/ pool=default
lxc profile device add ip.100 eth-100 nic name=eth-100 network=bridge-100 queue.tx.length=10000
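Containers are then launched against those profiles, roughly like this (container name is just an example):

lxc launch ubuntu:20.04 u20 --profile ip.100
lxc list u20   # for u18/u20 this only shows an IPv6 address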

When I check the u20 container with "lxc info u20" I see that eth-# is down when the container is not on the default profile/bridge, but up when it is. I'm a bit lost as to why this is happening, but maybe these additional details can help pinpoint the issue.
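For reference, the same thing is visible from inside the container (interface name as set in the profile):

lxc exec u20 -- ip addr show eth-100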

Hi, sorry for the delay in responding.
I managed to get this working just by making the NIC name eth0 instead of eth-100 (not the device name in LXD, but the name visible inside the instance, defined by name=eth-100). It seems any other name results in the same problem. Maybe Ubuntu back then just couldn't handle non-standard device names, but that seems too restrictive to me. I will do some more investigation and open an issue if this is something we should fix.
Thanks for reporting this :smiley:
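For anyone hitting the same thing, renaming the NIC on an existing profile should look roughly like this (profile and device names taken from the example above; the container probably needs a restart for the change to show up inside it):

lxc profile device set ip.100 eth-100 name=eth0
lxc restart u20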

It looks like cloud-init in the later versions of Ubuntu is reconfiguring netplan to use DHCP on the first NIC.
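One way to see what ended up configured is to look at the netplan file inside the container (the path below is the usual cloud-init one, assumed here); in the older images it only lists eth0:

lxc exec u20 -- cat /etc/netplan/50-cloud-init.yaml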

No problem & same here with the delays. :slight_smile:

Seems you’re definitely right about only eth0 working for u18 & u20. Odd that it works this way but good to know how to sort this issue out when necessary.

I did a bit more testing with nic names.

The nic names I’ve tried: eth0, eth-0, eth100, eth00, eth123, myeth0, s1eth100 and a few others that I didn’t keep track of.

The only one that worked for Ubuntu 18.04 & 20.04 was eth0.

That testing was done with custom profiles, so I also tried the default profile, just editing the NIC name & saving. Only eth0 works there as well for u18 & u20.

Something I also noticed: u18 & u20 containers lost or regained their IPs immediately after saving the default profile edits, but a u22 container would lose its IP & require a reboot each time to get it back.

Performing DHCP only on the eth0 NIC is actually expected. Later versions are an exception because, as tomp said, cloud-init reconfigures netplan to use DHCP on the first NIC independently of its name.
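If a non-standard name is really wanted on those older images, one possible workaround (just a sketch, untested here; the cloud-init.network-config key is per the LXD docs and the profile name is from the example above) is to push a matching netplan stanza through cloud-init:

lxc profile set ip.100 cloud-init.network-config="$(cat <<'EOF'
network:
  version: 2
  ethernets:
    eth-100:
      dhcp4: true
EOF
)"

As far as I know this only takes effect for instances created after the change, since cloud-init applies network config on first boot.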

As for Ubuntu 22 requiring a reboot to get an IPv4 assigned, I managed to reproduce this with 24.04 as well. But this is also a consequence of the netplan configuration, so nothing on the LXD side.
