Hi, I appreciate the help.
I was just able to replicate the issue on my laptop. Also, it turns out the default profile (using lxdbr0) does give u18 & u20 containers private IPv4 addresses, but my custom profiles + bridges do not, on both my laptop & server. I just wasn't using the default profile for those containers on the server previously.
On my server I have 5 bridge networks, one for each of 5 public IPs. This lets me use the appropriate SNAT source IP for each container's outbound traffic. I have multiple u24 containers on each of those 5 bridge networks, all running fine with IPv4 addresses, plus a few u22 containers.
u18 & u20 containers launched with --profile br-# (1 through 5, each profile using the matching bridge-1 through bridge-5) do not get an IPv4. But if I add them to the default profile (lxdbr0) and remove them from the custom profile, the container gets an IPv4.
I can also create u18 or u20 containers using the default profile and they get IPv4. But if I move them to other profiles they lose the IPv4... and get it back when on the default profile again.
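For reference, the profile switching I'm describing is roughly this (container and profile names are just examples):
lxc launch ubuntu:20.04 u20 --profile br-1
lxc profile assign u20 default
lxc profile assign u20 br-1
The first command launches on the custom bridge (no IPv4), assigning the default profile brings the IPv4 up, and assigning br-1 again makes it disappear.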
My laptop only had the single default lxdbr0 network and only used the default profile, so it was also able to give u18 & u20 containers a private IPv4. However, I added a custom bridge & a custom profile that uses that bridge, and the same thing happens as on the server: no IPv4 for u18 or u20 containers, but u22 & u24 containers are fine, just as on the server.
Maybe it's something in how I set up the bridges/profiles.
lxdbr0 has 10.0.0.1/24 on my laptop. On the server lxdbr0 has 10.1.0.1/24.
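To double-check those addresses, each bridge's subnet can be read back with something like:
lxc network get lxdbr0 ipv4.address
(run against each bridge in turn).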
The bridges on my server were created similar to:
lxc network create bridge-100 --type=bridge ipv4.address=10.1.100.1/24 ipv4.firewall="false" ipv4.nat="true" ipv6.firewall="false" ipv6.nat="true" ipv4.nat.address=[public IP]
The bridge on the laptop, created to test this issue, was set up with:
lxc network create bridge-100 --type=bridge ipv4.address=10.0.100.1/24 ipv4.firewall="false" ipv4.nat="true" ipv6.firewall="false" ipv6.nat="true"
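To confirm the settings took on either machine, the bridge config can be dumped with:
lxc network show bridge-100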
And the profiles, one per bridge, were all created similar to:
lxc profile create ip.100
lxc profile device add ip.100 root disk path=/ pool=default
lxc profile device add ip.100 eth-100 nic name=eth-100 network=bridge-100 queue.tx.length=10000
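If it helps, lxc profile show ip.100 should then report roughly this (a sketch based on the commands above, not exact output):
config: {}
description: ""
devices:
  eth-100:
    name: eth-100
    network: bridge-100
    queue.tx.length: "10000"
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: ip.100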
When I check the u20 container with "lxc info u20" I see that eth-# is DOWN when the container is not on the default profile/bridge, but UP when it is on the default profile/bridge. I'm a bit lost as to why this is happening, but maybe the additional details can help pinpoint the issue.
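In case it helps narrow things down, I can also pull in-container detail on an affected u20 with something like (file names are whatever the image ships):
lxc exec u20 -- ip addr show
lxc exec u20 -- ip route
lxc exec u20 -- ls /etc/netplan/
Happy to post that output if it's useful.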