Containers with Ubuntu 12.04.5 LTS are not getting IPv4 addresses anymore

For a few weeks now, one of our servers running LXD 6.1-efad198 has had a problem with IPv4 addresses: they are no longer configured. IPv6 still works.

And if I run the exact same container on another LXD server, it works fine.

I have tried multiple settings etc. but none work. The firewall/ufw rules are set up as described in the docs. I even tried with ufw disabled.
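
For reference, the ufw rules I have on the host are roughly the ones from the LXD firewall docs for the lxdbr0 bridge, something like this (quoting from memory, so the exact commands may differ slightly from the docs):

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0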

I tried it with a fresh ubuntu:12.04 image. That container does not get an IP either. But with ubuntu:24.04 it does! So something is not playing nice with older versions of Ubuntu / Linux.

Not an answer per se, but some information for comparison.

I’ve spun up a 12.04 container on LXD 6.1-c14927a to attempt to reproduce the problem: my container does acquire an IPv4 address.

$ lxc launch ubuntu:12.04
Creating the instance
Instance name is: calm-mullet               
Starting calm-mullet
$ lxc list status=running name=calm-mullet

+-------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
|    NAME     |  STATE  |        IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
| calm-mullet | RUNNING | 10.241.25.24 (eth0) | fd42:1e62:7326:f1e1:216:3eff:fecc:8ef0 (eth0) | CONTAINER | 0         |
+-------------+---------+---------------------+-----------------------------------------------+-----------+-----------+
$ lxc list status=running name=calm-mullet --format=yaml 

- name: calm-mullet
  description: ""
  status: Running
  status_code: 103
  created_at: 2024-08-23T10:15:35.368738839Z
  last_used_at: 2024-08-23T10:16:05.529511589Z
  location: none
  type: container
  project: default
  architecture: x86_64
  ephemeral: false
  stateful: false
  profiles:
  - default
  config:
    image.architecture: amd64
    image.description: ubuntu 12.04 LTS amd64 (release) (20170502)
    image.label: release
    image.os: ubuntu
    image.release: precise
    image.serial: "20170502"
    image.type: root.tar.xz
    image.version: "12.04"
    volatile.base_image: be4aa8e56eab681fac6553b48ce19d7f34833accc2c8ae65f140a603b8369a1d
    volatile.cloud-init.instance-id: f1382616-2337-45c2-9b36-9a21c9d506ce
    volatile.eth0.host_name: veth7ec38b1e
    volatile.eth0.hwaddr: 00:16:3e:cc:8e:f0
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[]'
    volatile.last_state.power: RUNNING
    volatile.uuid: 9803ac3d-1839-480b-870e-ae2500b1b1f4
    volatile.uuid.generation: 9803ac3d-1839-480b-870e-ae2500b1b1f4
  devices: {}
  expanded_config:
    image.architecture: amd64
    image.description: ubuntu 12.04 LTS amd64 (release) (20170502)
    image.label: release
    image.os: ubuntu
    image.release: precise
    image.serial: "20170502"
    image.type: root.tar.xz
    image.version: "12.04"
    volatile.base_image: be4aa8e56eab681fac6553b48ce19d7f34833accc2c8ae65f140a603b8369a1d
    volatile.cloud-init.instance-id: f1382616-2337-45c2-9b36-9a21c9d506ce
    volatile.eth0.host_name: veth7ec38b1e
    volatile.eth0.hwaddr: 00:16:3e:cc:8e:f0
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[]'
    volatile.last_state.power: RUNNING
    volatile.uuid: 9803ac3d-1839-480b-870e-ae2500b1b1f4
    volatile.uuid.generation: 9803ac3d-1839-480b-870e-ae2500b1b1f4
  expanded_devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  backups: []
  state:
    status: Running
    status_code: 103
    disk: {}
    memory:
      usage: 10493952
      usage_peak: 0
      total: 7860656000
      swap_usage: 81920
      swap_usage_peak: 0
    network:
      eth0:
        addresses:
        - family: inet
          address: 10.241.25.24
          netmask: "24"
          scope: global
        - family: inet6
          address: fd42:1e62:7326:f1e1:216:3eff:fecc:8ef0
          netmask: "64"
          scope: global
        - family: inet6
          address: fe80::216:3eff:fecc:8ef0
          netmask: "64"
          scope: link
        counters:
          bytes_received: 23370
          bytes_sent: 5694
          packets_received: 263
          packets_sent: 58
          errors_received: 0
          errors_sent: 0
          packets_dropped_outbound: 0
          packets_dropped_inbound: 0
        hwaddr: 00:16:3e:cc:8e:f0
        host_name: veth7ec38b1e
        mtu: 1500
        state: up
        type: broadcast
      lo:
        addresses:
        - family: inet
          address: 127.0.0.1
          netmask: "8"
          scope: local
        - family: inet6
          address: ::1
          netmask: "128"
          scope: local
        counters:
          bytes_received: 0
          bytes_sent: 0
          packets_received: 0
          packets_sent: 0
          errors_received: 0
          errors_sent: 0
          packets_dropped_outbound: 0
          packets_dropped_inbound: 0
        hwaddr: ""
        host_name: ""
        mtu: 65536
        state: up
        type: loopback
    pid: 610868
    processes: 15
    cpu:
      usage: 14451731000
  snapshots: []

There has been the rare occasion where a Jammy container, which is the version I predominantly use, has failed to acquire an IPv4 address. I’ve not attempted to track down the reason thus far.


Thanks, but I already know that. On my other servers the Ubuntu 12.04 containers run just fine.
I retried it with a clean LXD and the problem is still there.

Yep, I noticed that today. I am on 5.21 stable. It was only happening with Ubuntu 22 containers. I tried a few things and they did not work. Eventually I restarted the LXD daemon via systemctl and it was sorted; IPv4 was available for the Ubuntu 22 containers again.

Restarting does not fix my problem. I even rebooted the server multiple times, and updated everything first (apt-get update/upgrade). The next version up, Ubuntu 14.04.6 LTS, does not have this problem. So it seems only 12.04.5 LTS (and older?) containers don’t get an IP anymore.

My comment was about Ubuntu 22 Jammy containers losing their IP. I have never used 12 or 14 containers.

BTW, why 12.04 LTS? It’s not even supported anymore. Why are you using it?

Old CRM software. We are migrating to a new version (and a newer Ubuntu), but that will still take some months.

Have you tried running dhclient (or similar) manually inside the guest instance, or setting up the IP and default route manually? That would help to identify whether the problem is with DHCP (or starting the DHCP client) inside the guest, or whether an external factor is blocking packets.
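
For example, something along these lines inside the guest (the 10.241.25.x addresses are just placeholders for whatever your lxdbr0 subnet is, and on 12.04 the client binary may be dhclient3 rather than dhclient):

lxc exec <container> -- dhclient eth0

Or, to bypass DHCP entirely:

lxc exec <container> -- ip addr add 10.241.25.50/24 dev eth0
lxc exec <container> -- ip route add default via 10.241.25.1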

Also, what host OS version is it?

With a fixed IP and routing it is working again. I’d still like to have it working with DHCP though.

Ubuntu 22.04, and Ubuntu 24.04 after upgrading some servers.

Can you check whether the network service is failing to start inside the guest, and look for errors that may indicate the DHCP client isn’t working?

Does running dhclient manually work?
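
On a 12.04 (upstart-based) guest that check would be something like this (service names and log paths are from memory, so adjust as needed):

lxc exec <container> -- initctl list | grep -i network
lxc exec <container> -- grep -i dhclient /var/log/syslog
lxc exec <container> -- ifconfig eth0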

I tried this on ubuntu 24.04 with similar results:

lxc ls
+------+---------+------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  | IPV4 |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+------+-----------------------------------------------+-----------+-----------+
| c1   | RUNNING |      | fd42:ffdb:caff:baf7:216:3eff:fe0c:479c (eth0) | CONTAINER | 0         |
+------+---------+------+-----------------------------------------------+-----------+-----------+
lxc exec c1 -- ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  24332  2816 ?        Ss   08:50   0:00 /sbin/init
root        1013  0.0  0.0  15212  1536 ?        S    08:50   0:00 upstart-socket-bridge --daemon
root        1277  0.0  0.0   7284  2248 ?        Ss   08:50   0:00 dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -1 eth0
root        1842  0.0  0.0  17256  1548 ?        S    08:50   0:00 upstart-udev-bridge --daemon
root        1844  0.0  0.0  21356  2176 ?        Ss   08:50   0:00 /sbin/udevd --daemon
root        1874  0.0  0.0  50068  4992 ?        Ss   08:50   0:00 /usr/sbin/sshd -D
102         1884  0.0  0.0  23840  1664 ?        Ss   08:50   0:00 dbus-daemon --system --fork --activation=upstart
syslog      1897  0.0  0.0 180000  2944 ?        Sl   08:50   0:00 rsyslogd -c5
daemon      1927  0.0  0.0  16928  1416 ?        Ss   08:50   0:00 atd
root        1931  0.0  0.0  19132  1920 ?        Ss   08:50   0:00 cron
root        1940  0.0  0.0  16004  1932 ?        Ss   08:50   0:00 /usr/sbin/irqbalance
whoopsie    1948  0.0  0.0 111776  6784 ?        Ss   08:50   0:00 whoopsie
root        1987  0.0  0.0   4352  1408 ?        Ss   08:50   0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root        2162  0.0  0.0  41232  2688 pts/1    Ss   10:57   0:00 su -l
root        2163  1.0  0.0  21436  5120 pts/1    S    10:57   0:00 -su
root        2221  0.0  0.0  16896  2432 pts/1    R+   10:57   0:00 ps aux

So we can see dhclient3 running. Interestingly, LXD appears to have allocated a DHCP lease for it, suggesting it got the request:

 lxc network list-leases lxdbr0
+-----------+-------------------+----------------------------------------+---------+
| HOSTNAME  |    MAC ADDRESS    |               IP ADDRESS               |  TYPE   |
+-----------+-------------------+----------------------------------------+---------+
| c1        | 00:16:3e:0c:47:9c | 10.21.203.5                            | DYNAMIC |
+-----------+-------------------+----------------------------------------+---------+
| c1        | 00:16:3e:0c:47:9c | fd42:ffdb:caff:baf7:216:3eff:fe0c:479c | DYNAMIC |
+-----------+-------------------+----------------------------------------+---------+
| lxdbr0.gw |                   | 10.21.203.1                            | GATEWAY |
+-----------+-------------------+----------------------------------------+---------+
| lxdbr0.gw |                   | fd42:ffdb:caff:baf7::1                 | GATEWAY |
+-----------+-------------------+----------------------------------------+---------+

Can see the DHCP request and reply too:

sudo tcpdump -i lxdbr0 -nn
11:59:57.199660 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:3e:0c:47:9c, length 300
11:59:57.205378 IP 10.21.203.1.67 > 10.21.203.5.68: BOOTP/DHCP, Reply, length 301

Ah, running dhclient3 eth0 manually gave

dhclient3 eth0
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted

And I tracked it down to an AppArmor denial on the host:

Sep 05 12:01:09  kernel: audit: type=1400 audit(1725534069.603:228): apparmor="DENIED" operation="capable" class="cap" namespace="root//lxd-c1_<var-lib-lxd>" profile="/sbin/dhclient" pid=28122 comm="ip" capability=12  capname="net_admin"
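
For anyone who wants to check whether their host is hitting the same denial, something like this should show it (log locations can vary):

sudo dmesg | grep -i 'apparmor="DENIED"'
sudo journalctl -k | grep -i dhclient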

@amikhalitsyn could this be another regression related to the updated AppArmor parser version in the LXD 5.21.2 LTS interim snap release 5.21.2-22f93f4 (and 5.21.2-2f4ba6b) and latest/stable?

Which kernel do you have?

I suspect 24.04 will be the issue, just testing this theory now.

LXD 6.1 on the 22.04 generic kernel (5.15.0) seems fine.

But even LXD 5.0 on 24.04 doesn’t work.

So this doesn’t look like an LXD regression after all.

If it’s a kernel issue, then any of your 22.04 systems running the HWE kernel could be affected by the same thing as 24.04.
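
To check whether a 22.04 host is on the HWE kernel, roughly (the exact meta-package name is an assumption on my part):

uname -r
apt list --installed 2>/dev/null | grep linux-generic-hwe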

6.8.0-40-generic and 6.5.x previously

Yeah, so it looks like a kernel regression. @amikhalitsyn is our expert on these matters, so I’ll await his analysis.


Sharing my intermediate findings about this one. On an upstream 6.11.0-rc7+ kernel it just works. On an upstream 6.8.12 kernel it also works. On Ubuntu Noble’s kernel (based on 6.8.12) it doesn’t. To be continued…
