craft_providers.lxd.errors.LXDError: Timed out waiting for networking to be ready

Original discussion: "Craft-providers error: Timed out waiting for networking to be ready" (snapcraft forum, post #18 by nteodosio).

I’m currently unable to build snaps locally on an Ubuntu 25.04 machine because of this error while Snapcraft configures the build instance:

[...]
2025-03-11 10:32:16.362 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io \ (2.1s)
2025-03-11 10:32:16.815 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io \ (2.5s)
2025-03-11 10:32:17.220 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io \ (2.9s)
2025-03-11 10:32:17.589 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io
2025-03-11 10:32:17.615 Executing on host: lxc --project snapcraft config set local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb user.craft_providers.timer 2025-03-11T09:32:17.615481+00:00
2025-03-11 10:32:17.717 Set instance timer to '2025-03-11T09:32:17.615481+00:00'
2025-03-11 10:32:17.917 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io
2025-03-11 10:32:20.718 Executing on host: lxc --project snapcraft config set local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb user.craft_providers.timer 2025-03-11T09:32:20.718468+00:00
2025-03-11 10:32:20.828 Set instance timer to '2025-03-11T09:32:20.718468+00:00'
2025-03-11 10:32:23.048 Executing in container: lxc --project snapcraft exec local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb -- env CRAFT_MANAGED_MODE=1 DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true DEBIAN_PRIORITY=critical getent hosts snapcraft.io
2025-03-11 10:32:23.076 Timed out waiting for networking to be ready.
2025-03-11 10:32:23.083 Traceback (most recent call last):
2025-03-11 10:32:23.083   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/util/retry.py", line 56, in retry_until_timeout
2025-03-11 10:32:23.083     return func(retry_wait)
2025-03-11 10:32:23.083            ^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.083   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/base.py", line 454, in check_network
2025-03-11 10:32:23.083     self._execute_run(
2025-03-11 10:32:23.083   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/base.py", line 1151, in _execute_run
2025-03-11 10:32:23.083     proc = executor.execute_run(
2025-03-11 10:32:23.083            ^^^^^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.083   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/lxd_instance.py", line 254, in execute_run
2025-03-11 10:32:23.083     return self.lxc.exec(
2025-03-11 10:32:23.083            ^^^^^^^^^^^^^^
2025-03-11 10:32:23.084   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/lxc.py", line 390, in exec
2025-03-11 10:32:23.084     return runner(final_cmd, timeout=timeout, check=check, **kwargs)
2025-03-11 10:32:23.084            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.084   File "/snap/snapcraft/current/usr/lib/python3.12/subprocess.py", line 550, in run
2025-03-11 10:32:23.084     stdout, stderr = process.communicate(input, timeout=timeout)
2025-03-11 10:32:23.084                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.084   File "/snap/snapcraft/current/usr/lib/python3.12/subprocess.py", line 1209, in communicate
2025-03-11 10:32:23.084     stdout, stderr = self._communicate(input, endtime, timeout)
2025-03-11 10:32:23.084                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.084   File "/snap/snapcraft/current/usr/lib/python3.12/subprocess.py", line 2116, in _communicate
2025-03-11 10:32:23.084     self._check_timeout(endtime, orig_timeout, stdout, stderr)
2025-03-11 10:32:23.084   File "/snap/snapcraft/current/usr/lib/python3.12/subprocess.py", line 1253, in _check_timeout
2025-03-11 10:32:23.084     raise TimeoutExpired(
2025-03-11 10:32:23.084 subprocess.TimeoutExpired: Command '['lxc', '--project', 'snapcraft', 'exec', 'local:base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb', '--', 'env', 'CRAFT_MANAGED_MODE=1', 'DEBIAN_FRONTEND=noninteractive', 'DEBCONF_NONINTERACTIVE_SEEN=true', 'DEBIAN_PRIORITY=critical', 'getent', 'hosts', 'snapcraft.io']' timed out after 0.025 seconds
2025-03-11 10:32:23.084
2025-03-11 10:32:23.085 The above exception was the direct cause of the following exception:
2025-03-11 10:32:23.085 Traceback (most recent call last):
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/lxd_provider.py", line 147, in launched_environment
2025-03-11 10:32:23.085     instance = launch(
2025-03-11 10:32:23.085                ^^^^^^^
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/launcher.py", line 806, in launch
2025-03-11 10:32:23.085     _create_instance(
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/launcher.py", line 150, in _create_instance
2025-03-11 10:32:23.085     base_configuration.setup(executor=base_instance, mount_cache=False)
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/base.py", line 1033, in setup
2025-03-11 10:32:23.085     self._setup_wait_for_network(executor=executor)
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/base.py", line 461, in _setup_wait_for_network
2025-03-11 10:32:23.085     retry.retry_until_timeout(
2025-03-11 10:32:23.085   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/util/retry.py", line 60, in retry_until_timeout
2025-03-11 10:32:23.086     raise error from exc
2025-03-11 10:32:23.086 craft_providers.errors.BaseConfigurationError: Timed out waiting for networking to be ready.
2025-03-11 10:32:23.086
2025-03-11 10:32:23.086 The above exception was the direct cause of the following exception:
2025-03-11 10:32:23.086 Traceback (most recent call last):
2025-03-11 10:32:23.086   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_application/application.py", line 691, in run
2025-03-11 10:32:23.086     return_code = self._run_inner()
2025-03-11 10:32:23.086                   ^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.086   File "/snap/snapcraft/13693/lib/python3.12/site-packages/snapcraft/application.py", line 203, in _run_inner
2025-03-11 10:32:23.086     return_code = super()._run_inner()
2025-03-11 10:32:23.086                   ^^^^^^^^^^^^^^^^^^^^
2025-03-11 10:32:23.086   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_application/application.py", line 674, in _run_inner
2025-03-11 10:32:23.086     self.run_managed(platform, build_for)
2025-03-11 10:32:23.086   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_application/application.py", line 471, in run_managed
2025-03-11 10:32:23.087     with self.services.provider.instance(
2025-03-11 10:32:23.087   File "/snap/snapcraft/current/usr/lib/python3.12/contextlib.py", line 137, in __enter__
2025-03-11 10:32:23.087     return next(self.gen)
2025-03-11 10:32:23.087            ^^^^^^^^^^^^^^
2025-03-11 10:32:23.087   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_application/services/provider.py", line 154, in instance
2025-03-11 10:32:23.087     with provider.launched_environment(
2025-03-11 10:32:23.087   File "/snap/snapcraft/current/usr/lib/python3.12/contextlib.py", line 137, in __enter__
2025-03-11 10:32:23.087     return next(self.gen)
2025-03-11 10:32:23.087            ^^^^^^^^^^^^^^
2025-03-11 10:32:23.087   File "/snap/snapcraft/13693/lib/python3.12/site-packages/craft_providers/lxd/lxd_provider.py", line 162, in launched_environment
2025-03-11 10:32:23.087     raise LXDError(str(error)) from error
2025-03-11 10:32:23.087 craft_providers.lxd.errors.LXDError: Timed out waiting for networking to be ready.

The info of one such instance (all are affected):

Name: base-instance-snapcraft-buildd-base-v7-c-f91ee4af44ccdf02cefb
Status: RUNNING
Type: container
Architecture: x86_64
PID: 271236
Created: 2025/03/10 13:49 CET
Last Used: 2025/03/10 13:49 CET

Resources:
  Processes: 7
  Disk usage:
    root: 673.00KiB
  CPU usage:
    CPU usage (in seconds): 1
  Memory usage:
    Memory (current): 32.58MiB
  Network usage:
    eth0:
      Type: broadcast
      State: UP
      Host interface: vetha50804ab
      MAC address: 00:16:3e:24:ae:e6
      MTU: 1500
      Bytes received: 20.93kB
      Bytes sent: 20.32kB
      Packets received: 150
      Packets sent: 200
      IP addresses:
        inet6: fd42:7f0:dca8:1282:216:3eff:fe24:aee6/64 (global)
        inet6: fe80::216:3eff:fe24:aee6/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 3.46kB
      Bytes sent: 3.46kB
      Packets received: 48
      Packets sent: 48
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

The error is reproducible with plain LXD as well; the second command below exits with code 2:

lxc launch ubuntu:24.04 c1
lxc shell c1 -- getent hosts snapcraft.io
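For context on that exit code: `getent` returns 0 when the lookup key is found and 2 when it is not, so exit code 2 here means the container could not resolve the name at all. A minimal demonstration, runnable on any host (no LXD needed):

```shell
# getent exit codes: 0 = key found, 2 = key not found in the database
getent hosts localhost >/dev/null; echo "localhost: $?"
getent hosts no-such-host.invalid >/dev/null; echo "invalid: $?"
# prints "localhost: 0" and "invalid: 2"
```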

Is this on my side? Did I misconfigure something? A different machine running 24.04 has no such problem.

There is no Docker, VPN, firewall, or proxy involved.

What does `snap list` show on the host?

Your c1 instance didn’t even get an IPv4 address through DHCP. If you had Docker installed and then removed it, did you reboot afterwards? Could you also double-check your firewall rules, to be extra sure? `sudo nft list ruleset` or `sudo iptables-save`.
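As a concrete sketch of those checks (assuming the test container from the original post is named c1, and that the host needs root for the firewall dumps):

```shell
# Did the container get an IPv4 lease? The IPV4 column should not be empty.
lxc list c1

# Inspect the container's eth0 addresses directly.
lxc exec c1 -- ip -4 addr show eth0

# Dump the host firewall rules, as suggested above. Depending on which
# backend is in use, one of these is usually empty.
sudo nft list ruleset
sudo iptables-save
```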

@tomp:

Name                       Version                         Revision  Tracking          Publisher      Notes
bare                       1.0                             5         latest/stable     canonical✓     base
chromium                   134.0.6998.35                   3060      latest/candidate  canonical✓     -
core                       16-2.61.4-20240607              17200     latest/stable     canonical✓     core
core18                     20250123                        2855      latest/stable     canonical✓     base
core20                     20241206                        2496      latest/stable     canonical✓     base
core22                     20250110                        1748      latest/stable     canonical✓     base
core24                     20241217                        739       latest/stable     canonical✓     base
cups                       2.4.11-3                        1070      latest/stable     openprinting✓  -
firefox                    136.0-3                         5862      latest/candidate  mozilla✓       -
firefox_beta               137.0b4-1                       5885      latest/beta       mozilla✓       -
firefox_edge               138.0a1                         5884      latest/edge       mozilla✓       -
gnome-3-28-1804            3.28.0-19-g98f9e67.98f9e67      198       latest/stable     canonical✓     -
gnome-42-2204              0+git.38ea591                   202       latest/stable/…   canonical✓     -
gnome-46-2404              0+git.9899d6d-sdk0+git.a4bf1ef  x1        latest/stable     -              -
gtk-common-themes          0.1-81-g442e511                 1535      latest/stable/…   canonical✓     -
lxd                        5.21.3-75def3c                  32455     5.21/stable       canonical✓     -
matterhorn                 50200.19.0-31-g05148608         x1        -                 -              -
mesa-2404                  24.2.8                          495       latest/stable     canonical✓     -
snap-store                 41.3-77-g7dc86c8                x1        latest/stable/…   -              -
snapcraft                  8.7.1                           13693     latest/stable     canonical✓     classic
snapd                      2.67.1                          23771     latest/stable     canonical✓     snapd
snapd-desktop-integration  0.9                             253       latest/stable/…   canonical✓     -
snappy-debug               0.36-snapd2.59.4                704       latest/stable     canonical✓     -
sourcecraft                0+git.07a932f                   273       latest/beta       sergiusens     classic
surl                       0.8.0                           414       latest/stable     verterok       -

@sdeziel1, if I ever installed Docker (which I do not remember doing), I have certainly rebooted my system since, as I do that daily.

`iptables-save` returned nothing, but here is `nft list ruleset`:

table inet firewalld { # progname firewalld
	flags owner,persist

	chain mangle_PREROUTING {
		type filter hook prerouting priority mangle + 10; policy accept;
		jump mangle_PREROUTING_POLICIES
	}

	chain mangle_PREROUTING_POLICIES {
		iifname "vetha39c8bf0" jump mangle_PRE_policy_allow-host-ipv6
		iifname "vetha39c8bf0" jump mangle_PRE_public
		iifname "vetha39c8bf0" return
		iifname "wlp3s0" jump mangle_PRE_policy_allow-host-ipv6
		iifname "wlp3s0" jump mangle_PRE_public
		iifname "wlp3s0" return
		jump mangle_PRE_policy_allow-host-ipv6
		jump mangle_PRE_public
		return
	}

	chain nat_PREROUTING {
		type nat hook prerouting priority dstnat + 10; policy accept;
		jump nat_PREROUTING_POLICIES
	}

	chain nat_PREROUTING_POLICIES {
		iifname "vetha39c8bf0" jump nat_PRE_policy_allow-host-ipv6
		iifname "vetha39c8bf0" jump nat_PRE_public
		iifname "vetha39c8bf0" return
		iifname "wlp3s0" jump nat_PRE_policy_allow-host-ipv6
		iifname "wlp3s0" jump nat_PRE_public
		iifname "wlp3s0" return
		jump nat_PRE_policy_allow-host-ipv6
		jump nat_PRE_public
		return
	}

	chain nat_POSTROUTING {
		type nat hook postrouting priority srcnat + 10; policy accept;
		jump nat_POSTROUTING_POLICIES
	}

	chain nat_POSTROUTING_POLICIES {
		iifname "vetha39c8bf0" oifname "vetha39c8bf0" jump nat_POST_public
		iifname "vetha39c8bf0" oifname "vetha39c8bf0" return
		iifname "wlp3s0" oifname "vetha39c8bf0" jump nat_POST_public
		iifname "wlp3s0" oifname "vetha39c8bf0" return
		oifname "vetha39c8bf0" jump nat_POST_public
		oifname "vetha39c8bf0" return
		iifname "vetha39c8bf0" oifname "wlp3s0" jump nat_POST_public
		iifname "vetha39c8bf0" oifname "wlp3s0" return
		iifname "wlp3s0" oifname "wlp3s0" jump nat_POST_public
		iifname "wlp3s0" oifname "wlp3s0" return
		oifname "wlp3s0" jump nat_POST_public
		oifname "wlp3s0" return
		iifname "vetha39c8bf0" jump nat_POST_public
		iifname "vetha39c8bf0" return
		iifname "wlp3s0" jump nat_POST_public
		iifname "wlp3s0" return
		jump nat_POST_public
		return
	}

	chain nat_OUTPUT {
		type nat hook output priority dstnat + 10; policy accept;
		jump nat_OUTPUT_POLICIES
	}

	chain nat_OUTPUT_POLICIES {
		oifname "vetha39c8bf0" jump nat_OUT_public
		oifname "vetha39c8bf0" return
		oifname "wlp3s0" jump nat_OUT_public
		oifname "wlp3s0" return
		jump nat_OUT_public
		return
	}

	chain filter_PREROUTING {
		type filter hook prerouting priority filter + 10; policy accept;
		icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
		meta nfproto ipv6 fib saddr . mark . iif oif missing drop
	}

	chain filter_INPUT {
		type filter hook input priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ct state invalid drop
		jump filter_INPUT_POLICIES
		reject with icmpx admin-prohibited
	}

	chain filter_FORWARD {
		type filter hook forward priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ct state invalid drop
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
		jump filter_FORWARD_POLICIES
		reject with icmpx admin-prohibited
	}

	chain filter_OUTPUT {
		type filter hook output priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		oifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
		jump filter_OUTPUT_POLICIES
	}

	chain filter_INPUT_POLICIES {
		iifname "vetha39c8bf0" jump filter_IN_policy_allow-host-ipv6
		iifname "vetha39c8bf0" jump filter_IN_public
		iifname "vetha39c8bf0" reject with icmpx admin-prohibited
		iifname "wlp3s0" jump filter_IN_policy_allow-host-ipv6
		iifname "wlp3s0" jump filter_IN_public
		iifname "wlp3s0" reject with icmpx admin-prohibited
		jump filter_IN_policy_allow-host-ipv6
		jump filter_IN_public
		reject with icmpx admin-prohibited
	}

	chain filter_FORWARD_POLICIES {
		iifname "vetha39c8bf0" oifname "vetha39c8bf0" jump filter_FWD_public
		iifname "vetha39c8bf0" oifname "vetha39c8bf0" reject with icmpx admin-prohibited
		iifname "vetha39c8bf0" oifname "wlp3s0" jump filter_FWD_public
		iifname "vetha39c8bf0" oifname "wlp3s0" reject with icmpx admin-prohibited
		iifname "vetha39c8bf0" jump filter_FWD_public
		iifname "vetha39c8bf0" reject with icmpx admin-prohibited
		iifname "wlp3s0" oifname "vetha39c8bf0" jump filter_FWD_public
		iifname "wlp3s0" oifname "vetha39c8bf0" reject with icmpx admin-prohibited
		iifname "wlp3s0" oifname "wlp3s0" jump filter_FWD_public
		iifname "wlp3s0" oifname "wlp3s0" reject with icmpx admin-prohibited
		iifname "wlp3s0" jump filter_FWD_public
		iifname "wlp3s0" reject with icmpx admin-prohibited
		oifname "vetha39c8bf0" jump filter_FWD_public
		oifname "vetha39c8bf0" reject with icmpx admin-prohibited
		oifname "wlp3s0" jump filter_FWD_public
		oifname "wlp3s0" reject with icmpx admin-prohibited
		jump filter_FWD_public
		reject with icmpx admin-prohibited
	}

	chain filter_OUTPUT_POLICIES {
		oifname "vetha39c8bf0" jump filter_OUT_public
		oifname "vetha39c8bf0" return
		oifname "wlp3s0" jump filter_OUT_public
		oifname "wlp3s0" return
		jump filter_OUT_public
		return
	}

	chain filter_IN_public {
		jump filter_IN_public_pre
		jump filter_IN_public_log
		jump filter_IN_public_deny
		jump filter_IN_public_allow
		jump filter_IN_public_post
		meta l4proto { icmp, ipv6-icmp } accept
	}

	chain filter_IN_public_pre {
	}

	chain filter_IN_public_log {
	}

	chain filter_IN_public_deny {
	}

	chain filter_IN_public_allow {
		tcp dport 22 accept
		ip6 daddr fe80::/64 udp dport 546 accept
	}

	chain filter_IN_public_post {
	}

	chain filter_OUT_public {
		jump filter_OUT_public_pre
		jump filter_OUT_public_log
		jump filter_OUT_public_deny
		jump filter_OUT_public_allow
		jump filter_OUT_public_post
	}

	chain filter_OUT_public_pre {
	}

	chain filter_OUT_public_log {
	}

	chain filter_OUT_public_deny {
	}

	chain filter_OUT_public_allow {
	}

	chain filter_OUT_public_post {
	}

	chain nat_OUT_public {
		jump nat_OUT_public_pre
		jump nat_OUT_public_log
		jump nat_OUT_public_deny
		jump nat_OUT_public_allow
		jump nat_OUT_public_post
	}

	chain nat_OUT_public_pre {
	}

	chain nat_OUT_public_log {
	}

	chain nat_OUT_public_deny {
	}

	chain nat_OUT_public_allow {
	}

	chain nat_OUT_public_post {
	}

	chain nat_POST_public {
		jump nat_POST_public_pre
		jump nat_POST_public_log
		jump nat_POST_public_deny
		jump nat_POST_public_allow
		jump nat_POST_public_post
	}

	chain nat_POST_public_pre {
	}

	chain nat_POST_public_log {
	}

	chain nat_POST_public_deny {
	}

	chain nat_POST_public_allow {
	}

	chain nat_POST_public_post {
	}

	chain filter_FWD_public {
		jump filter_FWD_public_pre
		jump filter_FWD_public_log
		jump filter_FWD_public_deny
		jump filter_FWD_public_allow
		jump filter_FWD_public_post
	}

	chain filter_FWD_public_pre {
	}

	chain filter_FWD_public_log {
	}

	chain filter_FWD_public_deny {
	}

	chain filter_FWD_public_allow {
		oifname "wlp3s0" accept
		oifname "vetha39c8bf0" accept
	}

	chain filter_FWD_public_post {
	}

	chain nat_PRE_public {
		jump nat_PRE_public_pre
		jump nat_PRE_public_log
		jump nat_PRE_public_deny
		jump nat_PRE_public_allow
		jump nat_PRE_public_post
	}

	chain nat_PRE_public_pre {
	}

	chain nat_PRE_public_log {
	}

	chain nat_PRE_public_deny {
	}

	chain nat_PRE_public_allow {
	}

	chain nat_PRE_public_post {
	}

	chain mangle_PRE_public {
		jump mangle_PRE_public_pre
		jump mangle_PRE_public_log
		jump mangle_PRE_public_deny
		jump mangle_PRE_public_allow
		jump mangle_PRE_public_post
	}

	chain mangle_PRE_public_pre {
	}

	chain mangle_PRE_public_log {
	}

	chain mangle_PRE_public_deny {
	}

	chain mangle_PRE_public_allow {
	}

	chain mangle_PRE_public_post {
	}

	chain filter_IN_policy_allow-host-ipv6 {
		jump filter_IN_policy_allow-host-ipv6_pre
		jump filter_IN_policy_allow-host-ipv6_log
		jump filter_IN_policy_allow-host-ipv6_deny
		jump filter_IN_policy_allow-host-ipv6_allow
		jump filter_IN_policy_allow-host-ipv6_post
	}

	chain filter_IN_policy_allow-host-ipv6_pre {
	}

	chain filter_IN_policy_allow-host-ipv6_log {
	}

	chain filter_IN_policy_allow-host-ipv6_deny {
	}

	chain filter_IN_policy_allow-host-ipv6_allow {
		icmpv6 type nd-neighbor-advert accept
		icmpv6 type nd-neighbor-solicit accept
		icmpv6 type nd-redirect accept
		icmpv6 type nd-router-advert accept
	}

	chain filter_IN_policy_allow-host-ipv6_post {
	}

	chain nat_PRE_policy_allow-host-ipv6 {
		jump nat_PRE_policy_allow-host-ipv6_pre
		jump nat_PRE_policy_allow-host-ipv6_log
		jump nat_PRE_policy_allow-host-ipv6_deny
		jump nat_PRE_policy_allow-host-ipv6_allow
		jump nat_PRE_policy_allow-host-ipv6_post
	}

	chain nat_PRE_policy_allow-host-ipv6_pre {
	}

	chain nat_PRE_policy_allow-host-ipv6_log {
	}

	chain nat_PRE_policy_allow-host-ipv6_deny {
	}

	chain nat_PRE_policy_allow-host-ipv6_allow {
	}

	chain nat_PRE_policy_allow-host-ipv6_post {
	}

	chain mangle_PRE_policy_allow-host-ipv6 {
		jump mangle_PRE_policy_allow-host-ipv6_pre
		jump mangle_PRE_policy_allow-host-ipv6_log
		jump mangle_PRE_policy_allow-host-ipv6_deny
		jump mangle_PRE_policy_allow-host-ipv6_allow
		jump mangle_PRE_policy_allow-host-ipv6_post
	}

	chain mangle_PRE_policy_allow-host-ipv6_pre {
	}

	chain mangle_PRE_policy_allow-host-ipv6_log {
	}

	chain mangle_PRE_policy_allow-host-ipv6_deny {
	}

	chain mangle_PRE_policy_allow-host-ipv6_allow {
	}

	chain mangle_PRE_policy_allow-host-ipv6_post {
	}
}
table inet lxd {
	chain pstrt.lxdbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.47.137.0/24 ip daddr != 10.47.137.0/24 oifname != "lxdbr0" masquerade
		ip6 saddr fd42:7f0:dca8:1282::/64 ip6 daddr != fd42:7f0:dca8:1282::/64 oifname != "lxdbr0" masquerade
	}

	chain fwd.lxdbr0 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "lxdbr0" accept
		ip version 4 iifname "lxdbr0" accept
		ip6 version 6 oifname "lxdbr0" accept
		ip6 version 6 iifname "lxdbr0" accept
	}

	chain in.lxdbr0 {
		type filter hook input priority filter; policy accept;
		iifname "lxdbr0" tcp dport 53 accept
		iifname "lxdbr0" udp dport 53 accept
		iifname "lo" tcp dport 53 accept
		iifname "lo" udp dport 53 accept
		ip daddr 10.47.137.1 tcp dport 53 drop
		ip daddr 10.47.137.1 udp dport 53 drop
		iifname "lxdbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "lxdbr0" udp dport 67 accept
		ip6 daddr fd42:7f0:dca8:1282::1 tcp dport 53 drop
		ip6 daddr fd42:7f0:dca8:1282::1 udp dport 53 drop
		iifname "lxdbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "lxdbr0" udp dport 547 accept
	}

	chain out.lxdbr0 {
		type filter hook output priority filter; policy accept;
		oifname "lxdbr0" tcp sport 53 accept
		oifname "lxdbr0" udp sport 53 accept
	}
}

Thanks for the output. I’m still trying to reproduce (slow downloads), but in the meantime, have you applied this recommendation: https://documentation.ubuntu.com/lxd/en/latest/howto/network_bridge_firewalld/#firewalld-add-the-bridge-to-the-trusted-zone
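The linked page boils down to adding the LXD bridge to firewalld's trusted zone so the container's DHCP and DNS traffic is no longer filtered. Roughly (assuming the default bridge name lxdbr0, as seen in the ruleset above):

```shell
# Add the LXD bridge to firewalld's trusted zone, per the LXD docs linked above
sudo firewall-cmd --zone=trusted --change-interface=lxdbr0 --permanent

# Apply the permanent configuration to the running firewall
sudo firewall-cmd --reload
```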


@nteodosio I cannot reproduce with a 25.04 VM in which I launch a 24.04 container. It works for me there, but I don’t have firewalld getting in the way.

$ lxc launch ubuntu-minimal-daily:25.04 v1 --vm -p vm
Launching v1
$ lxc shell v1                
root@v1:~# snap install lxd
2025-03-11T15:14:36Z INFO Waiting for automatic snapd restart...
lxd (5.21/stable) 5.21.3-75def3c from Canonical✓ installed
root@v1:~# lxd init --auto
root@v1:~# lxc launch ubuntu:24.04 c1
Launching c1
root@v1:~# lxc list
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| c1   | RUNNING | 10.202.231.109 (eth0) | fd42:be7a:1988:f1c5:216:3eff:fe9e:85a8 (eth0) | CONTAINER | 0         |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
root@v1:~# lxc shell c1 -- getent hosts snapcraft.io 
2620:2d:4000:1::27 snapcraft.io
2620:2d:4000:1::26 snapcraft.io
2620:2d:4000:1::28 snapcraft.io

Hmm, I don’t know why I have firewalld; I certainly don’t use it. After `apt remove firewalld`, the error is no longer hit. Thank you!
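A quick way to confirm the fix, after removing (or reconfiguring) firewalld, is to rerun the reproduction from the original post with a fresh container; for instance:

```shell
# A fresh container should now get an IPv4 address via DHCP
lxc launch ubuntu:24.04 c2
sleep 10   # give cloud-init and DHCP a moment to finish
lxc list c2

# Name resolution should now succeed (exit code 0)
lxc exec c2 -- getent hosts snapcraft.io
```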


It might be that it comes with the default desktop install of 25.04. Anyway, it’s good that you’ve put this on our radar: if you are the first to trip on it, you are certainly not the last :wink:

Thanks!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.