Cannot ping Containers or VMs

So I have a fresh install of LXD 5.21 on Ubuntu 24. I used `lxd init` and created the default bridge.

I created a container and I can ping everything from inside it: internal network and internet, no problems.

But I can't reach the container from outside: no ping, no SSH.

I CAN ping it from the LXD server itself.

This is my bridge config:

```
lxc network info lxdbr0
Name: lxdbr0
MAC address: 00:16:3e:79:b1:a4
MTU: 1500
State: up
Type: broadcast

IP addresses:
  inet	10.128.48.1/24 (global)
  inet6	fd42:969c:da3c:73a2::1/64 (global)
  inet6	fe80::216:3eff:fe79:b1a4/64 (link)

Network usage:
  Bytes received: 46.89kB
  Bytes sent: 341.72kB
  Packets received: 383
  Packets sent: 375

Bridge:
  ID: 8000.00163e79b1a4
  STP: false
  Forward delay: 1500
  Default VLAN ID: 1
  VLAN filtering: true
  Upper devices: tap283c79ff
```

```yaml
name: lxdbr0
description: ''
type: bridge
config:
  ipv4.address: 10.128.48.1/24
  ipv4.nat: 'true'
  ipv6.address: fd42:969c:da3c:73a2::1/64
  ipv6.nat: 'true'
```

Where are those connections coming from? Is this from an external machine, not the LXD host itself?
If so, then that’s likely because ipv4.nat and ipv6.nat are enabled, which makes those IP ranges reachable only from the LXD host itself and other instances on it.
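For context, you can see this on the host itself (a hypothetical diagnostic, assuming LXD is using the nftables firewall driver; the table layout differs with xtables):

```shell
# Run on the LXD host (not inside a container):
sudo nft list table inet lxd   # shows the masquerade (NAT) rules LXD added for lxdbr0
ip route                       # the 10.128.48.0/24 route only exists on this host,
                               # so external machines have no path to the containers
```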

Yes, they come from outside. I tried disabling NAT, but at that point the containers are completely isolated.

If you have a wired NIC, you could have your LXD instances connected to your LAN and thus sharing the same IP range(s) as your other machines. This should avoid needing NAT and also make it easier for external connectivity.

To do that, set bridge.external_interfaces to a physical NIC that’s connected to your LAN and then remove the ipv4.* and ipv6.* settings on that lxdbr0. After that, your instances will be directly on your LAN and will request DHCP/DNS information like any other machine.
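As a rough sketch of those steps (the NIC name `enp5s0` is a placeholder; substitute your actual LAN-facing interface):

```shell
# Attach the bridge to the physical LAN NIC (placeholder name)
lxc network set lxdbr0 bridge.external_interfaces=enp5s0

# Remove LXD's own addressing/NAT so instances use the LAN's DHCP instead
lxc network unset lxdbr0 ipv4.address
lxc network unset lxdbr0 ipv4.nat
lxc network unset lxdbr0 ipv6.address
lxc network unset lxdbr0 ipv6.nat
```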

Apologies, I'm a bit out of my element here. You mean in the YAML section?

```yaml
name: lxdbr0
description: ''
type: bridge
config:
  ipv4.firewall: 'true'
  ipv4.address: 10.128.48.1/24
  ipv4.nat: 'true'
  ipv6.address: fd42:969c:da3c:73a2::1/64
  ipv6.nat: 'true'
```

Because if I try, I'm getting this:

I would caution that using bridge.external_interfaces will expose LXD’s DHCP server to the external network, possibly causing disruption. So use with caution.

If you just want to expose certain ports from services inside containers to the external network using the host’s IP, then you could also look at using the proxy device.

See https://documentation.ubuntu.com/lxd/en/latest/reference/devices_proxy/
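For example (the instance name `c1` and the ports are placeholders), exposing a web server inside a container on the host's port 80 would look something like:

```shell
# Forward host port 80 to port 80 inside container c1 (all names are examples)
lxc config device add c1 myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
```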

Alternatively you could set up a static route in your network’s router to route the lxdbr0 private subnet (10.128.48.0/24 in this case) via the LXD host’s IP.
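On a Linux-based router that might look like the following (the host address 192.168.1.50 is a made-up example; your router's UI will differ):

```shell
# Run on the router: send traffic for the container subnet via the LXD host
ip route add 10.128.48.0/24 via 192.168.1.50
```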

I'm not sure if maybe this is the wrong product for us. We used to use VMware and all the VMs were directly accessible. We just want to be able to access these VMs without extra configuration for each one.

Is this a weird use case? I'm very confused.

I suspect what you want is to connect the instance directly to the external network and not use LXD’s default managed lxdbr0 network at all.

Try this:

```
lxc config device add <instance> eth0 nic nictype=macvlan parent=<host's external interface name>
```

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nic-macvlan

Alternatively you can setup a manual bridge that is connected to your external network and then use:

```
lxc config device add <instance> eth0 nic nictype=bridged parent=<manual bridge name>
```

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nic-bridged

See netplan/examples/bridge.yaml in the canonical/netplan repository on GitHub.
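A minimal sketch along the lines of that example, assuming `ens160` is the host's external NIC (the file path, interface names, and DHCP settings are placeholders to adapt):

```yaml
# /etc/netplan/99-br0.yaml (hypothetical)
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: false
  bridges:
    br0:
      interfaces: [ens160]
      dhcp4: true
```

Apply it with `sudo netplan apply`, then use `parent=br0` in the bridged NIC device.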

lxc config device add eth0 nic nictype=macvlan parent=

So I tried this, but then I still have the bridged NIC with the same name eth0. How can I remove it and/or directly create an instance with the right settings?

I set up another LXD server and skipped the bridge creation during `lxd init`, and now I have one macvlan interface connected to the instance, but no IP at all.

```
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, powerflex, zfs, btrfs, ceph) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=9GiB]: 30
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: ens160
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: '[::]:8443'
networks: []
storage_pools:
- config:
    size: 30GiB
  description: ""
  name: default
  driver: zfs
storage_volumes: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: macvlan
      parent: ens160
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
```


```
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:2a:69:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:908:c20a:1c10:216:3eff:fe2a:6950/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86398sec preferred_lft 14398sec
    inet6 fe80::216:3eff:fe2a:6950/64 scope link 
       valid_lft forever preferred_lft forever
root@testmaclavan:~# 
```

Set up netplan, then lxdbr0 again.

If the eth0 device is coming from a profile, then `lxc config device add` will override it for that instance.

If the instance already has an eth0 device in its local config, then use `lxc config device remove <instance> eth0` first before doing `lxc config device add`.
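Put together, that sequence would look something like this (the instance name `c1` and parent `ens160` are placeholders):

```shell
# Remove the instance-local eth0 device, then re-add it as macvlan
lxc config device remove c1 eth0
lxc config device add c1 eth0 nic nictype=macvlan parent=ens160
```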