Network profile hosting setup

I have three dedicated servers at a large hosting provider in Europe.
They run Ubuntu 24.04 LTS with LXD installed from the snap.
Each is directly connected to the internet with a public IP:

# ifconfig # example of one of the three servers

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 188.138.29.147  netmask 255.255.255.0  broadcast 188.138.29.255
  ether ac:1f:6b:fd:b6:2a  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  loop  txqueuelen 1000  (Local Loopback)

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 192.168.124.1  netmask 255.255.255.0  broadcast 0.0.0.0
  ether 00:16:3e:df:95:7f  txqueuelen 1000  (Ethernet)

vethc411bfab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  ether d2:a7:b3:6b:aa:18  txqueuelen 1000  (Ethernet)

# ip route
default via 188.138.29.1 dev eth0 proto static 
188.138.29.0/24 dev eth0 proto kernel scope link src 188.138.29.147 metric 100 
188.138.29.1 dev eth0 proto dhcp scope link src 188.138.29.147 metric 100 
192.168.124.0/24 dev lxdbr0 proto kernel scope link src 192.168.124.1 
188.138.32.235 via 192.168.124.15 dev lxdbr0 

# ip link show type bridge
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:df:95:7f brd ff:ff:ff:ff:ff:ff

# lxc network list
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |       IPV4       |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
| eth0     | physical | NO      |                  |                           |             | 0       |         |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
| lxdbr0   | bridge   | YES     | 192.168.124.1/24 | fd42:62d6:6252:fe5e::1/64 |             | 2       | CREATED |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+

This is all pretty standard, with the exception of the 188.138.32.235
route via 192.168.124.15, which is explained further down.

I have also subscribed to a /27 network with 30 usable public IPs:
188.138.32.224/27 => 188.138.32.225 … 188.138.32.254, each of which
can be routed individually to one of the three servers.

Since these IPs are routed to the public IP of one of the servers,
no layer 2 (ARP) setup is necessary to get packets for these IPs
onto the public interface (eth0) of the servers.

My setup assigns one of these IPs directly to an LXD container or VM.
No NAT should be used - the public IP is fully connected to the VM.
The network inside the container/VM looks as follows:

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.124.15  netmask 255.255.255.0  broadcast 192.168.124.255
        inet6 fe80::fcce:26ff:fe29:7c15  prefixlen 64  scopeid 0x20<link>
        ether fe:ce:26:29:7c:15  txqueuelen 1000  (Ethernet)

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 188.138.32.235  netmask 255.255.255.255  broadcast 188.138.32.235
        ether fe:ce:26:29:7c:15  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.124.1   0.0.0.0         UG    0      0        0 eth0
192.168.124.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| joan | RUNNING | 192.168.124.15 (eth0) | fd42:62d6:6252:fe5e:216:3eff:fef1:37ce (eth0) | CONTAINER | 0         |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

This is pretty standard with the LXD default profile, with the exception
of eth0:0, which adds the public IP to the VM/container.
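
For reference, such an alias can be added inside the guest with a single
command (a one-off example; making it persistent depends on the distro):

ip addr add 188.138.32.235/32 dev eth0 label eth0:0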

On the host server, two additional pieces of network setup are necessary,
one for incoming and one for outgoing packets:

  1. A route to the VM's public IP through the bridge:
ip route add 188.138.32.235 via 192.168.124.15 dev lxdbr0

So every packet for 188.138.32.235 that arrives at 188.138.29.147
(the server's IP on eth0) reaches the VM/container via the bridge.

  2. Every packet from the VM/container to the internet is routed to the
     server through the bridge and then follows the server's default route.
     To make these packets originate from the VM/container's public IP
     (188.138.32.235), an iptables POSTROUTING SNAT source rewrite is needed:
Chain POSTROUTING
SNAT all  --  * eth0 192.168.124.15 0.0.0.0/0 to:188.138.32.235

iptables -t nat -A POSTROUTING -s 192.168.124.15 -o eth0 -j SNAT --to 188.138.32.235
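
For reference, both host-side steps can be wrapped in a small script
(a sketch; the script name and variables are placeholders):

#!/bin/sh
# Sketch: the two host-side steps for one instance.
# Usage: ./add-public-ip.sh 192.168.124.15 188.138.32.235
PRIVATE_IP="$1"
PUBLIC_IP="$2"

# 1. Route the public IP to the instance through the bridge:
ip route add "$PUBLIC_IP" via "$PRIVATE_IP" dev lxdbr0

# 2. Rewrite the source address of outgoing packets to the public IP:
iptables -t nat -A POSTROUTING -s "$PRIVATE_IP" -o eth0 -j SNAT --to "$PUBLIC_IP"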

This all works perfectly - but I have to set everything up manually for every VM.
Would it be possible to write a profile that can set this up automatically?

Thanks for any help.

Cheers
Axel Reinhold

Profiles only apply configuration to the instances they are assigned to; they cannot change the LXD host's configuration.

However, it struck me that your scenario may be better served by avoiding bridge networking entirely (or at least not relying on it alone) and using a routed NIC for the external IPs instead.

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nictype-routed

This NIC type sets up IP neighbour proxy rules on your specified host parent interface, and then sets up static routes into your container’s network interface.
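
On the host this amounts to roughly the following (a sketch using the IP
from this thread; the veth name is a placeholder chosen by LXD at runtime):

ip neigh add proxy 188.138.32.235 dev eth0
ip route add 188.138.32.235 dev vethXXXXXXX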

This allows you to configure the actual external IP inside the container and have it correctly routed to and from the container via the host, with no NAT required.
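
For example, something along these lines (a sketch; the profile name is a
placeholder and the addresses are the ones from this thread):

lxc profile create routed-235
lxc profile device add routed-235 eth0 nic nictype=routed parent=eth0 ipv4.address=188.138.32.235
lxc launch ubuntu:24.04 joan -p default -p routed-235

Since each instance needs its own address, you would either create one such
profile per IP or add the device per instance with lxc config device add.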

Thank you for that hint. The moment I understood how this onlink thing works, I realized this is the way to go. Because it is a VM rather than a container, I had to set up the network inside the VM manually:

# cat /etc/systemd/network/enp5s0.network
[Match]
Name=enp5s0

[Network]
Address=188.138.31.248/32
DNS=8.8.8.8
# 192.168.124.1 no longer works; dnsmasq does not answer
       
[Route]
Gateway=169.254.0.1
PreferredSource=188.138.31.248
GatewayOnlink=true
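
The new configuration can then be picked up without a reboot (assuming
systemd-networkd manages the interface, as it does here):

# systemctl restart systemd-networkd
# networkctl status enp5s0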

The only issue I have is that LXD's dnsmasq on the host no longer resolves.
Is there a configuration to open it up again over this route? Routing
to 192.168.124.1 works. This is lxdbr0 on the host:

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.124.1  netmask 255.255.255.0  broadcast 0.0.0.0

Yes, LXD now adds a firewall rule that prevents DNS requests from IPs outside of the lxdbr0 network range.

You could potentially run your own DNS resolver (like dnsmasq or unbound), point to an external resolver, or add a NIC to your instance that is connected to lxdbr0 (with the DHCP client disabled so as not to replace the default route) and then access dnsmasq that way, as sketched below.
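
For the last option, roughly (a sketch; assuming the second NIC shows up as
enp6s0 inside the VM, and using a free address from the lxdbr0 range):

lxc config device add joan eth1 nic network=lxdbr0

# /etc/systemd/network/enp6s0.network
[Match]
Name=enp6s0

[Network]
Address=192.168.124.15/24
DNS=192.168.124.1

With no [Route] section the default route via 169.254.0.1 stays in place,
and DNS queries now reach dnsmasq from inside the lxdbr0 range.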

How can I see this rule? iptables -L shows only my own rules.

LXD is probably using nftables, so: sudo nft list ruleset
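
To see only LXD's rules (assuming a recent LXD, which keeps them in its own
nftables table):

sudo nft list table inet lxd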