Network profile hosting setup

I have three dedicated servers at a large hosting provider in Europe.
They are running Ubuntu 24.04 LTS with LXD installed from the snap.
They are directly connected to the internet, each with a public IP:

# ifconfig # example of one of the three servers

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 188.138.29.147  netmask 255.255.255.0  broadcast 188.138.29.255
  ether ac:1f:6b:fd:b6:2a  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  loop  txqueuelen 1000  (Local Loopback)

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 192.168.124.1  netmask 255.255.255.0  broadcast 0.0.0.0
  ether 00:16:3e:df:95:7f  txqueuelen 1000  (Ethernet)

vethc411bfab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  ether d2:a7:b3:6b:aa:18  txqueuelen 1000  (Ethernet)

# ip route
default via 188.138.29.1 dev eth0 proto static 
188.138.29.0/24 dev eth0 proto kernel scope link src 188.138.29.147 metric 100 
188.138.29.1 dev eth0 proto dhcp scope link src 188.138.29.147 metric 100 
192.168.124.0/24 dev lxdbr0 proto kernel scope link src 192.168.124.1 
188.138.32.235 via 192.168.124.15 dev lxdbr0 

# ip link show type bridge
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:df:95:7f brd ff:ff:ff:ff:ff:ff

# lxc network list
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |       IPV4       |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
| eth0     | physical | NO      |                  |                           |             | 0       |         |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+
| lxdbr0   | bridge   | YES     | 192.168.124.1/24 | fd42:62d6:6252:fe5e::1/64 |             | 2       | CREATED |
+----------+----------+---------+------------------+---------------------------+-------------+---------+---------+

This is all pretty standard, with the exception of the 188.138.32.235
route via 192.168.124.15, which is explained further down.

I have also subscribed to a /27 network with 30 usable public IPs:
188.138.32.224/27 => 188.138.32.225 … 188.138.32.254, each of which
can be routed individually to one of the three servers.

Since these IPs are routed to the servers' public IPs, no layer 2 (ARP)
setup is necessary to get packets for these IPs onto the servers'
public interface (eth0).
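
One way to check this on a server (not part of the setup, just a quick verification) is to watch for packets to one of the routed IPs arriving on eth0:

# On the host: packets for a routed IP should show up on eth0
tcpdump -ni eth0 host 188.138.32.235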

My setup assigns one of these IPs directly to an LXD container or VM.
No NAT should be used - the public IP is fully connected to the VM.
The network on the container/VM looks as follows:

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.124.15  netmask 255.255.255.0  broadcast 192.168.124.255
        inet6 fe80::fcce:26ff:fe29:7c15  prefixlen 64  scopeid 0x20<link>
        ether fe:ce:26:29:7c:15  txqueuelen 1000  (Ethernet)

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 188.138.32.235  netmask 255.255.255.255  broadcast 188.138.32.235
        ether fe:ce:26:29:7c:15  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.124.1   0.0.0.0         UG    0      0        0 eth0
192.168.124.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| joan | RUNNING | 192.168.124.15 (eth0) | fd42:62d6:6252:fe5e:216:3eff:fef1:37ce (eth0) | CONTAINER | 0         |
+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

This is pretty standard for the LXD default profile, with the exception
of eth0:0, which adds the public IP to the VM/container.
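
For reference, the alias shown above can be created inside the container/VM with a single command (a sketch; the label eth0:0 just reproduces the ifconfig view):

# Inside the container/VM: add the public IP as a /32 alias on eth0
ip address add 188.138.32.235/32 dev eth0 label eth0:0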

On the host server, two additional network setups are necessary, one
for incoming and one for outgoing packets:

  1. A route to the VM's public IP through the bridge:
ip route add 188.138.32.235 via 192.168.124.15 dev lxdbr0

So every packet to 188.138.32.235 which arrives at 188.138.29.147
(the server's IP on eth0) reaches the VM/container via the bridge.

  2. Every packet from the VM/container to the internet is routed to the
     server through the bridge and on via the server's default route.
     To make these packets originate from the VM/container's public IP
     (188.138.32.235), an iptables POSTROUTING SNAT source rewrite is needed:
Chain POSTROUTING
SNAT all  --  * eth0 192.168.124.15 0.0.0.0/0 to:188.138.32.235

iptables -t nat -A POSTROUTING -s 192.168.124.15 -o eth0 -j SNAT --to 188.138.32.235
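
Both host-side steps can be wrapped in a small helper script (a sketch; PUB and PRIV are placeholders set to the values from this example):

#!/bin/sh
# Sketch: wire one public IP to one instance address on lxdbr0
PUB=188.138.32.235     # public IP assigned to the instance
PRIV=192.168.124.15    # instance address on lxdbr0

# 1. route incoming packets for the public IP into the bridge
ip route add "$PUB" via "$PRIV" dev lxdbr0

# 2. rewrite the source address of outgoing packets to the public IP
iptables -t nat -A POSTROUTING -s "$PRIV" -o eth0 -j SNAT --to "$PUB"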

This all works perfectly - but I have to set it up completely manually for every VM.
Would it be possible to write a profile that can set this up automatically?

Thanks for any help.

Cheers
Axel Reinhold

Profiles only apply config to the instances they are assigned to; they do not configure the LXD host.

However, it struck me that your scenario may be better served by avoiding bridge networking entirely (or at least not using only it) and instead using a routed NIC for the external IPs.

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nictype-routed

This NIC type sets up IP neighbour proxy rules on your specified host parent interface, and then sets up static routes into your container’s network interface.

This allows you to configure the actual external IP inside the container and have it correctly routed to and from the container via the host, with no NAT required.
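
A minimal sketch of what this could look like for the container from the first example (the instance-level eth0 device overrides the profile's bridged NIC; names and addresses are taken from the example above):

lxc config device add joan eth0 nic nictype=routed parent=eth0 ipv4.address=188.138.32.235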

Thank you for that hint. The moment I understood how this onlink thing works, I realized this is the way to go. Because it is not a container but a VM, I had to set up the network in the VM manually:

cat /etc/systemd/network/enp5s0.network
[Match]
Name=enp5s0

[Network]
Address=188.138.31.248/32
DNS=8.8.8.8
#192.168.124.1 no longer works - dnsmasq does not answer

[Route]
Gateway=169.254.0.1
PreferredSource=188.138.31.248
GatewayOnlink=true
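
After writing this file, it can be applied inside the VM with (assuming systemd-networkd manages the interface):

systemctl restart systemd-networkd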

The only issue I have is that LXD's dnsmasq on the host no longer resolves.
Is there a configuration to open it up again over this route? Routing
to 192.168.124.1 works. This is lxdbr0 on the host:

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.124.1  netmask 255.255.255.0  broadcast 0.0.0.0

Yes, we have a firewall rule added by LXD now that prevents DNS requests from IPs outside of the lxdbr0 network range.

You could potentially run your own DNS resolver (like dnsmasq or unbound), point to an external resolver, or add a NIC to your instance that is connected to lxdbr0 (with the DHCP client disabled, so as not to replace the default route) and then access dnsmasq that way.
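
For the last option, a sketch (the instance name myvm and the address 192.168.124.20 are placeholders): attach a second NIC connected to lxdbr0 and give it a static address in the bridge's range, without a default route:

lxc config device add myvm eth1 nic network=lxdbr0 name=eth1

# /etc/systemd/network/eth1.network inside the instance
[Match]
Name=eth1

[Network]
DHCP=no
Address=192.168.124.20/24
DNS=192.168.124.1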

How can I see this rule? iptables -L shows only my own rules.

LXD is probably using nftables, so use sudo nft list ruleset

I have manually set up a working network configuration which
combines bridged, routed, macvlan and ipvlan networking. It has
the features of these network types while avoiding their specific
disadvantages, and it creates layer 3 routing from the instance
to the host and the internet, no matter whether the host is directly
connected or behind an external router, with or without NAT.

The setup needs no firewall rules, additional bridges or virtual
interfaces - it uses the standard bridge lxdbr0. Since it is a
routed connection, the host can fully control the traffic to the
instance with simple firewall FORWARD rules (see the sketch below).
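
For example, traffic to the instance could be restricted with ordinary FORWARD rules on the host (illustrative only; the address is the instance IP used below):

# allow SSH to the instance, drop everything else
iptables -A FORWARD -d 192.168.9.155 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -d 192.168.9.155 -j DROP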

On the instance the network looks pretty simple, and the setup
needs three commands on the instance and two commands on the
host:

ip address add 192.168.9.155/32 broadcast 192.168.9.155 dev eth0
ip -4 route add default via 192.168.181.1 dev eth0 proto static onlink
echo "nameserver 192.168.4.200" >/etc/resolv.conf

hmm5 ~ # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.9.155  netmask 255.255.255.255  broadcast 192.168.9.155
        inet6 fe80::216:3eff:fec8:3249  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:c8:32:49  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>

hmm5 ~ # ip route
default via 192.168.181.1 dev eth0 proto static onlink

HOST:
ip -4 route add 192.168.9.155/32 via 192.168.9.155 dev lxdbr0 onlink
ip neighbour add proxy 192.168.9.155 dev bond0 nud permanent
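
These two entries can be quickly verified on the host (the proxy entry also appears in the arp output further down):

ip route get 192.168.9.155           # should resolve via lxdbr0
ip neighbour show proxy dev bond0    # should list 192.168.9.155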

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.9.81  netmask 255.255.255.0  broadcast 192.168.9.255
        inet6 fe80::921b:eff:fe34:a705  prefixlen 64  scopeid 0x20<link>
        ether 90:1b:0e:34:a7:05  txqueuelen 1000  (Ethernet)

enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.241  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 2003:a:112c:9200:921b:eff:fe30:e968  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::921b:eff:fe30:e968  prefixlen 64  scopeid 0x20<link>
        inet6 fde2:8acd:e9d3:0:921b:eff:fe30:e968  prefixlen 64  scopeid 0x0<global>
        ether 90:1b:0e:30:e9:68  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.181.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:16:3e:c0:d0:68  txqueuelen 1000  (Ethernet)

veth17721407: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 0a:95:c4:e5:87:54  txqueuelen 1000  (Ethernet)

mask ~/adm # ip route
default via 192.168.4.200 dev enp5s0 src 192.168.4.241 metric 4
192.168.4.0/24 dev enp5s0 proto dhcp scope link src 192.168.4.241 metric 4
192.168.9.0/24 dev bond0 proto kernel scope link src 192.168.9.81
192.168.181.0/24 dev lxdbr0 proto kernel scope link src 192.168.181.1
192.168.9.155 via 192.168.9.155 dev lxdbr0 onlink

mask ~/adm # arp -a
? (192.168.4.96) at 90:1b:0e:08:fb:ec [ether] on enp5s0
? (192.168.4.124) at 00:30:48:92:04:70 [ether] on enp5s0
? (192.168.9.84) at 4c:72:b9:e6:57:b4 [ether] on bond0
? (192.168.4.89) at 90:1b:0e:0e:fe:b3 [ether] on enp5s0
? (192.168.9.155) at 00:16:3e:c8:32:49 [ether] on lxdbr0
digitalisierungsbox (192.168.4.200) at 00:09:4f:bf:b8:ba [ether] on enp5s0
? (192.168.9.155) at <from_interface> PERM PUB on bond0

INCUS CONFIG INSTANCE:
...
config:
  volatile.eth0.host_name: veth17721407
  volatile.eth0.hwaddr: 00:16:3e:c8:32:49
profiles:
- default

INCUS PROFILE DEFAULT (STANDARD):
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: masksp
    type: disk
name: default
used_by:
- /1.0/instances/hmm5

NETWORK:
mask ~/adm # incus network show lxdbr0
config:
  ipv4.address: 192.168.181.1/24
  ipv4.nat: "false"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/hmm5
- /1.0/profiles/default
- /1.0/profiles/routed
managed: true
status: Created
locations:
- none
project: default

My question is about the possibility of setting up this configuration
with a profile and/or config, without the need for manual scripts
on the instance and host. I have tried various such profiles and
configs without success.

Have you considered using cloud-init config in LXD profiles/instance config to configure the guests?

https://documentation.ubuntu.com/lxd/en/latest/cloud-init/
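
As a sketch of that direction (the profile name routed-v2 is hypothetical, the addresses are the ones from the example above, and the image must ship cloud-init; network-config uses netplan v2 syntax inside the guest), a profile could push the instance-side /32 address and the onlink default route:

lxc profile create routed-v2    # or: incus profile create routed-v2
lxc profile edit routed-v2

config:
  cloud-init.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.9.155/32
        routes:
          - to: 0.0.0.0/0
            via: 192.168.181.1
            on-link: true
        nameservers:
          addresses:
            - 192.168.4.200

The two host-side commands (the onlink route and the proxy neighbour entry) would still need to be applied outside the profile, e.g. from a script or hook on the host.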