Installing LXD on a firewall appliance running Ubuntu Server (Firewalla)

Hi Everyone,

I’ve asked the team over at Firewalla about using LXD instead of Docker, and I’ve also asked them how their application uses the Ubuntu network stack so that I can configure LXD without interfering with their setup. The objective is to set up an LXD container that sits directly on the L2 network without any DNS, dnsmasq or NAT - basically just another network device that can be managed by the Firewalla application running on the host.

LXD as an alternative to Docker on Firewalla:
https://help.firewalla.com/hc/en-us/community/posts/19708451335059-Canonical-s-own-container-technology-LXD-easier-to-use-and-far-more-secure-than-Docker?page=1

How Firewalla does networking on their FWG appliance:
https://help.firewalla.com/hc/en-us/community/posts/20083791423379-LXD-Network-setup-to-work-with-Firewalla-Gold-with-LACP-established-using-all-three-LAN-interfaces

And another on Reddit where Firewalla did a poll of LXD vs Docker (it just proved that people don’t know about LXD):
https://www.reddit.com/r/firewalla/comments/15xd7ka/lxd_vs_docker_containers/

They haven’t responded to the network question, so I thought I’d ask here whether there might be a way to ensure LXD simply presents itself to their established network setup without interfering with it.

Firewalla has guidance on their website linking to a third-party scripted Docker installation of a Ubiquiti UniFi Network application instance, and it appears to use the standard Docker bridge mode:
https://help.firewalla.com/hc/en-us/articles/360053441074-Guide-How-to-run-UniFi-Controller-on-the-Firewalla-Gold-Series-Boxes

I think that if I configure a Linux bridge adapter it will wreck their implementation, and I need the Firewalla application on the LXD host to see the LXD container instance as just another device on the Ubiquiti management VLAN.

Any thoughts on how I could get this working without breaking the Firewalla appliance (it is a Firewalla Gold)? Cheers, Nick.

You can use a physical NIC device to pass a NIC on the host directly into a container or VM.
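
For example, a minimal sketch (the container name c1 and interface eth3 are placeholders; the interface is moved into the container, so it is unavailable on the host while the container holds it):

$ lxc config device add c1 eth0 nic nictype=physical parent=eth3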

Alternatively, you can set up an unmanaged bridge that has the physical port connected to it and use the bridged NIC type to connect to that.
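
A sketch of that option using iproute2, with br0, eth3 and c1 as placeholder names (note these ip commands are not persistent across reboots):

$ ip link add br0 type bridge     # plain L2 bridge, no NAT or dnsmasq
$ ip link set eth3 master br0     # enslave the physical port to the bridge
$ ip link set br0 up
$ lxc config device add c1 eth0 nic nictype=bridged parent=br0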

Finally, you can also use the macvlan NIC type to connect to a physical NIC without removing it from the host side, but this restricts each instance to a single MAC address and doesn’t allow direct communication with the host, only with the rest of the L2 network.
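
A sketch of the macvlan option, again with placeholder names (the parent NIC stays usable on the host, but host and container cannot talk to each other directly):

$ lxc config device add c1 eth0 nic nictype=macvlan parent=eth3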

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/


Thanks @tomp

I’ve had a chance to explore the Firewalla device and here is the configuration:

  • four physical 1G Ethernet interfaces:
    • 1 WAN interface
    • 3 LAN interfaces

The three LAN Ethernet interfaces have been aggregated into an LACP LAG that connects to an L2 switch. The LAG seems to have been bonded using ifenslave and shows up as bond0.

A number of VLAN interfaces have been attached to bond0 and present as bond0.10, bond0.20, bond0.30, and so on.

$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1b brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
7: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT group default qlen 32
    link/ether ca:b0:07:de:f9:c7 brd ff:ff:ff:ff:ff:ff
8: ifb1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT group default qlen 32
    link/ether d2:50:d9:44:9f:b4 brd ff:ff:ff:ff:ff:ff
9: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
10: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
11: bond0.20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
12: bond0.30@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
13: bond0.99@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff
14: bond0.98@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT group default qlen 1000
    link/ether 20:6d:31:02:04:1a brd ff:ff:ff:ff:ff:ff

The host is Ubuntu Server 18.04 with LXD 3.0.3 installed but not initialised.

Is there any way to initialise LXD without a bridge and attach a single container directly to bond0, which is the native untagged network (not a VLAN)? I need to initialise LXD without changing the existing setup, and I would prefer to do this without installing an lxdbr0 that comes configured with NAT and dnsmasq.

I guess the other option might be to create a simple, L2-only Linux bridge, attach it to bond0, and connect a container directly to that.

The goal is to connect the container to the native network so that it receives a DHCP lease from the host, just like every other device on the network.

You don’t have to run lxd init at all, or you can run it and just answer no when it asks about setting up a managed network.
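
If you prefer to keep lxd init but make it non-interactive, it also accepts a preseed on stdin; a sketch that creates only a dir storage pool and no managed network (names are placeholders):

$ cat <<EOF | lxd init --preseed
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
EOF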

If you don’t run lxd init then you can perform the steps manually: for example, create a storage pool with lxc storage create, assign a root disk device for that pool to the default profile with lxc profile device add default root disk path=/ pool=<pool>, and then finally add an unmanaged bridged NIC to the default profile referencing your unmanaged bridge with lxc profile device add default eth0 nic nictype=bridged parent=<unmanaged bridge>
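
Put together, a sketch of those steps (the pool name default, the dir driver, the bridge br0 and the container c1 are placeholders):

$ lxc storage create default dir
$ lxc profile device add default root disk path=/ pool=default
$ lxc profile device add default eth0 nic nictype=bridged parent=br0
$ lxc launch ubuntu:18.04 c1     # should pick up a lease from whatever serves DHCP on that L2 segment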

This may also be useful:

https://www.youtube.com/watch?v=TmGvbXfwJEA
