Use an existing unmanaged network (bridge)?

How do you get LXD to let you add a container on a network that is an existing unmanaged bridge?

The bridges show up in the networks list and otherwise work fine (they already exist) in my setup.

However, when I create a new container in the GUI, I can only select lxdbr0. If I go into the YAML and manually enter the bridge name, I get:

Profile update failed
Device validation failed for “eth-1”: Failed loading device “eth-1”: Failed to load network “br120” for project “default”: Network not found

I don’t want LXD to manage these bridges or provide NAT or DNS etc. I just want to plumb a container into an existing bridge. This can be accomplished via the CLI, but how do I configure it in the GUI as part of the profile?

lxc config device add opnsense-host1 eth0 nic nictype=bridged parent=br120 name=eth0

$ lxc network list
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
|      NAME       |   TYPE   | MANAGED |      IPV4       |           IPV6           | DESCRIPTION | USED BY |  STATE  |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br0             | bridge   | NO      |                 |                          |             | 1       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br111           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br112           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br113           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br114           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br115           | bridge   | NO      |                 |                          |             | 1       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br116           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br117           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br118           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br119           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br120           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br121           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br122           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br123           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br124           | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| br-da8d4f09b760 | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| docker0         | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| enp2s0          | physical | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| enp3s0          | physical | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| lxdbr0          | bridge   | YES     | 10.208.164.1/24 | fd42:22f:9c84:a91a::1/64 |             | 2       | CREATED |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| virbr0          | bridge   | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan111         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan112         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan113         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan114         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan115         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan116         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan117         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan118         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan119         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan120         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan121         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan122         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan123         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+
| vlan124         | vlan     | NO      |                 |                          |             | 0       |         |
+-----------------+----------+---------+-----------------+--------------------------+-------------+---------+---------+

Never mind, I figured it out: you can simply edit the YAML on the profile directly, but this doesn’t appear to be supported by the GUI.
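
For reference, a minimal sketch of the profile device YAML that worked for me, assuming the bridge is br120 (the device name is just illustrative):

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br120
    type: nic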

Hi,

You can also create a network of type physical that connects to an existing physical interface. Then, in the instance configuration, you should be able to select that network.

Create a network of type physical:
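
The rough CLI equivalent would be something like this (the network is called test here, and enp3s0 is just an example parent interface):

lxc network create test --type=physical parent=enp3s0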

And select that network (test in my case) in instance configuration:

Docs: Reference - Networks - Physical network

EDIT: My mistake, this is not suitable for your use-case.

Sorry, but the above approach does not fit your use-case: in the example above, the physical interface is passed directly to the instance, so it will not allow connections from multiple containers.

Thanks anyway!

LXD UI 0.18, landing in LXD 6.5, will support managing macvlan networks, which offer similar functionality to bridges without the need to create unmanaged bridges:
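
For reference, the CLI can already create a managed macvlan network along these lines (enp2s0 is just an example parent interface):

lxc network create macvlan0 --type=macvlan parent=enp2s0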

Macvlan has the big downside that the guests won’t be able to reach the host :frowning: which is functionality that I require.

SR-IOV would be swell if my hardware supported it :smiley: but alas…

I would also really like to have this feature added to the UI. You can include a warning if you feel that is necessary, but I don’t see why that would be needed. It’s the default setup for every other KVM-based virtualization platform (OpenNebula, Proxmox, oVirt…).

Hi @vosdev

As a temporary workaround, on the macvlan side, I believe you can add a macvlan interface for your host (moving the host’s IP onto that and off of the physical interface, a bit like you would do with a bridge), and that should allow your instances to communicate with the host.
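
A rough sketch of that host-side change, assuming the physical interface is enp2s0 and using an illustrative name and address (you would also move the host’s existing IP configuration onto the new interface):

ip link add macvlan-host link enp2s0 type macvlan mode bridge   # host-side macvlan on the same parent as the instances
ip addr add 192.168.1.10/24 dev macvlan-host                    # host IP lives here instead of on enp2s0
ip link set macvlan-host up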

An alternative workaround is that you could set up a profile for your unmanaged bridged NIC device and then add that to the instance(s) via the UI.
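
For example, something like this would do it (the profile name br120-nic and the bridge name are illustrative):

lxc profile create br120-nic
lxc profile device add br120-nic eth0 nic nictype=bridged parent=br120 name=eth0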

On the wider point about unmanaged bridge support in the UI I wanted to take the opportunity to explain where we are planning to go with this.

The focus for the team is on MicroCloud, with emphasis on the “Cloud” experience part of that.

As a consumer of public cloud I do not need to concern myself (much) with the physicalities of the underlying platform. I get to choose which logical network(s) I connect my instances to, without needing to know how they are actually connected under the hood.

Of course the platform provider does need to concern themselves with such things to ensure that the consumer doesn’t have to.

There is a separation of concerns - the provider models the physical world into a logical representation, and the consumer uses that logical representation for their workloads.

Because of LXD’s heritage there is sometimes a blurring of those roles, and this is a good example of such a case. As a consumer launching an instance and wanting to connect to a logical network I need to concern myself with the physical implementation of that network (which interface is connected to that network on the specific cluster member my instance is being placed onto).

This has big downsides when it comes to clustering, as the same logical network may be reached using differently named physical network interfaces on each cluster member. This means that if you want to move an instance to a different host, its config may also need to be updated to point to the correct network interface, and if you forget then it may end up on the wrong network or not start at all.

Also, if you have an instance that has a NIC referencing a parent network interface (as opposed to a logical network) then this prevents an instance from being migrated during cluster evacuation, because LXD cannot know whether the equivalent parent interface is present on the new cluster member, or whether it represents the same logical network.

So what we want to do is continue the theme of separating concerns that we already have with macvlan and sriov managed networks, and add the ability for a physical managed network to reference a bridge parent interface. This allows the admin to specify which physical network interface represents that logical network on each cluster member.

Then the instance’s physical NIC device can reference that by using a network=<physical network> option.

This separates the modelling of the physical network, done by the admin of the MicroCloud who creates the managed physical network, from the consumer, who connects instances to the logical network and doesn’t care about the underlying physicalities.
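
To illustrate the direction (the network name is a placeholder and the exact syntax isn’t settled yet), the admin-side setup might end up looking much like the existing per-member physical network pattern, but with a bridge as the parent:

lxc network create physnet --type=physical parent=br120 --target member1
lxc network create physnet --type=physical parent=br120 --target member2
lxc network create physnet --type=physical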

This also enables cluster evacuation and planned future instance placement/balancing improvements when using an external bridge network.

There is a GitHub issue that tracks this feature:

An alternative workaround is that you could set up a profile for your unmanaged bridged NIC device and then add that to the instance(s) via the UI.

Yes, that’s what I’ve been doing :slight_smile:

Of course the platform provider does need to concern themselves with such things to ensure that the consumer doesn’t have to.

Yes, in the case of my self-hosted LXD cloud I am the platform provider. :slight_smile: And I prefer that my users don’t have to think about complex network config: simply select it from a list and be done. The UI has been an amazing addition to LXD (that’s why 20% of the bug reports/feature requests are made by me, haha) and I would love to see this added as well.

This has big downsides when it comes to clustering, as the same logical network may be reached using differently named physical network interfaces on each cluster member. This means that if you want to move an instance to a different host, its config may also need to be updated to point to the correct network interface, and if you forget then it may end up on the wrong network or not start at all.

Yeah, @edlerd used this example before. I just never ran into it in the years of running LXD in production since version 2.x/3.0 LTS. I have always created manual bridges with the same name on all nodes: br0 if I don’t care about VLANs and br<vlan-id> if I do on the node/cluster. I think if you do this wrong it’s more a user error, but I understand you want to help prevent this from happening.

Also, if you have an instance that has a NIC referencing a parent network interface (as opposed to a logical network) then this prevents an instance from being migrated during cluster evacuation, because LXD cannot know whether the equivalent parent interface is present on the new cluster member, or whether it represents the same logical network.

Ahhh, I have always worked with nictype: bridged; it never occurred to me to give a whole NIC to a single guest. I understand that this can cause issues in a cluster where the interfaces are named differently on each host.

So what we want to do is continue the theme of separating concerns that we already have with macvlan and sriov managed networks, and add the ability for a physical managed network to reference a bridge parent interface. This allows the admin to specify which physical network interface represents that logical network on each cluster member.

I think this will solve a lot of issues :slight_smile:

The config we have used for all these years to connect an instance to an unmanaged bridge is the following. This allows us to put a guest on the physical network. In some clusters we would have separate profiles with separate bridges to get guests onto different VLANs, hence the change in device name in case they need access to multiple VLANs.

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: ovs0
    type: nic

Bridge in vlan6 on my hosts:

devices:
  eth6:
    name: eth6
    nictype: bridged
    parent: ovs6
    type: nic

For years this has been the recommended way to get your guests onto a physical network/VLAN. We have run this since LXD 3.x in all our clusters and standalone nodes.

When this feature is created and we move from a setup like this towards a managed network with unmanaged bridges, will that still use the physical network’s addressing/routing?

This also enables cluster evacuation and planned future instance placement/balancing improvements when using an external bridge network.

Will this new setup give these users the benefits of a managed network like ACLs? I’m not familiar with how the ACLs are applied. I’m guessing this requires LXD managing the routing.

Exactly our thoughts too.

Even when using nictype: bridged with a parent value, in the case of externally managed networks there’s no guarantee that the same parent exists on the target member, or that it represents the same logical network.

Yes, nothing about the actual underlying implementation will change.

This just adds another level of indirection: the “template”, if you like, of the per-member “parent” settings would be stored in the managed physical network. When a physical NIC references a managed network using network: foo, it will load the associated member configuration to get the parent, and if that parent is an unmanaged bridge, it will connect the instance NIC to it, the same as a bridged NIC does today.
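
To make that concrete (the network name here is a placeholder and the final syntax may differ), the instance device would then look something like:

devices:
  eth0:
    name: eth0
    network: physnet
    type: nic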

No, this would still only be available for LXD-managed private bridge or OVN networks.

Still a great improvement :slight_smile: Thanks for the thorough explanation!
