Help with Multipass Networking - Bridge Configuration

So I have been playing around with Multipass using the QEMU backend on Ubuntu 22.04 as my host. My current setup has several ways of working with VMs installed:

  1. KVM/libvirt with Virtual Machine Manager
  2. QEMU via Quickemu, with Quickgui and the qqX virtual machine manager
  3. Multipass

I have not detected any conflicts from having all of these installed, as they mostly work within their own isolated installations. VMs launched with Quickemu or qqX do not show up in Virtual Machine Manager or libvirt because they use QEMU directly. I have not launched any VMs using KVM/libvirt yet.

I have configured a 500 GB partition that stores all my VMs, no matter which tool they are created with.

Virtual Machine Manager created a default bridge called virbr0 when it was installed. Quickemu and qqX launch their VMs in user-mode networking.

I have two VMs running on QEMU via Quickemu and qqX:

  1. Windows 11
  2. macOS Sonoma

My use case is to develop full-stack, cross-platform applications spanning desktop, web, server, iOS, and Android. DevOps is a big part of my development lifecycle, so I created a five-node test system (see the launch sketch after the list) consisting of:

  1. Docker master
  2. Docker worker
  3. Kubernetes master
  4. Kubernetes worker
  5. Virtualmin server
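For context, all five nodes are plain Multipass instances, so spinning up a lab like this is roughly a handful of launch commands. The instance names and resource sizes below are illustrative placeholders, not my exact values.

```bash
# Illustrative only: instance names and resource sizes are placeholders.
multipass launch 22.04 -n docker-master -c 2 -m 4G -d 20G
multipass launch 22.04 -n docker-worker -c 2 -m 4G -d 20G
multipass launch 22.04 -n k8s-master    -c 2 -m 4G -d 20G
multipass launch 22.04 -n k8s-worker    -c 2 -m 4G -d 20G
multipass launch 22.04 -n virtualmin    -c 2 -m 4G -d 20G
```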

I have installed Portainer on the Swarm cluster, with agents running on the Kubernetes nodes, which gives me an enterprise-grade Docker Desktop replacement. This allows me to test and simulate clustering across these systems, launch Swarm containers, and orchestrate pods.

I am able to push traffic from the host to the Multipass network and from the VMs to the host, and also from the VMs on the virbr0 bridge to the mpqemubr0 bridge and vice versa. All VMs can reach the internet via my wireless network.

One node runs Virtualmin to provide traditional web hosting services and DNS (via BIND) for the network. I created netplan configurations on the Multipass VMs to use the DNS provided by this server for name resolution (a sketch of that netplan file is below). For this to work, though, the bridge created by Multipass, mpqemubr0, must be configured with the DNS server IP and domain search information. I can run Ubuntu's Advanced Network Configuration GUI and add this information, which works, but it does not persist across reboots because Multipass recreates the bridge on reboot.
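For reference, the per-VM netplan override looks roughly like this. It is a minimal sketch: the file name, interface name, DNS address, and search domain are placeholders rather than my actual values.

```yaml
# /etc/netplan/99-custom-dns.yaml on a Multipass VM -- illustrative sketch;
# the interface name, DNS address, and search domain are placeholders.
network:
  version: 2
  ethernets:
    ens3:                            # interface name may differ per VM
      dhcp4: true
      dhcp4-overrides:
        use-dns: false               # ignore the DNS pushed by Multipass's DHCP
      nameservers:
        addresses: [192.168.64.53]   # placeholder: the Virtualmin/BIND node
        search: [lab.example]        # placeholder: internal search domain
```

Running `sudo netplan apply` inside the VM picks the change up.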

Next, I will add the Windows and macOS workstations to the bridge by putting "allow mpqemubr0" in the QEMU bridge.conf file. Then all I have to do is update the Quickemu .conf files for those VMs and add the network="mpqemubr0" setting, and they will be able to connect to the Multipass bridge, giving me a decent development network (see the snippets below).
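For clarity, those two changes are each a one-line addition. The bridge.conf path and the Quickemu .conf file name shown here are assumptions on my part, so adjust them for your system.

```
# /etc/qemu/bridge.conf -- ACL for qemu-bridge-helper (path may vary by distro)
allow mpqemubr0
```

```
# windows-11.conf -- Quickemu VM definition (file name is illustrative)
network="mpqemubr0"
```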

What is stopping this is that Multipass keeps recreating the bridge with its default configuration, which effectively removes my DNS server entries on reboot. This leaves me with multiple mpqemubr0 connection profiles that I have to delete, and if I add my settings back to the bridge using the Ubuntu GUI, it just keeps happening.

Where are the configuration files that control the creation of the bridge, so I can add my settings there and stop it from being recreated? I have searched and searched and can't locate them. Is this hardcoded, or is it something that can be configured without switching backends?

Hi @mrpast3wart!

Wow, that is quite a use case you have going on here :slightly_smiling_face:

To answer your question, we don't have a configuration file for creating the bridge; it is all done in the Multipass code. You can find it in src/platform/backends/qemu/linux/qemu_platform_detail_linux.cpp on the main branch of the canonical/multipass repository on GitHub.

To be honest, I'm not really sure how we could handle something like this programmatically and still account for all of the different use cases. That said, I do think this is worthy of a feature request. Would you mind opening one on the Multipass GitHub issue tracker so we can keep track of it? Thanks!

I will open a feature request as you suggest. I have been thinking about this some, and I may be off base, but what about not hardcoding the creation of the default bridge, and instead allowing, say, a YAML file that could define multiple network configurations to be picked up by Multipass? Doing something like this maintains flexibility and would cover just about any use case.

The code could iterate over the YAML file and create any defined network (with commands like multipass network ls and multipass network edit), and maybe even find a way for Multipass to SSH into VMs running on the network and configure interfaces there as well. This would let users add network definitions to their DevOps strategy, commit them to source control, and reproduce them anywhere. Combined with the already capable cloud-init support, this could be the beginning of some sort of orchestration for Multipass, and might lend itself to enhancing other Ubuntu offerings that use Multipass. Think Docker Compose: if Multipass could execute a YAML file similar to a Docker Compose file, one could define an entire test lab, run multipass up, and have the VMs spin up in grand fashion. A rough sketch of what I mean is below.
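Purely to illustrate the idea, a lab definition might look something like the following. This is a hypothetical format that Multipass does not read today; every key, name, and address in it is made up.

```yaml
# multipass-lab.yaml -- hypothetical format, not a real Multipass feature.
# All keys, names, and addresses are invented for illustration.
networks:
  - name: mpdevbr0
    cidr: 10.67.0.0/24
    nameservers:
      addresses: [10.67.0.53]      # e.g. the Virtualmin/BIND node
      search: [lab.example]
instances:
  - name: k8s-master
    image: "22.04"
    network: mpdevbr0
    cloud-init: cloud-init/k8s-master.yaml
  - name: k8s-worker
    image: "22.04"
    network: mpdevbr0
    cloud-init: cloud-init/k8s-worker.yaml
```

Then something like `multipass up multipass-lab.yaml` (again, a hypothetical command) would bring the whole lab up in one shot.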

I searched the web for examples of using YAML to define networks and discovered this:

https://medium.com/@aifakhri/modelling-network-device-configuration-with-yaml-ad88e36abe04

I believe something like this would increase adoption of Multipass and open the door to future enhancements, maybe even toward using Multipass in production environments. With a solid API in place, this could be an enterprise-grade system for simple IaaS. Being cross-platform already, and capable of being controlled remotely, only adds fuel to the concept. Since Docker is open source, I'm pretty sure one could look at their codebase to get an idea of how this works; it would just be doing it for VMs instead of containers. On the other hand, if you really want to get snazzy, make Multipass capable of creating containers alongside the VMs when using the LXD backend, and you have a very viable product. Slap a GUI on top and BAM. LOL, thinking out loud here.

Of course, production usage may require targeting a few more hypervisors so that Multipass could make use of type 1 hypervisors, since it already supports type 2 deployments. But anyway, like I said, just thinking out loud.


Hey @mrpast3wart,

Those are some really great ideas that you propose. Yes, please open the feature request and dump these ideas into it so it can be tracked more easily. Thank you for your input here!

Will absolutely do that.