QEMU is a machine emulator that can run operating systems and programs for one machine on a different machine. However, it is more often used as a virtualiser in collaboration with KVM kernel components. In that case it uses hardware virtualisation technology to virtualise guests.
Although QEMU has a command line interface and a monitor to interact with running guests, they are typically only used for development purposes. libvirt provides an abstraction from specific versions and hypervisors and encapsulates some workarounds and best practices.
While there are more user-friendly and comfortable ways, the quickest way to get started with QEMU is by directly running it from the netboot ISO. You can achieve this by running the following command:
sudo qemu-system-x86_64 -enable-kvm -cdrom http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso
Downloading the ISO beforehand and booting from the local copy provides faster access at runtime. We can now allocate the disk space for the VM:
qemu-img create -f qcow2 disk.qcow 5G
We can then use the disk space we have just allocated for storage by adding the argument -drive file=disk.qcow,format=qcow2 to the command above.
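Putting the pieces together, the invocation attaches both the installer ISO and the new disk. The sketch below only assembles and prints the command line (mini.iso is an assumed local copy of the netboot ISO); run the printed command to boot:

```shell
# A sketch: assemble the invocation with the disk image attached.
# mini.iso is an assumed local copy of the netboot ISO; disk.qcow is
# the image created with qemu-img above.
QEMU_CMD="qemu-system-x86_64 -enable-kvm \
  -cdrom mini.iso \
  -drive file=disk.qcow,format=qcow2"
echo "sudo $QEMU_CMD"
```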
These tools can do much more, as you’ll discover in their respective (long) manpages. They can also be made more consumable for specific use-cases and needs through a vast selection of auxiliary tools - for example virt-manager for UI-driven use through libvirt. But in general, it comes down to:
qemu-system-x86_64 options image[s]
Please see this follow-up guide for details on virtualising graphics using QEMU/KVM.
Upgrading the machine type
If you are unsure what this is, you might consider it like buying (virtual) hardware of the same spec but with a newer release date. You are encouraged in general, and might want to update the machine type of an existing defined guest in particular, to:
- pick up the latest security fixes and features
- continue using a guest created on a now unsupported release
In general it is recommended to update machine types when upgrading QEMU/KVM to a new major version. But this can likely never be an automated task, as the change is guest visible: guest devices might change in appearance, new features will be announced to the guest, and so on. Linux is usually very good at tolerating such changes, but so much depends on the setup and workload of the guest that this has to be evaluated by the owner/admin of the system. Other operating systems have been known to be severely impacted by changing the hardware. Consider a machine type change similar to replacing all devices and firmware of a physical machine with the latest revision: all considerations that apply there also apply to evaluating a machine type upgrade.
As usual with major configuration changes, it is wise to back up your guest definition and disk state to be able to do a rollback – just in case. There is no integrated single command to update the machine type via virsh or similar tools. It is a normal part of your machine definition, and is therefore updated the same way as most other settings.
First, shut down your machine and wait until it has reached that state.
virsh shutdown <yourmachine>
# wait
virsh list --inactive
# should now list your machine as "shut off"
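The comment above just says to wait; if you are scripting this, a small polling helper can block until libvirt reports the domain as shut off. This is a sketch assuming virsh is available, with the domain name passed as a parameter:

```shell
# Sketch: poll "virsh domstate" until the domain reports "shut off".
# Assumes virsh is installed; the domain name is a placeholder argument.
wait_for_shutoff() {
    domain="$1"
    while [ "$(virsh domstate "$domain" 2>/dev/null)" != "shut off" ]; do
        sleep 1
    done
}
# Usage: virsh shutdown yourmachine && wait_for_shutoff yourmachine
```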
Then edit the machine definition and find the machine type in the machine attribute of the type tag.
virsh edit <yourmachine>

<type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
Change this to the value you want. You can check which types are available via “-M ?”; note that while upstream types are provided as a convenience, only Ubuntu types are supported. The same listing also shows what the current default would be. In general it is strongly recommended that you change to newer types if possible, both to take advantage of newer features and to benefit from bug fixes that only apply to the newer device virtualisation.
kvm -M ?
# lists machine types, e.g.
pc-i440fx-xenial       Ubuntu 16.04 PC (i440FX + PIIX, 1996) (default)
...
pc-i440fx-bionic       Ubuntu 18.04 PC (i440FX + PIIX, 1996) (default)
...
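If you want the current default in a script rather than by eye, the listing can be filtered for the "(default)" marker. A sketch, shown here on a captured sample of the output (on a real host, pipe kvm -M ? through the same awk filter):

```shell
# Sketch: extract the default machine type from a captured "kvm -M ?"
# sample; the listing below is illustrative, not live output.
cat > machine-types.txt <<'EOF'
pc-i440fx-bionic  Ubuntu 18.04 PC (i440FX + PIIX, 1996) (default)
pc-i440fx-xenial  Ubuntu 16.04 PC (i440FX + PIIX, 1996)
EOF
awk '/\(default\)/ {print $1}' machine-types.txt
```

On the sample above this prints pc-i440fx-bionic.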
After this you can start your guest again. You can check the current machine type from guest and host depending on your needs.
virsh start <yourmachine>

# check from host, via dumping the active xml definition
virsh dumpxml <yourmachine> | xmllint --xpath "string(//domain/os/type/@machine)" -

# or from the guest via dmidecode (if supported)
sudo dmidecode | grep Product -A 1
        Product Name: Standard PC (i440FX + PIIX, 1996)
        Version: pc-i440fx-bionic
If you keep non-live definitions around - such as .xml files - remember to update those as well.
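For such offline .xml files, the machine attribute can be updated with a one-line substitution. A sketch on a minimal sample definition (in practice, point the sed command at your real dump and review the result):

```shell
# Sketch: bump the machine type in an offline definition file.
# guest-sample.xml stands in for a real "virsh dumpxml" output.
cat > guest-sample.xml <<'EOF'
<type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
EOF
sed -i "s/machine='pc-i440fx-xenial'/machine='pc-i440fx-bionic'/" guest-sample.xml
grep "machine=" guest-sample.xml
```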
This is also documented, along with some more constraints and considerations, at the Ubuntu Wiki.
QEMU usage for microvms
QEMU gained another use case: being used in a container-like style, providing enhanced isolation compared to containers while focusing on initialisation speed.
To achieve that several components have been added:
- the microvm machine type
- alternative simple firmware (FW) that can boot Linux, called qboot
- a QEMU build with reduced features matching these use cases, called qemu-system-x86-microvm
For example, if you happen to already have a stripped-down workload that has everything it would execute in an initrd, you might run it like the following:
sudo qemu-system-x86_64 -M ubuntu-q35 -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload
To run the same with qboot and the minimized qemu you would do the following:

- Run it with the machine type microvm, so change -M ubuntu-q35 to -M microvm.
- Use the qboot BIOS, adding -bios /usr/share/qemu/bios-microvm.bin.
- Install the feature-minimized qemu-system build:

sudo apt install qemu-system-x86-microvm
An invocation will now look like:
sudo qemu-system-x86_64 -M microvm -bios /usr/share/qemu/bios-microvm.bin -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload
That will cut down the virtual-hw initialisation time a lot. You will now – more than you already have before – spend the majority of time inside the guest, which implies that further tuning probably has to go into that kernel and userspace initialisation time.
The qboot BIOS and other components of this are rather new upstream and not as well verified as many other parts of the virtualisation stack. Therefore, none of the above is the default. Being the default would mean many upgraders would regress, finding a QEMU that doesn’t have most features they are accustomed to using. Due to that, the qemu-system-x86-microvm package is intentionally a strong opt-in, conflicting with the normal qemu-system-x86 package.