Ubuntu installation on a RISC-V virtual machine using a server install image and QEMU

Overview

Starting with Ubuntu 22.04, a server install image is made available for RISC-V. This topic describes how to use that image to install Ubuntu on a virtual machine.

A general overview of the installation process is available at https://ubuntu.com/tutorials/install-ubuntu-server.

Prerequisites

To run the installation you will need the following packages:

  • qemu-system-misc - QEMU is used to emulate a virtual RISC-V machine.
  • opensbi - OpenSBI provides the Supervisor Execution Environment running in machine mode.
  • u-boot-qemu - U-Boot is the firmware implementing the UEFI API and loads GRUB.
Install them with:

sudo apt-get install qemu-system-misc opensbi u-boot-qemu

Download the image either using your web browser or with:

wget https://cdimage.ubuntu.com/ubuntu-server/daily-live/current/jammy-live-server-riscv64.img.gz 
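Optionally, verify the download against the checksum list published alongside the image. The SHA256SUMS path below is an assumption based on the standard cdimage directory layout; adjust it if the daily-live directory structure differs.

```shell
# Fetch the checksum list from the same directory as the image
# (path assumed from the usual cdimage layout):
wget https://cdimage.ubuntu.com/ubuntu-server/daily-live/current/SHA256SUMS
# Verify only the file(s) we actually downloaded:
sha256sum -c SHA256SUMS --ignore-missing
```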

Installation

Extract the image.

gzip -d jammy-live-server-riscv64.img.gz

Create the disk image on which you will install Ubuntu. 16 GiB should be enough.

dd if=/dev/zero bs=1M of=disk count=1 seek=16383
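The dd invocation writes a single 1 MiB block at an offset of 16383 MiB, producing a sparse file with an apparent size of 16 GiB that consumes almost no disk space until the installer writes to it. As a sketch of an equivalent alternative, qemu-img (from the qemu-utils package, which is not in the package list above) can create the same raw image:

```shell
# Equivalent sparse 16 GiB raw image using qemu-img
# (requires the qemu-utils package; an alternative to the dd command above):
qemu-img create -f raw disk 16G

# Either way, apparent size is 16 GiB while actual allocation stays tiny:
ls -lh disk   # apparent size: 16G
du -h disk    # blocks actually allocated: only a few KiB
```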

Start the installer with:

/usr/bin/qemu-system-riscv64 -machine virt -m 4G -smp cpus=2 -nographic \
    -bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.bin \
    -kernel /usr/lib/u-boot/qemu-riscv64_smode/u-boot.bin \
    -netdev user,id=net0 \
    -device virtio-net-device,netdev=net0 \
    -drive file=jammy-live-server-riscv64.img,format=raw,if=virtio \
    -drive file=disk,format=raw,if=virtio \
    -device virtio-rng-pci

Follow the installation steps in https://ubuntu.com/tutorials/install-ubuntu-server.

When rebooting, the installer image has to be removed; otherwise the installer will start again.

U-Boot gives you a 2 second window in which pressing the Enter key drops you into the U-Boot console. There you can use the poweroff command to stop QEMU. Another way to exit QEMU is to press Ctrl-A followed by X.

Run Ubuntu

To run your installed Ubuntu image, use:

/usr/bin/qemu-system-riscv64 -machine virt -m 4G -smp cpus=2 -nographic \
    -bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.bin \
    -kernel /usr/lib/u-boot/qemu-riscv64_smode/u-boot.bin \
    -netdev user,id=net0 \
    -device virtio-net-device,netdev=net0 \
    -drive file=disk,format=raw,if=virtio \
    -device virtio-rng-pci

In Ubuntu 22.04, the number of virtual CPUs is limited to 8 by both QEMU and the Linux kernel.


Nice guide @xypron, thanks.
In my testing on Focal I was not able to use these instructions, as the boot stops with a traceback 5 seconds in.
Running this on Jammy is working just fine.

Are there some fixes that might be SRUed, or is there a minimum series that people should be running if they want to test this?


Please, use the opensbi and u-boot-qemu from the same release as the one that you are installing.


U-Boot gives you a 2 second time window to press the Enter key to reach the U-Boot console

Have you considered the -no-reboot argument for qemu-system ? It results in the installation VM shutting down instead of booting again, which would then make it easier to run your second command line that does not contain the installer image.


@dbungert I used -no-reboot and that seemed helpful.

I also added -vga virtio to get it to expand to the full terminal tab width.

@xypron Thank you for posting this. I was able to follow it (with some minor tweaks for the change in the jammy server image name, and the above extra qemu flags).

My final working command is:

/usr/bin/qemu-system-riscv64 -machine virt -m 4G -smp cpus=2 -nographic \
    -bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.bin \
    -kernel /usr/lib/u-boot/qemu-riscv64_smode/u-boot.bin \
    -netdev user,id=net0 \
    -device virtio-net-device,netdev=net0 \
    -drive file=qemu_disk.bin,format=raw,if=virtio \
    -device virtio-rng-pci \
    -vga virtio \
    -no-reboot

Now I wanted to get a GUI on the VM if possible.

I tried following your instructions from here: Installing the Gnome Desktop on RISC-V Ubuntu 22.04 Jammy

I ran the installer for the missing GUI packages as given:

sudo apt-get install \
  adwaita-icon-theme-full \
  mutter \
  gdm3 \
  gnome \
  gnome-shell-extension-appindicator \
  gnome-shell-extension-desktop-icons-ng \
  gnome-shell-extension-ubuntu-dock \
  gnome-terminal \
  network-manager-gnome \
  ubuntu-gnome-wallpapers \
  ubuntu-settings

However, at the end of this, after I reboot, no GUI appears. I tried both with and without WaylandEnable=false, but the result was the same.

Is there an additional activation step needed?

You have explicitly switched off graphic output with:

-nographic

If you want a graphical console, you could use:

-serial mon:stdio \
-device virtio-gpu-pci -full-screen \
-device qemu-xhci \
-device usb-kbd \
-device usb-mouse \

I would not expect the GUI to be usable due to performance.

Best regards

Heinrich

Thanks for getting back to me so quickly. I had thought the -nographic would be a problem, but when I removed it before the system wouldn’t boot, and then I forgot about it.

When I add those command line args, I get gtk initialization failed. And when I add -D ./log.txt as I’ve seen mentioned elsewhere, I get an empty log. Any idea how to proceed?

The following worked for me:

wget https://cdimage.ubuntu.com/releases/22.04.2/release/ubuntu-22.04.2-preinstalled-server-riscv64+unmatched.img.xz
xz -d ubuntu-22.04.2-preinstalled-server-riscv64+unmatched.img.xz
qemu-img resize -f raw ubuntu-22.04.2-preinstalled-server-riscv64+unmatched.img +8G
/usr/bin/qemu-system-riscv64 -machine virt -m 4G -smp cpus=2 \
-bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.bin \
-kernel /usr/lib/u-boot/qemu-riscv64_smode/u-boot.bin \
-netdev user,id=net0 \
-device virtio-net-device,netdev=net0 \
-drive file=ubuntu-22.04.2-preinstalled-server-riscv64+unmatched.img,format=raw,if=virtio \
-device virtio-rng-pci \
-serial mon:stdio \
-device virtio-gpu-pci -full-screen \
-device qemu-xhci \
-device usb-kbd \
-device usb-mouse

Once logged in:

sudo apt-get update
sudo apt-get install \
  adwaita-icon-theme-full \
  mutter \
  gdm3 \
  gnome \
  gnome-shell-extension-appindicator \
  gnome-shell-extension-desktop-icons-ng \
  gnome-shell-extension-ubuntu-dock \
  gnome-terminal \
  network-manager-gnome \
  ubuntu-gnome-wallpapers \
  ubuntu-settings
# set WaylandEnable=false in /etc/gdm3/custom.conf
sudo systemctl start gdm3
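The commented step above can be done with a one-line sed. This assumes the stock Jammy /etc/gdm3/custom.conf, which ships with the line commented out as #WaylandEnable=false, so uncommenting it is enough:

```shell
# Uncomment the shipped "#WaylandEnable=false" line in gdm3's config
# (assumes the stock /etc/gdm3/custom.conf from the gdm3 package):
sudo sed -i 's/^#[[:space:]]*WaylandEnable=false/WaylandEnable=false/' /etc/gdm3/custom.conf
grep '^WaylandEnable=false' /etc/gdm3/custom.conf   # confirm the change took effect
```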

After several minutes the Ubuntu logo showed up.

The system is much too slow to be used.

Thank you! I was able to replicate your results on a physical Linux system. And I can confirm that indeed the system is too slow to be usable for my limited GUI use case (but I had to try, just in case it was slow but still usable. But no, you’re right, it was totally unusable and unresponsive even as far as mouse movement went.)

However, I couldn’t replicate this within the Linux VM where I was initially working. That had the same gtk initialization failed error. Just to conclude, can you think of any reason why GTK wouldn’t work in a Linux VM running in VMware Workstation? (FWIW I enabled pass-through hypervisor support in the VM, even though I know it shouldn’t be necessary here since it should be using pure emulation, not Intel VMX.)

I have no access to VMWare. You can check if virtio-gpu is available with:

lspci -nn

virtio-gpu has:

Field        Value
vendor ID    0x1af4
product ID   0x1050
sub-class    0x80
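With those IDs, a quick check is to filter the lspci output for the vendor:device pair instead of reading the whole list (this just combines the lspci -nn output format with grep; nothing beyond the IDs above is assumed):

```shell
# Look for the virtio-gpu PCI function by its vendor:device pair.
# lspci -nn prints IDs in the form [1af4:1050]; grep exits non-zero if absent.
lspci -nn | grep -i '1af4:1050'
```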

I’m not sure if you meant to check from within the RISC-V QEMU VM or from outside. Here’s both (from within my prior CLI invocation of QEMU w/o mouse etc):

inside QEMU:

00:00.0 Host bridge [0600]: Red Hat, Inc. QEMU PCIe Host bridge [1b36:0008]
00:01.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG [1af4:1005]
00:02.0 SCSI storage controller [0100]: Red Hat, Inc. Virtio block device [1af4:1001]

outside QEMU:

00:00.0 Host bridge [0600]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge [8086:7190] (rev 01)
00:01.0 PCI bridge [0604]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge [8086:7191] (rev 01)
00:07.0 ISA bridge [0601]: Intel Corporation 82371AB/EB/MB PIIX4 ISA [8086:7110] (rev 08)
00:07.1 IDE interface [0101]: Intel Corporation 82371AB/EB/MB PIIX4 IDE [8086:7111] (rev 01)
00:07.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 08)
00:07.7 System peripheral [0880]: VMware Virtual Machine Communication Interface [15ad:0740] (rev 10)
00:0f.0 VGA compatible controller [0300]: VMware SVGA II Adapter [15ad:0405]
00:10.0 SCSI storage controller [0100]: Broadcom / LSI 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI [1000:0030] (rev 01)
00:11.0 PCI bridge [0604]: VMware PCI bridge [15ad:0790] (rev 02)
00:15.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:15.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:16.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:17.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
00:18.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
02:00.0 Ethernet controller [0200]: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) [8086:100f] (rev 01)
02:01.0 Multimedia audio controller [0401]: Ensoniq ES1371/ES1373 / Creative Labs CT2518 [1274:1371] (rev 02)
02:03.0 SATA controller [0106]: VMware SATA AHCI controller [15ad:07e0]
03:00.0 Non-Volatile memory controller [0108]: VMware Device [15ad:07f0]

It doesn’t look like 0x1af4:0x1050 is available.