Accelerate network and storage applications on RISC-V with DPDK and SPDK

Overview

Kernel space drivers add overhead to network and block device applications: data is transferred twice, once between the device and the kernel and once between the kernel and user space. Using user space drivers to transfer data directly between devices and user space memory can speed up these operations significantly. It also avoids duplicating data queue management in the kernel and in user space.

The Data Plane Development Kit (DPDK) provides libraries for implementing user space networking applications as well as drivers for specific hardware.

The Storage Performance Development Kit (SPDK) provides libraries implementing user space drivers for NVMe and is the basis for high performance storage applications. It includes iSCSI and NVMe-oF targets which make use of DPDK for networking.

RISC-V support has been added to DPDK 22.07 and to SPDK 22.05.

RISC-V preview in PPA

For Ubuntu we tend to package only the long term support (LTS) releases of DPDK; 22.11 is the next expected LTS release. To provide early preview access to DPDK and SPDK on RISC-V we have packaged DPDK 22.03 and SPDK 22.01 with RISC-V enabling patches for Ubuntu 22.10 (Kinetic Kudu) in a personal package archive (PPA). You can add this archive to your installation with the following command:

sudo add-apt-repository ppa:ubuntu-risc-v-team/release
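
On Ubuntu releases where add-apt-repository does not refresh the package index automatically, you may need to follow up with:

$ sudo apt update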

iSCSI target walkthrough

The following walks through setting up an iSCSI target on a SiFive HiFive Unmatched board.

Ubuntu 22.10 is installed on a USB drive. An NVMe drive is installed and will be published as a logical unit by the iSCSI target.

Add the PPA with SPDK and DPDK:

$ sudo add-apt-repository ppa:ubuntu-risc-v-team/release

Install the spdk and daemonize packages:

$ sudo apt-get install daemonize spdk

SPDK needs hugepages. We can add our settings to the kernel command line:

default_hugepagesz=1G hugepagesz=1G hugepages=4

If booting via GRUB this can be done by adding the parameters to the GRUB_CMDLINE_LINUX_DEFAULT entry in the file /etc/default/grub.
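
As a sketch, the resulting entry could look like this (keep any parameters that are already set there):

GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=4"

Afterwards run: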

$ sudo update-grub
$ sudo reboot
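
After the reboot you can verify that the hugepages have been reserved, for example with:

$ grep Huge /proc/meminfo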

We will need the PCI address of the NVMe drive (06:00.0 in the example):

$ lspci
00:00.0 PCI bridge: SiFive, Inc. FU740-C000 RISC-V SoC PCI Express x8 to AXI4 Bridge
01:00.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
02:00.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
02:02.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
02:03.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
02:04.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
02:08.0 PCI bridge: ASMedia Technology Inc. ASM2824 PCIe Gen3 Packet Switch (rev 01)
04:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
05:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 710] (rev a1)
05:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
06:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963

We require the vfio-pci kernel module:

$ sudo modprobe vfio-pci

You can load the module automatically at boot by adding its name to the file /etc/modules.
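
One way to do this, for example:

$ echo vfio-pci | sudo tee -a /etc/modules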

RISC-V does not support an IOMMU yet, so we need to tell the vfio driver not to use one:

$ echo 1 | sudo tee -a /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
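
This setting does not survive a reboot. One possible way to make it persistent is a modprobe options file (a sketch, the file name is chosen freely):

$ echo "options vfio enable_unsafe_noiommu_mode=1" | sudo tee /etc/modprobe.d/vfio-noiommu.conf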

Next attach the vfio-pci driver to the NVMe drive. Don’t worry about the warnings:

$ sudo /usr/share/spdk/scripts/setup.sh
modinfo: ERROR: Module vfio_iommu_type1 not found.
0000:06:00.0 (144d a804): nvme -> vfio-pci
"user" user memlock limit: 1994 MB

This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as user "user".
To change this, please adjust limits.conf memlock limit for user "user".
modprobe: FATAL: Module msr not found in directory /lib/modules/5.15.0-1008-generic
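
To see which devices are bound to which driver you can run the script's status command:

$ sudo /usr/share/spdk/scripts/setup.sh status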

We need the iscsi_tgt application to be running:

$ sudo daemonize -e /tmp/iscsi_tgt_err.log -o /tmp/iscsi_tgt.log -p /tmp/iscsi_tgt.pid /usr/bin/sudo /usr/bin/iscsi_tgt
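
You can check that the target has started, for example by looking at the log file and the process:

$ cat /tmp/iscsi_tgt.log
$ pgrep -a iscsi_tgt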

Create a portal group. Adjust the addresses in the following instructions according to your network settings:

$ sudo /usr/share/spdk/scripts/rpc.py iscsi_create_portal_group 1 192.168.0.62:3260

Create an initiator group:

$ sudo /usr/share/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 192.168.0.0/24

Create the block device:

$ sudo /usr/share/spdk/scripts/rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 0000:06:00.0
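
You can list the block devices SPDK now knows about; the NVMe namespace should show up as NVMe1n1:

$ sudo /usr/share/spdk/scripts/rpc.py bdev_get_bdevs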

Define the LUN:

$ sudo /usr/share/spdk/scripts/rpc.py iscsi_create_target_node --disable-chap disk1 "Data Disk1" "NVMe1n1:0" 1:2 64
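
To inspect the resulting configuration you can list the target nodes:

$ sudo /usr/share/spdk/scripts/rpc.py iscsi_get_target_nodes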

Now we have a running iSCSI server. On a production system you would likely set up CHAP authentication to provide a minimum of security.

On the client side you will need Open-iSCSI (package open-iscsi on Ubuntu) to discover the target and to log in:

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.0.62
$ sudo iscsiadm -m node --login
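
After the login the LUN should show up as an additional block device on the client, which you can check for example with:

$ lsblk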

Further reading

For more documentation see: