Options on where to run and how to build for (Ubuntu) Linux on s390x

The following covers compile and build options that are local native, local cross, local emulated, or remote native:

  1. Use your ‘own’ IBM Z or LinuxONE (aka s390x) system

  2. Cross-compilation

  3. LinuxONE Cloud instances

  4. Travis CI

  5. IBM Cloud Hyper Protect Services

  6. Launchpad PPAs

  7. snapcraft.io

  8. Qemu

  9. zPDT - z1090 or z1091

  10. Hercules

  11. Docker (multiarch)

  1. Use your ‘own’ IBM Z or LinuxONE (aka s390x) system

In the ideal case that you have your own IBM Z or LinuxONE system, things are obvious and straightforward. Just compile, e.g. the kernel (from git), on your s390x system (LPAR, z/VM guest, KVM virtual machine or (LXD) container).
The following URL is recommended for cloning the Ubuntu kernel git tree (here bionic):
SRC_URL="https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/bionic"
(alternatively: SRC_URL="git://kernel.ubuntu.com/ubuntu/ubuntu-bionic")
Now clone the ‘master’ branch:
$ git clone "$SRC_URL"
Or even better ‘master-next’, since it includes everything that’s already accepted and applied:
$ git clone "$SRC_URL" --branch master-next --single-branch
$ cd ubuntu-bionic
$ sudo apt -q -y update
$ sudo apt -y -q install software-properties-common
$ sudo add-apt-repository -s "deb http://us.ports.ubuntu.com/ubuntu-ports/ $(lsb_release -sc) main"
$ sudo add-apt-repository -s "deb http://us.ports.ubuntu.com/ubuntu-ports/ $(lsb_release -sc)-updates main"
$ sudo apt -q -y update
$ sudo apt -q -y --no-install-recommends install libncurses-dev flex bison openssl libssl-dev dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf fakeroot
$ sudo apt -q -y --no-install-recommends build-dep linux linux-image-$(uname -r)
Get rid of potential perl warnings about missing locales:
$ sudo apt -q -y install locales-all
$ sudo update-locale
Compile:
$ fakeroot debian/rules clean
Quick build:
$ DEB_BUILD_OPTIONS=parallel=8 AUTOBUILD=1 NOEXTRAS=1 fakeroot debian/rules binary-headers binary-generic binary-perarch
or just:
$ DEB_BUILD_OPTIONS=parallel=$(lscpu | grep '^CPU(s):' | awk '{ print $2 }') AUTOBUILD=1 NOEXTRAS=1 fakeroot debian/rules binary-headers binary-generic binary-perarch
$ ls -a ../*.deb
../linux-buildinfo-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-headers-4.15.0-71_4.15.0-71.80_all.deb
../linux-headers-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-image-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-modules-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-modules-extra-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-tools-4.15.0-71_4.15.0-71.80_s390x.deb
../linux-tools-4.15.0-71-generic_4.15.0-71.80_s390x.deb
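To give the freshly built kernel a try, the image and modules packages can be installed with dpkg and the system rebooted (a minimal sketch; the version numbers are the example ones from the listing above):
$ sudo dpkg -i ../linux-image-4.15.0-71-generic_4.15.0-71.80_s390x.deb ../linux-modules-4.15.0-71-generic_4.15.0-71.80_s390x.deb
$ sudo reboot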

  2. Cross-compilation

The gcc compiler collection - like other compilers - is capable of generating code for a different architecture than the one it runs on; this is known as cross-compilation.
Here we cross-compile the kernel on x86_64 (aka amd64) for s390x.
$ uname -a
Linux cross 4.15.0-66-generic #75~16.04.1-Ubuntu SMP Tue Oct 1 14:01:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ arch
x86_64
$ sudo apt -q -y update
$ sudo apt -q -y --no-install-recommends install fakeroot build-essential crossbuild-essential-s390x kexec-tools libelf-dev binutils-dev libncurses-dev flex bison openssl libssl-dev dkms libudev-dev libpci-dev libiberty-dev autoconf
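A quick sanity check that the cross toolchain works (a minimal sketch; hello.c is an arbitrary example file, and the file output is abbreviated):
$ echo 'int main(void) { return 0; }' > hello.c
$ s390x-linux-gnu-gcc -static -o hello hello.c
$ file hello
hello: ELF 64-bit MSB executable, IBM S/390, ...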
$ sudo apt -y -q install software-properties-common
$ sudo add-apt-repository -s "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main"
$ sudo add-apt-repository -s "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc)-updates main"
$ sudo apt-get build-dep linux
The following URL is recommended for cloning the Ubuntu git tree (here bionic):
SRC_URL="https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/bionic"
(alternatively: SRC_URL="git://kernel.ubuntu.com/ubuntu/ubuntu-bionic")
So clone the ‘master’ branch:
$ git clone "$SRC_URL"
Or even better ‘master-next’, since it includes everything that’s already accepted and applied:
$ git clone "$SRC_URL" --branch master-next --single-branch
$ cd ubuntu-bionic
$ export $(dpkg-architecture -as390x); export CROSS_COMPILE=s390x-linux-gnu-
$ fakeroot debian/rules clean
$ fakeroot debian/rules binary-headers binary-generic binary-perarch
$ ls -a ../*.deb
../linux-buildinfo-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-headers-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-headers-4.15.0-71_4.15.0-71.80_all.deb
../linux-image-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-modules-4.15.0-71-generic_4.15.0-71.80_s390x.deb
../linux-modules-extra-4.15.0-71-generic_4.15.0-71.80_s390x.deb

  3. LinuxONE Cloud instances
    Get an s390x LinuxONE Cloud instance from the LinuxONE Community Cloud that is hosted at Marist College.

Further references:
https://developer.ibm.com/linuxone/resources/
https://linuxone.cloud.marist.edu/cloud/
https://developer.ibm.com/linuxone/

  4. Travis CI

Build your open source projects on the IBM Power and IBM Z CPU architectures using Travis CI.
Open source projects can run on travis-ci.org or travis-ci.com.
The official Travis documentation page about “Building on Multiple CPU Architectures” (*) provides detailed steps for building and testing on multiple architectures, including s390x.
All builds and tests on Travis run isolated in LXD containers.
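A minimal .travis.yml sketch for an s390x build (the language and script lines are arbitrary examples; see the referenced documentation (*) for the full set of keys):
arch: s390x
os: linux
dist: bionic
language: c
script:
  - make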

Further reference:
https://blog.travis-ci.com/2019-11-12-multi-cpu-architecture-ibm-power-ibm-z
https://docs.travis-ci.com/user/reference/overview/
https://docs.travis-ci.com/user/multi-cpu-architectures (*)

  5. IBM Cloud Hyper Protect Services

Build and develop for free on IBM Cloud using the Hyper Protect Services product portfolio for apps, AI, analytics, and more.

Further references:
https://cloud.ibm.com/registration (a free IBM Cloud account associated with an IBM ID is required)
https://cloud.ibm.com/login
https://cloud.ibm.com/

  6. Launchpad PPAs

Launchpad offers so-called Personal Package Archives (PPAs) that allow packaging and compiling for different architectures. By default PPAs build for i386 and amd64; other architectures need to be explicitly enabled.
First create a Launchpad account: https://login.launchpad.net/+new_account
Sign the Ubuntu code of conduct (that’s required to activate PPAs): https://launchpad.net/codeofconduct
Activate/create a PPA: https://launchpad.net/people/+me/

Then fill out the fields with the (PPA) name and description.
Once the PPA is created, select ‘Change details’.
In the edit (PPA) dialog, scroll down to the bottom of the page where all supported architectures are listed.
Finally, select the architecture(s) you want the PPA to compile and build for.
A GPG key needs to be registered with your Launchpad account to be able to upload: https://help.launchpad.net/Packaging/PPA/Uploading
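The upload itself is then done with a signed source package, roughly like this (a sketch; ‘foo’, its version, and the PPA name are hypothetical placeholders):
$ debuild -S -sa
$ dput ppa:<your-launchpad-id>/<ppa-name> foo_1.0-1_source.changes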

  7. snapcraft.io

In October 2018 the support for build architectures and packaging at snapcraft.io was expanded and now covers s390x (as well as ppc64el). Some of the better-known snap packages are the ones used in the Charmed Distribution of Kubernetes (CDK) or Kata Containers.
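In snapcraft.yaml the target architectures can be declared explicitly, for example (a sketch of the architectures keyword; check the snapcraft documentation for the syntax matching your base):
architectures:
  - build-on: s390x
    run-on: s390x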

  8. Qemu

Qemu is today mostly known for its use in combination with KVM and libvirt as a full virtualization stack - meaning host and guest are of the same architecture. However, Qemu’s roots are emulation, and it’s still pretty good at emulating different architectures, which allows running a guest of a different architecture than the host. But this flexibility comes with a performance penalty compared to virtualization, since the code needs to be translated for the target architecture.
Qemu is used here to run an s390x guest on top of an x86_64 (aka amd64) host.
$ sudo apt install qemu-system-s390x qemu-utils
$ mkdir qemu && cd qemu
$ qemu-img create -f raw ubuntu-bionic.raw 5G
Formatting 'ubuntu-bionic.raw', fmt=raw size=5368709120
Alternatively, use qcow2 instead of raw.
$ qemu-img info ubuntu-bionic.raw
image: ubuntu-bionic.raw
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 0
There are at least the following three options:

  1. Installation with the help of installer-kernel and -initrd
    $ wget http://ports.ubuntu.com/ubuntu-ports/dists/bionic/main/installer-s390x/current/images/generic/kernel.ubuntu http://ports.ubuntu.com/ubuntu-ports/dists/bionic/main/installer-s390x/current/images/generic/initrd.ubuntu
    $ ls -l *.ubuntu
    -rw-rw-r-- 1 user group 13375970 Apr 25 2018 initrd.ubuntu
    -rw-rw-r-- 1 user group 4390912 Apr 25 2018 kernel.ubuntu
    $ sudo qemu-system-s390x -machine s390-ccw-virtio -m 2048 -nographic -drive file=./ubuntu-bionic.raw,format=raw -kernel ./kernel.ubuntu -initrd ./initrd.ubuntu
    and install a fully emulated s390x system - you may use ‘Auto-configure networking’, since DHCP works out of the box:

[!] Configure the network

Networking can be configured either by entering all the information
manually, or by using DHCP (or a variety of IPv6-specific methods) to
detect network settings automatically. If you choose to use
autoconfiguration and the installer is unable to get a working
configuration from the network, you will be given the opportunity to
configure the network manually.
Auto-configure networking?


Or directly go to the d-i shell:
type "<"

[!] Ubuntu installer main menu:

Choose the next step in the install process:

Configure the network device
Configure the network
Continue installation remotely using SSH
Choose language
Choose a mirror of the Ubuntu archive
Download installer components
Change debconf priority
Save debug logs
Execute a shell <===
Abort the installation

BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

~ # cat /proc/cpuinfo
vendor_id : IBM/S390
# processors : 1
bogomips per cpu: 13370.00
max thread id : 0
features : esan3 zarch stfle msa ldisp eimm etf3eh highgprs
facilities : 0 1 2 3 4 7 9 16 17 18 19 21 22 24 25 27 30 31 32 33 34 35 40 41 45 49 51 52 71 72 76 77 138
processor 0: version = 00, identification = 000000, machine = 2827 # EC12
~ # uname -a
Linux (none) 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:14:23 UTC 2018 s390x GNU/Linux
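Once the installation has finished, the installed system can be booted directly from the disk image - same options as above, just without -kernel and -initrd, so that the boot loader on the disk is used:
$ sudo qemu-system-s390x -machine s390-ccw-virtio -m 2048 -nographic -drive file=./ubuntu-bionic.raw,format=raw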

  2. Installation with the help of an Ubuntu Server s390x ISO image
    $ wget http://cdimage.ubuntu.com/releases/18.04.3/release/ubuntu-18.04.3-server-s390x.iso
    $ ls -l *.iso
    -rw-rw-r-- 1 user group 657401856 Aug 5 20:49 ubuntu-18.04.3-server-s390x.iso
    $ sudo qemu-system-s390x -machine s390-ccw-virtio -m 2048 -nographic -drive file=./ubuntu-bionic.raw,format=raw --cdrom ./ubuntu-18.04.3-server-s390x.iso

  3. Using the Ubuntu Server s390x Cloud image
    $ wget https://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-cloudimg-s390x.img
    $ ls -l *.img
    -rw-rw-r-- 1 user group 307167232 Nov 14 17:49 ubuntu-18.04-server-cloudimg-s390x.img
    A cloud-init data source file might need to be created to be able to log in to the Cloud image.
    $ sudo apt-get install cloud-image-utils

    $ cat >cloud-instance-data.cfg <<EOF
    #cloud-config
    password: mysecretpassword
    chpasswd: { expire: False }
    ssh_pwauth: True
    EOF
    $ cat cloud-instance-data.cfg
    #cloud-config
    password: mysecretpassword
    chpasswd: { expire: False }
    ssh_pwauth: True
    $ cloud-localds cloud-instance-data.img cloud-instance-data.cfg
    $ ls -l cloud-instance-data.*

    -rw-rw-r-- 1 user group 78 Nov 20 07:37 cloud-instance-data.cfg
    -rw-rw-r-- 1 user group 374784 Nov 20 07:40 cloud-instance-data.img
    Start the emulated qemu system:
    $ sudo qemu-system-s390x -name qemu -machine s390-ccw-virtio -m 2048 -nographic -drive file=./ubuntu-18.04-server-cloudimg-s390x.img,format=qcow2,media=disk -drive file=./cloud-instance-data.img,format=raw

    In this case a full installation is not needed, just boot up and use the Cloud image.
    By providing the data in cloud-instance-data.img, it’s possible to log in with user ‘ubuntu’ and password ‘mysecretpassword’, as specified above.
    Alternatively, ssh-key based login can be enabled by using ~/.ssh/id_rsa.pub and the “ssh_authorized_keys: …” keyword in cloud-instance-data.cfg.
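    A minimal sketch of such a cloud-config (the key shown is a hypothetical placeholder - use the content of your own ~/.ssh/id_rsa.pub):
    #cloud-config
    ssh_pwauth: False
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E... user@host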

Further references:
https://wiki.qemu.org/Documentation/Platforms/S390X
https://virtualpenguins.blogspot.com/
https://www.qemu.org/

  9. zPDT - z1090 or z1091

The ‘z Personal Development Tool’ is a commercial software emulator, secured by a USB dongle aka token, that allows running s390x code (incl. full operating systems) on amd64 hardware for development and test purposes. The zPDT token is required and needs to be activated through an IBM business partner, zPDT supplier, or through IBM Resource Link®.

Further references:
https://www.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
http://www.redbooks.ibm.com/abstracts/sg248205.html?Open
http://www.redbooks.ibm.com/Redbooks.nsf/searchsite?SearchView=&query=zPDT&SearchWV=true

  10. Hercules

Hercules is an Open Source emulator for System/370, ESA/390, and the z/Architecture.
Hercules v4.2 has been confirmed to run Ubuntu without issues.

  11. Docker (multiarch)

Another simple way to create an s390x environment is to use Docker multiarch images.
$ uname -m
x86_64
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x

$ docker run --rm -t s390x/ubuntu uname -m
s390x

Or Debian:
$ docker run --rm -t multiarch/debian-debootstrap:s390x-jessie uname -a
Linux 0b76a785e457 5.0.0-36-generic #39~18.04.1-Ubuntu SMP Tue Nov 12 11:09:50 UTC 2019 s390x GNU/Linux
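Such an emulated container can, for example, be used for a quick one-off compile (a sketch; hello.c is an arbitrary example file in the current working directory, and the binfmt registration from above is assumed):
$ docker run --rm -t -v "$PWD":/src -w /src s390x/ubuntu bash -c "apt -q -y update && apt -q -y install gcc && gcc -o hello hello.c && ./hello"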

Since this is not stable (at least not on big-endian architectures), it’s not recommended and is only mentioned here for completeness.

In steps 1, 2, and 3 there are backslash characters in the command examples that I do not understand. In the wget commands a backslash appears to separate a list of URLs to be downloaded; it is unclear to me what the meaning is in later commands like sudo qemu-system-s390x.

My intent is to carry out these steps in an AWS EC2 instance running Ubuntu bionic server. I am hoping to end up with an EC2 AMI that will automatically run the chosen emulator and the installed system on the emulator. I may be doing this on the arm64 architecture underneath the emulator.

A backslash at the end of a line is commonly used to indicate that the command on that line continues on the next line - that’s often used in documentation or in scripts.
Unfortunately the representation in edit and display mode here is different, so that some of the backslashes partly ended up somewhere in the middle of a line.
I have now removed them completely - which on the other hand might make it a bit more difficult to identify where a command ends and its output starts - anyway …
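For example, the following two notations are equivalent:
$ sudo qemu-system-s390x -machine s390-ccw-virtio -m 2048 -nographic
$ sudo qemu-system-s390x \
      -machine s390-ccw-virtio \
      -m 2048 \
      -nographic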

“qemu-system-s390x” is the s390x architecture-specific version of the QEMU machine emulator and virtualizer: https://www.qemu.org/
In case qemu is called with “-enable-kvm” it does full virtualization - commonly referred to as KVM - and in case it’s called without, it emulates the target architecture.
Check out the kvm ‘command’, which is a script that just calls qemu using “-enable-kvm”.
Here on an Intel system:
$ cat $(which kvm)
#!/bin/sh
exec qemu-system-x86_64 -enable-kvm "$@"
and again on a real s390x system:
$ cat $(which kvm)
#!/bin/sh
exec qemu-system-s390x -enable-kvm "$@"

So if you install “qemu-kvm” on an Intel system, you will first of all get the x86_64 and i386 qemu variants (qemu-system-x86_64 and qemu-system-i386), since kvm virtualization is the typical use case (which requires the same architecture on the virtual machine aka guest and on the host).

If you want to emulate a different architecture, the binary that is able to emulate the target architecture needs to be explicitly installed, hence the “apt install qemu-system-s390x”.
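To verify the installation and to list the s390x machine types the emulator supports:
$ qemu-system-s390x -machine help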

There is no guarantee that these steps work on an AWS EC2 instance (or other already virtualized platforms): such instances are of course already virtualized, nested virtualization might not be possible, and such instances may not offer all CPU facilities that qemu needs for its emulation.

I believe pure emulation should work in nested virtualization, since no CPU virtualization-assist features are expected to be used. I cannot imagine any such assist features being needed when emulating s390x on x86_64 or arm64. In the event that instances of s390x are offered by the cloud provider, this emulation would not be needed.

CPU virtualization-assist features are obviously not needed when doing emulation, but the emulator itself might need some CPU facilities (depending on how it was compiled) that may not be passed through the virtualization layer.

As an example, see the CPU features of this KVM host environment:
$ grep -m 1 flags /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d

And compare it with this KVM guest environment (that runs on the above host):
$ grep -m 1 flags /proc/cpuinfo
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl cpuid tsc_known_freq pni vmx cx16 x2apic hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid

(so I’m talking about CPU flags beyond vmx)