Intel 32bit packages on Ubuntu from 19.10 onwards

I oppose the idea of dropping 32-bit binaries. There is no good technical argument for it, but there are many against it. It will break printer drivers and many other 32-bit programs for which there is no 64-bit alternative (for example, commercial programs where a company or individual only holds a license for a 32-bit version). Brother, the printer maker, may be discouraged by the effort it would take to redo all of its drivers. It will also damage systems such as Valve/Steam, which depend on 32-bit binaries and run many programs for which no source is available at all. Many Windows programs have no 64-bit alternative and no source code, and we want to be able to run them.

The work needed to fix these problems is far greater than the work needed to simply continue building 32-bit binaries. Building them should be an automated process that simply runs in parallel with the 64-bit builds and consumes very little developer time. There is thus no real benefit to no longer building 32-bit.

The idea of figuring out which 32-bit binaries are not a dependency of something else, and removing only those from the build process, does not make sense either. Working out which packages are dependencies and which are not would take more time and resources than simply leaving things as they are and building everything for 32-bit. Inevitably, something that is a dependency would be removed by accident, consuming yet more developer time to fix. Continuing to build everything for 32-bit, by contrast, costs almost nothing in developer resources and is least likely to break anything.

So it is very clear that leaving things as they are, building everything for 32-bit, consumes the least developer time and carries the least risk of breakage, and therefore the least risk of wasting time fixing something that never needed to be broken in the first place.

5 Likes

@RussianNeuroMancer on the subject of pcsx2, I see that the upstream build system says that 64-bit x86 support is “not ready yet”. Do you know what exactly that means or where I might find out?
I’m able to get the software to build on amd64 with a few minor changes to the build system, but I have no way to test the result; is it worth some empirical testing here to see how close to “done” the amd64 support is? https://launchpad.net/~vorlon/+archive/ubuntu/ppa/+packages

I’m not sure what this could mean; I thought they were never going to implement amd64 support, for the reasons explained here.

I tried to run a couple of games; they just crash as soon as any emulation starts.

People thus far have mentioned Brother and Canon printers as a potential issue. Let me add another one: Lexmark uses 32-bit drivers for some of their higher-end enterprise printers. For example, there are many use cases for a Lexmark 3950 and its derivatives. The drivers Lexmark provides are not only 32-bit but also depend on 32-bit Java for the proprietary print manager. It was not IcedTea- or OpenJDK-compatible, so one needed the 32-bit Oracle Java packages installed. This adds one more non-game to the mix that will affect the enterprise and business end.

1 Like

2 posts were merged into an existing topic: Dropping 32 bit support (i.e. games support) will hurt Ubuntu. Big time

Thanks, not particularly surprising but good to have it confirmed. In principle it would be possible to use the 32-bit emulator implementation on amd64 without having to completely port everything to 64-bit, but that would still be a significant amount of work and evidently hasn’t been done here.

1 Like

To be a little more specific: if you compile it as 64-bit, you also need to change four different settings from recompiler to interpreter. At that point you may be able to run some games, but at speeds too slow to be particularly playable. (It will also crash if built as a debug build, and it hasn’t been extensively tested when compiled as 64-bit, since the recompilers not working is already enough of an issue for it not to be supported.)

The trouble is that it uses dynamic recompilation in several places to speed things up, and porting a JIT between architectures is not the easiest task, especially when large portions of it were written by people no longer with the project.

1 Like

I’ve been working on a script to automatically create an LXD container with Steam inside. I’ll post details later. It uses nvidia.driver.capabilities to expose the required GPU capabilities.

I see you’re installing the nvidia userspace inside the container. This shouldn’t be required, because that is what nvidia.runtime and nvidia.driver.capabilities in the LXD profile take care of by exposing the host’s nvidia userspace inside the container. Bundling the nvidia userspace inside the container makes it brittle: any nvidia update on the host will break the container.
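For reference, a minimal sketch of the profile-based approach; the profile name steam-gpu and the image version are placeholders of mine, while the config keys are standard LXD:

    # Create a profile that exposes the host's nvidia userspace to the container.
    lxc profile create steam-gpu
    lxc profile set steam-gpu nvidia.runtime true
    lxc profile set steam-gpu nvidia.driver.capabilities "graphics,compute,utility"
    # Pass the GPU device itself through as well.
    lxc profile device add steam-gpu gpu gpu
    # Launch a container using the default profile plus the GPU profile.
    lxc launch ubuntu:19.04 steam -p default -p steam-gpu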

However, there appears to be a bug in LXD where it does not currently expose the nvidia Vulkan driver to the container. I’ll file a bug later if I’m right about that.

I haven’t been able to expose the PulseAudio socket from the host to the container yet either, and I haven’t started looking into how to expose controllers/joysticks, which for Steam also need to be able to update their firmware.

1 Like

Thanks for working on this.

With PulseAudio, if you share the Unix socket with the container, the server address advertised to clients contains the machine-id of the host, which is different from that of the container, and so it is not accepted. You would either need to replace the machine-id or clear it.

Edit: the X11 socket also carries a number of root window properties into the container; you can view them with xprop -root. Among them is the PulseAudio server property, which includes the machine-id.
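One possible workaround, sketched under the assumption of a container named steam, uid 1000 on both sides, and a container user ubuntu (all placeholders): proxy the host socket into the container and point clients at it explicitly, so the machine-id embedded in the root window property never comes into play.

    # Inspect what the X11 socket advertises (the PULSE_SERVER property
    # is the one carrying the host's machine-id).
    xprop -root | grep -i pulse

    # Proxy the host's PulseAudio socket into the container.
    lxc config device add steam pa proxy \
        connect=unix:/run/user/1000/pulse/native \
        listen=unix:/home/ubuntu/.pulse-native \
        bind=container uid=1000 gid=1000 mode=0777

    # Tell clients in the container to use that socket directly,
    # bypassing the advertised address and its machine-id check.
    # (Clients may also need the host's PulseAudio cookie, shared separately.)
    lxc config set steam environment.PULSE_SERVER unix:/home/ubuntu/.pulse-native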

1 Like

The snaps we are currently using are done by Canonical, made by the same person who works on the deb and tested the same way the equivalents are. Said differently, those holding that hard line are not doing it based on reasonable facts but are mostly showing resistance to change (which often seems based on outdated facts/problems from the early snap days, or on misinformation, so there might be a need for more communication around those topics).

“It will run but it will run like s*”
-conanichal

We’re running Ubuntu MATE 18.04 32-bit in 1000+ Greek schools, as 40% of them still have Pentium 4s, and it’s easier to have the same environment in all of them.
Additionally, we’re using around fifty 32-bit Windows-based educational apps via Wine.
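(For anyone wondering how these run on a 64-bit host: a 32-bit Wine prefix is enough; the prefix path and app name below are just examples.)

    # Create a dedicated 32-bit prefix (path is an example).
    WINEARCH=win32 WINEPREFIX="$HOME/.wine32" winecfg
    # Then run the 32-bit Windows app inside it.
    WINEPREFIX="$HOME/.wine32" wine /path/to/educational-app.exe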

I’m not sure when the percentage of Pentium 4s will be low enough to switch to 64-bit, but dropping the 32-bit arch won’t really make schools buy new hardware sooner; they know the hardware is old, but many just don’t have enough money to replace it yet.

Of course it’s Canonical’s decision, since they pay for everything; the schools that still need 32-bit can switch to Debian (the 5/10-year support of 18.04 means old applications, LibreOffice, kde-edu etc., so Debian will be preferred over an aging 18.04).

I’m just not sure how much effort maintaining 32-bit actually involves (I haven’t seen that many bugs that are specific to 32-bit) or how much bandwidth it costs (if only 1% of users are on 32-bit, that’s not much bandwidth). I do understand the server space argument, though.
So what I wanted to say is: sure, this needs to happen at some point, but if the cost/effort is low, now might be a bit soon; fewer people will need 32-bit after a few years.

4 Likes

FWIW, I’m still using Canon Scangear (originally available in this PPA, I’ve recompiled it for newer releases) for my Canon scanner. There’s an open-source SANE driver as well, but it is so buggy that it’s unusable.

The Linux drivers and application were originally available on an Asian Canon support site. Not sure if that site still exists.

I might be the only one still using that particular software, but I definitely need it to use my hardware.

The disk space hog that snap is can be painful. Having many slightly different compressed copies of what are in fact the same bits when uncompressed, each taking up space, really aggravates me. Currently I use a separate bottle for each app and use hard links to save space. I know you can use parallel installs of snaps and content snaps, but it seems like too much hard work, on top of making sure the userspace Mesa libraries don’t have incompatibilities with the kernel DRM/GPU drivers. And GPUs bought after 19.04 in particular will be a very tricky problem with the direction currently decided.
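(For reference, parallel installs sit behind an experimental snapd flag; hello-world and the _test instance key below are just illustrative:)

    # Enable the experimental parallel-instances feature.
    sudo snap set system experimental.parallel-instances true
    # Install a second instance of a snap under an instance key.
    sudo snap install hello-world_test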

1 Like

I still don’t understand the rationale here.

Dropping the i386 architecture I can understand. Removing the installer images and repositories for an architecture which represents a very small part of the userbase has been done for other distros too.

What I don’t understand is how or why this stops updated lib32-* packages of 32-bit libraries being provided in the amd64 repos for 64-bit installations, or why this means “i386” libraries must be frozen. I suspect the whole “i386” and “32-bit” nomenclature is also getting confused.

Surely all 32-bit applications running on a 64-bit host which currently use $library:i386 should be able to use a lib32-$library instead? Isn’t this essentially what multiarch support was supposed to take care of?

This should also mean that wine32:i386 could just be wine32:amd64 instead, and steam can just depend on e.g.

lib32-alsa-lib  lib32-alsa-plugins  lib32-atk 
lib32-cairo  lib32-curl  lib32-dbus-glib  lib32-fontconfig  lib32-freetype2  lib32-freeglut  lib32-gconf  lib32-gdk-pixbuf2
lib32-glew1.10  lib32-glib2  lib32-glu  lib32-gtk2  lib32-libgudev>=230  lib32-libappindicator-gtk2  lib32-libcaca
lib32-libcanberra  lib32-libcups  lib32-libcurl-compat  lib32-libcurl-gnutls  lib32-dbus  lib32-libdrm  lib32-libgcrypt15
lib32-libice  lib32-libjpeg6  lib32-libnm-glib  lib32-libpng12  lib32-libpulse  lib32-librtmp0  lib32-libsm
lib32-libtheora  lib32-libtiff4  lib32-libudev0-shim  lib32-libusb  lib32-libva1  lib32-libvorbis  lib32-libvpx1.3
lib32-libwrap  lib32-libxcomposite  lib32-libxcursor  lib32-libxft  lib32-libxi  lib32-libxinerama  lib32-libxmu
lib32-libxrandr  lib32-libxrender  lib32-libxtst  lib32-libxxf86vm  lib32-nspr  lib32-nss  lib32-openal  lib32-openssl-1.0
lib32-pango  lib32-sdl  lib32-sdl2  lib32-sdl2_image  lib32-sdl2_mixer  lib32-sdl2_ttf  lib32-sdl_image  lib32-sdl_mixer
lib32-sdl_ttf  lib32-libvdpau

etc. as it already does in other 64-bit distros.
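(For comparison, on Debian/Ubuntu today the same result comes from multiarch rather than lib32-* packages; libsdl2-2.0-0 below is just an example package:)

    # Enable the i386 foreign architecture and pull in a 32-bit library.
    sudo dpkg --add-architecture i386
    sudo apt update
    sudo apt install libsdl2-2.0-0:i386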


Edit:

It’s not like software no longer compiles with -m32. I could understand the issue if you were trying to retain non-SSE2 support in 32-bit binaries (à la -march=i686 vs -march=pentium4), but that’s irrelevant for lib32-*.
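(To illustrate: building a 32-bit binary on an amd64 host still just works, given the multilib toolchain; hello.c stands in for any trivial C source file:)

    sudo apt install gcc-multilib     # 32-bit compiler/runtime support on amd64
    gcc -m32 hello.c -o hello32       # any trivial C source will do
    file hello32                      # ELF 32-bit LSB executable, Intel 80386, ...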

3 Likes

A similar case is Madrid Linux, a distro based on Ubuntu LTS used in a lot of schools in Madrid, Spain (if not all schools; there was a mass migration recently).

There is a ton of proprietary educational software that is made to run with 32-bit libraries, and a lot of it is win32-only (so it needs 32-bit Wine to work). In a school environment, where the majority of people don’t have technical knowledge of computers, it is important to keep things simple. Ubuntu made things simple; that was the reason for choosing it as the base of that distro. Containers will always be more complicated than maintaining a set of system libraries, due to the new challenges and possible performance penalties.

Why do we have to reinvent the wheel and confuse everyone?

4 Likes

This is exactly what others and I have suggested before; thanks for explaining it further.

1 Like

Thanks for the info, that’s an interesting feature. Why didn’t you just mention it in the other thread? It doesn’t fix all of the problems I mentioned, but it does fix at least one of them.

My aim is to explain how all this works in a layered way. For example, I did not cover audio, because it would make the guide a bit more complicated.
You cannot imagine how fragile the process is of composing a guide that many people will manage to follow.

1 Like