24.04.1 can't load root partition on 6.11 - full disk encryption (lvm)

Original thread | Ubuntu Forums, from 4 weeks ago

The sdacrypt prompt won’t even load on 6.11 or 6.8.0-48, but if I revert back to 6.8.0-35-generic I’m fine.

I got one successful boot without drivers on 6.8.0-49 (thread on Ubuntu Forums) – you can also see my external thread on LinuxQuestions, and there’s one on Linux4Noobs over on the proprietary Reddit.

Bump (posts must be at least 20 characters – have you tried the :heart: button?)

Caveat: I am not qualified to help with this subject, but I see you have not received many responses.

This is what I suggest:

You need to check those kernels to see what compatibility or configuration issues may or may not exist.

I would start with this to see what specific kernel modules sdacrypt needs and whether they are present in the kernels you refer to:
lsmod | grep sdacrypt
or
modinfo sdacrypt

Check boot logs after a failed boot with this:
journalctl -b -1 | grep sdacrypt

If you can access recovery mode or an earlier kernel, check logs from the problematic boot.
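If it helps, journalctl can also list recent boots so you can pick the failed one; a rough sketch (the -1 index means the previous boot, yours may be a different index):

$ journalctl --list-boots
$ journalctl -b -1 -p err                # errors from the previous boot
$ journalctl -b -1 | grep -iE 'crypt|lvm'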

Also, review release notes for 6.8.0-48, 6.8.0-49, and 6.11 to identify changes related to storage or cryptographic modules that might affect sdacrypt.
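It might also be worth comparing the device-mapper/crypto options built into the kernels. This assumes the stock /boot/config-<version> files are present (they are on standard Ubuntu kernels); substitute the exact versions you have installed:

$ grep -E 'CONFIG_DM_CRYPT|CONFIG_CRYPTO_XTS' /boot/config-6.8.0-35-generic
$ grep -E 'CONFIG_DM_CRYPT|CONFIG_CRYPTO_XTS' /boot/config-6.11.0-*-generic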

I hope this helps you get at least some way further with the problem and hopefully someone with more knowledge will jump in and offer advice.


I think you were on to something; three issues were identified, one of which I think wasn’t relevant:

Relevant:

  1. /boot was full (presumably why the long compilation process kept failing) – see the checks after this list
  2. The system headers & modules weren’t configured for the kernels I was trying.

Likely unrelated:

  3. The swap partition had a different UUID from when it was first created, causing an fstab error (a red herring of sorts)
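For anyone following along, these are roughly the checks that surfaced issues 1 and 3 (the output is obviously machine-specific):

$ df -h /boot                 # issue 1: is the boot partition (nearly) full?
$ lsblk -f                    # issue 3: the current UUID of the swap partition...
$ grep swap /etc/fstab        # ...does it still match what fstab expects?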

Solutions:
3) I temporarily disabled disk swap [I have zram set up with 32 GiB, so disk swap is very low priority]
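For the record, “temporarily disabled” amounted to roughly this (a sketch – the device name below is a placeholder; swapon --show tells you the real one, and zram stays untouched):

$ swapon --show                        # lists active swap (zram plus the disk partition)
# swapoff /dev/<your-swap-partition>   # placeholder – disable just the disk swap
# nano /etc/fstab                      # prefix the disk swap line with '#' so it stays off after reboot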

1) I took great care by doing the following:

# Information gathering, to preserve two kernels: the currently booted one (uname) and the stock one
$ uname -r
$ apt info linux-generic | grep Version
# If you are on the Hardware Enablement (HWE) kernel, instead do:
$ apt list --installed linux-generic-hwe-\*    # version is the 3rd-to-last column

Then I purged all the other kernels; in my case there were quite a few from testing (Synaptic helped me find them all):

# apt remove --purge linux-image-6.12*
# apt remove --purge linux-image-6.13*
# apt remove --purge linux-image-6.8.0-48* # make sure this isn't your current kernel!

along with their headers, modules, and tools.
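If you don’t have Synaptic handy, a rough command-line way to see everything kernel-related that’s installed (these are the standard Ubuntu package-name patterns):

$ dpkg -l 'linux-image-*' 'linux-headers-*' 'linux-modules-*' 'linux-tools-*' | grep '^ii'
$ uname -r                # double-check nothing you purge matches the running kernel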

Then I cleared the /boot partition of the (now orphaned) entries (use the -i flag to make sure you don’t remove anything you currently depend on):

# rm -i /boot/*-6.12*
# rm -i /boot/*-6.13*
# rm -i /boot/*-6.8.0-48* # make sure this isn't your current kernel!

The commands above deleted the leftover System.map and initrd.img files.
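A quick sanity check afterwards (note that initrd.img files are generated locally, so only the kernel images map back to packages):

$ df -h /boot                # confirm the space was actually freed
$ dpkg -S /boot/vmlinuz-*    # each remaining kernel image should belong to an installed package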

2)

Critically, make sure you have the corresponding headers and modules for the kernel you are trying to upgrade to; in my case:
linux-hwe-6.11-headers-6.11.0-19   # stock equivalent: linux-headers-generic
linux-modules-6.11.0-19-generic    # stock equivalent: linux-modules-6.8.0-55-generic
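A rough way to confirm they’re present for the target kernel, and pull them in if not (package names follow the usual Ubuntu pattern; substitute your version):

$ dpkg -l | grep 6.11.0-19    # what's already installed for the target kernel
# apt install linux-headers-6.11.0-19-generic linux-modules-6.11.0-19-generic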

Next I regenerated the initramfs for the kernels actually in use; use whichever of the following matches your situation (a specific kernel, or all of them):

# update-initramfs -u -k 6.11.0-19-generic  # just the current HWE kernel
# update-initramfs -u -k all                # or regenerate for every installed kernel
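Given the original symptom (the unlock prompt never appearing), it’s worth confirming the rebuilt initramfs actually contains the crypto pieces. lsinitramfs ships with initramfs-tools; the exact paths inside the image will vary:

$ lsinitramfs /boot/initrd.img-6.11.0-19-generic | grep -iE 'cryptsetup|dm-crypt|cryptroot'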

Then I regenerated grub:

# update-grub

Verify that the kernel you know works appears among the GRUB entries.

You can also use ls /boot to check there’s a corresponding vmlinuz (in this case vmlinuz-6.11.0-19-generic).
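If you prefer checking from a terminal rather than the GRUB menu, something like this lists the generated entries (assuming the default /boot/grub/grub.cfg location on Ubuntu):

$ grep -o "menuentry '[^']*'" /boot/grub/grub.cfg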

Once you’re confident you have both a kernel you know works and a fair shot at the new one with a correct initramfs:

# reboot
