Advanced Installation

Software RAID

Redundant Array of Independent Disks (RAID) is a method of using multiple disks to provide different balances of increased data reliability and/or increased input/output performance, depending on the RAID level being used. RAID is implemented either in software (where the operating system knows about all of the drives and actively maintains them) or in hardware (where a dedicated controller makes the OS think there is only one drive and maintains the drives ‘invisibly’).

The RAID support included with current versions of Linux (and Ubuntu) is based on the ‘md’ driver, administered with the mdadm tool, and works very well, often better than many so-called ‘hardware’ RAID controllers. This section will guide you through installing Ubuntu Server Edition using two RAID1 partitions on two physical hard drives, one for / and another for swap.
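
If you later need to manage arrays from an installed system, the mdadm command-line tool is what you will use throughout this guide. As a quick, hedged sanity check (the package name below is the standard Ubuntu one; it may already be installed):

# Install the mdadm userspace tools if they are not already present
sudo apt install mdadm

# Confirm that the tool is available and show its version
mdadm --version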

RAID Configuration

Follow the installation steps until you get to the Guided storage configuration step, then:

Select Custom storage layout.

Create the /boot partition on a local disk: select one of the devices listed under available devices and choose Add GPT Partition. Next, enter the partition size, choose the desired Format (ext4) and /boot as the mount point, and finally select Create.

Now, to create the RAID device, select Create software RAID (md) under AVAILABLE DEVICES.

Add the name of the RAID disk (the default is md0).

For this example, select “1 (mirrored)” in RAID level, but if you are using a different setup choose the appropriate type (RAID0, RAID1, RAID5, RAID6, RAID10).

Note

To use RAID5, RAID6 or RAID10 you need more than two drives, whereas RAID0 and RAID1 require only two.

Select the devices that will be used by this RAID device. The physical devices can be marked as active or spare; by default, a device becomes active when it is selected.

Select the Size of the RAID device.

Select Create.

The new RAID device (md0 if you did not change the default) will show up in the available devices list, with type software RAID 1 and the chosen size.

Repeat the steps above for the other RAID devices.

Partitioning

Select the RAID 1 device created (md0) then select “Add GPT Partition”.

Next, enter the Size of the partition. This partition will be the swap partition, and a general rule of thumb for swap size is twice the amount of RAM. Choose swap as the Format, and finally select Create.

Note

A swap partition size of twice the available RAM capacity may not always be desirable, especially on systems with large amounts of RAM. Calculating the swap partition size for servers is highly dependent on how the system is going to be used.

For the / partition, once again select the RAID 1 device, then “Add GPT Partition”.

Use the rest of the free space on the device, choose the format (default is ext4) and select / as mount point, then Create.

Repeat the steps above for the other partitions.

Once finished, select “Done”.

The installation process will then continue normally.
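
Once the newly installed system boots, you can confirm that the arrays assembled as expected. A minimal check, assuming the default md0 name was kept (these commands are covered in more detail under RAID Maintenance below):

# Summary of all active arrays and their sync state
cat /proc/mdstat

# Detailed information about the array used for /
sudo mdadm -D /dev/md0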

Degraded RAID

At some point in the life of the computer a disk failure may occur. When this happens with software RAID, the operating system will place the array into what is known as a degraded state.

If the array has become degraded, then due to the chance of data corruption Ubuntu Server Edition will, by default, boot to the initramfs after thirty seconds. Once the initramfs has booted there is a fifteen-second prompt giving you the option to go ahead and boot the system, or to attempt manual recovery. Booting to the initramfs prompt may or may not be the desired behavior, especially if the machine is in a remote location. Booting to a degraded array can be configured in several ways:

  • The dpkg-reconfigure utility can be used to configure the default behavior, and during the process you will be queried about additional settings related to the array, such as monitoring, email alerts, etc. To reconfigure mdadm, enter the following:

    sudo dpkg-reconfigure mdadm
    
  • The dpkg-reconfigure mdadm process will change the /etc/initramfs-tools/conf.d/mdadm configuration file. The file has the advantage of being able to pre-configure the system’s behavior, and it can also be edited manually (a short example of applying this setting follows the list below):

    BOOT_DEGRADED=true
    

    Note

    The configuration file can be overridden by using a kernel argument.

  • Using a kernel argument will allow the system to boot to a degraded array as well:

    • When the server is booting press Shift to open the Grub menu.

    • Press e to edit your kernel command options.

    • Press the down arrow to highlight the kernel line.

    • Add “bootdegraded=true” (without the quotes) to the end of the line.

    • Press Ctrl+x to boot the system.

Once the system has booted you can either repair the array (see the next section for details) or, in the case of major hardware failure, copy important data to another machine.
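
As a sketch of the configuration-file option above (the file path is the one mentioned in the list; regenerating the initramfs is required for the change to take effect on the next boot):

# Edit /etc/initramfs-tools/conf.d/mdadm so it contains the line:
#   BOOT_DEGRADED=true
# then rebuild the initramfs so the setting is picked up at boot
sudo update-initramfs -u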

RAID Maintenance

The mdadm utility can be used to view the status of an array, add disks to an array, remove disks, etc:

  • To view the status of an array, from a terminal prompt enter:

    sudo mdadm -D /dev/md0
    

    The -D tells mdadm to display detailed information about the /dev/md0 device. Replace /dev/md0 with the appropriate RAID device.

  • To view the status of a disk in an array:

    sudo mdadm -E /dev/sda1
    

    The output is very similar to that of the mdadm -D command; adjust /dev/sda1 for each disk.

  • If a disk fails and needs to be removed from an array enter:

    sudo mdadm --remove /dev/md0 /dev/sda1
    

    Change /dev/md0 and /dev/sda1 to the appropriate RAID device and disk.

  • Similarly, to add a new disk:

    sudo mdadm --add /dev/md0 /dev/sda1
    

Sometimes a disk can change to a faulty state even though there is nothing physically wrong with the drive. It is usually worthwhile to remove the drive from the array and then re-add it. This will cause the drive to re-sync with the array. If the drive will not sync with the array, it is a good indication of hardware failure.
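
A minimal sketch of that remove/re-add cycle, assuming the array is /dev/md0 and the affected member is /dev/sda1 (adjust both names to match your system):

# Mark the member as faulty first, if the kernel has not already done so
sudo mdadm --fail /dev/md0 /dev/sda1

# Remove it from the array, then add it back to trigger a re-sync
sudo mdadm --remove /dev/md0 /dev/sda1
sudo mdadm --add /dev/md0 /dev/sda1

# Watch the re-sync progress
watch -n1 cat /proc/mdstat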

The /proc/mdstat file also contains useful information about the system’s RAID devices:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda1[0] sdb1[1]
      10016384 blocks [2/2] [UU]
      
unused devices: <none>

The following command is great for watching the status of a syncing drive:

watch -n1 cat /proc/mdstat

Press Ctrl+c to stop the watch command.

If you do need to replace a faulty drive, after the drive has been replaced and synced, grub will need to be installed. To install grub on the new drive, enter the following:

sudo grub-install /dev/md0

Replace /dev/md0 with the appropriate array device name.
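
Depending on how the system boots, it may also be worth installing GRUB onto each physical member disk rather than only the array device, so that the machine can start from either drive. This is a hedged example for a BIOS/MBR-style setup (UEFI systems handle the boot loader through the EFI System Partition instead); adjust the device names for your hardware:

# Install GRUB onto both underlying drives of the mirror
sudo grub-install /dev/sda
sudo grub-install /dev/sdb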

Resources

The topic of RAID arrays is a complex one due to the plethora of ways RAID can be configured. Please see the following links for more information:

Logical Volume Manager (LVM)

Logical Volume Manager, or LVM, allows administrators to create logical volumes out of one or multiple physical hard disks. LVM volumes can be created on both software RAID partitions and standard partitions residing on a single disk. Volumes can also be extended, giving greater flexibility to systems as requirements change.

Overview

A side effect of LVM’s power and flexibility is a greater degree of complication. Before diving into the LVM installation process, it is best to get familiar with some terms.

  • Physical Volume (PV): a physical hard disk, disk partition or software RAID partition formatted as an LVM PV.

  • Volume Group (VG): is made from one or more physical volumes. A VG can be extended by adding more PVs. A VG is like a virtual disk drive, from which one or more logical volumes are carved.

  • Logical Volume (LV): is similar to a partition in a non-LVM system. An LV is formatted with the desired file system (EXT3, XFS, JFS, etc.) and is then available for mounting and data storage.
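
The same building blocks can also be created from the command line on a running system. The following is only a sketch: it assumes /dev/sdb1 is an unused partition, and the vg0 and lv-srv names are just examples:

# Initialise the partition as an LVM physical volume
sudo pvcreate /dev/sdb1

# Create a volume group named vg0 containing that PV
sudo vgcreate vg0 /dev/sdb1

# Carve a 10 GiB logical volume named lv-srv out of the VG
sudo lvcreate -n lv-srv -L 10G vg0

# Format it and mount it at /srv
sudo mkfs.ext4 /dev/vg0/lv-srv
sudo mount /dev/vg0/lv-srv /srv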

Installation

As an example, this section covers installing Ubuntu Server Edition with /srv mounted on an LVM volume. During the initial install only one Physical Volume (PV) will be part of the Volume Group (VG). Another PV will be added after install to demonstrate how a VG can be extended.

There are several installation options for LVM in the Guided storage configuration step:

  • Select “Use an entire disk”, “Set up this disk as an LVM group”, and Done. This option will create a /boot partition on the local disk, and the rest of the disk space is allocated to the LVM group.
  • Select “Use an entire disk”, “Set up this disk as an LVM group”, “Encrypt the LVM group with LUKS”, enter the password (and confirm it), and Done. The result is the same as described above but the LVM group is encrypted.
  • Select “Custom storage layout”, and Done. At this time the only way to configure a system with both LVM and standard partitions during installation is to use this approach. This is the option used in this example.

Follow the installation steps until you get to the Storage configuration step, then:

Let’s first create a /boot partition on a local disk. Select the hard disk under AVAILABLE DEVICES, and Add GPT Partition. Add the size and format (ext4), then select /boot as mount point. Finally, select Create. The /boot partition will be listed under FILE SYSTEM SUMMARY.

Next, create standard swap, and / partitions with whichever filesystem you prefer following the steps above.

Now the LVM volume group will be created. Select “Create volume group (LVM)”. Enter a name for the volume group (the default is vg0), select the device (LVM physical volume) and the size, and choose “Create”. There is an option to encrypt the volume: if you want it encrypted, select “Create encrypted volume” and enter a password (and confirm it). The brand new LVM group (vg0 if the default was not changed) will be listed as a device in AVAILABLE DEVICES.

To create an LVM logical volume, select the created LVM volume group and “Create Logical Volume”. Give it a name (the default is lv-0); let’s call it lv-srv since it will be used to mount /srv. Enter the size of the volume and your preferred filesystem format, and select /srv as the mount point. Choose “Create”. The LVM logical volume mounted at /srv will be listed in the FILESYSTEM SUMMARY.

Finally, select “Done”. Then confirm the changes and continue with the rest of the installation.

There are some useful utilities to view information about LVM:

  • pvdisplay: shows information about Physical Volumes.

  • vgdisplay: shows information about Volume Groups.

  • lvdisplay: shows information about Logical Volumes.
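
For example (vg0 and lv-srv match the names used during installation above; the shorter pvs, vgs and lvs commands print a compact summary):

sudo pvdisplay
sudo vgdisplay vg0
sudo lvdisplay vg0

# Compact one-line-per-object summaries
sudo pvs && sudo vgs && sudo lvs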

Extending Volume Groups

Continuing with srv as an LVM volume example, this section covers adding a second hard disk, creating a Physical Volume (PV), adding it to the volume group (VG), extending the logical volume srv and finally extending the filesystem. It assumes a second hard disk has been added to the system; in this example the disk is named /dev/sdb and the entire disk is used as a single physical volume (you could instead create partitions and use them as separate physical volumes). Note that the commands below refer to the logical volume as srv; if you followed the installation example above it was created as lv-srv, so substitute your own LV name.

Warning

Make sure /dev/sdb really is the new, empty disk before issuing the commands below. You could lose data if you run them on a disk that is already in use.

First, create the physical volume. In a terminal, execute:

sudo pvcreate /dev/sdb
                

Now extend the Volume Group (VG):

sudo vgextend vg0 /dev/sdb

Use vgdisplay to find out the free physical extents (Free PE / Size, the amount you can allocate). Here we will assume a free size of 511 PE (equivalent to 2 GB with a PE size of 4 MB) and use all of the available free space. Use your own PE count and/or free space.
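
For example, to see just the free extents for vg0 (the grep filter is only a convenience):

sudo vgdisplay vg0 | grep -i "free"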

The Logical Volume (LV) can now be extended by different methods; here we will only see how to use PEs to extend the LV:

sudo lvextend /dev/vg0/srv -l +511

The -l option allows the LV to be extended using PEs. The -L option allows the LV to be extended using megabytes, gigabytes, terabytes, etc.
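
Two hedged alternatives, both standard lvextend usages (adjust the LV path and sizes to your setup): extending by an explicit size with -L, or claiming all remaining free space in the VG with the %FREE syntax:

# Grow the LV by an explicit 2 GiB
sudo lvextend -L +2G /dev/vg0/srv

# Or grow it into all remaining free space in the volume group
sudo lvextend -l +100%FREE /dev/vg0/srv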

Even though an ext3 or ext4 filesystem can be expanded without unmounting it first, it may be good practice to unmount it anyway and check the filesystem, so that you don’t run into trouble the day you want to reduce a logical volume (in that case unmounting first is compulsory).

The following commands are for an EXT3 or EXT4 filesystem. If you are using another filesystem there may be other utilities available.

sudo umount /srv
sudo e2fsck -f /dev/vg0/srv

The -f option of e2fsck forces checking even if the system seems clean.

Finally, resize the filesystem:

sudo resize2fs /dev/vg0/srv

Now mount the partition and check its size.

sudo mount /dev/vg0/srv /srv && df -h /srv
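
If you chose a different filesystem, the growing step differs. For example, XFS is grown while mounted with xfs_growfs (a hedged alternative to the ext4-specific commands above; it requires the xfsprogs package):

# XFS equivalent: grow the mounted filesystem to fill the logical volume
sudo xfs_growfs /srv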

Resources
