Planning a server build. Low use, low risk (behind a residential 5G modem), own business. The server will initially host a network file share, DHCP, DNS, Mosquitto MQTT, Node-RED, and provide routing control for the network. This is predominantly a learning exercise, triggered by being able to buy the server for less than second-hand bespoke NFS solutions! I have some experience with Ubuntu Server 22, having set up the above (minus NFS) on an old desktop.
I was leaning towards software RAID arrays for the network storage; is that the common preference at the moment?
Would it be advisable to do the same for the drives (likely to be SSDs) hosting the server's operating system too?
My system had been delivered to the person I bought it from set up as a NAS system. One gotcha for me (discovered after a few attempts at installing Ubuntu Server: on an array, on plain disks, in UEFI mode, in BIOS mode, …) was that its RAID controller had already been flashed to "IT mode", which among other things makes all the disks accessible individually to the OS:
The Ubuntu installer was able to install the OS to the disks, but the system refused to boot, despite seeing "ubuntu" on the drive (or drives, in the case of md arrays). The person who originally flashed the RAID controller hadn't flashed the two additional images that allow the controller to present disks for the system to boot from in BIOS or UEFI mode.
So, the system is booting and updating now. More posts from me likely soon regarding setting up nftables!
So, in answer to my original query: the performance improvements offered by the new RAID firmware, combined with Dell no longer supporting it, made using the alternative firmware an easy choice. Maybe it does still do arrays, but I've set up with software arrays.
Will it do arrays via mdadm, or vdevs/pools for ZFS, with the PERC controller flashed to IT mode?
Absolutely it will, once configured correctly.
The advantage is that the controller now hands the drives to the BIOS as individual disks.
The software then assembles those drives into arrays or vdevs (pools), depending on whether you choose mdadm or ZFS.
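For example, once the HBA hands over the bare disks, either layer can be built directly on them. A rough sketch, assuming four data disks at /dev/sdb through /dev/sde (placeholder names, and these commands wipe whatever is on them):

```bash
# mdadm: assemble four individually-exposed disks into one RAID 10 array
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0

# ZFS alternative: the same four disks as two mirror vdevs striped into one pool
sudo zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
```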
This seems to be quite common, as many advocate NOT flashing the BIOS (the *.rom and uefi.rom files) and just installing the firmware, deleting any BIOS already installed. They are of the opinion that it (the controller BIOS) increases the boot time.
And it does. HOWEVER, the downside of not having the LSI BIOS is that the controller BIOS (your PERC XXX) cannot hand off the boot device to the system BIOS, and you can't access the direct-attached devices prior to system BIOS boot. In short, within the BIOS settings the LSI controller (which, even if it is a Dell PERC, is now effectively an LSI because of the firmware) has the ability to define what the boot device is. Without that, the system BIOS has no clue that the boot device is available. The lack of the LSI BIOS literally hides the boot device (drive), and that is the default behaviour.
There are workarounds and solutions for this, which will depend upon your requirements.
And welcome to the site!
The 1st solution is contained in the links you provided in your 2nd post (flash the BIOS onto the PERC).
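For reference, on the SAS2-based PERCs this is usually done with LSI's sas2flash utility (often from a UEFI shell or a FreeDOS boot stick); the image file names below are only examples, not the ones for your specific card, so check the guides you linked before flashing anything:

```bash
# list the controller and its current firmware/BIOS versions
sudo sas2flash -listall

# flash the IT-mode firmware plus the legacy boot ROM
# (file names are examples; use the images matching your controller)
sudo sas2flash -o -f 2118it.bin -b mptsas2.rom

# flash the UEFI boot ROM as well, so the card can offer boot devices in UEFI mode
sudo sas2flash -o -b x64sas2.rom
```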
The 2nd solution is to add a small SATA or mSATA drive with a USB adapter to the system board, boot and run the OS from that, and leave the drives connected to the PERC as data drives. If you're unsure about proceeding with solution 1, this is a pretty good choice; in my honest opinion this is the route I would go. Your options are pretty good, as either an SSD or spinning rust could be used.
But as I stated, it is your decision as to which route to take.
My vote goes towards mdadm RAID too, and using the HBA in IT mode if that fits your needs.
Not sure if you are really considering ZFS; it is a great product for storage, but it's a whole other ball game because it works quite differently from mdadm. At least it did when I last looked at it.
Somewhere in there you asked about RAID for the OS. Even if this is just for you and you can accept some downtime, I would still set up an mdadm RAID 1 with at least two partitions for the OS. When partitioning the disks you can set up, for example, 32 GB partitions, which are usually big enough for the OS unless you keep large data inside one of its mount points. WATCH OUT if you go with such small partitions: that will be all the space you have for the OS, and the data partition(s) will take the rest of the disks, so repartitioning later will not be an easy task.
If using a small amount of space for the OS you have to organise your data locations well: for example, nothing big in /home unless you mount it on another partition, etc. If using a very big space for the OS you are wasting it if most remains unused.
FYI I have a 32 GB RAID 1 for my home server root and have never reached more than 40% use.
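Roughly, the manual equivalent of what the installer does for that layout, assuming the two OS disks show up as /dev/sda and /dev/sdb (placeholder names), would be something like:

```bash
# create a 32 GB Linux RAID partition at the start of each OS disk (example sizes/devices)
# (for UEFI boot you also want a small ESP on each disk, outside the array)
sudo sgdisk -n 1:0:+32G -t 1:fd00 /dev/sda
sudo sgdisk -n 1:0:+32G -t 1:fd00 /dev/sdb

# mirror the two partitions and put the root filesystem on the mirror
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
```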
LOL actually I went to ZFS (which is literally a filesystem with software RAID) from MDADM.
But I will not criticize MDADM; it is proven and easier to get a grasp on at first, as long as one follows the rules of software RAID. I mentioned it because the OP stated it, and it really is a good choice.
My advice: pick one… learning MDADM first is a great choice. You can always change it later if it doesn't suit you.
That being said, I prefer to put my OS on a separate drive, as one can corrupt/wipe and reinstall the OS and, the majority of the time, still recover an MDADM array or a ZFS pool, i.e. the data stored on the data drives in the array or pool.
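To illustrate that, after a clean reinstall the existing array or pool can normally just be picked up again; a quick sketch (the pool name tank is an example):

```bash
# re-detect an existing mdadm array on the data disks and record it for future boots
sudo mdadm --assemble --scan
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# ZFS equivalent: import a pool that was created on a previous install
sudo zpool import tank
```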
All of the above is useless without a good backup strategy, though, as no RAID level or zpool layout is a backup; they are what they are: redundant disks. @DarkoD it is sound advice for the OP.
Iām going with two seperate arrays. To be honest with you Iām not entirely sure whether it is a ZFS, or MDADM array that I set up for the OS, I just did it with the installer. It was a bit convoluted as youneeded to set both drives as bootable, then create matching undefined partitions on the restof the drives, then create the array, and finally your partitioning scheme on top.
In the past Iāve always fully encrypted the drives but in this case Iāve moved away from that to having the OS bootable without needing to key in the password and then setup a raid 1 with the remainder of the drives and encrypt that. The data will then be on that partition which I can then unlock via ssh if I needed to after a reboot or power loss.
I will read up on which way to take the remaining array regarding ZFS/MDADM. Thanks for your comments.
Right now I'm reading up on nftables and whether dual-stack IPv4/IPv6 is viable for a private network with dnsmasq. Certainly blowing cobwebs from the grey matter, but nftables is finally beginning to click!