I have a brand new system built around an ASUS Prime Z690-P D4 motherboard and a Core i9-12900KS CPU.
This motherboard offers hardware RAID5, so I wanted to give it a try. That seems simpler than doing it in software (as I usually do).
However, upon booting into the installer (from a USB drive) for AlmaLinux 8.6 Minimal, I find that no hard drives show up. I double-checked: a RAID5 array has been set up in the BIOS, total size 8 TB. I have three 4 TB drives in the machine, so 8 TB is exactly what RAID5 should yield.
It would seem the only problem is that the AlmaLinux installer is not loading drivers for the RAID, so it can't see the array that's there.
Is software RAID really to be preferred over motherboard-based RAID? It seems simpler to control it at the BIOS level, so that Linux would just see one nice monolithic hard drive to work with. What are the advantages of software RAID?
I’ve never heard that consumer-grade RAID controllers are that bad or unreliable. But mere software is rock solid? I guess I haven’t wrapped my brain around that one yet.
The kernel sees every disk directly and mdadm assembles the arrays. There used to be a separate tool (dmraid) for fakeRAID, but support for the common fakeRAID metadata formats (such as Intel's IMSM) is now integrated into mdadm, the Linux software RAID tool.
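As a sketch of what that looks like in practice (the device names here are made up for illustration):

```shell
# Inspect the RAID metadata on the member disks (example device names):
mdadm --examine /dev/sda /dev/sdb /dev/sdc

# Scan all disks and assemble whatever arrays are found; this also
# recognizes Intel fakeRAID (IMSM) metadata, not just native mdadm
# superblocks:
mdadm --assemble --scan

# The assembled arrays then show up as ordinary block devices:
cat /proc/mdstat
```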
On this example machine the array is RAID-1, a mirror, which is trivial – one can read a single disk without assembling the array. The striping RAID modes are not so easy. I have had this array "fall apart" a couple of times: no array was assembled and the OS saw two disks with identical content – really bad when "both filesystems" have the same UUID. Luckily, this array holds only data; no OS and no home directories.
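To illustrate the duplicate-UUID hazard on a broken mirror (device names are hypothetical): each RAID-1 member carries a complete copy of the filesystem, so both halves report the same UUID, and anything that mounts by UUID may grab either disk.

```shell
# Both halves of the broken mirror report the same filesystem UUID:
blkid /dev/sdb1 /dev/sdc1

# To peek at one member without assembling the array, mount it
# read-only and by device path -- never by the (ambiguous) UUID:
mount -o ro /dev/sdb1 /mnt
```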
The only thing the "RAID firmware" does is initialize the array metadata on the drives (and perhaps provide some support for booting). Let's say you have striped RAID-5. EFI must load the bootloader from the drive(s). Is the bootloader on a single drive, or is it striped over multiple drives? The bootloader must then load the OS kernel and initramfs, which surely span multiple disks. How can it do that? The kernel uses the initramfs to assemble the array and mount it. I have no idea how fakeRAID gets all this done. (Then again, something loads the kernel and initramfs from hardware RAID too – the hardware controllers do indeed have their own drivers.)
In earlier days (outside of Linux) there used to be a question: are Intel chipset RAID X and Y compatible? Can one migrate an array from one board to another, or is "wipe and create" necessary? That is, (backward) compatibility was not ensured even within one brand.
I once had a RAID-1 array on an NVidia chipset fakeRAID. I moved the disks to another motherboard, where they were connected to an LSI chip. Linux continued to assemble the NVidia array, because it saw the metadata created by the first motherboard. This revealed that the stuff "in BIOS" is utterly meaningless. However, the LSI firmware cannot make changes to metadata written by NVidia firmware; at worst it partially overwrites the foreign metadata.
With Linux software RAID you have complete control over what you have, and it is independent of the disk controllers in the system. You can use whole disks for an array and create partitions within the array, or (more commonly) partition the disks and create arrays from selected partitions. It is much less mind-boggling to have the ESP and /boot on RAID-1 and the other filesystems on "fancier" RAID modes.
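A minimal sketch of that partition-based layout, assuming three disks sda/sdb/sdc with matching partitions (device names and partition numbers are illustrative):

```shell
# Hypothetical layout: sdX1 = ESP, sdX2 = /boot members, sdX3 = data members.
# RAID-1 for /boot with metadata format 1.0 (superblock at the end of the
# device), so firmware and bootloader can read a member as a plain partition:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda2 /dev/sdb2

# RAID-5 across three partitions for the other filesystems:
mdadm --create /dev/md1 --level=5 --raid-devices=3 \
      /dev/sda3 /dev/sdb3 /dev/sdc3

# Record the arrays so the initramfs can assemble them at boot
# (on RHEL-family systems such as AlmaLinux the file is /etc/mdadm.conf):
mdadm --detail --scan >> /etc/mdadm.conf
```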
One more question – HOW do I get to a prompt of some sort in AlmaLinux so I can create my software RAID array? There's no "live CD" element, so I can't just do a temp install of mdadm, etc. and set up the RAID drive(s). Do I have to use a live CD of some other distro to do this before I boot into the AlmaLinux installer?