Ask Your Question

How do I configure Fedora 23 for full redundancy with RAID1?

asked 2016-05-31 22:12:11 -0500

dz2020


My old Fedora 6? 8? system is dying, and I want to configure Fedora 23 the same way, with two fully redundant disks, so I can use it primarily as a reliable Samba and IMAP server. In the old system, I configured swap, boot, and root as RAID1 devices, ran grub to make each disk bootable, and everything worked. Failed disks could easily be replaced and the system kept working.

With Fedora 23 and a new UEFI (ASRock) system with 4 TB drives, I have struggled to do the same thing. I think I need /boot/efi on both disks (which cannot be done with RAID1), so I first booted into F23 live (using the Workstation ISO) and used gdisk to create two 200 MB /boot/efi partitions, then booted with F23 Server and used Anaconda to create RAID1 partitions for swap, /, /boot, and /home. I tried copying /boot/efi to the second disk partition (/dev/sdb1) both with dd and by mounting it under a temporary name (/efitmp) and using cp -r. Both seem to 'sort of' work, but when I reboot with one disk, I get massive errors and /home is not mounted. I have tried exploring efibootmgr to rename the boot partition, but it seems like that's not my problem.

Can anyone point me to an idiot's guide on how to properly configure the system to be fully redundant? I have been going through the Fedora manuals and online searches but failing.

Thanks for any help.


2 Answers


answered 2016-06-01 11:10:59 -0500

florian

updated 2016-06-01 15:03:13 -0500

You can use Anaconda to set up a software RAID.

A quick test in a virtual machine using a netinstall image (F23 x86_64) shows: [screenshot of Anaconda's manual partitioning screen]

You need to select at least two disks as the install destination in order to be able to set up a RAID. You also need to choose Manual Partitioning.

Have you seen these guides here?

EDIT: I let the installer run all the way through, and the system was correctly configured as RAID1. Everything seemed to work. (I removed one of the virtual hard disks, and the system still booted up, and so on.)
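As a quick sanity check after the install (or after pulling a disk), the array state can be read from /proc/mdstat. This is a minimal sketch; md device names such as /dev/md0 vary by system and are only examples here:

```shell
# Guarded read of the md status; /proc/mdstat only exists when the md driver is loaded.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat    # healthy RAID1 arrays show [UU]; a degraded one shows [U_] or [_U]
else
    echo "no md status available"
fi
# For a detailed per-array report (run as root; /dev/md0 is an example name):
# mdadm --detail /dev/md0
```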


answered 2016-06-04 11:07:25 -0500

dz2020

updated 2016-06-13 22:33:04 -0500

Updated and SOLVED


Thank you for your answer. The key to fixing this issue is that the arrays need to be rebuilt after a failure. I tried the following using MBR and GRUB as well as UEFI, and it seems to work (albeit a bit cumbersomely). I hope it will keep working after a disk failure and will just need manual help after a reboot.

To make the system redundant:

  • Boot first with the Workstation disk and go to Live mode, then Utilities -> Terminal, and use gdisk to create two identical 200 MB /boot/efi partitions on /dev/sda and /dev/sdb. Simply start gdisk, use 'p' to see what's there and 'd' to get rid of anything, then use 'n' with the default start and a size of 200M to create the partition. Be sure to set the type to EF00 (EFI System Partition).

  • Restart the machine with the Server disk (assuming you want the server edition). Hit 'del' during the boot process and ensure that the DVD is used as the primary boot device.

  • Select both disks, use Anaconda's manual partitioning, and first 'reformat' one of the 'unknown' EFI boot partitions to be the /boot/efi partition. You have to set the mount point. Then create all the other partitions (including swap) as RAID1.

  • Anaconda will show an error saying that /boot/efi relates to a RAID array, that there will be boot problems upon failure, and that you should hit Done to go ahead anyway. Go ahead.

  • After the machine is up, all looks good (check with 'cat /proc/mdstat' and 'df').

  • Copy the /boot/efi partition on sda1 to sdb1 using "dd if=/dev/sda1 of=/dev/sdb1". Many references said to use efibootmgr to create a different label for sdb1, but my hardware saw the second drive anyway. Perhaps this will cause problems later; I don't know.

  • Shut down the machine and unplug one drive and the USB drive.

  • Reboot and watch the errors. It will come up in 'emergency' mode.

  • Enter the root password and do 'cat /proc/mdstat'. In my case, all the arrays except /dev/md124 (which was the big /home directory) were fine. /dev/md124 was 'inactive', and the one device still present, /dev/sda5, showed as a spare [S].

  • To fix this, the array must be stopped and then the device added. Note that you are adding the device that was listed as the [S] spare: 'mdadm --stop /dev/md124' and then 'mdadm -A --force /dev/md124 /dev/sda5'.

  • Reboot and it will work fine.

  • Shut down and reconnect the drive, then re-add all the other partitions using 'mdadm --manage /dev/md124 --add /dev/sdb5', etc. Note that you can see which devices to add from the output of 'cat /proc/mdstat'.
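The recovery steps above can be sketched as a small shell helper. It only prints the commands (dry run), since they are destructive and the array and partition names (/dev/md124, /dev/sda5, /dev/sdb5) are specific to this machine; check 'cat /proc/mdstat' for yours:

```shell
#!/bin/sh
# Dry-run sketch of the degraded-array recovery described above.
# Device names are examples from this machine, not universal.
MD=/dev/md124        # the inactive array
SURVIVOR=/dev/sda5   # surviving member that showed as an [S] spare
REPLACED=/dev/sdb5   # member on the reconnected/replacement disk

run() { echo "would run: $*"; }   # change the body to '"$@"' to execute for real, as root

run mdadm --stop "$MD"                      # stop the inactive array
run mdadm -A --force "$MD" "$SURVIVOR"      # re-assemble from the surviving member
run mdadm --manage "$MD" --add "$REPLACED"  # after reconnecting the disk, re-add it
```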

Good luck, and thanks for the patience.



Hi, I have to admit that I am not experienced with RAID. I just know what it is and have tested it a few times, just for fun. I think you should edit your question (or even ask a new question) and hope some experts jump in.

If you think this is a bug (Anaconda saying it would create /boot/efi on the second volume but not doing it), please report it on

If you want a quick and dirty solution: disable EFI in your firmware settings and install a regular BIOS-based Fedora.

florian ( 2016-06-06 15:45:56 -0500 )

Question Tools



Asked: 2016-05-31 22:12:11 -0500

Seen: 1,591 times

Last updated: Jun 13 '16