Software RAID 1 works just fine in Fedora, but there are a few "catches" you should know about.
RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading, but more commonly the disks operate independently to provide high I/O transaction rates. Level 1 gives very good data reliability and improves performance for read-intensive applications, but at a relatively high cost: two disks provide the capacity of just one. The storage capacity of a level 1 array equals the capacity of one of the mirrored hard disks in hardware RAID, or one of the mirrored partitions in software RAID.
To set up the RAID scheme, create the proper Software RAID partitions during install (anaconda) and then create MD devices on them.
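If you need to create the arrays after install instead, mdadm can do the same job by hand. A minimal sketch, assuming two disks already carry matching "Linux raid autodetect" (0xfd) partitions; the device names are examples, and the run() wrapper only prints the commands so you can review them before executing anything:

```shell
# Sketch: creating the mirrored MD devices by hand with mdadm.
# Device names are examples - match them to your own partition layout.
# run() prints each command instead of executing it; swap in "$@" to run for real.
run() { echo "+ $*"; }

run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```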
Once you've booted your system, type, as root:
Code:
# cat /proc/mdstat
in order to get the status of the RAID devices.
You should get something like this:
Code:
Personalities : [raid1]
md1 : active raid1 sda2[1] sdb2[0]
8193024 blocks [2/2] [UU]
md2 : active raid1 sdb3[0] sda3[1]
522048 blocks [2/2] [UU]
md3 : active raid1 sdb5[0] sda5[1]
69232000 blocks [2/2] [UU]
md0 : active raid1 sdb1[0] sda1[1]
200704 blocks [2/2] [UU]
unused devices: <none>
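That [UU] field is the quick health indicator: it turns into [U_] or [_U] when a member is missing or failed. A small sketch for scripting the check (check_mdstat is a name made up for this example, not a standard tool):

```shell
# Sketch: flag a degraded array by looking for "_" in the [UU]-style status
# field of mdstat-formatted input.
check_mdstat() {
    awk '/\[[U_]+\]/ { if ($0 ~ /_/) bad = 1 } END { exit bad }'
}

# Example against a captured snippet; on a live system: check_mdstat < /proc/mdstat
if check_mdstat <<'EOF'
md1 : active raid1 sda2[1] sdb2[0]
      8193024 blocks [2/2] [UU]
EOF
then echo "all arrays healthy"; else echo "DEGRADED array found"; fi
```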
You can also obtain details about each md device by typing:
Code:
# mdadm --detail /dev/md0
And you could get something like:
Code:
/dev/md0:
Version : 00.90.01
Creation Time : Mon Apr 3 13:34:22 2006
Raid Level : raid1
Array Size : 200704 (196.00 MiB 205.52 MB)
Device Size : 200704 (196.00 MiB 205.52 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Apr 4 14:28:42 2006
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 6ae01339:fae084d5:131cb285:487d3985
Events : 0.241
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
As you can see, both disks are present - /dev/sda1 and /dev/sdb1.
Anaconda seems to install the bootloader (GRUB) in the first partition of each hard disk, not in the MBR of each disk. There's no problem with that in normal operation, but if you remove one of the disks, you would have to manually teach your system how to boot and where to find the grub.conf file.
To avoid that trouble, it's a good idea to write the bootloader to the MBR of each disk.
To do that, open a terminal and, as root, enter in the grub shell:
Code:
# grub
GNU GRUB version 0.95 (640K lower / 3072K upper memory)
[ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename.]
grub>
Now type the following commands to write the bootloader (change the drive numbers according to your system):
Code:
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
(this makes the first partition (0) of the first disk (0) the root partition to work with)
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
(now we copied the bootloader from the first partition (0) of the first disk (0) to the MBR of the first disk - hd0).
Do the same with the second disk:
Code:
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit
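Incidentally, those grub-shell commands can also be scripted: GRUB legacy accepts a command list on standard input in batch mode. A sketch that just prints the sequence - pipe it into `grub --batch` on the real machine:

```shell
# Print the grub-shell command sequence used above.  On a live system:
#   sh this_script.sh | grub --batch
cat <<'EOF'
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF
```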
Now you are set.
Remove one of the disks, put the remaining one in as the first disk of your machine, boot it, and then type:
Code:
# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Mon Apr 3 13:34:22 2006
Raid Level : raid1
Array Size : 200704 (196.00 MiB 205.52 MB)
Device Size : 200704 (196.00 MiB 205.52 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Apr 4 14:28:42 2006
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 6ae01339:fae084d5:131cb285:487d3985
Events : 0.241
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
As you can see, the status shows that one of the disks has been removed.
Check with "cat /proc/mdstat" and you'll notice the array is now running degraded, on a single disk.
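For reference, a degraded mirror shows up in /proc/mdstat with an underscore in place of one of the U's, roughly like this (illustrative, not captured from a real run):

```
md0 : active raid1 sda1[1]
      200704 blocks [2/1] [U_]
```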
Now you can wipe the removed HD, deleting all of its partitions, and plug it back in.
Create the exact same partitions on it (as if you were cloning the other disk's partition scheme - e.g. "dd if=/dev/sda of=/dev/sdb bs=512 count=1", which copies the MBR, partition table included) and type the following to sync the disk again (remember to substitute your drives' paths):
Code:
# partprobe /dev/sdb <- this will inform the OS about partitioning changes
# mdadm --manage /dev/md0 --add /dev/sdb1
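The commands above cover /dev/md0 only; every array needs its partner partition re-added. A scripted sketch of the whole replacement step, with the partition-to-array mapping taken from the example mdstat listing earlier (device names are examples, and the run() wrapper prints the commands instead of executing them, since the clone is destructive):

```shell
# Scripted sketch of the disk-replacement steps (example device names).
# run() echoes each command; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; }

run 'sfdisk -d /dev/sda | sfdisk /dev/sdb'   # clone the partition table (destructive!)
run partprobe /dev/sdb                       # tell the kernel about the new table
run mdadm --manage /dev/md0 --add /dev/sdb1  # re-attach each mirror half...
run mdadm --manage /dev/md1 --add /dev/sdb2
run mdadm --manage /dev/md2 --add /dev/sdb3
run mdadm --manage /dev/md3 --add /dev/sdb5  # ...matching the earlier mdstat layout
```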
You'll notice it's re-syncing by typing:
Code:
# cat /proc/mdstat
You'll get something like:
Code:
[=>...................] recovery = 5.6% (201216/3582336)
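If you'd rather have a script wait for the rebuild than re-run the status command by hand, a small polling sketch:

```shell
# Poll until mdstat no longer reports an active recovery (checks every 10s).
# The grep fails - ending the loop - once "recovery" disappears from the
# file, or if the file is absent.
while grep -q recovery /proc/mdstat 2>/dev/null; do
    sleep 10
done
echo "resync complete"
```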
When it reaches 100%, open the grub shell again and re-write the bootloader to the disks' MBRs:
Code:
# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
If you want to check that everything is OK:
Code:
echo check > /sys/block/md<#>/md/sync_action
If you need to repair:
Code:
echo repair > /sys/block/md<#>/md/sync_action
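Once a check pass finishes, the result lands in mismatch_cnt next to sync_action; 0 means the two halves of the mirror are identical. A small sketch that reports it for every array present:

```shell
# Print the mismatch count for each md array (prints nothing if none exist).
for f in /sys/block/md*/md/mismatch_cnt; do
    [ -e "$f" ] && echo "$f: $(cat "$f")"
done
true  # keep a clean exit status when the glob matches nothing
```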
If you want to go further:
http://www.howtoforge.com/software-r...-boot-fedora-8
http://tldp.org/HOWTO/Software-RAID-HOWTO.html
Cheers!