Fedora Linux Support Community & Resources Center

#1 - 7th November 2007, 11:18 PM - Duli
How-to Software RAID 1 (mirroring) in Fedora

Software RAID 1 works just fine in Fedora, but there are some "catches" one should know about.

RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. A level 1 array operates with two or more disks that may use parallel access for high data-transfer rates when reading, but more commonly the disks operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications, but at a relatively high cost: you pay for two hard disks and get the capacity of just one. The storage capacity of a level 1 array equals the capacity of one of the mirrored hard disks (hardware RAID) or one of the mirrored partitions (software RAID).
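To make the cost trade-off concrete, here is a quick sketch of the usable capacity of a two-disk mirror (the 500 GB member size is just a hypothetical example):

```shell
# RAID 1 usable capacity equals one member's size, no matter how many
# mirrors you add (hypothetical sizes, in GB):
member=500
members=2
raw=$((member * members))      # total raw disk space you paid for
usable=$member                 # a mirror stores identical copies
echo "raw=${raw}GB usable=${usable}GB efficiency=$((100 * usable / raw))%"
```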

To set up the RAID scheme, create the proper Software RAID partitions during installation (in anaconda) and then create MD devices on top of them.

Once you've booted your system, type, as root:
Code:
# cat /proc/mdstat
to get the status of the RAID devices.

You should get something like this:

Code:
Personalities : [raid1]
md1 : active raid1 sda2[1] sdb2[0]
      8193024 blocks [2/2] [UU]

md2 : active raid1 sdb3[0] sda3[1]
      522048 blocks [2/2] [UU]

md3 : active raid1 sdb5[0] sda5[1]
      69232000 blocks [2/2] [UU]

md0 : active raid1 sdb1[0] sda1[1]
      200704 blocks [2/2] [UU]
unused devices: <none>
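A quick way to spot trouble in that output is to look for a `_` in the `[UU]` status field. A minimal sketch, run here against a sample of the output above (on a real system you would read /proc/mdstat itself):

```shell
# "[UU]" means both members are up; "[U_]" means one member is missing.
# Sample lines (in practice: mdstat=$(cat /proc/mdstat)):
mdstat='md1 : active raid1 sda2[1] sdb2[0]
      8193024 blocks [2/2] [UU]
md0 : active raid1 sda1[0]
      200704 blocks [2/1] [U_]'
if echo "$mdstat" | grep -q '\[.*_.*\]'; then
    echo "WARNING: at least one array is degraded"
else
    echo "all arrays healthy"
fi
```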
You can also obtain details about each md device by typing:
Code:
# mdadm --detail /dev/md0
And you could get something like:
Code:
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon Apr  3 13:34:22 2006
     Raid Level : raid1
     Array Size : 200704 (196.00 MiB 205.52 MB)
    Device Size : 200704 (196.00 MiB 205.52 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Apr  4 14:28:42 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 6ae01339:fae084d5:131cb285:487d3985
         Events : 0.241

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8        1        1      active sync   /dev/sda1
As you can see, both disks are present - /dev/sda1 and /dev/sdb1.

Anaconda seems to install the bootloader (GRUB) in the first partition of each hard disk, not in the MBR of each disk. That is fine by itself, but if you remove one of the disks, you would have to manually teach your system how to boot and where to find the grub.conf file.

To avoid that trouble, it's a good idea to write the bootloader to the MBR of each disk.

To do that, open a terminal and, as root, enter the grub shell:

Code:
# grub

GNU GRUB version 0.95 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename.]

grub>
Now type the following commands to write the bootloader (change the drive numbers according to your system):

Code:
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
(this makes the first partition (0) of the first disk (0) the root partition to work with)

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
(now the bootloader from the first partition (0) of the first disk (0) has been copied to the MBR of the first disk - hd0).
Do the same with the second disk:

Code:
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

grub> quit
Now you are set.
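The interactive session above can also be scripted, which is handy if you reinstall often. A sketch using grub's batch mode (the file path /tmp/grub-mbr.cmds is just an example name):

```shell
# Write the same commands the interactive session used to a file...
cat <<'EOF' > /tmp/grub-mbr.cmds
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF
# ...then, as root, feed it to the GRUB Legacy shell non-interactively:
#   grub --batch < /tmp/grub-mbr.cmds
```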

To test the redundancy, remove one of the disks, make the remaining disk the first disk of your machine, and then type:

Code:
# mdadm --detail /dev/md0

/dev/md0:
        Version : 00.90.01
  Creation Time : Mon Apr  3 13:34:22 2006
     Raid Level : raid1
     Array Size : 200704 (196.00 MiB 205.52 MB)
    Device Size : 200704 (196.00 MiB 205.52 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Apr  4 14:28:42 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 6ae01339:fae084d5:131cb285:487d3985
         Events : 0.241

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync  /dev/sda1
       1       8        1        1      active sync   removed
As you can see, the status shows that one of the disks has been removed.

Check with "cat /proc/mdstat" and you'll notice the array is now running degraded, with only one member left.

Now you can wipe the removed HD, deleting all of its partitions, and plug it back in again.

Create the exact same partitions on it (as if you were cloning the other disk's partition scheme - e.g. "dd if=/dev/sda of=/dev/sdb bs=512 count=1", which copies the MBR, including the partition table) and type the following to sync the disks again (remember to substitute your drive paths):

Code:
# partprobe /dev/sdb      # inform the OS about the partitioning changes

# mdadm --manage /dev/md0 --add /dev/sdb1
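As an alternative to the raw dd mentioned above for the partition-cloning step, sfdisk can dump and restore the partition table by name. A sketch, demonstrated here only on a sample dump line since the real command needs the actual disks:

```shell
# On a real system (hypothetical device names, surviving disk first):
#   sfdisk -d /dev/sda | sfdisk /dev/sdb
# sfdisk -d emits one line per partition; Id=fd marks Linux RAID autodetect:
dump='/dev/sda1 : start=       63, size=   401562, Id=fd, bootable'
echo "$dump" | grep -q 'Id=fd' && echo "partition type: Linux RAID autodetect"
```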
You can watch it re-sync by typing:

Code:
# cat /proc/mdstat
You'll get something like:
Code:
[=>...................] recovery = 5.6% (201216/3582336)
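If you'd rather not eyeball the progress bar, the percentage can be pulled out of /proc/mdstat. A sketch against the sample line above (on a live system replace the hard-coded string with the real file contents):

```shell
# Sample recovery line (in practice: line=$(grep recovery /proc/mdstat)):
line='[=>...................]  recovery =  5.6% (201216/3582336)'
pct=$(echo "$line" | grep -o '[0-9.]*%')
echo "resync progress: $pct"
# For a continuously refreshing view:  watch -n 5 cat /proc/mdstat
```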
When it achieves 100%, open the grub shell again and re-copy the bootloader to the disks' MBR:

Code:
# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

If you want to check if everything is ok:

Code:
echo check > /sys/block/md<#>/md/sync_action
If you need to repair:

Code:
echo repair > /sys/block/md<#>/md/sync_action
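After a "check" completes, the kernel records how many sectors disagreed between the mirrors in mismatch_cnt; 0 means they are consistent. A sketch of how you might act on it (the value here is hypothetical; on a real system read it from /sys/block/md0/md/mismatch_cnt, with md0 replaced by your device):

```shell
# mismatch=$(cat /sys/block/md0/md/mismatch_cnt)    # real system
mismatch=0                                          # hypothetical value
if [ "$mismatch" -eq 0 ]; then
    echo "mirrors consistent"
else
    echo "found $mismatch mismatched sectors; consider a repair pass"
fi
```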
If you want to go further:

http://www.howtoforge.com/software-r...-boot-fedora-8

http://tldp.org/HOWTO/Software-RAID-HOWTO.html


Cheers!

Last edited by Duli; 19th April 2010 at 12:20 AM. Reason: typo fixed
#2 - 8th November 2007, 11:30 AM - stevea
Very nice to see the commands, but ...
Isn't the content of /etc/mdadm.conf critical?
Disk Druid will allow you to create the RAID partitions and build the RAID 1 at install time as well.
#3 - 8th November 2007, 11:51 AM - Duli
stevea: I've just put together a quick how-to, to help new users. Please feel free to add information and contribute! Thanks!
#4 - 12th December 2007, 01:02 PM - Duli
Just found this really nice how-to from Fedora Daily Package:

Quote:
When you boot from a Fedora installation disc and enter rescue mode, you have the option of having the filesystems from your Fedora installation mounted at /mnt/sysimage. If you do not select this option, or if it fails, RAID devices will not be configured for use (as is also the case with LVM, as discussed yesterday).

On a Fedora system, RAID arrays are managed by the mdadm utility. This program expects the RAID configuration to be available at /etc/mdadm.conf. In order to use mdadm without this configuration file, it is necessary to create a dummy configuration file. This is a multi-step process:

1. Create a dummy mdadm configuration file containing only the partitions to be scanned for possible RAID elements:

sh-3.2# echo "DEVICE /dev/[hs]d?[0-9]" >/tmp/mdadm.conf

2. Use mdadm's scanning capability to identify any RAID arrays and array members and append that information to the dummy configuration file:

sh-3.2# mdadm --examine --scan --config=/tmp/mdadm.conf >>/tmp/mdadm.conf

3. Start the detected arrays:

sh-3.2# mdadm --assemble --scan --config=/tmp/mdadm.conf
mdadm: /dev/md0 has been started with 2 drives (out of 3).

4. You can now perform any RAID recovery tasks that are needed -- adding (or re-adding) elements to arrays, for example. To view the status of your arrays, cat /proc/mdstat.

If you have layered LVM on top of RAID, manually scan for and activate your volume groups as discussed yesterday.
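The three rescue-mode steps quoted above can be collected into one small script to keep handy (the path /tmp/raid-rescue.sh is just an example name; run the script as root from the rescue shell):

```shell
cat <<'EOF' > /tmp/raid-rescue.sh
#!/bin/sh
# 1. dummy config listing the partitions to scan for RAID superblocks
echo "DEVICE /dev/[hs]d?[0-9]" > /tmp/mdadm.conf
# 2. append whatever arrays mdadm can identify on those partitions
mdadm --examine --scan --config=/tmp/mdadm.conf >> /tmp/mdadm.conf
# 3. start the detected arrays
mdadm --assemble --scan --config=/tmp/mdadm.conf
EOF
chmod +x /tmp/raid-rescue.sh
```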