
Trouble booting a degraded RAID-1 array



aluchko
10th September 2006, 04:24 AM
I should mention that I'm reposting this thread (http://www.linuxquestions.org/questions/showthread.php?p=2399215#post2399215) from LinuxQuestions and am hoping a more OS-specific forum will help.

I've been attempting to migrate my running server to RAID-1 using another hard drive identical to the current one. The general approach I've used is to partition the new drive to mirror the old one, set the new drive up as a degraded RAID-1 array using mdadm, rsync the contents of the old drive onto it, then add the old drive to the new RAID array once everything is working.
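In case it helps, the degraded-array part of that approach would look roughly like this (just a sketch; the partition and md numbers match the fdisk output below, and "missing" is what tells mdadm to start the mirror with only one member for now):

sfdisk -d /dev/hda | sfdisk /dev/hdb      # clone the partition table to the new disk
                                          # (then switch hdb's partition types to fd with fdisk)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 missing   # /boot mirror, one member for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb2 missing   # the future LVM physical volume
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt/newboot
rsync -a /boot/ /mnt/newboot/             # copy the old /boot onto the raid half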

fdisk -l reports

Disk /dev/hda: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 fd Linux raid autodetect
/dev/hda2 14 30515 245007315 8e Linux LVM

Disk /dev/hdb: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/hdb1 * 1 13 104391 fd Linux raid autodetect
/dev/hdb2 14 30515 245007315 fd Linux raid autodetect

The old disk is /dev/hda and, as you can see, I've already migrated the boot partition onto a RAID device, /dev/md1 (though I still boot off hda1 rather than md1).
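Booting off md1 rather than hda1 mostly comes down to installing grub on the second disk as well and pointing /boot at the raid device; a rough grub-legacy sketch (device names assumed from the fdisk output above):

grub> device (hd1) /dev/hdb
grub> root (hd1,0)      # hdb1, the raid member that carries /boot
grub> setup (hd1)       # write grub to hdb's MBR too
grub> quit
# then mount /dev/md1 on /boot in /etc/fstab instead of /dev/hda1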

On /dev/md0 I've created a volume group, /dev/lvm-raid, with two logical volumes: lvm0 (the filesystem) and lvm1 (swap; yeah, raid-0 would be better for this).
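The volume group itself would have been built along the usual lines, something like this (a sketch; the sizes are just the ones lvscan reports below):

pvcreate /dev/md0                         # turn the raid device into an LVM physical volume
vgcreate lvm-raid /dev/md0
lvcreate -L 232.12G -n lvm0 lvm-raid      # future root filesystem
lvcreate -L 1.50G -n lvm1 lvm-raid        # swap
mkfs.ext3 /dev/lvm-raid/lvm0
mkswap /dev/lvm-raid/lvm1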


# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [232.12 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.50 GB] inherit
ACTIVE '/dev/lvm-raid/lvm0' [232.12 GB] inherit
ACTIVE '/dev/lvm-raid/lvm1' [1.50 GB] inherit
# pvdisplay /dev/md0
--- Physical volume ---
PV Name /dev/md0
VG Name lvm-raid
PV Size 233.66 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 59816
Free PE 9
Allocated PE 59807
PV UUID 9e8Ixn-xC4P-27qH-YrSP-KO8W-lfli-doMB34

But whenever I try to boot lvm0 using the following lines in grub

title Fedora Core raid (2.6.17-1.2174_FC5)
root (hd0,0)
kernel /vmlinuz-2.6.17-1.2174_FC5 ro root=/dev/lvm-raid/lvm0 rhgb quiet
initrd /initrd-2.6.17-1.2174_FC5.img

I get the following error during boot

Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
2 logical volume(s) in volume group "VolGroup00" now active
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
It seems to me that it's not detecting the lvm-raid volume group at startup.

I've tried rebuilding the boot image with

mkinitrd -v --with=raid1 /boot/initrd-2.6.17-1.2174_FC5.img 2.6.17-1.2174_FC5
as well as setting /dev/lvm-raid/lvm0 as / in /etc/fstab, but it doesn't seem to make any difference.
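One thing worth checking is whether that initrd actually ended up with the raid1/dm bits and a vgscan for lvm-raid; on FC5 the image is (as far as I know) a gzipped cpio archive, so something like this should show it:

mkdir /tmp/initrd-check && cd /tmp/initrd-check
zcat /boot/initrd-2.6.17-1.2174_FC5.img | cpio -id            # unpack the image
grep -E 'raid1|dm-mod|lvm|vgscan|vgchange' init               # does init load raid1 and activate lvm-raid?
ls lib/                                                       # raid1.ko and dm-mod.ko should be in here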

Also, when I boot with a rescue CD, /dev/lvm-raid doesn't exist (though /dev/VolGroup00 does), and I can mount /dev/md0.
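From the rescue environment you can usually bring the degraded array and its volume group up by hand, which at least tells you whether the metadata is sane (a sketch, using the device names above):

mdadm --assemble --run /dev/md0 /dev/hdb2     # start the mirror with just its one member
cat /proc/mdstat                              # md0 should show up as active, degraded
vgscan                                        # lvm-raid should now be found alongside VolGroup00
vgchange -ay lvm-raid
mount /dev/lvm-raid/lvm0 /mnt                 # the copied root filesystem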

Does anyone know how I can successfully boot /dev/lvm-raid/lvm0 ?

thanks

Ogilthorpe
25th August 2007, 01:57 AM
Ok, I have the exact same setup, so I'll share how I fixed the bootup problem and what my problem is now ;)

To boot, you need to rebuild your initrd with the following changes:

open the mkinitrd script and put the name of your volume group at vgscan="lvm-raid"

go into /etc/lvm/lvm.conf, look for the line about md device detection (md_component_detection) and set it to 0 (see the snippet after these steps)

do a mkinitrd -v -f --preload=dm-mod --with=raid1 /boot/initrd-VERSION.img VERSION

Now, the system should boot correctly
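For reference, the lvm.conf setting I mean lives in the devices section and looks roughly like this (the surrounding comments vary between versions):

# /etc/lvm/lvm.conf, inside the devices { } section
md_component_detection = 0    # 0 = don't skip partitions just because they carry an md superblock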

My problem is that when I boot into the new degraded lvm-raid, a cat /proc/mdstat shows only md0... my md1 (the lvm-raid one) is not there... I tried mdadm -A --force /dev/md1 /dev/sdb2 but it won't work; it says that sdb2 doesn't have a superblock, even though mdadm --examine /dev/sdb2 tells me that everything is ok.

If I boot to the old disk, I see md0 and md1.

Can someone help me here? It's the last step I need before adding the first disk so the raid can sync.
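One thing that sometimes sorts this out is pinning both arrays in /etc/mdadm.conf by UUID and rebuilding the initrd, so assembly doesn't depend on autodetection (a sketch; the ARRAY lines come straight from mdadm):

echo 'DEVICE partitions' > /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf       # appends ARRAY lines with the UUIDs of md0 and md1
cat /etc/mdadm.conf                             # sanity-check before rebuilding
mkinitrd -f -v --preload=raid1 --preload=dm-mod /boot/initrd-VERSION.img VERSION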

Thanks

Ogilthorpe
26th August 2007, 05:29 PM
ok, just to update you guys on what I have found...

if I check the syslog, I see that at bootup mdadm tries to create md1 but fails because of a bd_claim error, meaning that the partition is already claimed/in use...
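A bd_claim failure just means something else already owns /dev/sdb2 at the moment mdadm tries to start md1; a few things worth checking from the running system (a sketch):

cat /proc/mdstat              # is sdb2 already sitting inside some other md device?
pvs -o pv_name,vg_name        # has LVM grabbed /dev/sdb2 directly instead of /dev/md1?
                              # (possible once md component detection is turned off in lvm.conf)
dmsetup deps                  # which block devices the active dm/LVM volumes are built on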

I created my initrd with mkinitrd -f -v --preload=raid1 --preload=dm-mod /boot/initrd-VERSION.img VERSION

my grub is like this

root (hd0,0)
kernel /vmlinuz-VERSION ro root=/dev/lvm-raid/root md=1,/dev/sda2,/dev/sdb2 rhgb quiet
initrd /initrd-VERSION.img

What am I doing wrong ????????????