How to migrate a Fedora Core install from a non-LVM disk to a new disk with LVM support enabled. Combines a hard disk upgrade with LVM installation.
The procedures here are not for the faint of heart! Do this at your own risk. You could damage or destroy data if you mis-type a command. These are my experiences, YMMV.
LVM HOWTO http://www.tldp.org/HOWTO/LVM-HOWTO/
Void Main has also been answering a lot of questions regarding LVM, see http://fedoraforum.org/forum/showthread.php?t=32695
GRUB manual: http://www.gnu.org/software/grub/manual/
I set up a fedora core server some time ago, but recently I've been running low on disk space. What I wanted to do was install a new hard drive. However, I figured this time around it might be a good idea to set up LVM, since then in the future if I run low on disk space (or otherwise need to resize a partition) the process would be easier.
The starting state is:
Fedora Core 2 (although 3 should work, and 1 may work). The new hard drive will eventually replace the old hard drive, after the existing data is copied over to it.
Things you should have before you start:
The RPM for the kernel you are using, and any documentation you may need during the process. A rescue CD can be quite handy. You should also back up any important data, as we will be playing with partition tables and master boot records!
Install new hard drive
Shutdown the server machine and install the new hard drive. Leave the existing hard drive where it is. (ie, at this point, don't switch slave/master on the IDE chain) Boot the existing Fedora Core install. For the purposes of the rest of this document, I'm going to assume that your existing drive is hdx, and the new drive is hdy.
Partition new hard drive
Once the system comes up, partition hdy. I used fdisk.
Make sure you get the right drive here! If you mess with the partition table on your working disk (hdx), you can cause serious problems and likely data loss! If you feel unsure, do this step with the current hard drive disconnected and boot from a rescue CD instead.
The setup for the partitions should be:
primary partition 1 (/boot) - this is a standard linux partition (id 83). I made it 100MB, you can pick whatever.
primary partition 2 (for LVM) - the rest of the disk space, regardless of how you want your final partitions laid out. You can change the type to 8e (Linux LVM) if you want, although I think that is only required if you are using LVM1.
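For reference, a minimal fdisk session for the layout above might look like the following (assuming the new drive is hdy; the /boot size is just an example):

# fdisk /dev/hdy
n    (new partition: primary, number 1, size +100M, for /boot)
n    (new partition: primary, number 2, accept the defaults to use the rest of the disk)
t    (optionally change partition 2's type to 8e, Linux LVM)
p    (print the table and double-check you are on the right drive!)
w    (write the table and exit)

Nothing is written to disk until the 'w' command, so you can bail out with 'q' if something looks wrong.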
Format boot partition

Format /boot with whatever type of filesystem you want. I would recommend ext2/3. The simplest way to do this would be:
# mkfs.ext3 /dev/hdy1
# e2label /dev/hdy1 (some unique label here, I used boot2)
Again, make extra sure you get this right! You wouldn't want to format the wrong partition...
Set up LVM

Now we are going to set up the new disk with LVM. The LVM HOWTO is useful for this section, although in some places it is written for LVM1 instead of LVM2. These instructions should work for both.
# pvcreate /dev/hdy2
# vgcreate vol_grp_1 /dev/hdy2
# vgchange -a y vol_grp_1
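Before going further, it's worth a quick sanity check that the physical volume and volume group came out as expected (the exact output format varies between LVM versions):

# pvdisplay /dev/hdy2
# vgdisplay vol_grp_1

pvdisplay should show /dev/hdy2 belonging to vol_grp_1, and vgdisplay should show the volume group with roughly the full size of the partition free.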
Now you have created the basic structure for the LVM partition. Now we add logical volumes to it.
All I wanted was swap space and a root partition. If you want to add other partitions, modify these instructions accordingly. The name of the volume group needs to match whatever you made it in the "vgcreate" line above, but the name of the other volumes (what follows -n in lvcreate) is up to you.
First, a 1024MB swap:
# lvcreate -L 1024 -n log_vol_swap vol_grp_1
(repeat as needed for other partitions)
Then, a root volume filling the rest of the disk:
# vgdisplay vol_grp_1 | grep "Total PE"
A single line should be displayed: "Total PE" and then a number. Use this number in the next command:
# lvcreate -l (Whatever Total PE was) -n log_vol_root vol_grp_1
Note the lowercase 'l' here, as opposed to the uppercase 'L' in the first lvcreate command. -L means the number is in MB, while -l means the number is in extents.
The logical volumes are created as: /dev/(volume group name)/(logical volume name)
Format other lvm partitions
Now format the other partitions you created. For my setup:
# mkswap /dev/vol_grp_1/log_vol_swap
# mkfs.ext3 /dev/vol_grp_1/log_vol_root
Copy the files

Now to copy the files. First, go to single user mode. We go to single user mode so that the files aren't changing while we copy. Note that this shuts down most everything on the system, so make sure you have everything on hand before you do this (ie, the kernel RPM, any documentation you need, etc).
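On Fedora Core, dropping to single user mode is just (run as root):

# init 1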
Now, mount the new partitions
# mkdir /mnt/new_boot
# mount /dev/hdy1 /mnt/new_boot
# mkdir /mnt/new_root
# mount /dev/vol_grp_1/log_vol_root /mnt/new_root
(... as needed)
Copy over your existing files. You can use cp or rsync for this, it is up to you...
# cp -avx / /mnt/new_root
-- or --
# rsync -avx / /mnt/new_root
You can omit the 'v' if you don't want to see the file names as they are copied.
You will need to do the same for /boot, copying into /mnt/new_boot. If you had multiple partitions, the command above only copies the files on the root partition; you'll have to repeat it for every partition you want to carry over.
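For the boot partition, the same style of copy works (note the trailing '.' and '/' — without them you would end up with /mnt/new_boot/boot instead of the files directly in /mnt/new_boot):

# cp -avx /boot/. /mnt/new_boot
-- or --
# rsync -avx /boot/ /mnt/new_boot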
Chroot to new drive
Now, we need to work on the new drive.
# chroot /mnt/new_root
# mount /dev/hdy1 /boot
Now operations will affect the new setup instead of the old setup.
Now to edit /etc/fstab. You should see several lines, one for each partition on the old system. The important ones are those for /, /boot, and swap. (You may also have to change lines for other partitions you had before / have now.)

My root directory line looked like:

LABEL=/ / ext3 defaults 1 1

Which I changed to:

/dev/vol_grp_1/log_vol_root / ext3 defaults 1 1

The boot line changed from:

LABEL=/boot /boot ext3 defaults 1 2

to:

LABEL=boot2 /boot ext3 defaults 1 2

The LABEL comes from whatever you used for the e2label command above. You could also use /dev/hdy1 instead of a LABEL, but that would cause problems when we remove the old drive and make the new drive hdx.

Finally, the swap line changed from:

/dev/hdx# swap swap defaults 0 0

to:

/dev/vol_grp_1/log_vol_swap swap swap defaults 0 0

Updating the boot config
At this point, the system is almost ready to go. However, if you weren't using LVM before, your initrd probably doesn't load the LVM modules or activate the volume group before it attempts to mount root, and it probably also has the wrong device information. You could edit the initrd by hand, but that is a difficult task. LVM1 shipped with a script to generate an initrd, but it isn't finished for LVM2 yet (at least not in the version installed with FC2). For my setup, mkinitrd did not properly generate the initrd: it was missing the 'lvm' binary and the calls to activate the volumes. The scripts that come with the kernel RPMs, however, are designed to properly generate the initrd based on the fstab, so the easiest fix is to reinstall the kernel RPM:
# rpm -ivh --force kernel-version.rpm
This will force an update to your grub.conf, and the initrd image.
Edit /boot/grub/grub.conf. There should be a set of entries for each kernel you have installed; look for the entry for the kernel you just re-installed. The first line gives the title that shows up in the grub menu. The second line gives the location of the boot partition, probably something like "root (hd#,0)". If you plan to move the new drive (ie, put it as the primary master where your old hard drive used to be), you will need to change this line. Grub counts drives from zero: (hd0,0) is the first partition on the primary master drive, and (hd1,0) is the first partition on the primary slave drive. Change this line to match where the drive will be once you are finished.
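For illustration, a finished entry might look something like this (the kernel version here is only an example; yours will differ, and note that the kernel and initrd paths are relative to the /boot partition since it is separate):

title Fedora Core (2.6.x)
        root (hd0,0)
        kernel /vmlinuz-2.6.x ro root=/dev/vol_grp_1/log_vol_root
        initrd /initrd-2.6.x.img

The important parts are that root (hd0,0) points at the new drive's /boot partition and that root= on the kernel line names the logical volume.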
Now, install grub on the new drive.
# grub-install /dev/hdy
This will make the drive bootable.
The new drive should be ready. Shut down and remove the old drive, and put the new drive in its place. I would leave the old drive in the case but unhooked until you boot the new drive successfully. If you have a rescue CD handy, this may not be needed.
With any luck, the new drive should boot! Congratulations, you've set up a new drive with LVM support. Once you feel confident in the new drive's setup, you may want to add the old drive to the logical volume and exploit its space. See the links in the references section for more details on this.
If you only have one drive, it may be possible to clear some space for a LVM partition and then follow these instructions to migrate to it. However, this is a bit more complicated.
If you ran into trouble, there are a number of places you could have hit a snag. If you get nothing after the BIOS, grub is probably misinstalled or needs work. If grub comes up but there isn't a menu, the grub.conf file probably has errors. If the kernel panics because it can't find init, your initrd is likely wrong: check the fstab and reinstall the kernel. I've also seen fsck have issues because of a mistake in fstab. In any case, either boot from the rescue CD or swap the drives back to where they were and start debugging.