1. Verify the identity of your new disk. Since you seem to have "dm" devices without valid partition tables, this
should be relatively straightforward. Use "cat /proc/partitions"; this command lists devices and their
known partitions. In my case, all devices have partition tables:
$ cat /proc/partitions
major minor #blocks name
8 0 244198584 sda
8 1 204800 sda1
8 2 243991201 sda2
8 16 244198584 sdb
8 17 244196001 sdb1
253 0 233816064 dm-0
253 1 10174464 dm-1
8 32 58633344 sdc
8 33 58629186 sdc1
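The same check can be done from any terminal. As a small sketch (lsblk is part of util-linux and is assumed to be installed, which it is on most modern distributions):

```shell
# List every block device and its partitions; a brand-new disk shows up
# with no partition lines beneath it.
cat /proc/partitions

# lsblk shows the same information as a tree, which makes the disk
# without partitions easier to spot.
lsblk
```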
The disk device names all start with sd. Originally it meant SCSI Disk, but since SCSI, ATAPI, SATA, and even USB
disks have been unified (ATAPI, SATA, and USB disks all use a subset of the SCSI protocol), they all use the same base name.
The letter following sd (as in sda, sdb, and sdc) designates the order in which the disks were identified by Linux
(as presented by the BIOS configuration). This means it is possible for disks to change their identity. For instance, if
I remove the disk sdb in the list, then on the next reboot Linux will identify the disk listed as sdc as sdb. This renaming
can be handled by assigning a universal ID (UUID) to the disk for use in /etc/fstab. Using the UUID allows for hot-swapping,
since it forces the mount to search for the correct disk by locating the UUID on the disk to be mounted. As I don't swap
disks, this is optional.
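For reference, the UUID can be read with "blkid" once the filesystem exists, and the fstab entry then names the UUID instead of the device. The UUID and mount point below are made-up placeholders:

```
# As root, after the filesystem has been created:
#   blkid /dev/sdc1
#   -> /dev/sdc1: UUID="..." TYPE="ext3"
# The /etc/fstab line then uses the UUID instead of the device name,
# for example for a disk mounted at /data:
UUID=<uuid-reported-by-blkid>  /data  ext3  defaults  0 0
```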
The dm-0 and dm-1 entries are logical volumes used by my Fedora installation. dm-0 is a duplicate name for
the root filesystem (carved out of sda2), and dm-1 is swap. I don't use these names myself, but the device mapper uses
them to identify the root and swap volumes during boot.
Uninitialized disks should show up as sdb or sdc. In the list above, I have one primary partition on each disk, so
there are added entries for sdb1 and sdc1.
These partitions are created by either cfdisk or fdisk.
cfdisk is the utility recommended by the man pages (man fdisk, man cfdisk). fdisk is more "forgiving" of partition errors;
cfdisk is stricter and can prevent errors. What I'm going to present is creating a single partition occupying the
entire drive (the simple case), intended for data storage.
Run the program as "cfdisk /dev/sd<letter>" in a terminal window, logged in as root. The <letter> corresponds
to your identified disk. This will bring up a display like the following. Note - I already have a partition on the disk,
so what you see will be a little different - there should be options shown for creating a partition:
cfdisk (util-linux-ng 2.14.2)
Disk Drive: /dev/sdc
Size: 60040544256 bytes, 60.0 GB
Heads: 255 Sectors per Track: 63 Cylinders: 7299
Name Flags Part Type FS Type [Label] Size (MB)
sdc1 Primary Linux ext2 60036.32
[ Bootable ] [ Delete ] [ Help ] [ Maximize ] [ Print ]
[ Quit ] [ Type ] [ Units ] [ Write ]
Toggle bootable flag of the current partition
Each of the options "[Bootable] [Delete]..." will have some explanatory text at the bottom. You should have an
option for "New" to create a new partition. Since I haven't done this in quite a while, you may be requested
to create a new partition before this window even shows; cfdisk can identify an uninitialized partition table.
The partition should be identified as a "Linux" partition type. The type is used to help isolate partitions belonging to
other operating systems and to pick the appropriate tools/filesystem types when trying to mount partitions.
Once created, you have to "write" the partition table to the disk. When finished, quit the cfdisk program.
List the partitions again (cat /proc/partitions); you should see the new partition present. cfdisk may ask that
you reboot the system. You can, but there should be a utility (/sbin/partx) that can tell the kernel to load
the partition table (the command should be "/sbin/partx -a /dev/sd<letter>"). Rebooting may be simpler.
Once rebooted, list the partitions again (cat /proc/partitions). You really should see the new partition, named like sd<letter>1.
The partition table has now been initialized. The next step is initializing the filesystem.
I usually just accept the system defaults and use the command "mkfs /dev/sd<letter>1". This will most likely
create an ext2-based filesystem (it did for me, but the defaults may have changed). One nice thing about ext2
is that it is compatible with ext3 (a journaling filesystem), and can be changed even after use.
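The ext2-to-ext3 change mentioned above is done by adding a journal with tune2fs (from e2fsprogs). A sketch against a scratch image file, so nothing real is at risk; on an actual disk the argument would be the partition, e.g. /dev/sd<letter>1:

```shell
# Scratch image file standing in for a partition.
dd if=/dev/zero of=fs.img bs=1M count=16

# -F lets mkfs.ext2 work on a regular file instead of a block device.
mkfs.ext2 -F fs.img

# Adding a journal is what upgrades an ext2 filesystem to ext3.
tune2fs -j fs.img

# The feature list should now include 'has_journal'.
tune2fs -l fs.img | grep 'features'
```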
The "mkfs" command will list a number of blocks, take a while to run, and then finish up. Most of the
information can be ignored; the blocks being presented are redundancy information about alternate superblocks and
such. What it spends most of its time doing is creating the inodes for the filesystem. When it finishes, the disk
should be usable. A quick test is the command "mount /dev/sd<letter>1 /mnt", which should mount the disk
on /mnt. A "df" should show the new mount, the size of the disk, the number of inodes...
After testing, unmount the disk with "umount /mnt" (/mnt is shared with other uses, such as removable media).
If you want the disk mounted at boot you can continue on:
Decide where in the directory hierarchy you want the disk. Let's assume you want it as "/data".
In a terminal window (logged in as root), run "mkdir /data".
This will create an empty directory. You can manually mount the disk if you want ("mount /dev/sd<letter>1 /data")
and decide whether this is reasonable. If not, unmount it and delete the directory (or rename it).
Edit the /etc/fstab file to add the line:
/dev/sd<letter>1 /data ext3 defaults 0 0
The first field is the partition to mount. The second field is the directory where the partition will show up.
The third is the filesystem type. ext3 is the journaling update to ext2 that I assume was created by the "mkfs"
command when the filesystem was initialized; it can also be "ext2". The fourth field, "defaults", holds the mount
options for the filesystem; normal is read/write. If a filesystem is to be read-only, replace "defaults" with "ro".
The two fields with "0" control backups and filesystem checks. The first is read by the "dump" backup utility to
decide whether the filesystem gets backed up (depending on how you do backups, 0 is fine). The second is the
fsck pass number, which determines the order filesystems are checked during boot: 1 for the root filesystem,
2 for other filesystems you want checked, and 0 to skip the check.
0 in this position is reasonable for /data; the root filesystem is handled first anyway (it has to be, to even find
the startup scripts).
Note that the mount order at boot follows the order of the lines in /etc/fstab. If you have a number of disks
(/a /b /c) and you decide that you want a new disk (z) mounted under a (/a/z), then you can't really mount z
until /a is mounted, so the /a entry must appear before the /a/z entry.
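As a sketch of that nested case (the device names here are hypothetical), the /etc/fstab lines would be arranged with the parent mount first:

```
# fstab lines are processed top to bottom at boot, so /a must be listed
# (and therefore mounted) before anything nested under it.
/dev/sdb1  /a    ext3  defaults  0 0
/dev/sdc1  /a/z  ext3  defaults  0 0
```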
Hope this helps. It isn't complete, especially with respect to error recovery, but it can get you started.
Once you are comfortable with having identified the disk, you can play with the partition table (create more, delete
some, resize...). Try out logical volumes... You can even play with a RAID configuration by putting multiple
partitions on one disk, then RAIDing them together. (This is really only useful for learning - if the disk fails you still
lose everything, and RAID is intended for automatic recovery...)