Software RAID setup

I am in the process of setting up a box for someone and I thought I’d document the software RAID portion of it a little bit, in case it is helpful to anyone else.

I’m a bit of a command line junkie, so it should come as no surprise that I prefer to set up my software RAID sets using the command line tools available. The system in question this time is a newly installed CentOS 5 box. In this article I’ll concentrate on creating a mirrored (RAID 1) set.

The system has a full complement of drives: two optical drives on the secondary IDE channel (master and slave) and one 80GB drive as primary master. There are also two more 80GB drives to deal with, so I installed a simple Promise IDE controller and connected both drives to it with a single cable. While it is technically better to give each drive its own cable and channel, two mirrored IDE drives like these will not outrun the controller’s bandwidth, so sharing a channel is fine here.

To get started, I had to figure out how the drives were named or addressed by the system. I do this by issuing the dmesg command to list all “recent” messages logged by the kernel and piping the output to grep to search for entries that begin with hd.

The first few lines are the ones I’m interested in. Here is a listing of what I got:

[root@host ~]# dmesg | grep ^hd
hda: WDC WD800JB-00FMA0, ATA DISK drive
hdc: COMPAQ DVD-ROM LTD122, ATAPI CD/DVD-ROM drive
hdd: LG CD-RW CED-8120B, ATAPI CD/DVD-ROM drive
hde: WDC WD800BB-00DKA0, ATA DISK drive
hdf: WDC WD800JB-00FMA0, ATA DISK drive

I know that during the install the drive referenced as hda was used for installing CentOS. This leaves hde and hdf as my obvious choices.
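As a quick sanity check, you can filter that same dmesg output down to just the ATA disks, skipping the ATAPI optical drives. A small sketch, using the lines captured above as sample data so it can be shown without re-running dmesg:

```shell
# Sample dmesg lines from above, saved in a variable purely for demonstration.
dmesg_sample='hda: WDC WD800JB-00FMA0, ATA DISK drive
hdc: COMPAQ DVD-ROM LTD122, ATAPI CD/DVD-ROM drive
hdd: LG CD-RW CED-8120B, ATAPI CD/DVD-ROM drive
hde: WDC WD800BB-00DKA0, ATA DISK drive
hdf: WDC WD800JB-00FMA0, ATA DISK drive'

# Keep only the ATA disks and print just the device names.
# Prints: hda, hde, hdf (one per line).
printf '%s\n' "$dmesg_sample" | grep 'ATA DISK' | cut -d: -f1
```

On the live system you would pipe dmesg itself through the same filter: dmesg | grep 'ATA DISK'.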

Now that I know how the drives are referenced, I can get to work creating my RAID set. You manage software RAID on Linux using the mdadm tool, which manages Linux MD (Multiple Devices) devices, a.k.a. Linux software RAID. You can learn more about mdadm by typing ‘man mdadm’ at the command prompt. The following command tells mdadm to create a new mirrored (RAID 1) set.

[root@host ~]# mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/hde /dev/hdf

Since I have personally used these drives for software RAID in the past, I got the following warning.

mdadm: /dev/hde appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Jan 21 09:42:39 2008
mdadm: /dev/hdf appears to be part of a raid array:
    level=raid1 devices=2 ctime=Mon Jan 21 09:42:39 2008
Continue creating array?

I can safely type ‘yes’ and press enter.
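If you already know the old metadata is stale, you can avoid the prompt entirely. A hedged sketch of two alternatives (both destroy the old array metadata, so double-check the device names first):

```shell
# Option 1: wipe the leftover RAID superblocks so mdadm has nothing to ask about.
# WARNING: this erases the old array metadata on these disks.
mdadm --zero-superblock /dev/hde /dev/hdf

# Option 2: let --run answer the "Continue creating array?" prompt for us.
mdadm --create /dev/md0 --level 1 --raid-devices 2 --run /dev/hde /dev/hdf
```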

At this point the system will initialize the mirror set. You can watch the progress (or view the status of your array) by looking at the /proc/mdstat file. You can look at this file by typing ‘cat /proc/mdstat’ as shown below:

[root@host ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb[0] sdc[1]
      488386496 blocks [2/2] [UU]

unused devices: <none>
[root@host ~]#

Since I handed the system over to the end user before actually finishing this howto, the output above was captured on a different machine, so the drive names and the array size differ from the earlier examples. The contents of the file will be very similar, however.
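While a fresh mirror is still resyncing, /proc/mdstat also shows a progress line; running ‘watch cat /proc/mdstat’ will refresh it for you. As a sketch, here is one way to pull just the percentage out of that output (the mdstat text below is a made-up sample with invented numbers):

```shell
# Made-up sample of /proc/mdstat during a resync, for demonstration only.
mdstat_sample='md0 : active raid1 hdf[1] hde[0]
      78150656 blocks [2/2] [UU]
      [==>..................]  resync = 12.5% (9768832/78150656) finish=30.0min'

# Extract just the progress figure. Prints: resync = 12.5%
printf '%s\n' "$mdstat_sample" | grep -o 'resync = [0-9.]*%'
```

On the live system you would grep /proc/mdstat directly instead of the sample variable.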

The last thing to do is to create the /etc/mdadm.conf file. This file tells the system how to assemble the array at boot time; without it, the RAID array will not be built when the system boots. The file is pretty simple, and for this RAID set it would look like this:

DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 devices=/dev/hde,/dev/hdf
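Rather than typing the ARRAY line by hand, mdadm can generate it for you. A sketch, assuming the CentOS location of /etc/mdadm.conf:

```shell
# Write the static header lines, then let mdadm append an ARRAY line
# for every array it currently sees on the system.
echo 'DEVICE partitions'  > /etc/mdadm.conf
echo 'MAILADDR root'     >> /etc/mdadm.conf
mdadm --detail --scan    >> /etc/mdadm.conf
```

The scanned ARRAY line identifies the array by UUID rather than device names, which also protects you if the drive names shift around as they did above.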

The MAILADDR line is used by the mdmonitor service, which emails you whenever the array status changes. You can change this to a different email address if you wish; I generally leave it as is and forward all of root’s mail to my normal user account.
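That forwarding is usually a one-line change to /etc/aliases; a minimal sketch (‘myuser’ is a made-up account name, substitute your own):

```shell
# Send all of root's mail to a normal user account instead.
echo 'root: myuser' >> /etc/aliases
newaliases    # rebuild the aliases database so the mail server sees the change
```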
