When you record and download lots of TV, movies, music, etc., it can chew up disk space pretty quickly. If you don't keep on top of your DVD burning you will end up running out of disk space, like me.
That's okay, I have two 200GB drives mirrored in a RAID1 array. If I break the mirror and concatenate the drives I could use all 400GB of space available to me... but if one drive died, I would lose everything that was not backed up.
Buying two more drives isn't the answer as I only have one more PATA drive connection available.
Perhaps I could back everything up, buy two bigger drives, install a new RAID1 array and copy everything back over. That means giving up two perfectly serviceable 200GB drives.
If only I could add a third drive and convert the RAID1 array to a RAID5 array. Then I would get the full 400GB of space, and still retain the redundancy. Yeah, right...
Then I stumbled across this blog entry in which a guy creates some experimental loopback devices, creates a RAID1 array and then converts it to a RAID5 array with no data loss.
I was intrigued.
The theory says that the RAID5 algorithm, when applied to 2 disks only, ends up looking like a RAID1 array except for the RAID metadata. If you overwrite the RAID1 metadata with the RAID5 metadata, mdadm should recognise the 2 disk RAID5 array and not mess with the contents.
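The arithmetic behind that claim is easy to sanity-check: RAID5 parity is the XOR of all the data blocks in a stripe, and with only one data block per stripe the parity is a byte-for-byte copy of that block. A toy illustration, with one byte standing in for a whole block:

```shell
# RAID5 parity is the XOR of the data blocks in a stripe. With only one
# data block per stripe (the 2-disk case), the parity equals the data --
# in other words, a mirror.
D=$(( 0xA5 ))   # a single data "block" (one byte, for illustration)
P=$(( D ^ 0 ))  # XOR with nothing else in the stripe leaves it unchanged
echo $(( P == D ))  # prints 1: the "parity" disk holds an exact copy
```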
Once the metadata is updated, you can then add a third partition to the array and grow the RAID5 array to utilise it. All that remains is to then resize the filesystem to fill the new space.
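The space gain works out like this: RAID5 spends one member's worth of capacity on parity, so n members of size S give (n - 1) * S usable. A quick check with the drive sizes from this upgrade:

```shell
# RAID5 usable capacity is (n - 1) * S for n members of size S,
# since one member's worth of space is consumed by parity.
n=3; S=200   # three 200GB members, as in this upgrade
echo "RAID1 (2 disks): ${S}GB usable"
echo "RAID5 (${n} disks): $(( (n - 1) * S ))GB usable"
```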
The main question is, am I brave enough to try it?
You bet I am!
Of course, everything is caveated with the usual "back everything up before you attempt this procedure" and, like a good boy, I borrowed a 400GB external drive from work and rsync'd all the important stuff across... and, with heart in mouth, followed the procedure...
Boot from a Fedora Core 6 rescue CDROM and get to a command prompt.
You must ensure you have a recent kernel (> 2.6.17) and that you have a recent version of the mdadm software:
# uname -a
Linux localhost.localdomain 2.6.18-1.2798.fc6 #1 SMP Mon Oct 16 14:54:20 EDT 2006 i686 unknown
# mdadm --version
mdadm - v2.5.4 - 13 October 2006
Stop the array:
# mdadm --stop /dev/md0
Overwrite the RAID1 metadata with the RAID5 metadata:
# mdadm --create /dev/md0 --level=5 -n 2 /dev/hda1 /dev/hdb1
mdadm: /dev/hda1 appears to contain an ext2fs file system
    size=1946592K  mtime=Sat Apr 14 07:18:32 2007
mdadm: /dev/hda1 appears to be part of a raid array:
    level=1 devices=2 ctime=Sat Sep 17 16:17:45 2005
mdadm: /dev/hdb1 appears to contain an ext2fs file system
    size=1946592K  mtime=Sat Apr 17 07:18:32 2007
mdadm: /dev/hdb1 appears to be part of a raid array:
    level=1 devices=2 ctime=Sat Sep 17 16:17:45 2005
Continue creating array? y
mdadm: array /dev/md0 started.
At this point the RAID software decided it wanted to rebuild the array. Uh-oh, there goes my data...
I quickly mounted /dev/md0 and had a look... all my data was still intact! Oh well, let the software do its thing. Who am I to argue?
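You can watch the rebuild's progress in /proc/mdstat; it should look roughly like this (the figures below are illustrative, not taken from my machine):

```shell
# Watch the rebuild with: cat /proc/mdstat
# Don't grow the array until the progress bar reaches 100%.
# Sample of what it prints mid-rebuild (figures invented):
cat <<'EOF'
md0 : active raid5 hdb1[1] hda1[0]
      195358336 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]
      [===>.................]  resync = 18.4% (35945856/195358336) finish=301.2min speed=8820K/sec
EOF
```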
Add in the third, new partition:
# mdadm --add /dev/md0 /dev/hdd1
So far, so good. Once the rebuild is complete, grow the RAID5 array onto the new partition. (NB: use the --backup-file option so that an interrupted grow can be recovered safely. The backup file must live on a filesystem that is not on the array being reshaped.)
# mdadm --grow /dev/md0 --raid-disks=3 --backup-file=/mnt/tmp/raid1-5.backup.file
mdadm: Need to backup 128K of critical section ..
mdadm: ... critical section passed.
I'm impressed that I've had no problems so far.
The reshaping of the RAID5 from a 2 disk to a 3 disk array takes quite a while (about 6.5 hours for around 200GB of raw data) but the filesystem resize shouldn't take anywhere near as long:
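For planning purposes, those numbers imply a reshape throughput in the region of 8MiB/s on my hardware; yours will vary with drive and bus speed:

```shell
# Rough implied reshape rate: ~200GB moved in ~6.5 hours.
raw_mib=$(( 200 * 1024 ))    # ~200GB of raw data, in MiB
secs=$(( 65 * 3600 / 10 ))   # 6.5 hours, in seconds
echo "$(( raw_mib / secs )) MiB/s"   # integer arithmetic: ~8 MiB/s
```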
# e2fsck -f /dev/md0
# resize2fs -p /dev/md0
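Afterwards it's worth confirming that the array really did grow. The sample output below is illustrative for two-hundred-ish GB members, not copied from my machine:

```shell
# Confirm the new array size; after resize2fs the filesystem should match it.
# mdadm --detail /dev/md0 | grep 'Array Size'
# Example of what that reports (illustrative figures):
cat <<'EOF'
     Array Size : 390716672 (372.62 GiB 400.09 GB)
EOF
```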
Apart from the modification of the RAID metadata, the whole operation can be done "online". I chose to do it from single-user/rescue mode as I wanted to make sure there was no data loss. If you're not too bothered then you could leave the whole thing up and running.