Reshaping RAID5 array to RAID6
I am running Ubuntu Lucid with a 4+1 (spare) RAID5 array made up of 1TB disks. I upgraded mdadm to version 3.1.4 and then ran the following:
$ sudo mdadm --grow /dev/md3 --level=6 --raid-devices=5 --backup-file=/var/lib/mysql/md3backup
(A 500GB drive is mounted at /var/lib/mysql; it is mostly empty and not part of any RAID array, which is why I put the backup file there.)
The reshape started and everything looked fine. The access lights on all 5 drives were coming on together at regular intervals, and /proc/mdstat showed the array being reshaped to RAID6, albeit slowly: an average speed of about 4000KB/sec and an estimated completion time of roughly 4000 minutes. That all seemed reasonable. I started this in the late afternoon.
The next morning the average speed was down to 300-400KB/sec and the estimated time to completion was up to 40,000 minutes. Watching the drive lights, one drive's access light is now on solid while the other four come on only intermittently.
Running iotop doesn't show anything useful; mdadm and kjournald show up only occasionally. The same is true for top (the CPU is an Intel Core i5-2500K). Here is the output of cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sde3[4](S) sda3[3] sdc3[1] sdd3[2] sdb3[0]
987904 blocks [4/4] [UUUU]
md0 : active raid1 sde1[4](S) sdc1[1] sdd1[2] sdb1[0] sda1[3]
200704 blocks [4/4] [UUUU]
md3 : active raid6 sde4[4] sdb4[0] sdc4[1] sda4[3] sdd4[2]
2832167424 blocks super 0.91 level 6, 64k chunk, algorithm 18 [5/4] [UUUU_]
[==>..................] reshape = 14.4% (136683520/944055808) finish=36779.9min speed=365K/sec
md1 : active raid1 sde2[4](S) sdc2[1] sda2[3] sdd2[2] sdb2[0]
31457216 blocks [4/4] [UUUU]
unused devices: <none>
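For context, here is how I have been checking the kernel tunables that govern md rebuild/reshape throughput. These are the standard md sysctl and sysfs paths; the md3 path is just my array's name, and the suggested values are common tuning examples rather than anything authoritative:

```shell
# Kernel throttles for md resync/reshape, in KB/sec per device.
# speed_limit_min is the guaranteed floor even under competing I/O.
cat /proc/sys/dev/raid/speed_limit_min 2>/dev/null || echo "speed_limit_min: not available"
cat /proc/sys/dev/raid/speed_limit_max 2>/dev/null || echo "speed_limit_max: not available"

# Raising the floor can help if the reshape is being throttled, e.g.:
#   sudo sysctl -w dev.raid.speed_limit_min=50000

# A larger stripe cache is often suggested for RAID5/6 reshapes
# (per-array knob; md3 here matches my array):
#   echo 8192 | sudo tee /sys/block/md3/md/stripe_cache_size
```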
My biggest concern is keeping this system running for 20+ days without any hiccups. Does anyone have an idea of what has caused mdadm to slow down so much?
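In case it helps with diagnosis, this is how I compared per-drive activity to see whether the drive with the solid light is the bottleneck. It reads the cumulative "time spent doing I/O" counter (field 10 of /sys/block/<dev>/stat, in milliseconds) directly from sysfs, so it needs no extra tools; the drive names match my array members:

```shell
# Field 10 of /sys/block/<dev>/stat is io_ticks: cumulative ms the
# device has spent with I/O in flight. One drive far busier than its
# peers suggests it is the slow member dragging the reshape down.
for d in sda sdb sdc sdd sde; do
    [ -r /sys/block/$d/stat ] && echo "$d: $(awk '{print $10}' /sys/block/$d/stat) ms busy"
done
```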