Quote:
Originally Posted by damiendusha
The RAID array was intact, but needs to be assembled:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
I have almost the same problem as the OP. I started growing an array, but noticed that after more than an hour the speed of the operation had never risen above 0K/sec, so I tried to reboot. Now, when I try to reassemble my array, mdadm segfaults.
I'm using kernel 2.6.31-14-generic and mdadm v2.6.7.1.
For the whole hour after I started the grow command, the reported speed stayed at 0K/sec. I thought something must be wrong and that the best course of action would be to reboot and start over from a clean boot. I'm not a Linux expert, so I assumed that on reboot everything would try to exit gracefully.
After another hour, the system still had not finished shutting down, so I did Alt+SysRq RSEISUB, waiting over a minute between each key. I've included the syslog of what happened up until the next startup, but put it last because it's by far the longest.
When I rebooted, the array seemed to be up, but mounting it failed with a "bad fs type" error, even when I specified the filesystem type (ext4) explicitly. After stopping the inactive array and trying to reassemble it, mdadm crashed with a segmentation fault.
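For reference, the stop/reassemble step was essentially the following. This is only a sketch: the array name /dev/md0 and the member list are my guesses reconstructed from the --examine output further down (whole disks, no /dev/sde), so adjust them to your own layout before copying anything.

```shell
# Sketch of the stop/reassemble sequence described above (run as root).
# /dev/md0 and the member disks are assumptions taken from the
# --examine output; substitute your own device names.
# The guard lets the snippet exit cleanly on a box without mdadm.
if command -v mdadm >/dev/null 2>&1; then
    mdadm --stop /dev/md0 || true                           # stop the inactive array
    mdadm --assemble /dev/md0 /dev/sd[a-d] /dev/sd[f-k] || true   # the step that segfaults
fi
```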
Is it possible to recover the data? We have backups, but they're spread out over 1500 DVDs.
When I examine the drives, the output looks much like this for each one (six drives report a state of active and four report clean, corresponding to the six original and the four added drives):
$ mdadm --examine /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 00.91.00
UUID : 56c16545:07db76d6:e368bf24:bd0fce41
Creation Time : Tue Feb  2 09:58:58 2010
Raid Level : raid5
Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
Array Size : 8790861312 (8383.62 GiB 9001.84 GB)
Raid Devices : 10
Total Devices : 10
Preferred Minor : 0
Reshape pos'n : 0
Delta Devices : 4 (6->10)
Update Time : Thu Mar 18 23:33:40 2010
State : active
Active Devices : 10
Working Devices : 10
Failed Devices : 0
Spare Devices : 0
Checksum : 79904299 - correct
Events : 270611
Layout : left-symmetric
Chunk Size : 256K
      Number   Major   Minor   RaidDevice State
this     4       8        0        4      active sync   /dev/sda

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       96        1      active sync   /dev/sdg
   2     2       8      112        2      active sync   /dev/sdh
   3     3       8       48        3      active sync   /dev/sdd
   4     4       8        0        4      active sync   /dev/sda
   5     5       8       32        5      active sync   /dev/sdc
   6     6       8      160        6      active sync   /dev/sdk
   7     7       8      144        7      active sync   /dev/sdj
   8     8       8      128        8      active sync   /dev/sdi
   9     9       8       80        9      active sync   /dev/sdf
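Before trying anything forceful, I've been comparing the Events count and Reshape position that each member reports, since (as I understand it) those need to agree across all drives for an assemble to be safe. A minimal sketch of that check, here fed a pasted copy of the output above so it's self-contained; on the live system you'd pipe `mdadm --examine "$dev"` into the same awk for each member:

```shell
# Pull out the fields that must match across all members before a
# (re)assemble. The input here is a copy of the --examine output above;
# on the real system, replace $sample with `mdadm --examine "$dev"`
# inside a loop over the member disks.
sample="  Reshape pos'n : 0
         Update Time : Thu Mar 18 23:33:40 2010
               State : active
              Events : 270611"
events=$(printf '%s\n' "$sample" | awk -F' : ' '/Events/ {print $2}')
reshape=$(printf '%s\n' "$sample" | awk -F' : ' '/Reshape/ {print $2}')
echo "events=$events reshape=$reshape"
```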