Meta42 12-19-2007 02:45 PM

software raid5 "unknown-block (8,113)"
 
Hi,

I think I've gone and let my RAID die on me, but I thought I'd ask around before I start crying.

I have 8 drives running software RAID 5 under Fedora 8. It's purely for storage, so no system partitions live on it.

What's happened now is that one drive died a couple of days ago, and before I had the time/money to replace it, a second problem showed up that keeps the RAID from becoming active.

I'm not sure about the nature of this new error: whether it's another dead drive or some benign error I can fix.

Quote:

mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
And /proc/mdstat:
Quote:

cat /proc/mdstat
Personalities :
md0 : inactive sdg1[0](S) sde1[6](S) sdc1[5](S) sdf1[4](S) sdi1[3](S) sdd1[2](S) sdj1[1](S)
2129423104 blocks

unused devices: <none>
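
If I understand the (S) suffix right, all seven surviving members were found but are just parked as spares in an inactive array, so something got far enough to bind the members but never ran the array. I figure it's worth checking whether the event counters on the members still agree, since a mismatch can keep auto-assembly from starting the array. A quick loop (device names taken from the mdstat output above):

Quote:

# print each surviving member's event counter from its raid superblock
for d in /dev/sd[cdefgij]1; do
    echo -n "$d: "
    mdadm -E "$d" | grep Events
done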

In dmesg I can read:
Quote:

md: md0 stopped.
md: bind<sdj1>
md: bind<sdd1>
md: bind<sdi1>
md: bind<sdf1>
md: bind<sdc1>
md: bind<sde1>
md: could not open unknown-block(8,113).
md: md_import_device returned -6
md: bind<sdg1>
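
If I'm decoding that right, unknown-block(8,113) is major 8 (the sd driver) and minor 113 = 7 x 16 + 1, i.e. partition 1 on the eighth sd disk: /dev/sdh1, which is exactly the member missing from the bind list above. And the -6 from md_import_device is just -ENXIO, "no such device or address". A quick way to double-check, assuming the node hasn't vanished along with the drive:

Quote:

# minor numbering for sd: disk index * 16 + partition number,
# so 7 * 16 + 1 = 113 => /dev/sdh1
ls -l /dev/sdh1
# the listing should show "8, 113" as major, minor if the node still exists
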
And mdadm -E for the sde1 partition:
Quote:

[root@black ~]# mdadm -E /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 00.90.00
UUID : 844b0326:f0dcec42:4a5ce17e:73a744f3
Creation Time : Wed Oct 24 23:06:45 2007
Raid Level : raid5
Used Dev Size : 293040000 (279.46 GiB 300.07 GB)
Array Size : 2051280000 (1956.25 GiB 2100.51 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 0

Update Time : Sun Dec 9 15:11:30 2007
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 1
Spare Devices : 0
Checksum : 7c694e4c - correct
Events : 0.976212

Layout : left-symmetric
Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8       65        6      active sync   /dev/sde1

   0     0       8       97        0      active sync   /dev/sdg1
   1     1       8      145        1      active sync   /dev/sdj1
   2     2       8       49        2      active sync   /dev/sdd1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       33        5      active sync   /dev/sdc1
   6     6       8       65        6      active sync   /dev/sde1
   7     7       0        0        7      faulty removed
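
What I don't get is that the superblock itself looks healthy: state clean, seven active and working devices, and only slot 7 marked faulty/removed, which matches the drive I know died. The sizes even add up: 293040000 x (8 - 1) = 2051280000, exactly the Array Size. The man page mentions forcing assembly from the surviving members; would something like this be safe to try? Just a guess on my part, with the device list taken from the table above:

Quote:

mdadm --assemble --force /dev/md0 /dev/sd[cdefgij]1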

Anyone know what I might try before killing it and starting over?

Meta42 12-19-2007 04:41 PM

OK, now I feel like a complete noob again...

mdadm -As /dev/md0 activated the RAID. Apparently the error messages were just caused by the missing drive, so now I only need to get my ass to the store and replace that dead drive (getting a hot spare while I'm at it!).
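
For anyone who finds this later, here's roughly the sequence. The -As line is what actually worked for me; the rest is what I plan to run once the new disk is in, so treat it as a sketch and check man mdadm first (sdh1 is my dead member's old name, and sdk1 is just a made-up name for the future hot spare):

Quote:

# if the array is sitting inactive with members bound, stop it first
mdadm --stop /dev/md0
# -A = assemble, -s = scan for members; a degraded 7-of-8 raid5
# comes up fine this way
mdadm -As /dev/md0
# after physically replacing the dead disk and partitioning it
# like the others, add it back and the rebuild starts:
mdadm /dev/md0 --add /dev/sdh1
# optional hot spare on top of the full set:
mdadm /dev/md0 --add /dev/sdk1
# watch the resync:
cat /proc/mdstat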

