Go Back > Forums > Linux Forums > Linux - Server
Old 07-26-2011, 05:55 AM   #1
LQ Newbie
Registered: May 2007
Posts: 4

Rep: Reputation: 0
mdadm raid6 active despite 3 drive failures

I am currently having problems with my RAID partition. First, two disks (sde, sdf) started having trouble. Through smartctl I noticed there were some bad blocks, so I first marked them as failed and re-added them so that the RAID array would overwrite the bad sectors.
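For reference, the fail-and-re-add step described above is typically done with mdadm along these lines (a sketch only; it assumes the array is /dev/md3 as shown in the mdstat output below, uses sde1 as the example member, and must be run as root on the actual machine):

```shell
# Mark a suspect member as failed, remove it from the array, then
# re-add it so md rewrites its contents during the resync.
mdadm /dev/md3 --fail /dev/sde1
mdadm /dev/md3 --remove /dev/sde1
mdadm /dev/md3 --add /dev/sde1

# Watch the rebuild progress:
cat /proc/mdstat
```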

Since that didn't work, I went ahead and replaced the disks. The recovery process was slow, so I left things running overnight. This morning I found that another disk (sdb) has failed. Strangely enough, the array has not become inactive.

md3 : active raid6 sdf1[15](S) sde1[16](S) sdak1[10] sdj1[8] sdk1[9] sdb1[17](F) sdan1[13] sdd1[2] sdc1[1] sdg1[5] sdi1[7] sdal1[11] sdam1[12] sdao1[14] sdh1[6]
25395655168 blocks level 6, 64k chunk, algorithm 2 [15/12] [_UU__UUUUUUUUUU]

Does anyone have any recommendations on what steps to take next with regard to recovery/fixing the problem? The array is basically full, so I haven't written anything to it since this problem appeared.
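As an aside, the [15/12] field in the mdstat line above is the total/active member count, and it can be pulled apart with standard shell tools (a sketch using the status line quoted in this post):

```shell
# Extract total and active member counts from the quoted mdstat status line.
line='25395655168 blocks level 6, 64k chunk, algorithm 2 [15/12] [_UU__UUUUUUUUUU]'
counts=$(printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]' | tr -d '[]')
total=${counts%/*}     # 15 members configured
active=${counts#*/}    # 12 members currently active
echo "total=$total active=$active missing=$((total - active))"
```

Here that prints missing=3, which is what makes the "active" state surprising.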

Old 07-26-2011, 09:18 PM   #2
LQ Guru
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.7, Centos 5.10
Posts: 16,651

Rep: Reputation: 2155
Well, there's a good summary of RAID types here, but basically it says you only need 4 active disks to keep a RAID 6 running.
You seem to have 15(?) disks in total, with 2 syncing and one failed; just replace the failed one and continue.
Obviously, the less you use the RAID, the faster the syncs will complete.
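If the resync is crawling, the kernel's md bandwidth limits can also be checked and raised (a sketch only; values are in KB/s per device, the 50000 figure is just an illustrative choice, and writing to these files requires root):

```shell
# Current resync bandwidth limits (KB/s per device).
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the resync is allowed more bandwidth (root required).
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```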

cat /proc/mdstat

mdadm --detail /dev/md3
Re space full:
when it's finished syncing, you need to do at least one of

1. purge some space
2. add more disks
3. back up and replace with something else
Old 07-26-2011, 09:34 PM   #3
LQ Newbie
Registered: May 2007
Posts: 4

Original Poster
Rep: Reputation: 0
I think that RAID6 has a fault tolerance of two disks; that's why I'm so worried. With two disks turned into 'spares' and another one failing, I don't think there's any more tolerance for errors.
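The arithmetic behind that worry can be spelled out with the numbers from the mdstat output (a back-of-the-envelope check: RAID6 keeps two parity blocks per stripe, so at most two members may be missing regardless of array size):

```shell
# RAID6 tolerates at most 2 missing members, no matter how many disks it spans.
total=15; active=12; tolerance=2
missing=$((total - active))
if [ "$missing" -gt "$tolerance" ]; then
  echo "missing=$missing exceeds RAID6 tolerance of $tolerance"
fi
```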




Tags: array, mdadm, raid, raid6, smartd
