Old 03-10-2012, 06:16 PM   #1
mlefevre
LQ Newbie
 
Registered: Jul 2004
Posts: 16

Rep: Reputation: 0
Need help in recovering from disk failure using mdadm


I have a CentOS 6 based server with four 1 TB disks in RAID5. This morning one of the disks failed. I was able to remove it from the RAID configuration, but I'm having trouble replacing it with the spare I had on hand (but not installed). To complicate matters, when I removed /dev/sdc (the failed disk) and installed the new disk, one of the remaining good drives moved to /dev/sdc and the newly installed drive came in as /dev/sdd. I've never done this before, so I've probably made some mistakes already. I'm hoping someone with good mdadm knowledge can help me.
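
For what it's worth, the sequence I used to remove the failed disk and add the new one was roughly the following (reconstructing it from memory, so the exact device names at the time may be off):

mdadm /dev/md0 --fail /dev/sdc      # mark the failing disk as faulty
mdadm /dev/md0 --remove /dev/sdc    # pull it out of the array
# ...powered down, swapped the drive, booted back up, then:
mdadm /dev/md0 --add /dev/sdd       # the new drive came up as /dev/sdd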

Here is /proc/mdstat:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4](S) sda[0] sdb[1]
2930284032 blocks super 1.1 level 5, 512k chunk, algorithm 2 [4/2] [UU__]
bitmap: 3/8 pages [12KB], 65536KB chunk

Here is the state of the 4 disks that are installed:

[root@server ~]# mdadm -E /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : 345f32b7:3bd78c4a:add713b3:b776a718

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : df9bd861 - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AA.. ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : a2aa0da7:d286f913:662ea9a5:d4d5c2ee

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : 6194893e - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AA.. ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : f40a712d:2377e272:13576645:61952f5c

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 14:54:23 2012
Checksum : 5409bbcd - correct
Events : 9338

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : 3a91f2ec:496de175:0eed2f3c:297382ec

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : 9da7b24d - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : spare
Array State : AA.. ('A' == active, '.' == missing)

I can't figure out how to "get the big picture" for this array from mdadm, so please ask if you need more info to help me.

Thanks a lot in advance.

Marc
 
Old 03-10-2012, 07:11 PM   #2
[GOD]Anck
Member
 
Registered: Dec 2003
Location: The Netherlands
Distribution: Slackware
Posts: 171

Rep: Reputation: 35
Quote:
Originally Posted by mlefevre View Post
I can't figure out how to "get the big picture" for this array from mdadm so ask if you need more info to help me.
Marc
You can "get the big picture" with 'mdadm -D /dev/mdX' where X is your array. This will tell you the state of the array and all component disks. That said, it's unusual for any disk to just change device assignment like that. Also, 3/4 of your device superblocks claim 2 out of 4 disks are missing; in a raid5 setup with one parity disk, that would be fatal.
 
Old 03-10-2012, 07:19 PM   #3
mlefevre
LQ Newbie
 
Registered: Jul 2004
Posts: 16

Original Poster
Rep: Reputation: 0
So are you telling me that there is no way to recover from this state? Three of the four disks in the original array are still present. The array rebuilt after the disk failed, and I thought it got to a good state. Is it possible to just reassemble the array from the three disks and then add the fourth (now a spare) back in?

Here's the big picture:

[root@server ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.1
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Array Size : 2930284032 (2794.54 GiB 3000.61 GB)
Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Mar 10 15:21:12 2012
State : active, FAILED
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Name : server:0 (local to host server)
UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Events : 9349

Number   Major   Minor   RaidDevice   State
   0       8       0         0        active sync   /dev/sda
   1       8      16         1        active sync   /dev/sdb
   2       0       0         2        removed
   3       0       0         3        removed

   4       8      48         -        spare         /dev/sdd
 
Old 03-10-2012, 07:55 PM   #5
[GOD]Anck
Member
 
Registered: Dec 2003
Location: The Netherlands
Distribution: Slackware
Posts: 171

Rep: Reputation: 35
Quote:
Originally Posted by mlefevre View Post
So are you telling me that there is no way to recover from this state? Three of the 4 disks in the original array are still present. The array rebuilt after the disk failed and I thought it got to a good state. Is it possible to just reassemble the array from the 3 disks and then add the 4th in (a spare now)?
A 4-disk RAID5 array cannot fully rebuild on only 3 disks; there is nowhere to reconstruct the missing data, so the array would just be marked as degraded. You may be able to re-add a "removed" disk (one that did not actually fail) using mdadm --add or --re-add (see man mdadm). You will still need the fourth (spare) disk present for a full rebuild to a good state.
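
Untested and from memory, but the general shape of the recovery would be something like this; double-check the device names against your mdadm -E output before running anything, since they shifted after the swap:

mdadm --stop /dev/md0                                          # stop the half-assembled array
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc   # force-assemble from the three original members
mdadm /dev/md0 --add /dev/sdd                                  # add the new disk so the rebuild can start
cat /proc/mdstat                                               # watch the resync progress

The --force is there because /dev/sdc shows a slightly older event count than the other two members; mdadm will normally refuse to assemble it otherwise.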
 
  



Tags
failure, mdadm, raid


