LinuxQuestions.org (/questions/)
-   Linux - General (http://www.linuxquestions.org/questions/linux-general-1/)
-   -   Need help in recovering from disk failure using mdadm (http://www.linuxquestions.org/questions/linux-general-1/need-help-in-recovering-from-disk-failure-using-mdadm-933810/)

mlefevre 03-10-2012 06:16 PM

Need help in recovering from disk failure using mdadm
 
I have a CentOS 6 based server with four 1 TB disks in RAID 5. This morning one of the disks failed. I was successful in removing it from the RAID configuration, but I'm having trouble replacing it with the spare that I had on hand (but not installed). To complicate matters, when I removed /dev/sdc (the failed disk) and installed the new disk, one of the remaining good drives moved to /dev/sdc and the newly installed drive came in as /dev/sdd. I've never done this before, so I've probably made some mistakes already. I'm hoping someone with some good mdadm knowledge can help me.

Here is /proc/mdstat:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4](S) sda[0] sdb[1]
2930284032 blocks super 1.1 level 5, 512k chunk, algorithm 2 [4/2] [UU__]
bitmap: 3/8 pages [12KB], 65536KB chunk

Here is the state of the 4 disks that are installed:

[root@server ~]# mdadm -E /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : 345f32b7:3bd78c4a:add713b3:b776a718

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : df9bd861 - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AA.. ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : a2aa0da7:d286f913:662ea9a5:d4d5c2ee

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : 6194893e - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AA.. ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : f40a712d:2377e272:13576645:61952f5c

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 14:54:23 2012
Checksum : 5409bbcd - correct
Events : 9338

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)

[root@server ~]# mdadm -E /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Name : server:0 (local to host server)
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 1953523120 (931.51 GiB 1000.20 GB)
Array Size : 5860568064 (2794.54 GiB 3000.61 GB)
Used Dev Size : 1953522688 (931.51 GiB 1000.20 GB)
Data Offset : 2048 sectors
Super Offset : 0 sectors
State : clean
Device UUID : 3a91f2ec:496de175:0eed2f3c:297382ec

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 10 15:21:12 2012
Checksum : 9da7b24d - correct
Events : 9349

Layout : left-symmetric
Chunk Size : 512K

Device Role : spare
Array State : AA.. ('A' == active, '.' == missing)

I can't figure out how to "get the big picture" for this array from mdadm, so please ask if you need more info to help me.

Thanks a lot in advance.

Marc

[GOD]Anck 03-10-2012 07:11 PM

Quote:

Originally Posted by mlefevre (Post 4623691)
I can't figure out how to "get the big picture" for this array from mdadm so ask if you need more info to help me.
Marc

You can "get the big picture" with 'mdadm -D /dev/mdX' where X is your array. This will tell you the state of the array and all component disks. That said, it's unusual for any disk to just change device assignment like that. Also, 3/4 of your device superblocks claim 2 out of 4 disks are missing; in a raid5 setup with one parity disk, that would be fatal.

mlefevre 03-10-2012 07:19 PM

So are you telling me there is no way to recover from this state? Three of the 4 disks from the original array are still present. The array rebuilt after the disk failed and I thought it had reached a good state. Is it possible to just reassemble the array from the 3 disks and then add the 4th (now showing as a spare) back in?

Here's the big picture:

[root@server ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.1
Creation Time : Wed Dec 28 09:14:59 2011
Raid Level : raid5
Array Size : 2930284032 (2794.54 GiB 3000.61 GB)
Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Mar 10 15:21:12 2012
State : active, FAILED
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Name : server:0 (local to host server)
UUID : df49e51e:9ef5c518:27d37da2:aa0aa661
Events : 9349

    Number   Major   Minor   RaidDevice   State
       0       8       0         0        active sync   /dev/sda
       1       8      16         1        active sync   /dev/sdb
       2       0       0         2        removed
       3       0       0         3        removed

       4       8      48         -        spare         /dev/sdd

[GOD]Anck 03-10-2012 07:55 PM

Quote:

Originally Posted by mlefevre (Post 4623713)
So are you telling me that there is no way to recover from this state? Three of the 4 disks in the original array are still present. The array rebuilt after the disk failed and I thought it got to a good state. Is it possible to just reassemble the array from the 3 disks and then add the 4th in (a spare now)?

A 4-disk RAID 5 array cannot rebuild itself on only 3 disks; there is nowhere to reconstruct the missing data and parity, so the array stays marked as degraded. You may be able to re-add a "removed" disk that did not actually fail using mdadm --add / --re-add (see man mdadm). You will still need the 4th (spare) disk present for a full rebuild to a good state.
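
In your case the event counts on /dev/sda, /dev/sdb and /dev/sdc are close (9349, 9349, 9338), so a forced assemble of those three followed by adding the new disk is the usual approach. Roughly something like this; I'm assuming the old good member really is /dev/sdc and the new blank disk is /dev/sdd, so double-check with mdadm -E before running anything, and back up whatever you still can first:

umount /dev/md0                        # make sure nothing is still using the array
mdadm --stop /dev/md0                  # stop the half-assembled array
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc   # force in the slightly out-of-date member
mdadm /dev/md0 --add /dev/sdd          # add the new disk; the rebuild starts on its own
cat /proc/mdstat                       # watch the recovery progress

The --force is what lets mdadm accept /dev/sdc's slightly lower event count; without it the assemble will refuse to start the failed array.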

