Old 04-23-2010, 02:07 PM   #1
anon195
2 "failed" drives in a 3-disk RAID-5 array


Hello,

Since I moved my machine, I have been unable to mount my RAID array. At first I thought sdb was in trouble, but since the next reboot it has been sdc. Everything happened in a short time frame and nothing was written to my valuable data, so I think I can recover everything, provided I am careful about what I do.

'mdadm --detail /dev/md0' gives:
Code:
/dev/md0:
        Version : 01.02
  Creation Time : Wed Apr 15 00:00:14 2009
     Raid Level : raid5
  Used Dev Size : 976762432 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Apr 22 23:40:27 2010
          State : active, degraded, Not Started
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : server:d0  (local to host server)
           UUID : e84b8f97:fd7fd496:1f9adc88:b8915c4d
         Events : 299841

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      spare rebuilding   /dev/sdb
       2       0        0        2      removed
I notice that sdb and sdc are in different states.

But although sdb is marked as "rebuilding", it cannot actually be rebuilding: a RAID-5 rebuild needs the other two members in sync, and only sda is active. 'mdadm --examine /dev/sd[abc] | grep Events' returns:
Code:
         Events : 299841
         Events : 299841
         Events : 299838
sdb seems up to date. So I was about to recreate the array with 'mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=3 /dev/sda /dev/sdb missing', as explained here: http://kevin.deldycke.com/tag/mdadm/, and then add the third drive for reconstruction.
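
For reference, this is the rough sequence I have in mind. Please treat it as an untested outline I still need to verify, not something I have run yet; the --metadata, --chunk and --layout values are simply copied from the --detail output above so the recreated superblocks should match the original array:
Code:
# Untested outline -- double-check the device order and parameters
# against the --detail output before running anything.

# Stop the degraded array first:
mdadm --stop /dev/md0

# Recreate only the metadata (--assume-clean skips the initial resync,
# so the data blocks are left untouched).  Metadata version, chunk size
# and layout copied from 'mdadm --detail /dev/md0':
mdadm --create /dev/md0 --assume-clean --verbose \
      --level=5 --raid-devices=3 \
      --metadata=1.2 --chunk=64 --layout=left-symmetric \
      /dev/sda /dev/sdb missing

# Check that the data is really there before going further:
mount -o ro /dev/md0 /mnt

# Only once the data looks good, re-add the third drive and let it rebuild:
mdadm --add /dev/md0 /dev/sdc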

But I also noticed the following line in the report from 'mdadm --examine /dev/sdb':
Code:
Recovery Offset : 6400 sectors
Do you think it is safe to assume this drive is clean (its Events count matches sda's) and run the above command?
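
For what it's worth, if I read that field correctly, 6400 sectors is only about 3 MiB (6400 × 512 bytes), so the aborted rebuild would barely have started. Before doing anything destructive, my plan is to compare the relevant superblock fields on all three drives with a read-only check along these lines:
Code:
# Read-only check: compare the superblock fields that matter before recreating
mdadm --examine /dev/sda /dev/sdb /dev/sdc | grep -E 'Events|Update Time|Array State|Recovery Offset'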

Thank you very much,
Yann
 
  

