LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Old 05-23-2007, 10:45 PM   #1
sauce
Member
 
Registered: Oct 2005
Distribution: Slackware, Ubuntu
Posts: 52

Rep: Reputation: 15
RAID-1 failing, is my brand new disk BAD??


I set up a nice RAID-1 this afternoon. I get home thinking all my arrays will be synced perfectly. However, one partition (/home) is failing its rebuild over and over again. Is my hdd bad already?

This is a brand new box, brand new install, with two brand-new Samsung HD501LJ drives (rated highest on Newegg!).

I happen to have 1 more disk that I was gonna put in a USB enclosure. Depending on what people say here, I may pop it in just to see if sdb is really going bad.

This is a Dell PowerEdge box with an onboard ICH7 controller.

Code:
Linux fbserver 2.6.21.1 #6 SMP Wed May 23 12:41:49 EDT 2007 i686 pentium4 i386 GNU/Linux
Code:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda6[2] sdb6[1]
      30001280 blocks [2/1] [_U]
      [==>..................]  recovery = 11.0% (3300352/30001280) finish=5.7min speed=76758K/sec

md2 : active raid1 sda7[0] sdb7[1]
      995904 blocks [2/2] [UU]

md3 : active raid1 sda8[0] sdb8[1]
      445289536 blocks [2/2] [UU]

md0 : active raid1 sda5[0] sdb5[1]
      10000320 blocks [2/2] [UU]

unused devices: <none>
Code:
# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Wed May 23 13:46:36 2007
     Raid Level : raid1
     Array Size : 30001280 (28.61 GiB 30.72 GB)
    Device Size : 30001280 (28.61 GiB 30.72 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed May 23 19:46:17 2007
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 15% complete

           UUID : bf8daea5:ac626144:b62173d3:4a299e9f
         Events : 0.494

    Number   Major   Minor   RaidDevice State
       2       8        6        0      spare rebuilding   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
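As a sanity check, the sizes mdadm prints follow directly from the block count ("blocks" in /proc/mdstat and "Array Size" above are 1 KiB units). A quick sketch of the arithmetic:

```python
# md "blocks" are 1 KiB units.
blocks = 30001280                 # from "Array Size : 30001280" above
size_bytes = blocks * 1024

gib = size_bytes / 2**30          # binary gibibytes, as mdadm prints "GiB"
gb = size_bytes / 10**9           # decimal gigabytes, as mdadm prints "GB"

print(f"{gib:.2f} GiB {gb:.2f} GB")  # -> 28.61 GiB 30.72 GB, matching mdadm
```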
This is dmesg. The following info is repeated dozens of times.
Code:
SCSI device sda: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
ata1.01: (BMDMA stat 0x25)
ata1.01: cmd c8/00:08:31:11:25/00:00:00:00:00/f2 tag 0 cdb 0x0 data 4096 in
         res 51/40:08:31:11:25/40:02:02:00:00/f2 Emask 0x9 (media error)
ata1.00: configured for UDMA/133
ata1.01: configured for UDMA/133
sd 0:0:1:0: SCSI error: return code = 0x08000002
sdb: Current [descriptor]: sense key=0x3
    ASC=0x11 ASCQ=0x4
Descriptor sense data with sense descriptors (in hex):
        72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        02 25 11 31
end_request: I/O error, dev sdb, sector 35983665
ata1: EH complete
raid1: sdb: unrecoverable I/O read error for block 11789696
SCSI device sdb: 976773168 512-byte hdwr sectors (500108 MB)
sdb: Write Protect is off
sdb: Mode Sense: 00 3a 00 00
SCSI device sdb: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
SCSI device sda: 976773168 512-byte hdwr sectors (500108 MB)
sda: Write Protect is off
sda: Mode Sense: 00 3a 00 00
SCSI device sda: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
SCSI device sdb: 976773168 512-byte hdwr sectors (500108 MB)
sdb: Write Protect is off
sdb: Mode Sense: 00 3a 00 00
SCSI device sdb: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
md: md1: recovery done.
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sda6
 disk 1, wo:0, o:1, dev:sdb6
RAID1 conf printout:
 --- wd:1 rd:2
 disk 1, wo:0, o:1, dev:sdb6
RAID1 conf printout:
 --- wd:1 rd:2
 disk 0, wo:1, o:1, dev:sda6
 disk 1, wo:0, o:1, dev:sdb6
md: recovery of RAID array md1
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 30001280 blocks.

Last edited by sauce; 05-23-2007 at 10:48 PM.
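For anyone reading along: note that the I/O errors above are logged against sdb ("end_request: I/O error, dev sdb" and "raid1: sdb: unrecoverable I/O read error"), i.e. the disk being *read from* during the rebuild, while sda6 is the rebuild target. Some read-only checks to confirm which disk is actually throwing errors, assuming the device names from this post (adapt to your own layout; these are safe on a degraded array, unlike --fail/--remove):

```shell
cat /proc/mdstat                 # watch the rebuild state of each array
mdadm --detail /dev/md1          # which member is "active sync" vs "rebuilding"
dmesg | grep -i 'sd[ab]'         # which physical disk logs the I/O errors
# Exercise the suspect partition's surface without writing to it
# (badblocks is read-only unless -w/-n is given):
badblocks -sv /dev/sdb6
```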
 
Old 05-24-2007, 01:08 PM   #2
archtoad6
Senior Member
 
Registered: Oct 2004
Location: Houston, TX (usa)
Distribution: MEPIS, Debian, Knoppix,
Posts: 4,727
Blog Entries: 15

Rep: Reputation: 231Reputation: 231Reputation: 231
Do you know smartctl?

Suggest reading its "FM" & trying some of its tests. BTW, the man page in question is the "too long" rather than the "too short" kind, so more than one reading may be necessary.
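For reference, a typical smartctl session (assuming smartmontools is installed and the suspect disk is /dev/sdb, per the dmesg output in the first post):

```shell
smartctl -H /dev/sdb           # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sdb           # attribute table: watch Reallocated_Sector_Ct,
                               # Current_Pending_Sector, Offline_Uncorrectable
smartctl -t short /dev/sdb     # queue a ~2-minute self-test, then wait and:
smartctl -l selftest /dev/sdb  # read the self-test log (LBA of first error)
```

A nonzero Current_Pending_Sector count, or a self-test that aborts with a read failure, would line up with the media errors in dmesg and justify an RMA on a brand-new drive.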
 
  

