Old 03-17-2014, 01:57 AM   #1
mpyusko (Member)

RAID1 - Degraded Array, but hardware appears fine


I've been searching and digging and I can't figure out why my array is in a degraded state. I have two matched 1 TB drives in a RAID1 configuration, /dev/md0. It is the data partition/array for a file/media/everything server. Win7 x64 resides on md1 (not installed, just space reserved... shrug), swap on md2, and Debian Jessie/testing resides on md3-md9. This is what I have figured out so far...

Code:
root@abacus:~# cat /proc/mdstat
Personalities : [raid1] 
md9 : active raid1 sdc10[3] sdd10[2]
      386724728 blocks super 1.2 [2/2] [UU]
      
md8 : active raid1 sdd9[0] sdc9[2]
      1950708 blocks super 1.2 [2/2] [UU]
      
md7 : active raid1 sdd8[0] sdc8[2]
      9763768 blocks super 1.2 [2/2] [UU]
      
md6 : active raid1 sdd7[0] sdc7[2]
      9763768 blocks super 1.2 [2/2] [UU]
      
md5 : active raid1 sdd6[0] sdc6[2]
      9763768 blocks super 1.2 [2/2] [UU]
      
md4 : active raid1 sdd5[0] sdc5[2]
      9763768 blocks super 1.2 [2/2] [UU]
      
md3 : active raid1 sdd3[0] sdc3[2]
      97268 blocks super 1.2 [2/2] [UU]
      
md2 : active (auto-read-only) raid1 sdc2[3] sdd2[2]
      1950708 blocks super 1.2 [2/2] [UU]
      
md1 : active (auto-read-only) raid1 sdc1[3] sdd1[2]
      58591160 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdb1[1]
      976758841 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>
root@abacus:~#
Code:
root@abacus:~# mdadm --detail  /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Mar  8 11:39:10 2011
     Raid Level : raid1
     Array Size : 976758841 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976758841 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Mar 17 01:49:09 2014
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : abacus:0  (local to host abacus)
           UUID : eab73cc3:d3a96f18:224c7f1c:afd15d16
         Events : 1296

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
root@abacus:~#
Code:
root@abacus:~# mdadm --examine  /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : eab73cc3:d3a96f18:224c7f1c:afd15d16
           Name : abacus:0  (local to host abacus)
  Creation Time : Tue Mar  8 11:39:10 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 976758841 (931.51 GiB 1000.20 GB)
  Used Dev Size : 1953517682 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=272 sectors
          State : clean
    Device UUID : fe3edce1:44d93bfb:1a6ee65a:8add510c

    Update Time : Sat Mar 15 21:25:47 2014
       Checksum : 185f0eb9 - correct
         Events : 610


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@abacus:~#
Code:
root@abacus:~# mdadm --examine  /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : eab73cc3:d3a96f18:224c7f1c:afd15d16
           Name : abacus:0  (local to host abacus)
  Creation Time : Tue Mar  8 11:39:10 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
     Array Size : 976758841 (931.51 GiB 1000.20 GB)
  Used Dev Size : 1953517682 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=272 sectors
          State : clean
    Device UUID : fe3edce1:44d93bfb:1a6ee65a:8add510c

    Update Time : Mon Mar 17 01:57:45 2014
       Checksum : 1861a2a5 - correct
         Events : 1298


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
root@abacus:~#
Both /dev/sda1 and /dev/sdb1 should be in use for /dev/md0. smartctl does not show anything out of the ordinary: short self-tests come back fine, and there are no pending or reallocated sectors. I can access both drives via cfdisk, etc., so there is no physical disconnection either. I have not made any hardware configuration changes in the BIOS. This array uses the integrated...
Code:
00:1f.2 RAID bus controller: Intel Corporation 82801ER (ICH5R) SATA Controller (rev 02) (prog-if 8f)
        Subsystem: Intel Corporation Device 3465
        Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx-
        Status: Cap- 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 18
        Region 0: I/O ports at b800 [size=8]
        Region 1: I/O ports at b480 [size=4]
        Region 2: I/O ports at b400 [size=8]
        Region 3: I/O ports at b080 [size=4]
        Region 4: I/O ports at b000 [size=16]
        Kernel driver in use: ata_piix
There is another RAID controller with matched 500 GB drives...
Code:
03:01.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
        Subsystem: Silicon Image, Inc. SiI 3114 SATARaid Controller
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 32, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: I/O ports at cc80 [size=8]
        Region 1: I/O ports at cc00 [size=4]
        Region 2: I/O ports at c880 [size=8]
        Region 3: I/O ports at c800 [size=4]
        Region 4: I/O ports at c480 [size=16]
        Region 5: Memory at fbeffc00 (32-bit, non-prefetchable) [size=1K]
        Expansion ROM at fbe00000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 2
                Flags: PMEClk- DSI+ D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=2 PME-
        Kernel driver in use: sata_sil
... which, oddly enough, is showing a degraded status in the BIOS, but not under Linux. At the BIOS level, the two 500 GB drives are shown as a single array, and the two 1 TB drives are shown as a single array. It is my understanding that this is only so Windows understands them and has no effect on Linux. I've tried to glean through the logs, but without really knowing what to look for...
Code:
[    0.940540] ata5.00: ATAPI: AOPEN   COM5232/AAH, 1.05, max UDMA/33
[    0.940661] ata7.00: ATA-8: Hitachi HDS721010CLA332, JP4OA3MA, max UDMA/133
[    0.940669] ata7.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    0.940759] ata8.00: ATA-8: Hitachi HDS721010CLA332, JP4OA3MA, max UDMA/133
[    0.940764] ata8.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    0.956408] ata5.00: configured for UDMA/33
[    0.956490] ata7.00: configured for UDMA/133
[    0.956567] ata8.00: configured for UDMA/133
[    0.958666] scsi 1:0:0:0: CD-ROM            AOPEN    COM5232/AAH      1.05 PQ: 0 ANSI: 5
[    0.959232] scsi 5:0:0:0: Direct-Access     ATA      Hitachi HDS72101 JP4O PQ: 0 ANSI: 5
[    0.959703] scsi 7:0:0:0: Direct-Access     ATA      Hitachi HDS72101 JP4O PQ: 0 ANSI: 5
[    0.965155] sr0: scsi3-mmc drive: 4x/52x writer cd/rw xa/form2 cdda tray
[    0.965162] cdrom: Uniform CD-ROM driver Revision: 3.20
[    0.965399] sr 1:0:0:0: Attached scsi CD-ROM sr0
[    0.968508] sr 1:0:0:0: Attached scsi generic sg0 type 5
[    0.968600] scsi 5:0:0:0: Attached scsi generic sg1 type 0
[    0.968679] scsi 7:0:0:0: Attached scsi generic sg2 type 0
[    1.096042] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[    1.108121] ata1.00: ATA-8: ST3500418AS, CC49, max UDMA/133
[    1.108126] ata1.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    1.124416] ata1.00: configured for UDMA/100
[    1.124589] scsi 0:0:0:0: Direct-Access     ATA      ST3500418AS      CC49 PQ: 0 ANSI: 5
[    1.124811] scsi 0:0:0:0: Attached scsi generic sg3 type 0
[    1.242174] e1000 0000:07:04.0 eth1: (PCI:33MHz:32-bit) 00:0e:0c:e3:fc:74
[    1.242180] e1000 0000:07:04.0 eth1: Intel(R) PRO/1000 Network Connection
[    1.444037] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[    1.452375] ata2.00: ATA-8: ST3500418AS, CC38, max UDMA/133
[    1.452381] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    1.468386] ata2.00: configured for UDMA/100
[    1.468565] scsi 2:0:0:0: Direct-Access     ATA      ST3500418AS      CC38 PQ: 0 ANSI: 5
[    1.468788] scsi 2:0:0:0: Attached scsi generic sg4 type 0
[    1.604039] tsc: Refined TSC clocksource calibration: 3591.000 MHz
[    1.788028] ata3: SATA link down (SStatus 0 SControl 310)
[    2.108026] ata4: SATA link down (SStatus 0 SControl 310)
[    2.117288] sd 5:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[    2.117406] sd 0:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[    2.117409] sd 2:0:0:0: [sdd] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[    2.117428] sd 5:0:0:0: [sda] Write Protect is off
[    2.117435] sd 5:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    2.117486] sd 7:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[    2.117501] sd 5:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.117563] sd 2:0:0:0: [sdd] Write Protect is off
[    2.117570] sd 2:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[    2.117629] sd 2:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.117730] sd 0:0:0:0: [sdc] Write Protect is off
[    2.117736] sd 0:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    2.117906] sd 7:0:0:0: [sdb] Write Protect is off
[    2.117912] sd 7:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    2.118202] sd 0:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.118234] sd 7:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.130233]  sdb: sdb1
[    2.130696] sd 7:0:0:0: [sdb] Attached SCSI disk
[    2.134950]  sda: sda1
[    2.135443] sd 5:0:0:0: [sda] Attached SCSI disk
[    2.181285]  sdd: sdd1 sdd2 sdd3 sdd4 < sdd5 sdd6 sdd7 sdd8 sdd9 sdd10 >
[    2.182320] sd 2:0:0:0: [sdd] Attached SCSI disk
[    2.183801]  sdc: sdc1 sdc2 sdc3 sdc4 < sdc5 sdc6 sdc7 sdc8 sdc9 sdc10 >
[    2.184789] sd 0:0:0:0: [sdc] Attached SCSI disk
[    2.521500] md: md0 stopped.
[    2.523676] md: bind<sdb1>
[    2.525660] md: raid1 personality registered for level 1
[    2.526148] md/raid1:md0: active with 1 out of 2 mirrors
[    2.526183] md0: detected capacity change from 0 to 1000201053184
[    2.526319] RAID1 conf printout:
[    2.526321]  --- wd:1 rd:2
[    2.526325]  disk 1, wo:0, o:1, dev:sdb1
[    2.528456]  md0: unknown partition table
[    2.573728] md: md1 stopped.
[    2.575092] md: bind<sdd1>
[    2.575364] md: bind<sdc1>
[    2.577736] md/raid1:md1: active with 2 out of 2 mirrors
[    2.577768] md1: detected capacity change from 0 to 59997347840
[    2.577818] RAID1 conf printout:
[    2.577821]  --- wd:2 rd:2
[    2.577824]  disk 0, wo:0, o:1, dev:sdc1
[    2.577827]  disk 1, wo:0, o:1, dev:sdd1
[    2.595731]  md1:
[    2.604201] Switched to clocksource tsc
[    2.781802] md: md2 stopped.
[    2.783104] md: bind<sdd2>
[    2.783360] md: bind<sdc2>
[    2.785407] md/raid1:md2: active with 2 out of 2 mirrors
[    2.785438] md2: detected capacity change from 0 to 1997524992
[    2.785604] RAID1 conf printout:
[    2.785606]  --- wd:2 rd:2
[    2.785617]  disk 0, wo:0, o:1, dev:sdc2
[    2.785621]  disk 1, wo:0, o:1, dev:sdd2
[    2.810447]  md2: unknown partition table
[    2.880364] md: md3 stopped.
[    2.881699] md: bind<sdc3>
[    2.881951] md: bind<sdd3>
[    2.884328] md/raid1:md3: active with 2 out of 2 mirrors
[    2.884362] md3: detected capacity change from 0 to 99602432
[    2.884535] RAID1 conf printout:
[    2.884543]  --- wd:2 rd:2
[    2.884553]  disk 0, wo:0, o:1, dev:sdd3
[    2.884555]  disk 1, wo:0, o:1, dev:sdc3
[    2.886751]  md3: unknown partition table
[    3.026240] md: md4 stopped.
[    3.027742] md: bind<sdc5>
[    3.027999] md: bind<sdd5>
[    3.030160] md/raid1:md4: active with 2 out of 2 mirrors
[    3.030195] md4: detected capacity change from 0 to 9998098432
[    3.030377] RAID1 conf printout:
[    3.030386]  --- wd:2 rd:2
[    3.030391]  disk 0, wo:0, o:1, dev:sdd5
[    3.030393]  disk 1, wo:0, o:1, dev:sdc5
[    3.038314]  md4: unknown partition table
[    3.108442] md: md5 stopped.
[    3.109775] md: bind<sdc6>
[    3.110041] md: bind<sdd6>
[    3.112340] md/raid1:md5: active with 2 out of 2 mirrors
[    3.112377] md5: detected capacity change from 0 to 9998098432
[    3.112564] RAID1 conf printout:
[    3.112566]  --- wd:2 rd:2
[    3.112570]  disk 0, wo:0, o:1, dev:sdd6
[    3.112572]  disk 1, wo:0, o:1, dev:sdc6
[    3.127267]  md5: unknown partition table
[    3.304989] md: md6 stopped.
[    3.306352] md: bind<sdc7>
[    3.306638] md: bind<sdd7>
[    3.308924] md/raid1:md6: active with 2 out of 2 mirrors
[    3.308964] md6: detected capacity change from 0 to 9998098432
[    3.309008] RAID1 conf printout:
[    3.309038]  --- wd:2 rd:2
[    3.309041]  disk 0, wo:0, o:1, dev:sdd7
[    3.309044]  disk 1, wo:0, o:1, dev:sdc7
[    3.319245]  md6: unknown partition table
[    3.513107] md: md7 stopped.
[    3.515549] md: bind<sdc8>
[    3.515867] md: bind<sdd8>
[    3.517625] md/raid1:md7: active with 2 out of 2 mirrors
[    3.517666] md7: detected capacity change from 0 to 9998098432
[    3.517868] RAID1 conf printout:
[    3.517870]  --- wd:2 rd:2
[    3.517874]  disk 0, wo:0, o:1, dev:sdd8
[    3.517876]  disk 1, wo:0, o:1, dev:sdc8
[    3.529384]  md7: unknown partition table
[    3.720763] md: md8 stopped.
[    3.722097] md: bind<sdc9>
[    3.722432] md: bind<sdd9>
[    3.724714] md/raid1:md8: active with 2 out of 2 mirrors
[    3.724752] md8: detected capacity change from 0 to 1997524992
[    3.724782] RAID1 conf printout:
[    3.724787]  --- wd:2 rd:2
[    3.724791]  disk 0, wo:0, o:1, dev:sdd9
[    3.724794]  disk 1, wo:0, o:1, dev:sdc9
[    3.744477]  md8: unknown partition table
[    3.936943] md: md9 stopped.
[    3.938253] md: bind<sdd10>
[    3.938453] md: bind<sdc10>
[    3.940494] md/raid1:md9: active with 2 out of 2 mirrors
[    3.940546] md9: detected capacity change from 0 to 396006121472
[    3.940581] RAID1 conf printout:
[    3.940586]  --- wd:2 rd:2
[    3.940596]  disk 0, wo:0, o:1, dev:sdc10
[    3.940599]  disk 1, wo:0, o:1, dev:sdd10
[    3.950495]  md9: unknown partition table
Like I said, I don't know what caused it to enter "degraded" status; usually I can track that down to a physically failing disk. My thought is to wipe the "removed" disk and reinsert it into the array as if it were a new replacement. The problem is that it has been about three years since I last had to swap out the drives, and even that was not really a textbook swap. (Seriously: one drive failed, I sent it out for warranty, and by the time it returned the second drive had failed, so I actually built a new array with 1 TB drives, copied the data from the single remaining drive onto the new array, and then swapped that one out for warranty too. When I had both back, they became the OS drives.) For system specs see "Abacus" in my signature. Thank you.

BTW.... I had the Intel BIOS perform a consistency check and it came back fine, but Linux still marks it as removed.
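
For reference, the wipe-and-re-add I'm considering would look roughly like this. This is only a sketch based on the output above; it assumes /dev/sda1 is the dropped member, and adding it back triggers a full resync, so device names should be double-checked and a backup taken first.
Code:
# compare event counters: the dropped member should be behind the live array
mdadm --examine /dev/sda1 | grep -E 'Array UUID|Events'
mdadm --detail /dev/md0 | grep Events

# wipe the stale superblock on the dropped partition, then add it back as a fresh member
mdadm --zero-superblock /dev/sda1
mdadm --manage /dev/md0 --add /dev/sda1

# watch the resync progress
watch cat /proc/mdstat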

Thanks.
 
Old 03-17-2014, 10:04 AM   #2
smallpond (Senior Member)

You can either do BIOS "fake RAID" or md software RAID, but not both. The system looks confused about this. Which are you using?

Note: The device UUID on the two disks in md RAID should not be the same. They look the same because the BIOS RAID has overwritten one of your disks.
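
For example, one way to see which RAID metadata is actually sitting on the disks (just a sketch; it assumes dmraid and a reasonably recent mdadm are installed):
Code:
# list any BIOS/firmware RAID signatures (Intel IMSM, Silicon Image Medley) found on the disks
dmraid -r

# list md superblocks and their UUIDs as mdadm sees them
mdadm --examine --scan --verbose

# show whether mdadm detects an Intel firmware-RAID-capable (IMSM) controller
mdadm --detail-platform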

Last edited by smallpond; 03-17-2014 at 10:08 AM.
 
Old 03-17-2014, 11:35 AM   #3
mpyusko (Original Poster)

Here's the history....

I had two 500 GB Seagate drives in a NAS200, configured as RAID1. I outgrew the NAS200 and also needed a regular development server, something I could toy with and test websites and other things on, so I bought the SIL card to move the two drives from the NAS200 into a regular Linux box. I configured them as a RAID1 array using the SIL BIOS.

After some performance testing and cost evaluations, I decided to get an SE7520BD2 (DDR2) with a pair of 2.8 GHz Xeons. At that point I moved the data array into the new server (and the basement... holy crap it's loud!), still configured as a RAID1 array via the SIL card. Linux detected that they were a RAID1 array and assembled them as such.

Then those drives crapped out on me, so I bought two new Hitachi 1 TB drives and configured them via the Intel BIOS as RAID1. I copied the data over from the old pre-fail 500 GB drive to the new array and sent it out for warranty. In the meantime I was using an older single SATA drive for the OS. When I finally had both 500 GB drives back from Seagate, I created a single RAID1 array via the SIL BIOS and moved the Debian OS onto several RAID1 volumes (see above), leaving one volume empty in case I needed to install Windows to carry out a firmware or BIOS update.

At boot, the Intel BIOS initializes and assembles /dev/md0, but then Linux does too. Next the SIL BIOS initializes and assembles a single RAID1 array that everything else is broken up and installed onto, and Linux then initializes and assembles nine RAID1 volumes. It doesn't seem to see or care what the BIOS does. As I stated before, I was under the impression that assembling via the BIOS was only so Windows knew there was an array there. This is how it has been working for a few years.

Now the only thing I can think of is that it was able to maintain this level of sanity via some bug that was recently fixed. But that's pure speculation, since you tell me it's not supposed to work this way(?).

At this point all my data is intact, and the machine continues to function as normal except for the one degraded array. So should I disassemble the arrays in both the SIL and Intel BIOS, wipe the "removed" drive, and let Linux do its own thing? Or is there some other route I should take? I should note that I have parts on order, so I plan on installing Windows to run some benchmarks as soon as they come in.

BTW, the Intel RAID controller is integrated into the mobo. I'm not clear whether it is in fact hardware RAID or "fake RAID". http://download.intel.com/support/mo...hnology_ug.pdf

Thanks.
 
Old 03-17-2014, 12:15 PM   #4
smallpond (Senior Member)

Here's a description. The Intel motherboard RAID does not have its own CPU, which is why it's called fake RAID.

http://www.servethehome.com/differen...software-raid/
 
Old 03-17-2014, 12:55 PM   #5
smallpond (Senior Member)

BTW, I don't know what to suggest for you to do. If you want to use the BIOS RAID, you should use the Intel driver instead of md as described in the manual that you linked to. If you want to use md, you should turn off the BIOS RAID. However, I don't know what doing either of these things will do to your current data. Make sure you back it up before doing anything. You could also continue with the BIOS thinking the drives are mirrored and md thinking the partitions are mirrored. That's been running so far, but I don't recommend it.
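
The backup itself can be as simple as an rsync to another disk or host. Illustrative only; the mount points below are placeholders, so substitute your own.
Code:
# copy everything off the array, preserving permissions, hard links, ACLs and xattrs
# /mnt/data and /mnt/backup are placeholder mount points
rsync -aHAX --progress /mnt/data/ /mnt/backup/data/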

If you decide to go with md, make sure to do grub-install to both sda and sdb so you can boot from either in case of a failure.
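
On Debian that is roughly the following (a sketch; verify the device names first):
Code:
# put the GRUB boot code on both mirror members so either disk can boot on its own
grub-install /dev/sda
grub-install /dev/sdb
update-grub        # regenerate grub.cfg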
 
Old 03-17-2014, 02:30 PM   #6
mpyusko (Original Poster)

I used to have grub installed to sda and sdb, but since a recent kernel update it won't fit in the 63 sectors. I also had grub installed on C & D, and that is how the system is managing to boot right now. It's another problem I'm working on, but I don't post questions until I've either given it my best effort or it's critical for production.
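
A quick way to check how much embedding room grub actually has is to look at where the first partition starts (a sketch; partitions that start at sector 63 leave only about 31 KiB after the MBR, while a 2048-sector start leaves about 1 MiB):
Code:
# print partition start sectors; the gap between sector 1 and the first partition
# is where grub embeds its core.img
parted /dev/sda unit s print
parted /dev/sdb unit s print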
 
Old 04-03-2014, 10:36 AM   #7
mpyusko (Original Poster)

I ordered 2 new 3TB WD Red drives. I'm going to move my data to those as EXT4. I already went through my current configuration, but now I'm going to make some changes. I'll lay it out then ask.... feel free to advise.

I have the SIL "fake RAID" card, and I also have Intel's integrated "fake RAID" controller. The mobo BIOS will not let me select anything attached to the SIL card as the boot device. I am going to attach the two 3 TB drives to the SIL card (mirrored) and keep the two 500 GB drives, but attach them to the Intel controller (also mirrored). The 500s will contain a bootable partition for Win7 Pro x64 and several partitions for the Debian Testing LAMP setup. There will also be a partition of "flex-space" that I occasionally use for temporary backups and special projects. All partitions will be EXT4, except Windows of course. Currently I have a mix of reiserfs and ext3.
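
For what it's worth, the new mirror could be set up along these lines once the drives arrive. This is a rough sketch only: /dev/sde, /dev/sdf and /dev/md10 are placeholder names, so check lsblk before running anything, and note that 3 TB drives need GPT labels.
Code:
# GPT labels are required for >2 TB disks; create one RAID partition spanning each drive
parted --script /dev/sde mklabel gpt mkpart raid 1MiB 100%
parted --script /dev/sdf mklabel gpt mkpart raid 1MiB 100%

# assemble the mirror and format it ext4
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mkfs.ext4 -L data /dev/md10

# record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u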

The server will act as a development web server running ISPConfig 3, plus an FTP and mail server, SSH, NFS, DLNA and Samba shares. It also crunches for BOINC. The data partition (3 TB) will hold home movies, photos, music, shared downloads, etc. The 500s will house my FTP, WWW, mail, etc.

The server has been running for about 3 years and has evolved into a mess. I'm going to start with a fresh install, carrying over as few config files as possible.

Now the questions...

1. What is the point of a "fake RAID" controller if Linux and Windows can do the RAID functions themselves?

2. Originally I configured RAID at the hardware BIOS level, but Linux seemed to ignore it (or not detect it), so I proceeded to create new volumes and arrays under Linux.

3. RAID1 (mirrored) enhances read performance because it can read from both drives at the same time, while write performance remains unchanged since it has to write to both drives at once, correct? If so, should hdparm -t /dev/md0 return a higher read speed than, say, hdparm -t /dev/sda? Or is there some other easy way to benchmark array performance? (See the sketch below.)
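
A rough way to compare, as mentioned in question 3 (a sketch only; hdparm -t measures buffered sequential reads, and md RAID1 balances reads per request, so a single sequential stream may not show much gain over one disk):
Code:
# single-stream sequential read from each member and from the array
hdparm -t /dev/sda
hdparm -t /dev/sdb
hdparm -t /dev/md0

# the same comparison bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct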

I thought I understood RAID but apparently I misinterpreted something somewhere.

Thanks
 
  

