Old 06-02-2015, 11:28 AM   #1
wincen
Member
 
Registered: Jun 2002
Posts: 33

Rep: Reputation: 15
degraded RAID 1 mdadm


I have a degraded software RAID 1 array. md0 is in a state of clean, degraded, while md1 is active (auto-read-only) and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>
mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed
mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
 
Old 06-02-2015, 12:20 PM   #2
lazydog
Senior Member
 
Registered: Dec 2003
Location: The Key Stone State
Distribution: CentOS Sabayon and now Gentoo
Posts: 1,249
Blog Entries: 3

Rep: Reputation: 194
For md1 you could try

Code:
mdadm --readwrite /dev/md1
As for md0, you could remove the failed disk from the raid and then check it for bad blocks, or maybe simply reformat it, and then add it back into the raid.
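
If you go that route, the sequence would look roughly like this (a sketch assuming the missing member is /dev/sdb1, which is what the /proc/mdstat output above suggests; adjust device names to match your system):

Code:
# if md0 still lists the member, fail and remove it first
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# non-destructive read-only surface scan (slow on a ~1 TB partition)
badblocks -sv /dev/sdb1

# add the partition back and watch the resync
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat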
 
Old 06-02-2015, 01:31 PM   #3
SCSIraidGURU
Member
 
Registered: Oct 2014
Posts: 69

Rep: Reputation: Disabled
I would replace the bad drive. Reformatting it can cause issues if the sectors are bad. Back up first.
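
For the record, swapping in a replacement disk with mdadm mostly comes down to copying the partition layout from the surviving drive and adding the new partitions to the arrays. A rough sketch, assuming MBR partition tables and that the replacement shows up as /dev/sdb (if the whole drive is pulled, md1's old sdb2 member has to be failed and removed first):

Code:
# copy the partition table from the good disk (sda) to the replacement (sdb)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# add the new partitions back into their arrays
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# follow the rebuild
watch cat /proc/mdstat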
 
Old 06-02-2015, 02:44 PM   #4
wincen
Member
 
Registered: Jun 2002
Posts: 33

Original Poster
Rep: Reputation: 15
Is the degraded array due to bad blocks, or just the drives somehow getting out of sync?
Is this why the system is putting md1 in read-only mode automatically?
 
Old 06-02-2015, 02:47 PM   #5
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Your mdadm output is claiming there is no /dev/sdb1 in md0; its status is "removed". This is likely caused by corruption in that partition, possibly due to a failing drive.
 
Old 06-02-2015, 03:22 PM   #6
SCSIraidGURU
Member
 
Registered: Oct 2014
Posts: 69

Rep: Reputation: Disabled
I have 30+ years of RAID experience at home and in the data center I manage. To get the best performance out of RAID, you want matching drives with the same firmware. Dissimilar drives can cause performance and reliability issues. If a RAID controller removes a drive, it usually means the drive failed or was disconnected by a loose power cable. I have seen SATA cables that don't seat tightly and vibrate loose. I would suggest replacing both drives with identical drives running the same firmware; you will have fewer problems in the future. Installing a faster or larger drive can cause write issues with the older drive, and with software RAID instead of hardware RAID this becomes more of a concern. At home I run the same SAS RAID drives and controllers I use in my data center.
 
Old 06-02-2015, 04:01 PM   #7
wincen
Member
 
Registered: Jun 2002
Posts: 33

Original Poster
Rep: Reputation: 15
Both my drives are identical, however they were on an older server which died. I removed the drives, backed up one, and placed them in a new server. It's possible the drive is damaged, but is it also possible that when I backed up one drive I somehow got the drives out of sync?

I'll give these suggestions a try.
 
Old 06-02-2015, 04:09 PM   #8
frostschutz
Member
 
Registered: Apr 2004
Distribution: Gentoo
Posts: 95

Rep: Reputation: 28
auto-read-only is not an error. It just means nothing has tried to write to the array yet, which is expected for boot or swap partitions as long as they are not in active use.

As for the raid that's missing a drive, show the mdadm --examine output for both of its members. The update time of the missing drive should tell you how long it's been missing, and if your machine was running at the time, you can check your system log history for any related messages.

If the data on /dev/md0 is OK, you can re-add the missing disk and see if the sync goes OK. Otherwise check dmesg why it failed, also check SMART data of both disks.

Code:
mdadm /dev/md99 --add /dev/sdxy1
(maybe need --fail and --remove before you can --add. Or alternatively, --re-add)
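
For the dmesg and SMART checks mentioned above, something along these lines gives a quick overview (smartctl comes from the smartmontools package; device names assume the disks in this thread):

Code:
# recent kernel messages about the array or the second disk
dmesg | grep -iE 'md0|sdb' | tail -n 50

# overall SMART health, plus the attributes that usually flag a dying disk
smartctl -H /dev/sda
smartctl -H /dev/sdb
smartctl -A /dev/sdb | grep -iE 'reallocated|pending|uncorrect'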

Last edited by frostschutz; 06-02-2015 at 04:19 PM.
 
Old 06-03-2015, 11:02 AM   #9
wincen
Member
 
Registered: Jun 2002
Posts: 33

Original Poster
Rep: Reputation: 15
mdadm --readwrite /dev/md1 does put md1 in read-write mode, but I'm not sure why its capacity shows up as only 4 GB; it's the exact same drive as md0, and both should be 1 TB.

Here's the output of mdadm --examine for each drive.

mdadm --examine /dev/sdb1
Code:
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 71c80baf:c5020223:fbb1120f:3aa695e2
           Name : name:0  (local to host name)
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1945698304 (927.78 GiB 996.20 GB)
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 1945698032 (927.78 GiB 996.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=272 sectors
          State : clean
    Device UUID : 

    Update Time : Mon Feb 16 13:00:26 2015
       Checksum : 35bdceae - correct
         Events : 3496236


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdb2
Code:
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 02d147d7:15cdc2da:819437ac:fc408339
           Name : name:1  (local to host name)
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7819264 (3.73 GiB 4.00 GB)
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 7819240 (3.73 GiB 4.00 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=24 sectors
          State : clean
    Device UUID : 

    Update Time : Wed Jun  3 01:35:55 2015
       Checksum : f918a9ed - correct
         Events : 117


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
 
Old 06-03-2015, 11:59 AM   #10
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
Quote:
Originally Posted by wincen
mdadm --readwrite /dev/md1 does put md1 in read-write mode, but I'm not sure why its capacity shows up as only 4 GB; it's the exact same drive as md0, and both should be 1 TB.
This question does not make sense. /dev/md0 and /dev/md1 are not drives; they are RAID arrays built from partitions on your drives. Both arrays use both drives: md0 uses the first partition on the two drives, which is 1 TB, and md1 uses the second partition on the two drives, which is 4 GB.

/dev/sda is your first drive, /dev/sdb is your second drive. /dev/sda1 is the first partition on the first drive, /dev/sda2 is the second partition on the first drive, /dev/sdb1 is the first partition on the second drive, /dev/sdb2 is the second partition on the second drive.

/dev/md0 uses /dev/sda1 and /dev/sdb1 and is 1 TB. This means that sda1 and sdb1 (the first partition on each drive) are each 1 TB.

/dev/md1 uses /dev/sda2 and /dev/sdb2 and is 4 GB. This means that sda2 and sdb2 (the second partition on each drive) are each 4 GB.

That is what mdadm is reporting. Is that correct, or is it not?
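
A quick way to check that layout for yourself is to list the block devices; lsblk (part of util-linux) shows each partition and the md array built on top of it:

Code:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda /dev/sdb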

Last edited by suicidaleggroll; 06-03-2015 at 12:01 PM.
 
Old 06-03-2015, 12:06 PM   #11
SCSIraidGURU
Member
 
Registered: Oct 2014
Posts: 69

Rep: Reputation: Disabled
Did you configure hardware raid on your motherboard before installing Linux?
 
Old 06-03-2015, 01:24 PM   #12
wincen
Member
 
Registered: Jun 2002
Posts: 33

Original Poster
Rep: Reputation: 15
SCSIraidGURU, I'm using a software RAID 1 from an older Debian install. The installer configured it for me. I'm fairly inexperienced with RAID. I am not using a hardware RAID.
 
  

