Old 10-17-2006, 08:15 AM   #1
jahlewis
LQ Newbie
 
Registered: Oct 2006
Location: Charlottesville, VA
Distribution: smeserver
Posts: 2

Rep: Reputation: 0
RAID help restoring a degraded array partition


I appear to have a degraded RAID array; specifically, hda2 is bad. What is the best way to recover it? This is a long post with details, but the heart of it is in the last sentence...

Code:
Current RAID status:

Personalities : [raid1]
md1 : active raid1 hdb2[1]
      155918784 blocks [2/1] [_U]

md2 : active raid1 hdb3[1] hda3[0]
      264960 blocks [2/2] [UU]

md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]

unused devices: <none>
There should be two RAID devices, not 3.
Here is my current filesystem setup:

Code:
[root@gluon]# df -h 
Filesystem            Size  Used Avail Use% Mounted on 
/dev/md1              147G  8.9G  131G   7% / 
/dev/md0               99M   32M   63M  34% /boot 
none                  315M     0  315M   0% /dev/shm 
/dev/hdd1             230G   63G  156G  29% /mnt/bigdisk
And here are some details on the RAID settings for md0 and md1 (md2 looks just like md0):

Code:
[root@gluon]# mdadm -D /dev/md0 
/dev/md0: 
        Version : 00.90.01 
  Creation Time : Thu Jan 12 19:26:31 2006 
     Raid Level : raid1 
     Array Size : 104320 (101.88 MiB 106.82 MB) 
    Device Size : 104320 (101.88 MiB 106.82 MB) 
   Raid Devices : 2 
  Total Devices : 2 
Preferred Minor : 0 
    Persistence : Superblock is persistent 

    Update Time : Mon Oct 16 18:38:10 2006 
          State : clean 
 Active Devices : 2 
Working Devices : 2 
 Failed Devices : 0 
  Spare Devices : 0 


    Number   Major   Minor   RaidDevice State 
       0       3        1        0      active sync   /dev/hda1 
       1       3       65        1      active sync   /dev/hdb1 
           UUID : 5139bc2e:39939d3e:5abd791c:3ce0a6ef 
         Events : 0.3834


[root@gluon]# mdadm -D /dev/md1 
/dev/md1: 
        Version : 00.90.01 
  Creation Time : Thu Jan 12 19:21:55 2006 
     Raid Level : raid1 
     Array Size : 155918784 (148.70 GiB 159.66 GB) 
    Device Size : 155918784 (148.70 GiB 159.66 GB) 
   Raid Devices : 2 
  Total Devices : 1 
Preferred Minor : 1 
    Persistence : Superblock is persistent 

    Update Time : Mon Oct 16 18:27:38 2006 
          State : clean, degraded 
 Active Devices : 1 
Working Devices : 1 
 Failed Devices : 0 
  Spare Devices : 0 


    Number   Major   Minor   RaidDevice State 
       0       0        0       -1      removed 
       1       3       66        1      active sync   /dev/hdb2 
           UUID : 0a968a22:d1b0d2bd:ab248bae:ec482cc1 
         Events : 0.12532934
As I interpret this, /dev/md1 is broken, with /dev/hda2 not being mirrored.

However, if I try to add hda2 back to md1, I get an invalid argument error:

Code:
[root@gluon]# mdadm -a /dev/md1 /dev/hda2 
mdadm: hot add failed for /dev/hda2: Invalid argument
So... I tried removing the partition first:

Code:
[root@gluon]# mdadm /dev/md1 -r /dev/hda2 -a /dev/hda2 
mdadm: hot remove failed for /dev/hda2: No such device or address
So now what? Is the / partition on hda hosed? How do I rebuild it? I'm quickly getting out of my depth here...

FWIW

Code:
[root@gluon init.d]# mdadm -E /dev/hdb2 
/dev/hdb2: 
          Magic : a92b4efc 
        Version : 00.90.00 
           UUID : 0a968a22:d1b0d2bd:ab248bae:ec482cc1 
  Creation Time : Thu Jan 12 19:21:55 2006 
     Raid Level : raid1 
    Device Size : 155918784 (148.70 GiB 159.66 GB) 
   Raid Devices : 2 
  Total Devices : 1 
Preferred Minor : 1 

    Update Time : Mon Oct 16 18:27:38 2006 
          State : clean 
 Active Devices : 1 
Working Devices : 1 
 Failed Devices : 0 
  Spare Devices : 0 
       Checksum : b0a4fa9a - correct 
         Events : 0.12532934 


      Number   Major   Minor   RaidDevice State 
this     1       3       66        1      active sync   /dev/hdb2 
   0     0       0        0        0      removed 
   1     1       3       66        1      active sync   /dev/hdb2


[root@gluon init.d]# mdadm -E /dev/hda2 
/dev/hda2: 
          Magic : a92b4efc 
        Version : 00.90.00 
           UUID : 0a968a22:d1b0d2bd:ab248bae:ec482cc1 
  Creation Time : Thu Jan 12 19:21:55 2006 
     Raid Level : raid1 
    Device Size : 155918784 (148.70 GiB 159.66 GB) 
   Raid Devices : 2 
  Total Devices : 2 
Preferred Minor : 1 

    Update Time : Sun Oct 15 21:07:07 2006 
          State : clean 
 Active Devices : 2 
Working Devices : 2 
 Failed Devices : 0 
  Spare Devices : 0 
       Checksum : b0a3ce33 - correct 
         Events : 0.12532928 


      Number   Major   Minor   RaidDevice State 
this     0       3        2        0      active sync   /dev/hda2 
   0     0       3        2        0      active sync   /dev/hda2 
   1     1       3       66        1      active sync   /dev/hdb2

WhatsHisName in thread 429857 suggests running mdadm -C if all else fails... So I did:

Code:
[root@gluon init.d]# mdadm -C /dev/md1 -l1 -n2 /dev/hda2 /dev/hdb2 
mdadm: /dev/hda2 appears to contain an ext2fs file system 
    size=155918784K  mtime=Mon Oct 16 18:27:39 2006 
mdadm: /dev/hda2 appears to be part of a raid array: 
    level=1 devices=2 ctime=Thu Jan 12 19:21:55 2006 
mdadm: /dev/hdb2 appears to contain an ext2fs file system 
    size=155918784K  mtime=Sun Oct 15 20:33:12 2006 
mdadm: /dev/hdb2 appears to be part of a raid array: 
    level=1 devices=2 ctime=Thu Jan 12 19:21:55 2006 
Continue creating array?
And I chickened out; I'm afraid of wiping the contents of the surviving partition. Does anyone know what would happen if I chose to continue?

Last edited by jahlewis; 10-17-2006 at 08:19 AM.
 
Old 10-17-2006, 03:53 PM   #2
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
No, running the last command would be a bad idea. mdadm -C (create) writes fresh superblocks to both partitions and kicks off a resync, and if the mirror rebuilds in the wrong direction you risk overwriting the good copy on hdb2.

But before doing anything else, you should back up the system.
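If you don't already have a current backup, here is a minimal sketch of a quick safety copy, assuming your /mnt/bigdisk has the free space for it (your df shows ~156G available). The target path is just an example:

Code:
# Copy the root filesystem to the spare disk, staying on one
# filesystem (-x) so /proc, /boot and /mnt/bigdisk itself are skipped.
# /mnt/bigdisk/root-backup/ is a hypothetical target path; adjust to taste.
rsync -aHx / /mnt/bigdisk/root-backup/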

Quote:
Code:
# mdadm -D /dev/md1 

          State : clean, degraded 

    Number   Major   Minor   RaidDevice State 
       0       0        0       -1      removed 
       1       3       66        1      active sync   /dev/hdb2
As I interpret this, /dev/md1 is broken, with /dev/hda2 not being mirrored.
Yes and no. The array is degraded and the only active device is hdb2; hda2 is out of service.
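It would also be worth finding out why hda2 got kicked out before you put it back. A quick health check, assuming smartmontools is installed (this is just a sketch for diagnosis, not part of the repair itself):

Code:
# SMART health, attributes and error log for the suspect drive
smartctl -a /dev/hda
# Look for the kernel messages from when hda2 was dropped
grep -i hda /var/log/messages | tail -20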

Quote:
...However, if I try to add hda2 back to md1, I get an invalid argument error:

Code:
# mdadm -a /dev/md1 /dev/hda2
mdadm: hot add failed for /dev/hda2: Invalid argument
So... I tried removing the partition first:

Code:
# mdadm /dev/md1 -r /dev/hda2 -a /dev/hda2
mdadm: hot remove failed for /dev/hda2: No such device or address
The first command failed because of bad syntax. The second command failed because there was no drive to remove (i.e., it had already been removed).


Retry the add using the correct syntax:

Code:
# mdadm /dev/md1 -a /dev/hda2
This is covered in the mdadm manpage section “MANAGE MODE”.
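For reference, a sketch of the full manage-mode cycle and how to watch the rebuild afterwards. The fail/remove steps only apply when the kernel still counts the device as an array member, which in your case it apparently does not:

Code:
# Manage mode: array device first, then the operation and the member.
# Fail and remove are only needed if the member is still listed:
mdadm /dev/md1 --fail /dev/hda2
mdadm /dev/md1 --remove /dev/hda2
# Add it back and let the mirror resync:
mdadm /dev/md1 --add /dev/hda2
# Watch the resync progress:
watch cat /proc/mdstat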
 
Old 10-17-2006, 07:55 PM   #3
jahlewis
LQ Newbie
 
Registered: Oct 2006
Location: Charlottesville, VA
Distribution: smeserver
Posts: 2

Original Poster
Rep: Reputation: 0
Thanks, WhatsHisName. I tried that and got:
Code:
[root@gluon ~]# mdadm /dev/md1 -a /dev/hda2
mdadm: hot add failed for /dev/hda2: Invalid argument
Anything else I can try?

I'm now verifying backups (BackupPC)...

From reading the mdadm manpage, would this be what I need now?

Code:
mdadm -v --assemble --run --force /dev/md1 /dev/hdb2 /dev/hda2
Is this going to wipe the data on hdb2?
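I also saw suggestions that a stale superblock can block a re-add. Would zeroing it on the failed partition first be the way to go? As I understand it, this only erases the RAID metadata on hda2 and never touches hdb2, but please correct me if I'm wrong:

Code:
# Wipe the stale RAID superblock on the failed member only (hda2),
# then add it back to the array as a fresh device:
mdadm --zero-superblock /dev/hda2
mdadm /dev/md1 -a /dev/hda2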

Or should I just remove hda from the three mds, pull the drive, replace or reformat it, replicate the partitions with sfdisk, and then add them back to the mds?
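If it comes to replacing the drive, I assume the partition copy would go something like this, with hdb being the good drive and hda the replacement (after double-checking the device names, obviously):

Code:
# Dump the partition table from the good drive and write it
# to the replacement disk:
sfdisk -d /dev/hdb | sfdisk /dev/hda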
 
  

