Old 01-09-2013, 10:42 PM   #1
suicidaleggroll
Senior Member
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 3,202

mdadm partition missing


So I have an mdadm-controlled RAID 10 that's been working fine for about 2 years. I backed up everything on the RAID in preparation for the move to a hardware RAID 5 (I was going to wipe the 4 disks, re-initialize them in a hardware RAID 5, then copy everything back onto the array).

Unfortunately, it appears the RAID card is too new for my OS (Adaptec 6405 on OpenSUSE 11.4), so I decided to update my OS to OpenSUSE 12.2. Before updating the OS, I decided to back up everything on my root drive to the backup drive as well, so that I could restore my config files, etc. on the new OS.

However, I made a mistake in my copy command and managed to remove several important files from the RAID backup. My goal now is to re-mount my original RAID 10 and re-copy those files back onto the backup drive before going forward with the format, reinstall, and re-initialization of the RAID on the hardware controller.

The problem I'm facing is that my RAID 10 array will not re-mount, and I'm not sure why. mdadm seems happy, fdisk seems happy, yet the partition device does not exist for me to mount.

Code:
# mdadm --detail /dev/md/Raid
/dev/md/Raid:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid10
     Array Size : 3907023872 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953512064 (1863.01 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 4

          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K


           UUID : ddfd8409:89e7f880:5dfdc893:e49e88bb
    Number   Major   Minor   RaidDevice State
       3       8        0        0      active sync   /dev/sda
       2       8       16        1      active sync   /dev/sdb
       1       8       32        2      active sync   /dev/sdc
       0       8       48        3      active sync   /dev/sdd
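For reference, the kernel's own view of the IMSM container and its member array can be cross-checked against mdadm's (a diagnostic sketch; this output wasn't captured at the time):
Code:
# /proc/mdstat lists both md devices: the imsm container (md127) and the raid10 member (md126)
cat /proc/mdstat
# the IMSM metadata on an individual member disk can be inspected directly
mdadm --examine /dev/sda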
Code:
# ls -l /dev/md/Raid /dev/md/imsm0 
lrwxrwxrwx 1 root root 8 Jan 10  2013 /dev/md/Raid -> ../md126
lrwxrwxrwx 1 root root 8 Jan 10  2013 /dev/md/imsm0 -> ../md127
Code:
# fdisk -l /dev/md126

WARNING: GPT (GUID Partition Table) detected on '/dev/md126'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/md126: 4000.8 GB, 4000792444928 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814047744 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1             256  4294967550  2147483647+  83  Linux
/dev/md126p4               1           1           0+  ee  GPT
Partition 4 does not start on physical sector boundary.

Partition table entries are not in disk order
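That fdisk warning matters here: fdisk of that era can't read GPT, so the listing above comes from the protective MBR and may not reflect the real GPT layout. A better look at the actual table would come from parted or gdisk (a sketch; neither was run at the time):
Code:
# parted understands GPT natively; print the table in sectors
parted /dev/md126 unit s print
# gdisk also verifies both the primary and backup GPT headers
gdisk -l /dev/md126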
Code:
# ls -l /dev/md1*
brw-rw---- 1 root disk 9, 126 Jan 10  2013 /dev/md126
brw-rw---- 1 root disk 9, 127 Jan 10  2013 /dev/md127
Code:
# mount /dev/md126p1 /home
mount: special device /dev/md126p1 does not exist




So what am I missing?
/dev/md127 used to be the container
/dev/md126 used to be the device
/dev/md126p1 used to be the 4TB ext4 partition on the RAID

It seems /dev/md127 is happy, /dev/md126 is happy, and fdisk reports the existence of /dev/md126p1 as usual, yet there is no /dev/md126p1 device node for me to mount.
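One way to narrow that down (a sketch, assuming /proc and sysfs are mounted as usual) is to check whether the kernel ever registered the partition, independently of the /dev nodes that udev creates:
Code:
# if md126p1 is missing here, the kernel never re-scanned the partition table
grep md126 /proc/partitions
# partitions the kernel knows about appear as subdirectories of the parent device
ls /sys/block/md126/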

I copied the output of mdadm --detail /dev/md126 when it was working fine:
Code:
# mdadm --detail /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid10
     Array Size : 3907023872 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953512064 (1863.01 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 4

          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K


           UUID : ddfd8409:89e7f880:5dfdc893:e49e88bb
    Number   Major   Minor   RaidDevice State
       3       8        0        0      active sync   /dev/sda
       2       8       16        1      active sync   /dev/sdb
       1       8       32        2      active sync   /dev/sdc
       0       8       48        3      active sync   /dev/sdd
The only difference between that and what I'm seeing now is the State used to be "active" and now it's "clean". Is there some command I'm missing to set the RAID to active status?
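For what it's worth, "clean" vs "active" only reflects whether the array has writes in flight; a clean array is fully usable, so that difference shouldn't prevent mounting. The state can also be read straight from sysfs:
Code:
# "clean" just means no writes pending; it is not a fault condition
cat /sys/block/md126/md/array_state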

Last edited by suicidaleggroll; 01-09-2013 at 10:49 PM.
 
Old 01-09-2013, 11:31 PM   #2
suicidaleggroll
Senior Member
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 3,202

Original Poster
So after a significant amount of banging my head against the wall and trying various commands that didn't work (mknod, etc.), I finally fixed it:

Code:
# partprobe /dev/md126
This resulted in the output:
Code:
Error: The primary GPT table is corrupt, but the backup appears OK, so that will be used.
After which /dev/md126p1 was created, and:
Code:
# mount -a
# df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 36G  9.9G   26G  28% /
devtmpfs              3.9G  236K  3.9G   1% /dev
tmpfs                 4.0G  2.4M  4.0G   1% /dev/shm
/dev/sde2              36G  9.9G   26G  28% /
/dev/sdf1             3.6T  1.9T  1.8T  52% /media/backups
tmpfs                 4.0G  240K  4.0G   1% /tmp
/dev/md126p1          3.6T  2.0T  1.6T  56% /home
/home is back. Now I can complete my backups and move forward with the format and install of OpenSUSE 12.2.
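One follow-up worth noting: partprobe reported the primary GPT as corrupt and fell back to the backup copy. Rewriting the primary header from the good backup would avoid relying on that fallback again. A sketch, assuming gdisk is installed:
Code:
# gdisk's recovery menu can rebuild the primary GPT from the backup copy:
#   r = recovery/transformation menu
#   b = use backup GPT header (rebuilding main)
#   w = write the repaired table and exit
gdisk /dev/md126
# (partx -a /dev/md126 is an alternative to partprobe for registering partitions)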
 
  

