Old 06-19-2006, 01:10 PM   #1
1N4148
LQ Newbie
 
Registered: Jun 2006
Distribution: Gentoo
Posts: 22

Rep: Reputation: 15
RAID array gone, can't get it working again!


Hello there
I've got Ubuntu Dapper Drake (6.06) installed on a normal PATA disk (/dev/hda2). I also have two SATA drives attached to a Silicon Image controller (hardware RAID deactivated, because it's just a fake software RAID), which I wanted to use for a RAID-0 array.
They are /dev/sda and /dev/sdb.
I don't have any other SATA drives or SATA controllers.
I set them up with mdadm (I had to load the md module first and set it to autoload at boot, and also had to create a /dev/md0 device). I formatted them with ReiserFS, and it worked fine so far. So I set the array up in fstab to mount at /home, to use as a fast disk for my files, copied my home onto the array and deleted the old one in /home.
Then I tried to reboot. Well, the array refuses to mount, and mdadm says it doesn't have anything in its config files (it worked before the reboot!).
So I tried the --assemble command to see if I could get the bugger going again. No luck: "Error opening device /dev/sda1: No such file or directory", it says. Huh?
Now I'm feeling kind of stupid for deleting my home and not being able to mount the array (which isn't an array anymore, because mdadm won't work) :s
Why is /dev/sda nonexistent to mdadm? The block device is there; I checked it in mc. I can also access sda/sdb with hdparm, so that's not the issue. mdadm refuses to see my two SATA disks, but it can see my PATA disk just fine. Both SATA drives are definitely NOT mounted and therefore should be usable.
What's the matter? What information do you need? How do I fix this?
Help is appreciated, because I need my home back so I don't have to use Windoze anymore :/
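
(A quick diagnostic sketch, not from the original post, to check what the kernel and mdadm can actually see:)
Code:
# cat /proc/partitions    # does an sda1 entry appear at all?
# fdisk -l /dev/sda       # is the partition table still intact?
# mdadm -E /dev/sda1      # is there an md superblock on the partition?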

 
Old 06-20-2006, 11:50 AM   #2
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
Try running:
Code:
# mdadm -D -s
and see what it finds.

Debian and its variants are picky about having /etc/mdadm/mdadm.conf properly configured at boot. If the scan above identifies md0, update mdadm.conf as shown below and reboot.

Code:
# cd /etc/mdadm
# cp mdadm.conf mdadm.conf.`date +%y%m%d`
# echo "DEVICE partitions" > mdadm.conf
# mdadm -D -s >> mdadm.conf
 
Old 06-20-2006, 11:55 AM   #3
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
Forgot to add, the outputs might look something like this:
Code:
# mdadm -D -s
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=e0ce05c5:35ee4bf9:3f2969f5:3f14f098
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=599bd69c:7678cdde:b81c006f:6373e926

# cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=599bd69c:7678cdde:b81c006f:6373e926
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=e0ce05c5:35ee4bf9:3f2969f5:3f14f098

# cat /proc/mdstat
Personalities : [raid1]

md1 : active raid1 hde6[0] hdg6[1]
      104320 blocks [2/2] [UU]

md0 : active raid1 hde5[0] hdg5[1]
      104320 blocks [2/2] [UU]

unused devices: <none>

# mdadm -D /dev/md0

/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Oct 30 14:50:33 2005
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
    Device Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jun 20 06:27:25 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : e0ce05c5:35ee4bf9:3f2969f5:3f14f098
         Events : 0.2844

    Number   Major   Minor   RaidDevice State
       0      33        5        0      active sync   /dev/hde5
       1      34        5        1      active sync   /dev/hdg5

 
Old 06-21-2006, 12:17 PM   #4
1N4148
LQ Newbie
 
Registered: Jun 2006
Distribution: Gentoo
Posts: 22

Original Poster
Rep: Reputation: 15
Thanks for replying!

I ran it, but I don't get any output (I'm root).
(Un)fortunately, I figured out what the problem is: the partition table on both drives is gone!
How can this happen? I tried several tools to get it back (TestDisk, gpart and fixdisktable), but none of them can see a RAID-0 partition table.
I'm frustrated. How can this happen? Has the geometry changed? Did some piece of software delete it? It must have happened at the reboot; perhaps the Silicon Image BIOS?
I need to get it going again, at least to get my home back.

Anyone have any ideas?


--Here's what I did to get it going the first time (see the sketch after the list):
-Disabled RAID in the Silicon Image controller BIOS
-modprobe md and set it to autostart
-MAKEDEV /dev/md0
-Gave both drives a partition table and one partition each, using the whole disk space
-Used mdadm to create a RAID-0 array
-Copied my home onto the array
-Set it up in fstab to mount at /home
-Deleted the old home
-Rebooted
-Panicked :/
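
(A rough sketch of those steps as commands; the device names, the default chunk size and the fstab options are assumptions, not taken from the post:)
Code:
# modprobe md                                  # load the md driver
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkreiserfs /dev/md0                          # format the array with ReiserFS
# echo "/dev/md0 /home reiserfs defaults 0 2" >> /etc/fstab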

 
Old 06-22-2006, 11:43 AM   #5
1N4148
LQ Newbie
 
Registered: Jun 2006
Distribution: Gentoo
Posts: 22

Original Poster
Rep: Reputation: 15
I managed to get my data back. There is a Windoze utility named WinHex which managed to combine both drives with a given chunk size and extract the data without the partition table.
38 views for my thread... that's quite pathetic for a board as big as this one...
Anyway, thanks to WhatsHisName for trying to help me, and thanks to everyone who at least looked into the thread.
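
(A comparable recovery on Linux, not what was done here: re-create the original partition layout and then rebuild the RAID-0 superblocks in place. This is only safe if the level, device order and chunk size exactly match the original array; the chunk size of 64 below was mdadm's default at the time and is an assumption:)
Code:
# sfdisk /dev/sda < saved-layout.txt    # restore the old partition layout, if known
# sfdisk /dev/sdb < saved-layout.txt
# mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sda1 /dev/sdb1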
 
Old 07-16-2009, 02:59 AM   #6
sime2
LQ Newbie
 
Registered: Nov 2003
Location: Portsmouth, Hants, UK
Distribution: Red hat / Slack
Posts: 2

Rep: Reputation: 0

I have just had exactly this happen (three times now) with CentOS 5.2 and software RAID 5... Hmm, has a bug crept in there somewhere? Has anyone else come across this?
 
Old 07-16-2009, 04:48 AM   #7
sime2
LQ Newbie
 
Registered: Nov 2003
Location: Portsmouth, Hants, UK
Distribution: Red hat / Slack
Posts: 2

Rep: Reputation: 0

OK,

I have figured this out for my case, and it's NOT a bug. The key is sdx versus sdx1.

NOTE: I boot off a separate disk to my arrays, so if you use the info here, think about what you are doing!

The first chunk of a disk, say sda, holds the partition table; inside that, fdisk creates sda1, usually of type fd (Linux raid autodetect) when dealing with arrays.

The key to this is how you created the array, and being very careful with the use of sdx and sdx1.

If you pootle into fdisk and create sda1 of type fd on disk sda, and do the same for disks sdb, sdc and sdd, all will be well when you write them out and save: you can reboot, and the disks and their partitions will still be there afterwards, which is good :-)
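
(As a sketch, assuming the same four disks, that partitioning step looks roughly like this:)
Code:
# fdisk /dev/sda      # n = new primary partition 1, t = type fd, w = write
# fdisk -l /dev/sda   # verify: /dev/sda1 ... fd Linux raid autodetect
...and the same for sdb, sdc and sdd.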

If, however, after you have created the above, you do something similar to this...

Code:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]

(Note: you may be using a different RAID level and fewer or more disks, but the result will be the same.)

...you will be in an apparent world of pain when you reboot the box. Why? Because by building the array on the raw disks, you just overwrote the partition tables of all of them. If you pootled back into fdisk after a reboot, you would find NO partitions; yes, they have all gone, which is bad :-(

(Note: if you have an entry for the array in /etc/fstab, the first reboot will drop the system to a console asking for the root password or Ctrl-D to reboot. Log in as root, run mount / -o remount,rw, then cd to /etc and vi fstab, where you need to comment out the /dev/md0 entry. Reboot again and your box will come up.)
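
(The safe variant, in line with the sdx-versus-sdx1 point above, hands mdadm the partitions instead of the raw disks and leaves each partition table intact:)
Code:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1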

To fix this, edit (or create) /etc/mdadm.conf. Check the output of mdadm -E /dev/sda (and the same for sdb, sdc and sdd), looking at the value of UUID:, which should be the same for all disks. Copy it and paste it at the end of the ARRAY line of the conf file, after the UUID=. Also edit the DEVICE line to read /dev/sda etc. rather than /dev/sda1, and save.
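
(For illustration only, with the UUID value made up, the edited file might look like this:)
Code:
# mdadm -E /dev/sda | grep UUID
           UUID : e0ce05c5:35ee4bf9:3f2969f5:3f14f098

# cat /etc/mdadm.conf
DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=e0ce05c5:35ee4bf9:3f2969f5:3f14f098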

Now run:

Code:
# mdadm --assemble /dev/md0 /dev/sd[abcd]

(or similar, to suit your system)

Now mount the device:

Code:
# mount /dev/md0 /your_mount_point

And you should have everything back, which is good, so check it out with an ls.

Now re-edit /etc/fstab, uncomment the md0 entry and reboot your box.

This time when it comes back, everything should be working, and it's time for coffee and cake :-)
 
  

