RAID array gone, can't get it working again!
I've got Ubuntu Dapper Drake (6.06) installed on a normal PATA disk (/dev/hda2). I also have 2 SATA drives attached to a Silicon Image RAID controller (hardware RAID deactivated, since it's really just fake software RAID), which I wanted to use for a RAID-0 array.
They are /dev/sda and /dev/sdb
I don't have any other SATA-drives or SATA-controllers.
I set them up with mdadm (had to load the md module first and set it to autoload at boot; also had to create a /dev/md0 device). Formatted them with ReiserFS, and it worked fine so far. So I set the array up in fstab to mount at /home, to use as a fast disk to keep my files on, copied my home onto the array, and deleted the old one in /home.
Tried to reboot. Well, looks like the array refuses to mount, and mdadm says it doesn't have anything in its config files (it worked before the reboot!)
So I tried the --assemble command to see if I could get the bugger going again. No luck: "Error opening device /dev/sda1: No such file or directory", it says. Huh?
Now I'm feeling kinda stupid for deleting my home and not being able to mount the array (which isn't an array anymore, because mdadm won't work) :s
Why is /dev/sda non-existent to mdadm? The block device is there; I checked it in mc. I can also access sda/sdb with hdparm, so that's not the issue. mdadm refuses to see my 2 SATA disks, but sees my PATA disk just fine. Both SATA drives are definitely NOT mounted and should therefore be usable.
What's the matter? What information do you need? How do I fix this?
Help is appreciated, because I need my home back so I don't have to use Windoze anymore :/
Debian and its variants are picky about having /etc/mdadm/mdadm.conf properly configured at boot. If the above scan identified md0, then try updating mdadm.conf and rebooting.
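The scan mentioned above was presumably mdadm's examine scan; here's a hedged sketch of how that usually goes on Debian/Ubuntu (run as root, against real disks, so treat it as a sketch rather than something to paste blindly):

```shell
# Print an ARRAY line for every md superblock found on the disks:
mdadm --examine --scan

# If that shows md0, append it to the config file mdadm reads at boot
# (Debian/Ubuntu keep it in /etc/mdadm/mdadm.conf; some distros use
# /etc/mdadm.conf instead):
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
```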
Forgot to add, the outputs might look something like this:
Thanks for replying!
I ran it, but don't get any output (I'm root)
(Un)fortunately, I figured out what the problem is: the partition table on both drives is gone!
How can this happen? I tried several tools to get it back (TestDisk, gpart and fixdisktable), but none of them are able to see a RAID-0 partition table.
I'm frustrated. How can this happen? Has the geometry changed? Did some piece of software delete it? It must have happened at the reboot, perhaps the Silicon Image BIOS?
I need to get it going again, at least to get my home back :(
Anyone have any ideas?
Here's what I did to get it going the first time:
-Disabled RAID in SilImage Controller BIOS
-modprobe md and set it to autostart
-Gave both drives a partition table and one partition each using the whole diskspace
-Used mdadm to create a RAID0-array
-Copied my home onto the array
-Set it up in fstab to mount into /home
-Deleted old home
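For reference, the steps above would look roughly like this (a sketch, not a transcript: I'm assuming one partition per disk named sda1/sdb1 and mdadm's defaults; adjust to your system, and note these commands destroy data on the target disks):

```shell
# Partition each disk: one partition spanning the disk, type fd
# (Linux raid autodetect). Interactively in fdisk:
#   fdisk /dev/sda   ->  n, p, 1, <defaults>, t, fd, w
# (same again for /dev/sdb)

# Create the RAID-0 array from the PARTITIONS, not the whole disks:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put a filesystem on it:
mkreiserfs /dev/md0

# fstab entry so it mounts as /home, then mount it:
echo '/dev/md0  /home  reiserfs  defaults  0  2' >> /etc/fstab
mount /home
```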
I managed to get my data back. There's a Windoze utility named WinHex which managed to join both drives with a given chunk size and extract the data without the partition table.
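For the record, there's also a Linux-side approach that sometimes works for a lost RAID-0: recreating the array with the same member order and chunk size. This is hedged heavily; it only works if the parameters match the originals exactly, and the device names and chunk size below are assumptions, not known values from this thread:

```shell
# Recreate the RAID-0 with the SAME member order and chunk size as
# before. RAID-0 has no parity, so create just re-stripes the same
# sectors together; 64 KiB is mdadm's historical default chunk size.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
    /dev/sda1 /dev/sdb1

# Mount read-only and verify the data before trusting the result:
mount -o ro /dev/md0 /mnt
```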
38 views for my thread...this is quite pathetic for a board as big as this one... :rolleyes:
Anyway, thanks to WhatsHisName for trying to help me and thanks to everyone who at least looked into the thread ;)
I have just had exactly this happen (3 times now) with CentOS 5.2 and software RAID 5... hmm, has a bug crept in there somewhere? Has anyone else come across this?
I have figured this out for my case, and it's NOT a bug. The key is sdX versus sdX1.
NOTE: I boot off a separate disk from my arrays, so if you use the info here, think about what you are doing!
The first chunk of a disk, say sda, holds the partition table; after that, fdisk would normally put sda1, usually of type FD (Linux raid autodetect) when dealing with arrays.
The key to this is how you created the array, and being very careful with the use of sdX and sdX1.
If you pootle into fdisk and create sda1 of type fd on disk sda, and do the same for disks sdb, sdc, and sdd, all will be well when you write them out and save them. You can reboot, and the disks and their partitions will still be there after the reboot, which is good :-)
If, however, after you have created the above, you do something similar to this...
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[a,b,c,d]
(Note: you may be using raid X and fewer or more disks but the result will be the same)
you will be in an apparent world of pain when you reboot the box. Why? Because you just overwrote the partition tables of all your disks with the RAID signatures from the above mdadm command. So if you pootled back into fdisk after a reboot, you would find NO partitions; yes, they have all gone, which is bad :-(
(Note: the first reboot will drop the system to a console (if you have an entry in /etc/fstab) asking for the root password, or Ctrl-D to reboot. Log in as root and run mount / -o remount,rw, then cd to /etc and vi fstab, where you need to comment out the /dev/md0 array entry. Now reboot again and your box will come up.)
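Incidentally, about the /dev/sd[a,b,c,d] pattern: that's a shell glob, and the commas are actually part of the character class (harmless here, though sd[abcd] is cleaner). A safe demonstration using temp files instead of real disks:

```shell
# Demonstrate bracket-glob expansion without touching real disks.
cd "$(mktemp -d)"
touch sda sdb sdc sdd
echo sd[abcd]     # expands to: sda sdb sdc sdd
echo sd[a,b,c,d]  # same result here, but would also match a file named "sd,"
```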
To fix this, edit (or create) /etc/mdadm.conf. Check the output of mdadm -E /dev/sda (and the same for sdb, sdc, and sdd), looking at the value of UUID:, which should be the same for all disks. Copy it and paste it at the end of the ARRAY line of the conf file, after UUID=. Also edit the DEVICE line to read /dev/sda etc. instead of /dev/sda1 etc., and save.
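For illustration, the finished /etc/mdadm.conf might look something like this (the UUID below is a made-up placeholder; substitute the real one reported by mdadm -E):

```
DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6
```

With that in place, the assemble command below should find all four members.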
mdadm --assemble /dev/md0 /dev/sd[a,b,c,d]
(or similar to suit your system)
Now mount the device:
mount /dev/md0 /your_mount_point
And you should have everything back, which is good, so check it out with an ls.
Now re-edit /etc/fstab, uncomment the md0 entry, and reboot your box.
This time when it comes back everything should be working and it's time for coffee and cake :-)