Mandriva: This forum is for the discussion of Mandriva (Mandrake) Linux.
I want to read a RAID-1 (or, for that matter, RAID-5) Linux RAID partition from a disk that I have moved to another system, preferably Linux but not necessarily.
In fact a Windows tool would do the job just as well, if it could read a Linux RAID partition raw (not mounted as an mdX device), just as it is after the array has been split.
I just found out that the expert who built the system simply made a non-RAID boot partition. After the system hung, there was no way to start it or read anything. I have told the company how stupid this was; we don't want to call him back to fix it.
The Linux system I will use to read the lost Linux RAID partitions is Mandriva 2005 LE, with the diskdrake tool.
I tried the fstab autodetect and rewrote the fstab, but it seems to have completely wiped the hard drive. Great. Now the second drive is my only option; it has to be the good one. Losing the data wouldn't be my fault, though, since the guy never made his RAID-1 actually work.
By the way, this is 400 GB of vital company files, some 15 to 20 years of data that was never backed up. Losing it would be lethal.
I installed two of the four disks from the old, crashed system; both are 160 GB. One, the bootable one, holds the probably wiped partitions; the other is not bootable.
Mandrake 10.1 is installed on this host as a RAID system. It has eight RAID partitions, so md0 through md7 are all taken.
So those have nothing to do with the 160 GB disks I want to read.
As you can see, the host system's eight RAID partitions are on hda and hdc.
The partitions I want to recover are on hde (master) and hdf (slave).
I already mounted hde2 and hde3; hde3 is a big one (some 140 GB). Diskdrake now recognizes them as Linux Native instead of Linux RAID, and I mounted them on /var/hde2 and /var/hde3. It seems that what diskdrake calls Linux Native is in fact ext2fs or ext3fs, so that should be fine. However, does changing the partition type (the format-type byte) imply erasing the data that was on it? I didn't touch hdf, so nothing is mounted there, and diskdrake says it holds Linux RAID partitions (not mounted in any array, of course, since the disk is no longer in its original system, which cannot boot from its own disks).
I remind you that the purpose is only to get the data back. Tests on hde are acceptable, since I think I have already screwed it up, but not on hdf; I want to be sure about that.
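On the partition-type question: the type shown by diskdrake is a single byte in the MBR partition table, not anything inside the partition itself, so changing it does not by itself erase data. A small sketch on a throwaway image file (the file name and offsets are just for illustration; 0xFD is the Linux RAID autodetect type):

```shell
# Build a throwaway 32 KiB disk image standing in for a real drive.
dd if=/dev/urandom of=disk.img bs=512 count=64 2>/dev/null
# Checksum everything after the MBR sector (the "data" area).
before=$(dd if=disk.img bs=512 skip=1 2>/dev/null | md5sum)
# Flip the first partition entry's type byte (offset 450) to 0xFD;
# \375 is 0xFD in octal.
printf '\375' | dd of=disk.img bs=1 seek=450 count=1 conv=notrunc 2>/dev/null
after=$(dd if=disk.img bs=512 skip=1 2>/dev/null | md5sum)
[ "$before" = "$after" ] && echo "data area untouched"
```

The checksums match because only the one type byte in sector 0 changed; the filesystem inside the partition is unaffected.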
Last edited by johnlefevre; 10-06-2005 at 06:58 AM.
Sorry, I have never built a RAID partition from the command line on Linux.
In fact what I need to know is not how to mount it, but how to recreate the RAID-1 without risking erasing the contents, and using only one disk, since the other may be wiped and including it might trigger a sync onto it from the good one. Once the array, call it md8, is (re)built from the existing Linux RAID partition, then yes, I can mount it.
Last edited by johnlefevre; 10-06-2005 at 09:42 AM.
Before you go any further you need to make a copy of the drive, before it's too late.
To make a copy you need a drive that is equal to or bigger than the drive you want to copy.
Have a working Linux system ready with only one drive attached, as primary master (hda). Once you have a booting system, attach the drive you want to back up as primary slave (hdb) and the blank drive as secondary master (hdc).
then run
dd if=/dev/hdb of=/dev/hdc
This will make an exact copy of the first drive onto the second. When you've done that you can play with the new backup while the old drive stays safe.
Another thing to remember: when building a mirror, Linux copies from the first drive to the second. This is very important, because if you get the drives the wrong way around, with the blank drive first, it will copy the blank drive over your old one, erasing all your data!
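The clone-and-check routine above can be rehearsed with image files standing in for the raw devices (src.img and backup.img are stand-ins for /dev/hdb and /dev/hdc):

```shell
# Create a small stand-in for the drive to be rescued.
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null
# The actual copy step: a raw, byte-for-byte clone.
dd if=src.img of=backup.img bs=64k 2>/dev/null
# Verify the clone is byte-identical before experimenting on it.
cmp src.img backup.img && echo "clone verified"
```

On real devices the same dd invocation applies unchanged; running cmp before touching the copy is cheap insurance.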
If the mirroring works like that, then when drive 1 crashes the mirror would blindly pick up the garbage from the failing drive. Isn't that supposed not to happen? Knowing that, is software RAID-1 on Linux trustworthy? What is better than basic Linux RAID? I have heard of LVM... virtual partitions that can reallocate bad blocks under RAID.
What are the command lines to safely recreate the RAID-1 from one disk partition only (I want it that way, since I will copy all the data off it)?
Thanks
Last edited by johnlefevre; 10-06-2005 at 10:45 AM.
Quote:
If the mirroring works like that, then when drive 1 crashes the mirror would blindly pick up the garbage from the failing drive. Isn't that supposed not to happen? Knowing that, is software RAID-1 on Linux trustworthy? What is better than basic Linux RAID? I have heard of LVM... virtual partitions that can reallocate bad blocks under RAID.
RAID (whether it's software or hardware) is not a backup system; it's for redundancy. The idea is that if one of the disks suffers a hardware fault, the computer can keep going using the non-faulty disks, i.e. the data is spread across multiple disks. If you do an rm -rf, format a partition accidentally, or a lightning surge strikes your computer, RAID cannot save you. That's what backups are for.
LVM is a different thing: it creates 'virtual partitions' on top of which you create an actual filesystem (ext3, ext2, reiser, etc.). It's useful in that you can dynamically shrink and grow partitions, span them across multiple disks, and other nifty features. As for reallocating bad blocks, I would have thought that's the job of fsck.
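For the curious, the LVM layering described here looks roughly like the sketch below. All device and volume names are made up for illustration, and these commands need root and real block devices, so this is only the shape of the workflow:

```shell
pvcreate /dev/hda5 /dev/hdc1          # mark partitions as LVM physical volumes
vgcreate datavg /dev/hda5 /dev/hdc1   # pool them into one volume group
lvcreate -L 200G -n datalv datavg     # carve out a resizable logical volume
mkfs.ext3 /dev/datavg/datalv          # the actual filesystem goes on top
```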
Quote:
What are the command lines to safely recreate the RAID-1 from one disk partition only (I want it that way, since I will copy all the data off it)?
Thanks
Can't remember the exact syntax but it'd be something like
Code:
mdadm /dev/md0 --add /dev/hda3
this would add the partition /dev/hda3 to the RAID device /dev/md0. It will probably DESTROY DATA on /dev/hda3.
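For recovery specifically, a one-disk (degraded) RAID-1 can usually be started without touching the data. A sketch, assuming /dev/hdf1 is the surviving member and md8 is a free md number (these need root and the real disk, so they are shown only as the commands):

```shell
# --run starts the array even though only one of the two mirrors is present;
# with a single member there is nothing to resync onto.
mdadm --assemble --run /dev/md8 /dev/hdf1
mount -o ro /dev/md8 /mnt/recover
# Alternatively: the old v0.90 md superblock lives in the last 64 KiB of the
# partition, so a RAID-1 member starts with a plain ext2/ext3 filesystem and
# can often be mounted directly, read-only:
mount -t ext3 -o ro /dev/hdf1 /mnt/recover
```

Mounting read-only is the safe first move either way: it cannot trigger a resync or modify either disk.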
So, as far as I can find in any docs, there is no way of getting the data out of a Linux RAID partition, even though it is there, unless the whole system had been saved beforehand, as RAID.
So having, for instance, hda with the whole system on hda1 as ext2fs and data on hda5 as Linux RAID, and then a disk with hdc1 mirroring hda5 as Linux RAID on md0, as RAID-1, is COMPLETELY USELESS, since I will NEVER get my hdc1 data BACK.
This is NOT THE CASE with software RAID-1 on Windows Server, or on hacked Windows 2000 or XP Pro (you probably know the hack that enables software RAID 1 and 5 "back" in the Pro editions, since everything is already there in Pro).
So, damn! What am I using a RAID system on Linux for?????
In fact I am fully convinced there is a way, a stupid, blatant way, of getting it back in working order!!!
Even in Mandrake/Mandriva, in its own RAID GUI.
Last edited by johnlefevre; 10-08-2005 at 07:23 AM.
You really need someone who knows the Linux command line to show you. What area do you live in? You should join a Linux mailing list for your area and ask someone to show you how to mount the drives. The data should still be there, as long as it was there to start with.
Quote:
So, as far as I can find in any docs, there is no way of getting the data out of a Linux RAID partition, even though it is there, unless the whole system had been saved beforehand, as RAID.
So having, for instance, hda with the whole system on hda1 as ext2fs and data on hda5 as Linux RAID, and then a disk with hdc1 mirroring hda5 as Linux RAID on md0, as RAID-1, is COMPLETELY USELESS, since I will NEVER get my hdc1 data BACK.
RAID1 is where you have 2 partitions that are mirrors of each other. If one disk fails the machine will keep working and you'll have all your data still. You can then replace that broken disk and the RAID system will rebuild it by copying all the data to it. If 2 disks fail at once then you're stuffed. All of this works perfectly well in Linux - it happened to one of our servers at work a few weeks ago.
What RAID cannot do, whether it's Windows RAID, Linux RAID, or hardware RAID, is protect against data corruption, accidental deletions or anything else like that; that's what backups are for. RAID is only for redundancy (i.e. keeping the machine running) in case of a hardware failure of one drive.
Originally posted by tkedwards RAID1 is where you have 2 partitions that are mirrors of each other. If one disk fails the machine will keep working and you'll have all your data still. You can then replace that broken disk and the RAID system will rebuild it by copying all the data to it. If 2 disks fail at once then you're stuffed. All of this works perfectly well in Linux - it happened to one of our servers at work a few weeks ago.
What RAID cannot do, whether it's Windows RAID, Linux RAID, or hardware RAID, is protect against data corruption, accidental deletions or anything else like that; that's what backups are for. RAID is only for redundancy (i.e. keeping the machine running) in case of a hardware failure of one drive.
I completely understand that. I have created RAID-1 on many systems, Linux and Windows, all software. It means partition mirroring, which says it all.
Now, analysing the system, I have come to the conclusion that all of its partitions were mirrored, including /boot.
This leads me to the conclusion that it booted LILO from the MBR of disk 1, which failed, and then could not boot from disk 2 because there is no LILO in its MBR (I suppose not, anyway).
Now my hope is that disk 1 wasn't completely erased by that damned diskdrake GUI, which is practically only able to create partitions.
Now, what if I copy disk 1's MBR to disk 2 with a command line like this one
(disk 1 is hde and disk 2 is hdf):
dd if=/dev/hde of=/dev/hdf bs=446 count=1
What is the risk??? I would say none: only the first 446 bytes, the boot code, are copied, so disk 2's partition table (which starts at byte 446) is left untouched.
I can somehow access disk 1, but it seems damaged, with probably multiple block failures at the very beginning (hopefully not in the MBR).
If disk 1 is erased, I have some idea of how the partitions were mounted:
the second is swap;
the first is /boot (??) or /, since it is less than 2 GB; the third is / or /usr or /home, since it is 8 GB; the fourth is /var, since it is 140 GB.
Then what would be needed is to build a LILO boot from scratch that reproduces all of this.
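The 446-byte boot-code copy can be rehearsed on image files first (hde.img and hdf.img stand in for /dev/hde and /dev/hdf; conv=notrunc is needed when the target is a regular file, and is harmless on a raw device):

```shell
# Stand-ins for the two disks.
dd if=/dev/urandom of=hde.img bs=512 count=64 2>/dev/null
dd if=/dev/urandom of=hdf.img bs=512 count=64 2>/dev/null
# Keep a backup of disk 2's original MBR sector before writing anything.
dd if=hdf.img of=hdf-mbr.bak bs=512 count=1 2>/dev/null
# Copy only the boot code: bytes 0-445. The partition table (from byte 446)
# and the boot signature stay untouched.
dd if=hde.img of=hdf.img bs=446 count=1 conv=notrunc 2>/dev/null
cmp -n 446 hde.img hdf.img && echo "boot code copied"
```

Keeping hdf-mbr.bak means the write can be undone with a single dd in the other direction if anything looks wrong.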