This is an old topic, so I hope this info will reach someone who needs it. Please copy this info somewhere else if it's helpful to you!
Last week I bought 4 cheap 320GB SATA II Seagates to build a RAID 5. I've done much reading of random HOWTOs, most seemingly from last year or older, but finally it looks like I have a software (NOT fakeraid) RAID 5 setup that can be seen in both Ubuntu and Windows XP Pro. This info should work for any other distro that contains the LDM (MS Windows Logical Disk Management) driver to read MS Dynamic Disks. AFAIK this driver has been somewhere in the kernel source since kernel version >= 2.5.??, so even if it is not enabled by default, you can add it if you build your own kernel. Note that you'll have to do the Tom's Hardware XP Pro hex edits to be able to set up RAID 5 in XP: (http://www.tomshardware.com/2004/11/...raid_5_happen/)

So, here's what I've done and where I am so far:

1. Plugged in the HDDs. My mobo is an Asus A8R32-MVP, so the controller is a ULi model... can't remember which, but it's not important. This controller isn't supported by dmraid, and I have disabled it in the BIOS anyway (I am doing pure software RAID).

2. Booted from the Ubuntu 6.10 Desktop CD and installed Ubuntu to the 4th HDD (which will be a spare for the RAID when I'm done). In retrospect, another distro may be more useful, but this is what I had been playing with lately.

3. Made some identical partitions on the first 3 HDDs, similar to how it is explained in the following link, but with different partitions/sizes. The important thing is that I let XP have the first partition on the first disk (/dev/sda1); Windows isn't happy unless it is first. Since the other primary partitions are unused, I'll use /dev/sdb1 and /dev/sdc1 to play with distros I haven't tried yet. You could use them for anything else, though. (http://www.overclock3d.net/articles....dows_and_linux)

4. Booted back into Windows and kept going with the guide linked above. I deleted the partitions meant to be used *within* Windows (partitions 4, 5, and 6 on the first 3 HDDs) and created the first RAID 5 volume (partition 4 on each). I didn't touch the partitions I'll be using in Linux (2 and 3). I will be moving the "Program Files" and "Documents And Settings" Windows folders to this first volume, so it is 120GB, striped across /dev/sda4, sdb4, and sdc4. It took so long for XP to 'create' this volume that I moved on to the next step before creating the rest, just to see if it would work. /dev/sdX2 and /dev/sdX3 are going to be my RAIDed Linux volumes, and sdX1 will not be part of the RAID, as mentioned above.

5. Booted into Ubuntu, and even though the kernel could see the LDM info (try "dmesg | less" and then search ("/") for "LDM"), both gparted and fdisk reported only one big unrecognized volume for /dev/sda2 and for all of /dev/sdb and /dev/sdc. The fact that the partitions were seen by the kernel was a hopeful sign, though, so I went on to try the following commands:

A. I didn't have any /dev/mdX block devices, so I created some:
sudo mknod /dev/md0 b 9 0
sudo mknod /dev/md1 b 9 1
sudo mknod /dev/md2 b 9 2
That's enough for me to create the 2 Linux RAID 5 stripes and the first Windows stripe for testing... though I still need to make md3 and md4 when I'm ready.

B. Create the Linux arrays. YMMV on the chunk sizes. If this works, then try making the one for the NTFS stripe:
sudo mdadm --create /dev/md0 --chunk=16 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
sudo mdadm --create /dev/md1 --chunk=32 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
C. The Linux stripes need to be formatted:
mke2fs -j -L number_one /dev/md0
mke2fs -j -L the_larch /dev/md1

D. Mount each of them in turn to double-check the size. They should be (numberOfDrives - 1) * sizeOfPartitionOnEach.
mount /dev/md0 /media/extra
mount /dev/md1 /media/extra2
df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
...
/dev/md0               1011800     17672    942732   2% /media/extra
...
/dev/md1              20160892    176288  18960476   1% /media/extra2

So far so good! If you do a "cat /proc/mdstat" at this point, you may see that the RAID stripes are being 'recovered'. This is pretty meaningless since they have just now been created... so, now we have some confidence to try accessing the NTFS stripe:

6. The Windows NTFS stripe doesn't need to be formatted, as long as you made it in Windows. Here is what it looked like for me, when I ran through all of the above steps:

root@iddqd:/usr/src/linux# mknod /dev/md2 b 9 2
root@iddqd:/usr/src/linux# mdadm --create /dev/md2 --chunk=64 --level=5 --raid-devices=3 /dev/sda4 /dev/sdb4 /dev/sdc4
mdadm: array /dev/md2 started.
root@iddqd:/usr/src/linux# mount -t ntfs /dev/md2 /media/extra
root@iddqd:/usr/src/linux# ls -al /media/extra
total 36
dr-x------ 1 root root 4096 2007-01-01 04:29 .
drwxr-xr-x 8 root root 4096 2006-12-31 19:24 ..
dr-x------ 1 root root    0 2007-01-01 04:29 System Volume Information
root@iddqd:/usr/src/linux# umount /media/extra
root@iddqd:/usr/src/linux# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid5] [raid4]
md2 : active raid5 sdc4[3] sdb4[1] sda4[0]
      163846016 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  6.6% (5458432/81923008) finish=15.7min speed=80909K/sec
md1 : active raid5 sdc3[2] sdb3[1] sda3[0]
      20482560 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sdc2[2] sdb2[1] sda2[0]
      1027968 blocks level 5, 16k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

As you can see, the NTFS volume is now also being recovered. Since I had no data on the drive, I can't say whether anything would have been destroyed by this process -- however, I can say that when I let it finish and rebooted back into Windows, there was no problem accessing it. In other words, although mdadm felt the need to rebuild the NTFS stripe, XP Pro didn't complain when it was done. Since this time, I have downloaded the 2.6.19 kernel to the NTFS Windows RAID and unpacked it here in Ubuntu. Working great as far as I can tell.

-- Teague
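PS: to get these arrays to assemble and mount by themselves at boot, you probably also want entries in mdadm.conf and fstab. Something like the following should do it -- I'm assuming Ubuntu keeps the config at /etc/mdadm/mdadm.conf and that the mount points already exist, so check your own distro before pasting:
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0  /media/extra   ext3  defaults  0  2' >> /etc/fstab
echo '/dev/md1  /media/extra2  ext3  defaults  0  2' >> /etc/fstab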
So I have since initialized the 2nd RAID 5 partition in Windows, which is around 75 GB in size all told. Obviously the shortcoming above is that I didn't know whether existing data would be corrupted.
After Windows finished formatting and recovering the stripe, I copied ~50 GB of data onto it, then booted back into Ubuntu. I went through the process of adding the stripe to mdadm, starting from mknod, and let it recover the array like it did with the first RAID 5 stripe. When it finished, I dug around the volume and could find no change to the data -- no corruption or difference from what was there before. I rebooted back into Windows, and it didn't even know that Ubuntu had done its own "recovery". I suppose what mdadm calls "recovery" is mostly just verification of the parity (which would account for the speed with which it finished), and that makes sense if you don't read too much into the actual wording it uses. I gotta say, though, that I still wish I had the $$ for a pure hardware RAID.. sigh..
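If anyone wants a stronger check than digging around by hand, one idea (not something I actually did) is to checksum the whole tree right after the first mount under Linux and compare again once the resync has finished. It only proves the resync itself didn't change anything Linux could see, but that's the step we were worried about. A rough sketch, where /dev/md3 and the paths are just how it would look for this second Windows stripe:
Code:
mount -t ntfs /dev/md3 /media/extra
find /media/extra -type f -exec md5sum {} \; > /root/before.md5
# wait until cat /proc/mdstat shows the resync is finished, then:
md5sum -c /root/before.md5 | grep -v ': OK$'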
Hey "michaelsanford",
Thanks a bunch for the info on setting up a software RAID 0 as the boot device! You provided the missing bits of info I needed, and after maybe 20 minutes of struggling with the Debian Etch installer's partitioning section, it looks like I'm successful! I still have to finish the installation and hope it boots, but it looks like it's installing to the RAID. Errr, looks like you can install to a software RAID 0 but you can't actually use it as a boot device. Oh well, nothing ventured...
I have the same problem BUT mkfs will destroy my data in /dev/md0
I want to keep them safe. Help me!
"I have the same problem BUT mkfs will destory my data in /dev/md0 "
Did you do an install, or do you just have data on a RAID 0? If you did an install and it won't boot, do you have any space on one of the drives that's not part of the array? If you do, then put your /boot directory on that and make the appropriate changes to grub. It should boot after that. I'm afraid that I don't really know how to move an OS to a data drive and make it bootable. I suppose it could be done, but it would be chancy.
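To be a bit more concrete, the rough idea is: make a small plain partition, copy /boot onto it, and point grub at it. Something like the sketch below, though I'm only guessing at the device (/dev/sda3 here) and you'd want to double-check menu.lst afterwards:
Code:
mkfs.ext3 /dev/sda3
mkdir /mnt/newboot
mount /dev/sda3 /mnt/newboot
cp -a /boot/* /mnt/newboot/
umount /mnt/newboot
# mount the new partition over /boot from now on
echo '/dev/sda3  /boot  ext3  defaults  0  2' >> /etc/fstab
mount /dev/sda3 /boot
# reinstall grub so its stage files live on the non-RAID partition, and fix
# the root (hdX,Y) and kernel lines in menu.lst to match the new layout
grub-install /dev/sda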
In order to boot from a software RAID, grub's files have to be on another partition that is not part of the software RAID array. Also, the initrd has to load the required modules, if there are any, in order for the software RAID to work or be detected by Linux. Grub does not have dmraid or software RAID capabilities yet; soon it should, and then this kind of setup will be less complex. RAID 0 really should not be used for the OS, because it multiplies the chance of losing everything by the number of disks. RAID 0 will not speed up loading programs. RAID 1 will speed up programs because the array has copies that can cut access times in half or more.
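If the initrd is missing the md modules, adding them is usually just a one-liner plus a rebuild. This is the Debian/Ubuntu initramfs-tools way (SUSE and Fedora use mkinitrd instead), and the module name depends on the array type (raid0, raid1, raid5...):
Code:
echo raid0 >> /etc/initramfs-tools/modules
update-initramfs -u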
fanqi1234, if you put data on one of the partitions that is part of the array, there is a good possibility that mdadm has trashed your data. If you did not do that, but instead wrote files to one of the /dev/md device nodes, you can get the data back if you use dd to make an image of the drive (rough sketch at the end of this post). You will have to use a hard drive that is bigger than the array. /dev/mdX has to be formatted before you use it for storing data. I recommend that after you format it, you place a test file on it and reboot the computer. If the file is still there after you reboot, there is a good possibility that any data you place there will still be there every time you boot up the computer.

moaimullet, you could have just bought a hardware controller to make it very, very easy for yourself. Probably there are some files on the NTFS partition that are corrupt; lucky for you they were not system files and they did not have obscured permissions. RAID 5 needs a lot of processor resources for I/O transactions. You may also have a higher chance of data corruption or data loss, so I suggest backing up your data. I would not do it your way, because reliability and stability have to go both ways for good operation, and while I know Linux is reliable and stable, Windows is never reliable and stable. I would rather be in debt for several months after buying a hardware RAID controller, like one from 3ware, than do it your way.

Note: with SCSI or SATA hard drives in a software RAID, a drive can change from one device node to the next between boots. You may have to set the ID or use software labels to make it predictable at boot up. BTW, I have not yet set up RAID myself, but I have studied the documentation from every angle.
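A rough sketch of the dd idea mentioned above, assuming the array is /dev/md0 and /mnt/backup sits on a disk bigger than the array (both names are just examples):
Code:
# raw image of the whole array, so any recovery attempts run against the copy
dd if=/dev/md0 of=/mnt/backup/md0.img bs=1M conv=noerror,sync
# the image can later be loop-mounted read-only if a filesystem turns up on it
mount -o loop,ro /mnt/backup/md0.img /mnt/recovered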
In FC6 I created /dev/md0 (RAID 0, hda7 + hdb3) to save some files.
Yesterday I installed openSUSE 10.2 to replace FC6 (/dev/hda6). Now all my backup files are in /dev/md0, and I want /dev/md0 to run just like it did in FC6, with the old data still in it. PS: I can't speak English well. I hope you can understand me.
Assuming this is correct, your data is still there, but your installation destroyed your mdadm.conf. Don't panic! You will first need to verify that you have mdadm installed on SUSE. If you don't, install it. Then, do you remember how you created your array? DON'T DO THAT!!! However, you need to do something similar. What you need to do is run the mdadm command using the --assemble option to reassemble the array on your new operating system. Your command will look something like the following:
Code:
mdadm --assemble /dev/md0 /dev/hda7 /dev/hdb3
You should also give mdadm.conf a DEVICE line listing the partitions in the array:
Code:
echo 'DEVICE /dev/hda7 /dev/hdb3' > /etc/mdadm.conf
One additional note. It is possible that mdadm might create a proper mdadm.conf when you install it. Check that first before you run the above commands.
Quakeboy02, you have a good understanding.
Now the array is running OK (I think), but "mount" says "wrong fs type".

SUSE:
Code:
fans:~ # mdadm -D /dev/md1
and my personal log for FC6:
Code:
# mdadm -D /dev/md0
hda8 is now called hda7 (because of some partition change when I was installing SUSE). Is this a problem?
Code:
fans:~ # cat /proc/mdstat
Code:
fans:~ # cat /etc/mdadm.conf
Code:
fans:~ # fdisk -l /dev/hda
Code:
fans:~ # fdisk -l /dev/hdb
Code:
fans:~ # mount /dev/md0 /mnt/md0
Code:
fans:~ # mount /dev/md0 /mnt/md0 -t ext3
Code:
fans:~ # mount /dev/md0 /mnt/md0 -t ext3
Also, I believe that you have to run this line so that you won't have to manually assemble the array when you next boot. I could be wrong, of course.
Code:
mdadm --detail --scan >> /etc/mdadm.conf
While I was playing with software arrays, mdadm started insisting on building some arrays that I hadn't defined as a result of using the generic DEVICE statement like you have. I would change it, if I were you, to have only the devices actually used in the array as follows:
Code:
DEVICE /dev/hda7 /dev/hdb3
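For comparison, the generic statement I'm talking about is usually one of these catch-all forms, which tell mdadm to go scanning everything it can find:
Code:
DEVICE partitions
# or the shell-glob style:
DEVICE /dev/hd* /dev/sd*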
No, it's not a mistake. I just did --assemble on /dev/md1 to try again.
I think the array is running, but "mount" says "wrong fs type". Should I use "fsck.ext3 -y" or "mkfs.ext3 -S"?
# fsck.ext3 -n /dev/md0 > ./fsck.txt
fsck.txt:
Code:
Couldn't find ext2 superblock, trying backup blocks...
"no ,it's not a mistake. I just -assemble /dev/md1 to try again."
Did you actually try "mount /dev/md1 /mnt" (without the fs type) after you reassembled it? I can't see what you're doing, so you have to be very specific when you tell me what you do. I can't tell whether you have actually tried mounting it without the fs type now that it's assembled this way.
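If plain "mount /dev/md1 /mnt" still complains about the fs type, there are a couple of non-destructive things worth trying before anything like "mkfs.ext3 -S". This assumes the array really is /dev/md1 and the filesystem was ext3:
Code:
# ask what is actually on the device
file -s /dev/md1
# dry-run mke2fs just to see where the backup superblocks would live (-n writes nothing)
mke2fs -n -j /dev/md1
# then point fsck, still read-only, at a backup superblock (32768 is a common location)
fsck.ext3 -n -b 32768 /dev/md1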