Why can't I mount this md0 raid? (mdadm and software raid)
Hi,
I am now 15 hrs into trying to get RAID 1 on my new server box. The past two nights were spent trying to get the mdadm procedure to work correctly (who needs sleep). I can't even mount, once, what I have now. I've read just about every manual, how-to, etc. that I can find. I'm ready to give up on it. I've assembled the array and I just want to mount it so that I can then modify the fstab file. Quick history with some Linux terminal feedback. Sorry it's long, but I'm not sure what you'd need to see to figure out what's going on.
__________________________

1. mdadm --detail /dev/md0

/dev/md0:
        Version : 00.90.01
  Creation Time : Wed Jul 20 20:53:48 2005
     Raid Level : raid1
     Array Size : 156288256 (149.05 GiB 160.04 GB)
    Device Size : 156288256 (149.05 GiB 160.04 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Wed Jul 20 22:41:48 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           UUID : ef71a76a:0fbb9297:c5c7d992:b68f5dac
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       1      22       65        1      active sync   /dev/hdd1

2. cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 hdc1[0] hdd1[1]
      156288256 blocks [2/2] [UU]
unused devices: <none>

3. In case you were wondering:

[root@localhost ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
/dev/proc on /proc type proc (rw)
/dev/sys on /sys type sysfs (rw)
/dev/devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
/dev/shm on /dev/shm type tmpfs (rw)
/dev/hda2 on /var type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
automount(pid2210) on /misc type autofs (rw,fd=4,pgrp=2210,minproto=2,maxproto=4)
automount(pid2250) on /net type autofs (rw,fd=4,pgrp=2250,minproto=2,maxproto=4)
[root@localhost ~]#

4. And this might help, too.

[root@localhost ~]# mdadm --examine /dev/md0
mdadm: No super block found on /dev/md0 (Expected magic a92b4efc, got 00000000)
[root@localhost ~]#
____________________________________________________________

SO, I THINK I HAVE THE md0 raid ready to mount. Here is what happens when I type a few different commands. What am I doing wrong?

[root@localhost ~]# mount /dev/md0 /mnt/alldata
mount: you must specify the filesystem type

SO I TRY EXT3, because I partitioned the drives with that.

[root@localhost ~]# mount -t ext3 /dev/md0 /mnt/alldata
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

[root@localhost ~]# mount /dev/md0
mount: can't find /dev/md0 in /etc/fstab or /etc/mtab

Any ideas about how I can get this to mount? All the how-tos on mdadm are a bit hazy (the old raidtools tutorials seem to be easier for a newbie like me to understand). All I want is RAID. Help, pretty please :) |
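A quick way to confirm what is going on before guessing at filesystem types is to ask what, if anything, is actually on /dev/md0. A sketch, using the same device name as above; these commands are not from the original post:
Code:
# Diagnostic sketch: check whether the assembled array contains any filesystem at all
file -s /dev/md0     # prints just "data" if no filesystem signature is present
blkid /dev/md0       # prints no TYPE= when the device has never been formatted
dmesg | tail         # shows the kernel's reason for the failed ext3 mount
If those show no filesystem, the array itself is fine and the missing step is mkfs, which is exactly where this thread ends up.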
Is this a completely new system with no files you need to preserve? If not, ignore this post.
If so, and I assume from the above that you have two hard disks, I discovered through a LOT of trial and error, mostly error, that you can initiate a RAID 0 array like this:
1. Boot to the install CD and run cfdisk.
2. Partition both disks identically and make your non-swap partitions type "Raid Autodetect" (see the sketch after this post).
3. Then, when you boot to the installer, you'll be prompted to install on /dev/md0 instead of either of the actual devices.
4. Then install lilo on the MBR of whichever drive is first on the bus (has the lower designation letter). You can try using the root partition rather than the MBR, but I found on my system that was all that worked.
This works with Slackware 10.1 and RAID level 0. I can only assume it will work with another OS and RAID level, but I can't guarantee anything. I also can't remember if I had to do any specific setup of the /etc/raidtab file or not, but since /dev/md0 came about, IIRC, automatically, you will probably have to edit it to get a raid1 array. |
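For step 2, if you'd rather use fdisk than cfdisk, a minimal sketch of marking a partition for RAID autodetection (the device and partition numbers are examples only, not from the post):
Code:
# Example only: set a partition's type to "Linux raid autodetect" (type fd)
fdisk /dev/hdc   # then, at the fdisk prompt:
#   t   <- change a partition's type
#   1   <- partition number (example)
#   fd  <- Linux raid autodetect
#   w   <- write the table and exit
# repeat on the second disk, e.g. fdisk /dev/hdd, so both tables match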
Hi,
I forgot to mention in my post that this RAID does not need to be bootable and won't house the operating system. It is a two-disk array that I am going to use for data and image storage. I am using the mdadm procedure, which, apparently, doesn't use the raidtab files. Am I correct on this? |
Quote:
something like mkfs -t ext3 /dev/md0 |
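Putting that one-liner into the sequence the thread eventually settles on. A sketch: the mount point comes from the original post, but the fstab line is an assumed example, not something anyone in the thread wrote:
Code:
# Sketch: put a filesystem on the assembled array, mount it, and (optionally) make it permanent
mkfs -t ext3 /dev/md0                 # destroys anything already on md0 -- only for a fresh array
mount -t ext3 /dev/md0 /mnt/alldata   # mount point taken from the original post
echo '/dev/md0  /mnt/alldata  ext3  defaults  0 2' >> /etc/fstab   # assumed fstab entry, adjust to taste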
Quote:
You are a genius. Yes, that was the fix. Right after I sent my reply to michael, I found a help file that listed that as a possible problem. Added the filesystem and it mounted, no problem. I swear, making a software RAID for the first time is not the most intuitive thing. None of the mdadm how-tos mentioned that as a critical part. I will know next time. What was always bothering (confusing) me is that the Linux Hardware Profiler changed to 'no filesystem' after I made the RAID. Both disks were ext3 before that. I figured that this was just how the RAID was reflected by the GUI, not that the create process had removed the filesystems. It seems obvious now, but it doesn't when you don't understand how the system works. Anyway, thanks to you and michael for the help. Now my Mac OS X client isn't playing well with my SAMBA services, which were working fine before the RAID snafu. Off to fix that. Fingers crossed for a quick and painless resolution. |
Aah, Mac OS X, I can help you with that ;)
I should have thought about mkfs... |
Quote:
Speaking of that. Know how to fix this problem? http://discussions.info.apple.com/we...LK.6@.68b510a6 |
Either:
1. A bug in Tiger (which is totally possible).
2. A bug in Samba (less likely).

Rick Van Vilet's suggestion that shares must be browseable is good, but IIRC that refers to the fact that you can browse the share itself, not browse to it, analogous to the x bit (in rwx) in Linux. I could be wrong though; I don't commonly use Samba to share files, I use NFS. |
Quote:
The Linux server build is a replacement for my Win2k box, so I no longer have any Windows machines. So what are the advantages of going with NFS over Samba? |
Primarily, SPEED. Samba is drudgingly slow compared to NFS. Also, though Windows file sharing is well-integrated into OS X, I wouldn't really call it well-implemented. NFS is more 'native' to Linux, so naturally you'd expect it to work a bit better.
NFS is, while complicated to learn to implement on OS X, not really hard to master. The main difference between 'regular' Linux's and OS X's implementation of NFS is the automounter. NFSd on most plain-vanilla Linux installations doesn't use an automounter, it just mounts the share like a regular mount. OS X on the other hand uses the automounter, which queries the NetInfoDB whenever you try to access the local mountpoint and then actually mounts it, unmounting it later after some timeout. It's a little silly IMHO but it works fine, you'll probably not notice the mounting delay. You can check out NFSManager at VersionTracker to figure out how the NetInfo entries should look, then just do it yourself. There are also lots of guides on the net, google 'em. |
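For the Linux server side of that, a minimal sketch of exporting a directory over NFS (the path and subnet are placeholders, not taken from the thread; the OS X/NetInfo side is left to NFSManager as suggested above):
Code:
# Example only: export the array's mount point over NFS (path and network are placeholders)
echo '/mnt/alldata  192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra             # reload the export table
showmount -e localhost   # confirm the export is visible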
It appears to be a Samba bug. I tried to build some test code (that others have said fixes the problem), but hit several errors. I figured I would just wait till the new version comes out; it sounds like it is almost there. For now I am just loading the Samba shares with the "Connect to Server" function, and that is working. I'd rather deal with that, and forgo the build-error troubleshooting, until the new version comes out.
Then I may look into NFS. |
hello everyone
I'm new to Linux and to this site :) I decided to post here because I found the above info very useful. I use SUSE 9.3, which I run from another HDD, and I want to integrate my NTFS RAID 0 into SUSE. I don't want to lose any info, because I run Windows from the RAID. I have 2 NTFS partitions on the RAID and this is what I get. Please help if you can. TIA

linux:~ # mount /dev/md0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

linux:~ # dmesg | tail
thinkpad: module not supported by Novell, setting U taint flag.
thinkpad: I have registered to handle major: 10 minor: 170.
NET: Registered protocol family 4
ax25: module not supported by Novell, setting U taint flag.
NET: Registered protocol family 3
NET: Registered protocol family 5
md: mdadm(pid 9674) used obsolete MD ioctl, upgrade your software to use new ictls.
NTFS-fs error (device md0): read_ntfs_boot_sector(): Primary boot sector is invalid.
NTFS-fs error (device md0): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.
NTFS-fs error (device md0): ntfs_fill_super(): Not an NTFS volume.

linux:~ # mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Sat Mar 25 17:27:27 2006
     Raid Level : raid0
     Array Size : 240121600 (229.00 GiB 245.88 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Sat Mar 25 17:27:27 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 4K
           UUID : db3d86ea:7f115b54:2b8b4ea4:6e37052a
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb

linux:~ # cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb[1] sda[0]
      240121600 blocks 4k chunks
unused devices: <none>

linux:~ # mount
/dev/hda2 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/fd0 on /media/floppy type subfs (rw,nosuid,nodev,sync,fs=floppyfss,procuid)

linux:~ # mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.

linux:~ # mount /dev/md0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so |
So it looks like you are having trouble mounting the raid, which seems to be made correctly.
Here is an example. I think you forgot to specify the filesystem type and where you wanted the drive to mount. Check man mount for details on the mount command.

mount -t ext3 /dev/md0 /mnt/drivemountshere

Or I may be way off base and not know what I'm talking about either. |
@cruiserparts: I didn't forget to specify the filesystem type, as I configured the /etc/fstab file
No progress so far. Some people say that integrating an NTFS software RAID without losing the data on the disks cannot be done. Another thing that bothers me about this problem is that my /proc/filesystems file looks like this, and ntfs doesn't seem to appear in it:

linux:~ # cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   sockfs
nodev   debugfs
nodev   pipefs
nodev   futexfs
nodev   tmpfs
nodev   eventpollfs
nodev   devpts
        ext2
nodev   ramfs
nodev   hugetlbfs
        minix
        iso9660
nodev   nfs
nodev   mqueue
nodev   rpc_pipefs
        reiserfs
nodev   usbfs
nodev   subfs

linux:~ # fdisk -l
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0xffff8ac9 of partition table 5 will be corrected by w(rite)

Disk /dev/sda: 122.9 GB, 122942324736 bytes
255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2621    21053151    7  HPFS/NTFS
/dev/sda2            2622       29893   219062340    f  W95 Ext'd (LBA)
/dev/sda3               1           1           0    0  Empty
/dev/sda5   ?      111428      212662   813164780   fb  Unknown

Disk /dev/sdb: 122.9 GB, 122942324736 bytes
255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/hda: 30.7 GB, 30750031872 bytes
255 heads, 63 sectors/track, 3738 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1         128     1028128+  82  Linux swap / Solaris
/dev/hda2   *         129        3738    28997325   83  Linux

Disk /dev/md0: 245.8 GB, 245884518400 bytes
255 heads, 63 sectors/track, 29893 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/md0p1   *          1        2621    21053151    7  HPFS/NTFS
/dev/md0p2           2622       29893   219062340    f  W95 Ext'd (LBA)
/dev/md0p3              1           1           0    0  Empty
/dev/md0p5           2622       29893   219062308+   7  HPFS/NTFS

linux:~ # uname -a
Linux linux 2.6.11.4-21.11-default #1 Thu Feb 2 20:54:26 UTC 2006 i686 athlon i386 GNU/Linux

Does it mean that my kernel was not compiled with the NTFS option enabled? |
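If ntfs is missing from /proc/filesystems, the first thing to check is whether the NTFS driver is simply not loaded. The sketch below assumes a stock SUSE kernel that ships ntfs as a module; note also that the fdisk output above shows the NTFS filesystems on partitions inside the array (/dev/md0p1, /dev/md0p5), not on md0 itself, so mounting md0 directly would fail even with the driver loaded:
Code:
# Sketch only -- not commands from the thread
modprobe ntfs                 # load the (read-only) NTFS driver if the kernel ships it as a module
grep ntfs /proc/filesystems   # should now list ntfs
# The filesystem sits on a partition inside the array; if a node for it exists
# (or after "kpartx -a /dev/md0" creates one under /dev/mapper), try that instead of md0:
mount -t ntfs -o ro /dev/mapper/md0p1 /mnt/windows   # device name and mount point are assumptions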
For other people who find this via Google like me... I have yet to see any Linux support MS "dynamic disk" (bla bla) software RAID, much like how MS doesn't recognize a Linux or any other non-MS partition.
|
This is an old topic, so I hope this info will get to someone who needs it.. please copy this info somewhere else if it's helpful to you!
Last week I bought 4 cheap 320GB SATA II Seagates to build a RAID 5. Have done much reading of random HOWTOs, most seemingly from last year or older, but finally it looks like I have a software (NOT fakeraid) RAID 5 setup that can be seen in both Ubuntu and Windows XP Pro. This info should work for any other distro that contains the LDM (MS Windows Logical Disk Management) driver to read MS Dynamic Disks. Since kernel version >= 2.5.?? afaik, this driver should be somewhere in the kernel source, so even if it is not enabled by default, you can add it if you build your own kernel.

Note that you'll have to do the Toms Hardware XP Pro hex edits to be able to set up RAID 5 in XP: (http://www.tomshardware.com/2004/11/...raid_5_happen/)

So, here's what I've done and where I am so far:

1. Plugged in the HDDs. My mobo is an Asus A8R32-MVP, so the controller is an ULi model.. can't remember which, but it's not important. This controller isn't supported by dmraid, and I have disabled it in the BIOS anyway (I am doing pure software RAID).

2. Booted from the Ubuntu 6.10 Desktop CD and installed Ubuntu to the 4th HDD (will be a spare for the RAID, when I'm done). In retrospect, another distro may be more useful, but this is what I had been playing with lately.

3. Made some identical partitions on the first 3 HDDs, similar to how it is explained in the following link, but with different partitions/sizes. The important thing is that I let XP have the first partition on the first disk (/dev/sda1). Windows isn't happy unless it is first. Since the other primary partition(s) are unused, I'll use /dev/sdb1 and /dev/sdc1 to play with distros I haven't tried yet. You could use them for anything else though. (http://www.overclock3d.net/articles....dows_and_linux)

4. Booted back into Windows, and kept going with the guide linked above. I deleted the partitions meant to be used *within* Windows (partitions 4, 5, and 6, on the first 3 HDDs), and created the first RAID 5 volume (partition 4 on each). I didn't touch the partitions I'll be using in Linux (2 and 3). For me, I will be moving the "Program Files" and "Documents And Settings" Windows folders to the first such volume, so it is 120GB. It will be striped across /dev/sda4, sdb4, and sdc4. It took so long for XP to 'create' this volume that I moved on to the next step before creating the rest, just to see if it would work. /sdx2 and /sdx3 are going to be my RAIDed Linux volumes, and sdx1 will not be part of the RAID as mentioned above.

5. Booted into Ubuntu, and even though the kernel could see the LDM info (try "dmesg | less" and then search ("/") for "LDM"), both gparted and fdisk reported only one big unrecognized volume for /dev/sda2, and all of /dev/sdb and /dev/sdc. The fact that the partitions were seen by the kernel was a hopeful sign, though, so I went on to try the following commands:

A. I didn't have any /dev/mdX block devices, so I created some:

sudo mknod /dev/md0 b 9 0
sudo mknod /dev/md1 b 9 1
sudo mknod /dev/md2 b 9 2

That's enough for me to create the 2 Linux RAID 5 stripes and the first Windows stripe for testing.. though I still need to make md3 and md4 when I'm ready.

B. Create the Linux arrays. YMMV on the chunk sizes. If this works, then try making the one for the NTFS stripe:

sudo mdadm --create /dev/md0 --chunk=16 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
sudo mdadm --create /dev/md1 --chunk=32 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

C. The Linux stripes need to be formatted:

mke2fs -j -L number_one /dev/md0
mke2fs -j -L the_larch /dev/md1

D. Mount each of them in turn to double-check the size. They should be (numberOfDrives - 1) * sizeOfPartitionOnEach.

mount /dev/md0 /media/extra
mount /dev/md1 /media/extra2
df -k

Filesystem           1K-blocks      Used Available Use% Mounted on
...
/dev/md0               1011800     17672    942732   2% /media/extra
...
/dev/md1              20160892    176288  18960476   1% /media/extra2

So far so good! If you do a "cat /proc/mdstat" at this point, you may see that the RAID stripes are being 'recovered'. This is pretty meaningless since they have just now been created.. so, now we have some confidence to try accessing the NTFS stripe:

6. The Windows NTFS stripe doesn't need to be formatted, as long as you made it in Windows. Here is what it looked like for me, when I ran through all of the above steps:

root@iddqd:/usr/src/linux# mknod /dev/md2 b 9 2
root@iddqd:/usr/src/linux# mdadm --create /dev/md2 --chunk=64 --level=5 --raid-devices=3 /dev/sda4 /dev/sdb4 /dev/sdc4
mdadm: array /dev/md2 started.
root@iddqd:/usr/src/linux# mount -t ntfs /dev/md2 /media/extra
root@iddqd:/usr/src/linux# ls -al /media/extra
total 36
dr-x------ 1 root root 4096 2007-01-01 04:29 .
drwxr-xr-x 8 root root 4096 2006-12-31 19:24 ..
dr-x------ 1 root root    0 2007-01-01 04:29 System Volume Information
root@iddqd:/usr/src/linux# umount /media/extra
root@iddqd:/usr/src/linux# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid5] [raid4]
md2 : active raid5 sdc4[3] sdb4[1] sda4[0]
      163846016 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  6.6% (5458432/81923008) finish=15.7min speed=80909K/sec
md1 : active raid5 sdc3[2] sdb3[1] sda3[0]
      20482560 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sdc2[2] sdb2[1] sda2[0]
      1027968 blocks level 5, 16k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

As you can see, the NTFS volume is now also being recovered. Since I had no data on the drive, I can't say whether anything would have been destroyed by this process; however, I can say that when I let it finish and rebooted back into Windows, there was no problem accessing it. In other words, although mdadm felt the need to rebuild the NTFS stripe, XP Pro didn't complain when it was done. Since this time, I have downloaded the 2.6.19 kernel to the NTFS Windows RAID and unpacked it here in Ubuntu. Working great as far as I can tell.

-- Teague |
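One step the write-up stops short of is making the Linux arrays reassemble and mount by themselves after a reboot. A sketch of the usual way to do that (the fstab entries are assumptions, not part of Teague's setup; the labels come from the mke2fs commands above):
Code:
# Sketch: record the arrays so they assemble automatically at boot (Ubuntu keeps the file under /etc/mdadm/)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# and give the ext3 stripes permanent mount points -- assumed entries, adjust as needed
echo 'LABEL=number_one  /media/extra   ext3  defaults  0 2' >> /etc/fstab
echo 'LABEL=the_larch   /media/extra2  ext3  defaults  0 2' >> /etc/fstab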
So I have since initialized the 2nd RAID 5 partition in Windows, which is around 75 GB in size all told. Obviously the shortcoming above is that I didn't know whether existing data would be corrupted.
After Windows finished formatting and recovering the stripe, I copied ~50 GB of data onto it, then booted back into Ubuntu. Went through the process of adding the stripe to mdadm, starting from mknod and letting it recover the array like it did with the first RAID 5 stripe. When it finished, I dug around the volume and could find no change to the data.. couldn't find any corruption or change from what it was before. Rebooted back into Windows, and it didn't even know that Ubuntu had done its own "recovery". I suppose what mdadm calls "recovery" is mostly just verification of the parity (which would account for the speed with which it finished). And that makes sense, if you don't read too much into the actual wording it uses. I gotta say, though, that I still wish I had the $$ for a pure hardware RAID.. sigh.. |
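If you want to test the guess that mdadm's "recovery" amounts to a parity pass, md exposes a knob for running an explicit check on reasonably recent 2.6 kernels. A sketch, using md2 from the earlier post:
Code:
# Sketch: ask md to verify (not rebuild) parity on an array, then watch progress
echo check > /sys/block/md2/md/sync_action
cat /proc/mdstat                      # shows a "check" pass rather than "recovery"
cat /sys/block/md2/md/mismatch_cnt    # non-zero after the pass means parity mismatches were found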
Hey "michaelsanford",
Thanks a bunch for the info on setting up a software raid0 as the boot device! You provided the missing bits of info I needed, and after maybe 20 minutes of struggling with the Debian Etch installer's partitioning section, it looks like I'm successful! I still have to finish the installation and hope it boots, but it looks like it's installing to the RAID. Errr, looks like you can install to a SW raid0 but you can't actually use it as a boot device. Oh well, nothing ventured... |
I have the same problem, BUT mkfs will destroy my data in /dev/md0.
I want to keep it safe. Help me |
"I have the same problem BUT mkfs will destory my data in /dev/md0 "
Did you do an install, or do you just have data on a RAID0? If you did an install and it won't boot, do you have any space on one of the drives that's not part of the array? If you do, then put your /boot directory on that and make the appropriate changes to GRUB. It should boot after that. I'm afraid that I don't really know how to move an OS to a data drive and make it bootable. I suppose it could be done, but it would be chancy. |
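For the separate /boot suggestion, a minimal sketch of pointing legacy GRUB at it (the device names are examples, not taken from the thread):
Code:
# Sketch: reinstall legacy GRUB with /boot on a plain, non-RAID partition, e.g. /dev/hda1 = (hd0,0)
grub --batch <<EOF
root (hd0,0)
setup (hd0)
quit
EOF
# then check that menu.lst's "root (hd0,0)" and the kernel's root= argument point at the right devices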
In order to boot from a software RAID, GRUB's files have to be on another partition that is not part of the software RAID array. Also, the initrd has to load the required modules, if there are any, in order for the software RAID to work or be detected by Linux. GRUB does not have dmraid or software RAID capabilities yet; soon it should, and the setup will become simpler. RAID 0 really should not be used for the OS because it multiplies the chance of failure by the number of disks. RAID 0 will not speed up loading programs. RAID 1 will speed up loading programs because the array has copies that can cut access times in half or more.
fanqi1234, if you put data on one of the partitions that is part of the array, there is a good possibility that mdadm may have trashed your data. If you did not do this but instead wrote files to one of the /dev/md device nodes, you may be able to get the data back if you use dd to make an image of the array; you will have to use a hard drive that is bigger than the array. /dev/mdX has to be formatted before you use it for storing data. I recommend that after you format it, you place a test file on it and reboot the computer. If the file is there after you reboot, there is a good possibility that any data you place there will still be there every time you boot up the computer.

moaimullet, you could have just bought a hardware controller to make it very, very easy for yourself. Probably there are some files on the NTFS partition that are corrupt; lucky for you they were not system files and they did not have obscure permissions. RAID 5 needs a lot of processor resources for I/O transactions. You may also have a higher chance of data corruption or data loss, so I suggest backing up your data. I would not do it your way, because reliability and stability have to go both ways for good operation. I know Linux is reliable and stable, but Windows is never reliable and stable. I would rather be in debt for several months after buying a hardware RAID controller, like one from 3ware, than do it your way.

Note: SCSI or SATA hard drives in a software RAID can change from one device node to the next. You may have to set the ID or use software labels to make it predictable on boot up. BTW, I have not yet set up RAID myself, but I have studied the documentation from every angle. |
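On the dd suggestion above, a minimal sketch of imaging the array before trying anything destructive (paths are placeholders; as noted, the destination disk must be bigger than the array):
Code:
# Sketch: make a raw image of the array on a bigger disk before doing anything destructive
dd if=/dev/md0 of=/mnt/bigdisk/md0.img bs=1M conv=noerror,sync
# the image can later be examined or mounted read-only via loopback:
mount -o loop,ro /mnt/bigdisk/md0.img /mnt/recovered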
In FC6 I created /dev/md0 (raid0, hda7 + hdb3) to save some files.
Yesterday I installed openSUSE 10.2 to replace FC6 (/dev/hda6). Now all my backup files are in /dev/md0, and I want /dev/md0 to work just as it did in FC6, with the old data in it.
-*-
PS: I can't speak English well. I hope you can understand me. |
Quote:
Assuming this is correct, your data is still there but your installation destroyed your mdadm.conf. Don't panic! You will first need to verify that you have mdadm installed on SUSE. If you don't, install it. Then, do you remember how you created your array? DON'T DO THAT!!! However, you need to do something similar. What you need to do is run the mdadm command using the --assemble option to reassemble the array on your new operating system. Your command will look something like the following: Code:
mdadm --assemble /dev/md0 /dev/hda7 /dev/hdb3
Code:
echo 'DEVICE /dev/hda7 /dev/hdb3' > /etc/mdadm.conf

One additional note: it is possible that mdadm might create a proper mdadm.conf when you install it. Check that first before you run the above commands. |
Quakeboy02, you have a good understanding.
Now the array is running OK (I think), but "mount" says "wrong fs type".

SUSE:
Code:
fans:~ # mdadm -D /dev/md1

and my personal log from FC6:
Code:
# mdadm -D /dev/md0

hda8 is now called hda7 (because of some partition changes when I was installing SUSE). Is this a problem? |
Code:
fans:~ # cat /proc/mdstat
Code:
fans:~ # cat /etc/mdadm.conf
Code:
fans:~ # fdisk -l /dev/hda
Code:
fans:~ # fdisk -l /dev/hdb
Code:
fans:~ # mount /dev/md0 /mnt/md0
Code:
fans:~ # mount /dev/md0 /mnt/md0 -t ext3 |
Code:
fans:~ # mount /dev/md0 /mnt/md0 -t ext3

Also, I believe that you have to run this line so that you won't have to manually assemble the array when you next boot. I could be wrong, of course.
Code:
mdadm --detail --scan >> /etc/mdadm.conf |
While I was playing with software arrays, mdadm started insisting on building some arrays that I hadn't defined as a result of using the generic DEVICE statement like you have. I would change it, if I were you, to have only the devices actually used in the array as follows:
Code:
DEVICE /dev/hda7 /dev/hdb3 |
No, it's not a mistake. I just used --assemble on /dev/md1 to try again.
I think the array is running, but "mount" says "wrong fs type". Should I use "fsck.ext3 -y" or "mkfs.ext3 -S"?

# fsck.ext3 -n /dev/md0 > ./fsck.txt

fsck.txt:
Code:
Couldn't find ext2 superblock, trying backup blocks... |
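Before reaching for mkfs.ext3 -S, it is sometimes enough to point e2fsck at a backup superblock. A sketch; these are not commands anyone in the thread ran:
Code:
# Sketch: list where the backup superblocks would live (-n only simulates, it writes nothing),
# then ask e2fsck to use one of them
mke2fs -n /dev/md0         # prints "Superblock backups stored on blocks: 32768, 98304, ..."
e2fsck -b 32768 /dev/md0   # 32768 is the usual first backup for a 4k-block ext2/ext3 filesystem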
"no ,it's not a mistake. I just -assemble /dev/md1 to try again."
Did you actually try "mount /dev/md1 /mnt" (without the fs type) after you reassembled it? I can't see what you're doing, so you have to be very specific when you tell me what you do. I cannot tell whether you have actually tried mounting without the fs type with the way it is assembled. |
Quote:
Hi Electro.. Yeah, it was a complete stab in the dark with this one. What made me take the jump was the fact that, should my costly hardware RAID controller die, I'd have to get a perfectly compatible replacement (e.g. the exact same model) or else lose everything. If the original RAID lasted a couple of years, that could get seriously expensive. That is way too much faith in hardware for me. So I did this knowing I had some free time ahead that could be spent reconfiguring if need be. That said, so far so good. Haven't run into any NTFS corruption or permission problems to date. But you're dead on with the Linux vs Windows reliability: my biggest problem is that when XP randomly reboots, which it likes to do every couple of weeks without any warning or useful debug info, it does a rebuild on all volumes at once. Since they are all on the same physical disks, this does create some grindage, and can last around 7 hours. My solution is to reboot manually every week or so.. using Firefox it's hardly a problem to get back to where I was (reboot without closing any of its windows ;), but having to remember is a pain sometimes.. Anyway, that is better left for a different thread. What is more relevant is that the CPU usage doesn't stray above 10%, even when rebuilding.. and I'm using an AMD 3500, hardly top of the line today. It's really negligible considering how much processor is on the market now. It might become an issue after Moore's Law does battle with the periodic table (and loses), but for now I'm happy.. :) |
Code:
fans:~ # mount /dev/md0 /mnt/md0 |
At this point, since the array is clean, it's no longer an array problem, I guess. It's a filesystem problem. It doesn't look like you have much choice other than to run "e2fsck -y /dev/md0" and have it repair the filesystem for you. I can't imagine why you wanted to run -S.
|
e2fsck -y /dev/md0
:D :D :D Haha! I can mount it now, and all the data is OK |
Yay!!!!!!! Success is a wonderful thing! :)
|
D'oh
New post on old thread...
Been trying to set up a RAID 5 on Ubuntu 12.04 Server and successfully created a RAID on /dev/md0, but couldn't mount it. I kept hitting the bad fs error.

Command I was trying to mount it with:
me@myserver # sudo mount -t ext2 /dev/md0 /mnt/RaidDrive

SHOULD have been trying:
me@myserver # sudo mount -t ext2 /dev/md0p1 /mnt/RaidDrive

Just adding this in case anyone else comes up with the same error... Google pointed this thread out; it may do so for someone in the future! |
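A quick way to spot that situation, i.e. whether the filesystem lives on a partition inside the md device or on the device itself. A sketch using the device names from the post above:
Code:
# Sketch: see whether /dev/md0 carries a partition table
sudo fdisk -l /dev/md0             # lists /dev/md0p1 etc. if the array was partitioned
lsblk /dev/md0                     # shows md0 and any md0p* children with their sizes
sudo blkid /dev/md0 /dev/md0p1     # reports which node actually has a filesystem signature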