Linux - Software: can not mount RAID 0 /dev/md0

dysenteryduke 09-14-2005 07:09 PM

can not mount RAID 0 /dev/md0

I am trying to set up a RAID 0 array on Linux kernel 2.6.11. I have two 200 GB hard drives (one WD, one Maxtor). I am running Slackware 10.1.

My /etc/raidtab:

raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    nr-spare-disks        0
    chunk-size            32
    persistent-superblock 1
    device                /dev/hdb1
    raid-disk             0
    device                /dev/hdc1
    raid-disk             1

the drives are /dev/hdb1 and /dev/hdc1

When I run mkraid /dev/md0, the array is successfully created.
I then created an ext3 filesystem with mke2fs -j /dev/md0.
The filesystem is successfully created and I can mount it:
mount -t ext3 /dev/md0 /videos

After I restart my computer, I can no longer mount the drive. I get the following:
mount -t ext3 /dev/md0 /videos/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg has the following:
EXT3-fs: unable to read superblock

BTW, in the kernel I have Multiple devices driver support (RAID and LVM), RAID support, Linear (append) mode, RAID-0, and RAID-1 compiled in, and Multipath I/O support, Faulty test module for MD, Device mapper support, Crypt target support, snapshot target, mirror target, and zero target compiled as modules.

Any suggestions are greatly appreciated.



dysenteryduke 09-15-2005 02:46 AM

sometimes the simplest approach is right
So I figured out what the problem was...

On my other machine, where I have a RAID 1 setup, Slackware 10.1 automagically starts the RAID device during boot. For some reason, on my machine with a RAID 0 (also running Slackware 10.1), this is not the case. When I ran cat /proc/mdstat, no devices were set up. My fix was to run raidstart --all (or raidstart with the specific device). After doing this, I can successfully mount my RAID drive. Hopefully this will help someone else.
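For anyone hitting the same wall, here is a hedged sketch of the check described above. The /proc/mdstat snapshot below is fabricated sample output written to a temp file purely for illustration (block and chunk figures are made up); on a real box you would read /proc/mdstat directly.

```shell
# Illustrative only: a fabricated /proc/mdstat snapshot showing what a
# running RAID 0 array looks like (sizes/chunk values are made up).
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid0]
md0 : active raid0 hdc1[1] hdb1[0]
      390716672 blocks 32k chunks
EOF

# The quick test from the post: is md0 active? If this prints nothing
# on a real system, run `raidstart --all` (or `raidstart /dev/md0`)
# and then try the mount again.
grep -q '^md0 : active' /tmp/mdstat.sample && echo "md0 is running"
```

When the array has not been started at boot, the md0 line is simply absent from /proc/mdstat, which is exactly why the ext3 superblock could not be read.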


MensaWater 09-15-2005 08:55 AM

The automagically started stuff usually happens in the init scripts (it could happen in /etc/inittab, but very little is done there these days).

On the host where it is starting have a look at /etc/init.d (may be symbolically linked to another directory).

You should be able to determine which file there has the raid startup stuff. (grep raidstart *)

You should then look at /etc/rc?.d for files S* and K* which would be symbolically linked back to the one in /etc/init.d (or the real path init.d itself is linked to).

The rc?.d directories are rc1.d, rc2.d, rc3.d, etc.; the number is the run level at which the scripts execute.

The S* files are the startup calls and the K* files are the stop (kill) calls that are run within the specific runlevel (S* on boot up and K* on shutdown).

Also make sure you have a look for any config files the raid script in init.d is calling. Usually there is at least one. Often it has a variable that simply gets set to 0 for disabled (don't run) or 1 (run). If all your scripts and links are in place (init.d, rc?.d etc...) it is often just that you need to modify the config file to change this from 0 to 1 to have it run automagically.
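To make the S*/K* symlink idea concrete, here is a small sketch that builds an illustrative runlevel layout under /tmp. The paths, the script name "raid", and the numbers in S25raid/K75raid are all made up for demonstration; they are not the real Slackware or SysV locations.

```shell
# Illustrative SysV-style layout built under /tmp (NOT the real /etc):
# one script in "init.d" and two runlevel symlinks pointing back at it.
mkdir -p /tmp/demo/init.d /tmp/demo/rc3.d
printf '#!/bin/sh\necho "raid $1"\n' > /tmp/demo/init.d/raid
chmod +x /tmp/demo/init.d/raid

# S* links are invoked with "start" at boot, K* links with "stop"
# at shutdown; both resolve to the same underlying script.
ln -sf ../init.d/raid /tmp/demo/rc3.d/S25raid
ln -sf ../init.d/raid /tmp/demo/rc3.d/K75raid

/tmp/demo/rc3.d/S25raid start   # prints: raid start
```

The point of the numbering (25, 75) is ordering within the runlevel: lower S numbers start earlier, lower K numbers stop earlier.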

Electro 09-15-2005 04:19 PM

When I want my hard drive, which is on a separate controller, to come up automatically, I have to place the controller's module (driver) in the ramdisk (initrd) file. If you have RAID compiled as modules, you have to do the same.
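If the md personalities had been built as modules here (they were not; the poster has RAID-0 compiled in), one place Slackware loads boot-time modules is /etc/rc.d/rc.modules. A hedged config-fragment sketch; the module name assumes the stock 2.6.x md drivers:

```shell
# Fragment one might add to /etc/rc.d/rc.modules on Slackware so the
# RAID-0 personality is available before raidstart runs (illustrative;
# modprobe pulls in the md core as a dependency).
/sbin/modprobe raid0
```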

jlightner, Slackware does not use System V style init scripts. It uses BSD style init scripts.
