Linux - Software: This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
When I run mkraid /dev/md0, the RAID is successfully created.
I then created an ext3 filesystem with mke2fs -j /dev/md0.
The filesystem is successfully created and I can mount it:
mount -t ext3 /dev/md0 /videos
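For reference, mkraid reads the array layout from /etc/raidtab. For a two-disk RAID-0, the file would look something like this (the device names and chunk size here are just example assumptions, adjust them for your disks):

```
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdb1
    raid-disk               1
```

The persistent-superblock 1 line matters for the boot problem below: without a persistent superblock the kernel has no way to recognize the array members on its own at boot time.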
after I restart my computer I can no longer mount the drive. I get the following:
mount -t ext3 /dev/md0 /videos/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg has the following:
EXT3-fs: unable to read superblock
BTW, in the kernel I have Multiple devices driver support (RAID and LVM), RAID support, Linear (append) mode, RAID-0 and RAID-1 compiled in, and Multipath I/O support, Faulty test module for MD, Device mapper support, Crypt target support, snapshot target, mirror target and zero target compiled as modules.
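You can confirm which RAID personalities the running kernel actually has by looking at the first line of /proc/mdstat. A small sketch of the check (the sample contents below stand in for a real /proc/mdstat; on a live system you would read the file directly):

```shell
# Sample /proc/mdstat contents (an assumption for illustration);
# on a real system: grep -q '\[raid0\]' /proc/mdstat
sample='Personalities : [linear] [raid0] [raid1]
unused devices: <none>'

if printf '%s\n' "$sample" | grep -q '\[raid0\]'; then
    echo "raid0 supported by the running kernel"
else
    echo "raid0 missing - load the module or rebuild the kernel"
fi
```

If raid0 is missing from that line even though you compiled it in, you are probably booting a different kernel image than the one you built.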
On my other machine, where I have a RAID 1 setup, Slackware 10.1 automagically starts the RAID device during boot. For some reason, on my machine with RAID 0 (also running Slackware 10.1), this is not the case: when I ran cat /proc/mdstat, there were no devices set up. My fix was to run raidstart --all (or raidstart on the specific device); after doing this, I can successfully mount my RAID drive. Hopefully this will help someone else.
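To make that fix survive reboots on Slackware, the simplest place is /etc/rc.d/rc.local, which runs near the end of boot. A minimal sketch, assuming the old raidtools (raidstart) rather than mdadm:

```
# /etc/rc.d/rc.local addition: start all arrays from /etc/raidtab
if [ -x /sbin/raidstart ]; then
    /sbin/raidstart --all
fi
```

One caveat: rc.local runs after local filesystems are mounted, so if /videos is listed in /etc/fstab the boot-time mount will still fail. Either mark it noauto in fstab and mount it from rc.local after raidstart, or start the array earlier in the boot sequence.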
The automagically started stuff usually happens in the init scripts (it could happen in /etc/inittab, but very little is done there these days).
On the host where it is starting, have a look at /etc/init.d (which may be symbolically linked to another directory).
You should be able to determine which file there has the raid startup stuff (e.g. grep raidstart *).
You should then look at /etc/rc?.d for files S* and K* which would be symbolically linked back to the one in /etc/init.d (or the real path init.d itself is linked to).
The rc?.d directories are rc1.d, rc2.d, rc3.d, etc.; the number is the run level at which the scripts execute.
The S* files are the startup calls and the K* files are the stop (kill) calls that are done within the specific runlevel. (S* being done on boot up and K* being done on shutdown).
Also make sure you have a look for any config files the raid script in init.d is calling. Usually there is at least one. Often it has a variable that simply gets set to 0 for disabled (don't run) or 1 (run). If all your scripts and links are in place (init.d, rc?.d etc...) it is often just that you need to modify the config file to change this from 0 to 1 to have it run automagically.
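A sketch of the enable/disable pattern described above, as it typically appears in a System V-style init script. The file name /etc/default/raid and the RAID variable here are hypothetical examples, not taken from any particular distribution:

```shell
# Hypothetical init-script fragment: a config file holds a 0/1 flag
# that decides whether the service runs at all.
RAID=0                                     # default: disabled
[ -f /etc/default/raid ] && . /etc/default/raid

if [ "$RAID" = 1 ]; then
    echo "starting raid devices"           # real script would call raidstart here
else
    echo "raid startup disabled in config"
fi
```

Flipping RAID=0 to RAID=1 in the config file is then all it takes to enable the service, with no changes to the script or the rc?.d links.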
When I want my hard drive, which is on a separate controller, detected automatically, I have to place the controller's module (driver) in the ramdisk (initrd) file. If you have RAID compiled as modules, you have to do the same.
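A sketch of rebuilding the initrd on Slackware, assuming your release ships the mkinitrd script (newer Slackware releases do; the kernel version and module names below are placeholders for your own):

```
# Build an initrd containing the raid0 module (add your controller's
# module too, colon-separated, if it is also built as a module)
mkinitrd -c -k 2.6.11 -m raid0 -o /boot/initrd.gz

# Then point the boot loader at it; in /etc/lilo.conf, inside the
# image section:
#   initrd = /boot/initrd.gz
# and rerun lilo to install the new configuration
lilo
```

Without the RAID module in the initrd, the kernel cannot assemble /dev/md0 before the root filesystem mounts, which produces exactly the "unable to read superblock" symptom above.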
jlightner, Slackware does not use System V-style init scripts; it uses BSD-style init scripts.