I'm running an external RAID array connected to my PC via SCSI through an Adaptec AHA39320 host adapter. Everything works, except that I have to mount the drives manually every time the drives or the PC come up; fstab contains the correct parameters, and mount -a works. The problem appears to be that the drives are detected very late in the boot, around the time the login prompt appears. If I restart the array, no entry appears in /var/log/messages to show that it has come back up, which seems to indicate that the trigger point for udev is missing. I created a rule that watched for a SCSI device being added and then called mount -a, but the rule never triggered.
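A sketch of the kind of rule I mean (the file name is illustrative, not the exact file I used):

    # /etc/udev/rules.d/99-raid-mount.rules (name illustrative):
    # run mount -a whenever a SCSI device is added
    ACTION=="add", SUBSYSTEM=="scsi", RUN+="/bin/mount -a"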
Any ideas? The distro is SliTaz 4.0 (kernel 2.6.37), running the aic79xx driver for the Adaptec host.
I had a similar thought, but it would be nice to get it working as intended. Are there any settings in the SCSI BIOS that would affect this behaviour, e.g. removable-device settings?
I'd run a few live CDs of other distros just to see whether it has something to do with SliTaz, and I get the feeling it does.
That distro is made to be small and fast; they may have cut some corners.
I don't see any settings in the RAID BIOS that would help. In general, the drives ought to be available once the SCSI BIOS finishes and hands off to the next stage of the power-on sequence.
I don't think that putting in some sleep time would be bad, exactly. Knoppix used to make me wait until USB settled.
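Something like this at the end of the local startup script would do it (a sketch; the 30-second delay is a guess, and the script location varies by distro, e.g. rc.local):

    # Wait for the SCSI bus to settle, then mount everything in fstab.
    # Backgrounded so the rest of the boot isn't held up.
    (sleep 30 && mount -a) &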
Jefro, I wasn't referring to the RAID BIOS, I was referring to the SCSI host BIOS. Anyway, it appears that this is normal behaviour for this hardware setup, but I found an answer courtesy of IBM. For the SCSI devices to become known to the operating system, the SCSI bus needs to be rescanned to bring up the newly ready devices. What I did was create a cron job that reads the SCSI devices using lsscsi, rescans the bus, reads the devices again using lsscsi, and compares the results. If the two results differ, it automounts the drives using mount -a (the mount points are already set in fstab). It works fine; the only issue is that the cron job does nothing 99% of the time, but the overhead is small.
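The script boils down to something like this (a sketch; rescanning every host under /sys/class/scsi_host is my generalisation, and it has to run as root):

    #!/bin/sh
    # Compare the SCSI device list before and after a bus rescan;
    # if anything new appeared, mount everything listed in fstab.

    before=$(lsscsi)

    # Writing "- - -" (channel, target, LUN wildcards) to a host's scan
    # file asks the kernel to rescan that bus for newly ready devices.
    for scan in /sys/class/scsi_host/host*/scan; do
        echo "- - -" > "$scan"
    done

    after=$(lsscsi)

    if [ "$before" != "$after" ]; then
        mount -a
    fi

It runs from root's crontab once a minute, e.g. * * * * * /usr/local/sbin/rescan-mount.sh (path hypothetical).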
I get it now; sorry I didn't read it carefully. You have a RAID enclosure attached to a SCSI adapter.
However, we still have no idea where the issue lies. Plenty of Linux systems have run on that type of hardware for decades without any issue. I still suspect the odd nature of SliTaz (and I am a fan of SliTaz).