Linux - Software: This forum is for Software issues.
In Slackware's initrd there is a specification for the "real root device", stored in /proc/sys/kernel/real-root-dev, for when you use the switchroot command. The mkinitrd script configures it automatically. I'm sure there's something similar in your setup. In your init there's
Quote:
resume
echo > /proc/suspend2/do_resume
echo Mounting root filesystem /dev/root
mount -o defaults --ro -t ext3 /dev/root /sysroot
echo Switching to new root
switchroot --movedev /sysroot
echo Initrd finished
Clearly you are correct: change the nash command to set up /dev/root. Either point /dev/root at the UUID of /dev/mapper/nvidia_ecccjggb2, or replace /dev/root in your init with /dev/mapper/nvidia_ecccjggb2.
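For concreteness, here's a minimal sketch of what the edited init lines might look like with the second option (hard-coding the mapper device); device names are the ones from this thread, so adjust to taste:

```shell
# Sketch of the edited init (nash) lines, with /dev/root replaced
# by the dmraid-created mapper device:
echo Mounting root filesystem /dev/mapper/nvidia_ecccjggb2
mount -o defaults --ro -t ext3 /dev/mapper/nvidia_ecccjggb2 /sysroot
echo Switching to new root
switchroot --movedev /sysroot
```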
Should I change the mkrootdev command or the mount command? I was thinking I should erase the "/dev/root" from the mkrootdev command so it would use the root= string passed on the kernel command line.
I presume you changed /dev/root to /dev/mapper/nvidia_ecccjggb2 everywhere in the init script.
When init gets to the point of mounting /dev/mapper/nvidia_ecccjggb2, what is the error?
Has dmraid not recognized the RAID 0 set by then, so that there is an error mounting /dev/mapper/nvidia_ecccjggb2 onto /sysroot? Does the error occur afterwards with the switchroot command? Or is there an error with the mkrootdev command?
The mkrootdev command does not seem to throw an error; I'll add some echoes in there to see where it would be throwing one if it were. The errors occur during the mount command. The system tries to mount the ext3 filesystem three times, each time differently (i.e. once normally, once without any -o options, and lastly just a plain old mount), and then it errors out with "unable to mount". I'm not sure if switchroot throws an error either; I'll have to reboot and write it all down, but for now here's my latest init script...
Here's the init process on startup, word for word, starting from where dmraid is run:
Code:
/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
creating root device
Trying to resume from /dev/mapper/nvidia_ecccjggb3
unable to access resume device (/dev/mapper/nvidia_ecccjggb3)
echo: cannot open /proc/suspend2/do_resume for write: 2
mounting root filesystem /dev/root
mount: error 6 mounting ext3 flags defaults
well, retrying without the options flags
mount: error 6 mounting ext3
well, retrying read-only without any flag
mount: error 6 mounting ext3
switching to new root
ERROR opening /dev/console!!!!: 2
error dup2'ing fd of 0 to 0
error dup2'ing fd of 0 to 1
error dup2'ing fd of 0 to 2
unmounting old /proc
unmounting old /sys
switchroot: mount failed: 22
initrd finished
kernel panic - not syncing: attempted to kill init!
There's a line that says "mounting root filesystem /dev/root", but that's just the 'echo' line that I didn't change, as you can see in the above init script. It shouldn't have anything to do with it.
Interesting that /dev/mapper/nvidia_ecccjggb3 is used by "resume" even though /dev/mapper/nvidia_ecccjggb2 is the only one mentioned in the init script. You mentioned you have three partitions in your RAID setup, what are they for?
Try inserting "dmraid -s" after "dmraid -ay" to see what was recognized. I'm also curious why the "pdc" (Promise) metadata format was discovered on your disks. Has other RAID hardware mounted these disks before? For that matter, what does fdisk /dev/mapper/nvidia_ecccjggb say (after running dmraid -ay)?
Also try running "mount" without any options to see what was mounted, and maybe "ls -l /dev/mapper/nvidia*" to see what dmraid mapped.
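Put together, the diagnostics could be dropped into the init script right after the RAID activation, something like this (a sketch; the device names and dmraid calls are the ones already in this thread):

```shell
# Diagnostic additions to the init script, just after raid activation:
dmraid -ay           # activate the recognized raid sets
dmraid -s            # summarize the active sets (name, type, status)
mount                # with no arguments: list what is currently mounted
ls -l /dev/mapper    # show which device nodes dmraid actually created
```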
nvidia_ecccjggb3 is used by resume because of the resume= kernel option in my grub menu.lst. I set it to 3 not knowing what it should be set to; I figured the swap partition might be where a resume file could be stored. Either way I don't think that should have any bearing on the root fs (resume is used like hibernation?).
partition 1 is windows ntfs format
partition 2 is linux ext3
partition 3 is linux swap
The disks used to be on my onboard NVIDIA nForce RAID on my old motherboard. They were transferred over to my current board with an ATI RAID. The only other RAID cards I have are SiI 0680s, and I don't have a Promise card at all. Perhaps dmraid reads my ATI hardware as Promise?
Code:
The number of cylinders for this disk is set to 9001.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
I did not think the above would be a problem, since I am able to mount all the partitions and the BIOS is able to access the MBR and load grub just fine, but could this be a problem? And why would it be a problem only in the ramdisk and not when running off my 20 GB drive? If this can be changed without losing any data I can change it, but I don't know how.
I'll try dmraid -s and a plain mount, but I don't think 'nash' supports 'ls'; I'll check the man page again.
Well, I don't know about "resume" and the swap file, as I don't use it. It may well be that you're right about using the swap partition, and as you pointed out, it's kind of a side issue. Clearly though, resume failed too, not just "mount", indicating a failure to access the RAID that affects operations on both partitions.
There could be any number of reasons that the "pdc" format shows up. I think dmraid sometimes confuses one kind of metadata for another, so perhaps it's reading your old ATI metadata. It looks like dmraid uses the "nvidia" format anyway, and presumably dmraid -s will confirm that.
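If the stale "pdc" metadata ever turns out to matter, dmraid can show, and destructively erase, per-format metadata. I believe the relevant options are -r (list raid devices), -f (restrict to one format handler), and -E (erase), but double-check your dmraid man page before running anything destructive:

```shell
dmraid -r            # list block devices and the metadata format found on each
dmraid -r -f pdc     # restrict the listing to the "pdc" (Promise) format
# Destructive: erases the stale pdc metadata sectors from the disk.
# Only consider this once you're certain the nvidia set is the one in use:
# dmraid -r -f pdc -E /dev/sda
```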
After you type "fdisk /dev/mapper/nvidia_ecccjggb" and type "p" to print the partition table, I presume everything looks OK? Just checking everything else I can think of... Does "nash" support fdisk, and can you find out what fdisk says from inside the initrd? But, like you, I suspect that this isn't the problem.
Quote:
and the BIOS is able to access the MBR and load grub just fine, but could this be a problem?
Unfortunately, bios and grub reading the raid set has nothing to do with dmraid and linux reading the raid set.
Quote:
I'll try dmraid -s and a plain mount, but I don't think 'nash' supports 'ls'; I'll check the man page again
There must be some way of checking what /dev and /dev/mapper look like inside the initrd...
Sorry, I have not had time to work on this. The kitchen is being remodeled and I'm doing the electrical, so now I've got a second job here at home. I'll be back to this soon though.
Bet you thought I had given up on this, but no! I've pretty much caught up on real life, so now I've got some play time again.
Here's what I just tried:
"dmraid -s" runs successfully as it should, printing this output:
Code:
/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
*** Active Set
name : nvidia_ecccjggb
size : 144607488
stride : 128
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
Running just a plain "mount" unfortunately just prints the usage of mount.
As for trying to run "ls /dev/mapper" or any other non-built-in nash command, this is taken from the nash man page:
Code:
There are two types of commands, built in and external. External commands are run from the
filesystem via execve(). If commands names are given without a path, nash will search it's builtin
PATH, which is /usr/bin, /bin, /sbin, /usr/sbin.
Now, I have tried many times to get execve() to work correctly. I've tried 'execve(ls)', 'execve("ls /dev/mapper")', 'execve(ls, /dev/mapper)', and many others. Every time, I get the error message: Failed in exec of execve(whateveriput)
This leads me to believe my usage of execve() isn't the problem, but rather that nash is again unable to run the non-built-in command, even though the man page says that execve() should be used.
Funny though, 'dmraid -ay' isn't a built-in nash command, but it seems to run fine?
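For what it's worth, my reading of that man page text is that execve() is the mechanism nash uses internally, not syntax you write in the script. An external command is written just like a built-in, by name or full path, e.g.:

```shell
# In a nash script, an external command is invoked by name; nash itself
# does the execve() behind the scenes after searching its built-in PATH:
ls -l /dev/mapper
# or, bypassing the PATH search with an explicit path:
/bin/ls -l /dev/mapper
```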
Also, any time I insert a command into the init script that fails, the script executes a second time from the failing command onward. So if I put three commands that fail into the init script, the script will run three more times from the point of the failed command on. As there are other things after the failed commands, this puts all of that on the screen, and any messages that I need to read have scrolled up past the viewable area. This makes trying different commands very difficult, as I can only insert one command into my script at a time.
If you did not understand the above, it is better explained in this open bug: https://bugzilla.redhat.com/show_bug.cgi?id=144255
It seems someone did make a patch, but I have not figured out how to apply patches yet, or I would have done this.
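Applying a unified diff is just a matter of running patch(1) from the top of the source tree with the right strip level. Here is a self-contained demo (all paths are made up for illustration; the real patch from that bug would be applied the same way against the mkinitrd/nash source):

```shell
# Demo of patch -p1 on a throwaway tree (paths are demo-only)
mkdir -p /tmp/patchdemo/src
cd /tmp/patchdemo
printf 'hello\n' > src/file.txt
# A minimal unified diff, as a patch from a bug tracker would look:
cat > fix.patch <<'EOF'
--- a/src/file.txt
+++ b/src/file.txt
@@ -1 +1 @@
-hello
+hello world
EOF
patch -p1 < fix.patch   # -p1 strips the leading a/ and b/ path components
cat src/file.txt        # prints: hello world
```

Running `patch -p1 --dry-run < fix.patch` first is a cheap way to check that the patch applies cleanly before touching anything.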
Last edited by pepsimachine15; 08-27-2008 at 07:57 PM.
Well, glad to see you're back; hope the remodeling went well. I had to reread this thread to remember where we were! Anyway....
I'm puzzled that "mount" doesn't even show /proc and /sys mounted. Very odd. It shouldn't give you that message about usage if something is mounted, as /proc and /sys should be. Try adding the -v option to all the mount commands in the initrd to see if you can find out what's happening - maybe those earlier mounts are failing silently.
I wonder if it's that the external commands in general aren't working for you. dmraid works because you compiled a static standalone version. You can try copying a version of "mount" from your filesystem's /sbin into your initrd and running that one instead; rename it to "zmount" or something so that nash doesn't get confused. Same with "ls"; call it "zls".
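One more thing to check when copying binaries over: unlike the static dmraid, /bin/ls and mount are normally dynamically linked, so they also need their shared libraries inside the initrd or they will fail to exec. A rough sketch (the /tmp path here is for illustration; the thread's actual tree is ./init-tree):

```shell
# Copy a dynamically linked binary plus the shared libraries it needs
# into an initrd tree (paths are illustrative)
BIN=/bin/ls
TREE=/tmp/init-tree
mkdir -p "$TREE/bin" "$TREE/lib"
cp "$BIN" "$TREE/bin/zls"
# ldd lists the libraries the dynamic linker would load; copy each in.
# (A statically linked binary, like the dmraid here, needs none of this.)
for lib in $(ldd "$BIN" | awk '{for (i=1;i<=NF;i++) if ($i ~ /^\//) print $i}'); do
    cp "$lib" "$TREE/lib/"
done
```

The flat /lib here is a simplification: the runtime linker (ld-linux) path is hard-coded into the binary, so inside the real initrd the libraries may need to sit in the same directories that ldd reports.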
I've got some things to do tonight, so I'm just throwing out a quick note. I did already copy /bin/ls over to my init-tree before I tried to run it from my init script. I had also done that before with /bin/sh to try to get a shell prompt. Neither of those worked; only dmraid works. I have to check again, but maybe both 'sh' and 'ls' are symlinks to 'busybox' itself, which is why they aren't working.
I'll do some further investigating when I get home tonight and post another reply.
ls seems to be an executable itself, not a symlink. Should I be trying to call my external commands by just naming them in the script, like I do with dmraid, and not with execve()? Why would the man page even bother referring to execve() if there is no need to use it in the script?
I'll try copying mount to init-tree/bin/zmount and ls to init-tree/bin/zls and calling those straight from the script to see what happens.