Old 07-27-2008, 11:09 AM   #31
mostlyharmless
In Slackware's initrd there is a specification for the "real root device", stored in /proc/sys/kernel/real-root-dev, for when you use the switchroot command; the mkinitrd script configures it automatically. I'm sure there's something similar in your setup. In your init there's
Quote:
resume
echo > /proc/suspend2/do_resume
echo Mounting root filesystem /dev/root
mount -o defaults --ro -t ext3 /dev/root /sysroot
echo Switching to new root
switchroot --movedev /sysroot
echo Initrd finished
Clearly you are correct: change the nash commands that set up /dev/root. Either point them at the UUID of /dev/mapper/nvidia_ecccjggb2, or replace /dev/root in your init with /dev/mapper/nvidia_ecccjggb2.
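For the mount line, that would look something like this (just a sketch; the rest of the init stays as it is):

Code:
###mount the mapped array directly instead of the /dev/root node
mount -o defaults --ro -t ext3 /dev/mapper/nvidia_ecccjggb2 /sysroot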
 
Old 07-27-2008, 12:16 PM   #32
pepsimachine15
Original Poster
Should I change the mkrootdev command or the mount command? I was thinking I should erase "/dev/root" from the mkrootdev command so it would use the root= string that is passed on the kernel command line.
 
Old 07-27-2008, 12:30 PM   #33
pepsimachine15
Original Poster
I tried changing the mount command, changing the mkrootdev command, and changing both; none of it worked.
 
Old 07-28-2008, 09:05 AM   #34
mostlyharmless
I presume you changed /dev/root to /dev/mapper/nvidia_ecccjggb2 everywhere in the init script.

When init gets to the point of mounting /dev/mapper/nvidia_ecccjggb2, what is the error?

Has dmraid not recognized the RAID 0 set by then, so that the mount of /dev/mapper/nvidia_ecccjggb2 onto /sysroot fails? Does the error occur afterwards, with the switchroot command? Or is there an error with the mkrootdev command?
 
Old 07-28-2008, 04:12 PM   #35
pepsimachine15
Original Poster
The mkrootdev command does not seem to throw an error; however, I'll add some echoes in there to see where it would be throwing one if it did. The errors occur during the mount command. The system tries to mount the ext3 filesystem three times, each time differently (i.e. once normally, once without any -o options, and last with just a plain old mount), and then it errors out with "unable to mount". I'm not sure if switchroot throws an error either; I'll have to reboot and write it all down, but for now here's my latest init script...

Code:
#!/bin/nash

echo "Loading jbd.ko module"
insmod /lib/jbd.ko
echo "Loading ext3.ko module"
insmod /lib/ext3.ko

###load sata drivers first
echo "loading scsi_mod"
insmod /lib/scsi_mod.ko
echo "loading generic scsi driver"
insmod /lib/sg.ko
echo "loading sd_mod"
insmod /lib/sd_mod.ko
echo "loading libata..."
insmod /lib/libata.ko
echo "loading ahci..."
insmod /lib/ahci.ko

echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Mounting sysfs
mount -t sysfs none /sys
echo Creating device files
mountdev size=32M,mode=0755
echo -n /sbin/hotplug > /proc/sys/kernel/hotplug
mkdir /dev/.udevdb
mkdevices /dev

###detect and map raid
dmraid -ay

echo Creating root device
mkrootdev /dev/mapper/nvidia_ecccjggb2
resume
echo > /proc/suspend2/do_resume
echo Mounting root filesystem /dev/root
mount -o defaults --ro -t ext3 /dev/mapper/nvidia_ecccjggb2 /sysroot
echo Switching to new root
switchroot --movedev /sysroot
echo Initrd finished
 
Old 07-28-2008, 04:32 PM   #36
pepsimachine15
Original Poster
Here's the init process on startup, word for word, starting from where dmraid is run:

Code:
/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
creating root device
Trying to resume from /dev/mapper/nvidia_ecccjggb3
unable to access resume device (/dev/mapper/nvidia_ecccjggb3)
echo: cannot open /proc/suspend2/do_resume for write: 2
mounting root filesystem /dev/root
mount: error 6 mounting ext3 flags defaults
well, retrying without the options flags
mount: error 6 mounting ext3
well, retrying read-only without any flag
mount: error 6 mounting ext3
switching to new root
ERROR opening /dev/console!!!!: 2
error dup2'ing fd of 0 to 0
error dup2'ing fd of 0 to 1
error dup2'ing fd of 0 to 2
unmounting old /proc
unmounting old /sys 
switchroot: mount failed: 22
initrd finished
kernel panic - not syncing: attempted to kill init!

There's a line that says "mounting root filesystem /dev/root", but that's just the echo line that I didn't change, as you can see in the init script above. It shouldn't have anything to do with the failure.
 
Old 07-29-2008, 12:41 PM   #37
mostlyharmless
More questions re: mount failure

Interesting that /dev/mapper/nvidia_ecccjggb3 is used by "resume" even though /dev/mapper/nvidia_ecccjggb2 is the only one mentioned in the init script. You mentioned you have three partitions in your RAID setup; what are they for?

Try inserting "dmraid -s" after "dmraid -ay" to see what was recognized. I'm also curious that the "pdc" (Promise) metadata format was discovered on your disks. Has other RAID hardware mounted these disks before? For that matter, what does fdisk /dev/mapper/nvidia_ecccjggb say (after running dmraid -ay)?

Also try running "mount" without any options to see what is mounted, and maybe "ls -l /dev/mapper/nvidia*" to see what dmraid mapped.
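In the init script that would look something like this (a sketch, assuming nash can run these at all; drop them in right after the raid is mapped):

Code:
###existing line
dmraid -ay
###diagnostics: what was recognized, what is mounted, what got mapped
dmraid -s
mount
ls -l /dev/mapper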
 
Old 07-29-2008, 04:49 PM   #38
pepsimachine15
Original Poster
nvidia_ecccjggb3 is used by resume because of the resume= kernel option in my grub menu.lst. I set it to partition 3 not knowing what it should be set to; I figured the swap partition might be where a resume file could be stored. Either way, I don't think that should have any bearing on the root fs (resume is used for hibernation?).
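For reference, my kernel line in menu.lst looks roughly like this (an illustrative sketch only; the exact image name is omitted, and the root= value shown here is just the array's root partition):

Code:
kernel /boot/vmlinuz-<version> root=/dev/mapper/nvidia_ecccjggb2 resume=/dev/mapper/nvidia_ecccjggb3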

Partition 1 is Windows NTFS.
Partition 2 is Linux ext3.
Partition 3 is Linux swap.

The disks were previously used with the onboard NVIDIA nForce RAID on my old motherboard; they were transferred over to my current board, which has an ATI RAID. The only other RAID cards I have are SiI 0680s, and I don't have a Promise card at all. Perhaps dmraid reads my ATI hardware as Promise?

fdisk says this:

Code:
[root@pc administrator]# fdisk /dev/mapper/nvidia_ecccjggb

The number of cylinders for this disk is set to 9001.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

I did not think the above would be a problem since I am able to mount all the partitions, and the BIOS is able to access the MBR and load GRUB just fine, but could this be a problem? And why would it be a problem only in the ramdisk and not when running off my 20 GB drive? If this can be changed without losing any data I can change it, but I don't know how.

I'll try dmraid -s and a plain mount, but I don't think nash supports ls; I'll check the man page again.
 
Old 07-30-2008, 03:13 PM   #39
mostlyharmless
Well, I don't know about "resume" and the swap file, as I don't use it. It may well be that you are right about using the swap partition, and as you pointed out, it's kind of a side issue. Clearly, though, resume failed as well, not just "mount", indicating a failure to access the RAID that affects both operations on both partitions.

There could be any number of reasons that the "pdc" format shows up; I think dmraid sometimes confuses one kind of metadata for another, so perhaps it's reading your old ATI metadata. It looks like dmraid uses the "nvidia" format anyway, and presumably dmraid -s will confirm that.

After you run "fdisk /dev/mapper/nvidia_ecccjggb" and type "p" to print the partition table, I presume everything looks OK? Just checking everything else I can think of... Does nash support fdisk, and can you find out what fdisk says from inside the initrd? But, like you, I suspect that this isn't the problem.
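If you can get a static fdisk binary into the initrd, a non-interactive listing would do it (a sketch; whether the binary actually runs under nash is the open question):

Code:
###hypothetical: print the partition table from inside the initrd
/sbin/fdisk -l /dev/mapper/nvidia_ecccjggb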

Quote:
and the bios is able to access the mbr and load grub just fine, but could this be a problem?
Unfortunately, the BIOS and GRUB reading the RAID set has nothing to do with dmraid and Linux reading the RAID set.

Quote:
i'll try dmraid -s and a plain mount, but i dont think 'nash' supports 'ls', i'll check the man page again
There must be some way of checking what /dev and /dev/mapper look like inside the initrd...
 
Old 08-05-2008, 09:50 PM   #40
pepsimachine15
Original Poster
Sorry, I have not had time to work on this. The kitchen is being remodeled and I'm doing the electrical, so now I've got a second job here at home. I'll be back to this soon though.
 
Old 08-06-2008, 09:18 AM   #41
mostlyharmless
Good luck on the remodeling.
 
Old 08-27-2008, 07:54 PM   #42
pepsimachine15
Original Poster
Bet you thought I had given up on this, but no! I've pretty much caught up on real life, so now I've got some play time again.

Here's what I just tried:

"dmraid -s" runs successfully as it should, printing this output:
Code:
/dev/sda: "pdc" and "nvidia" formats discovered (using nvidia)!
/dev/sdb: "pdc" and "nvidia" formats discovered (using nvidia)!
*** Active Set
name   : nvidia_ecccjggb
size   : 144607488
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 2
spares : 0
Running just a plain "mount" unfortunately just prints mount's usage message.

As for trying to run "ls /dev/mapper" or any other non-built-in nash command, this is taken from the nash man page:
Code:
There are two types of commands, built in and external. External commands are run from the
filesystem via execve(). If commands names are given without a path, nash will search it's builtin
PATH, which is /usr/bin, /bin, /sbin, /usr/sbin.
Now, I have tried many times to get execve() to work correctly. I've tried 'execve(ls)', 'execve("ls /dev/mapper")', 'execve(ls, /dev/mapper)', and many others. Every time, I get the error message: Failed in exec of execve(whateveriput)

This leads me to believe my usage of execve() isn't the problem, but rather that nash is unable to run the non-built-in command, even though the man page says execve() is used.

Funny, though: 'dmraid -ay' isn't a built-in nash command, but it seems to run fine?
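Maybe external commands are simply supposed to be called by name, the same way dmraid is, with execve() just being the C call nash uses internally? Something like this (assuming the binary has been copied into the initrd):

Code:
###call the external binary by name, not via execve(...)
/bin/ls -l /dev/mapper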

Also, any time I insert a command into the init script that fails, the script executes a second time from the failing command onward. So if I put three failing commands in the init script, the script will run three more times from the point of the failed command on. As there are other things after the failed commands, all of that goes to the screen, and any messages that I need to read have scrolled up out of view. This makes trying different commands very difficult, as I can only insert one command into my script at a time.

If you did not understand the above, it is better explained in this open bug:
https://bugzilla.redhat.com/show_bug.cgi?id=144255
It seems someone did make a patch, but I have not yet figured out how to apply patches, or I would have done this.
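From what I've read, applying it would go something like this (just a guess; the source directory and patch file names here are made up):

Code:
###hypothetical names, untested
cd mkinitrd-src/                   # unpacked nash/mkinitrd source
patch -p1 < nash-retry-loop.patch  # the patch attached to the bug
make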

 
Old 08-28-2008, 11:15 AM   #43
mostlyharmless
Mount failure, nash problems

Well, glad to see you're back; hope the remodeling went well. I had to reread this thread to remember where we were! Anyway....

I'm puzzled that "mount" doesn't even show /proc and /sys mounted. Very odd. It shouldn't give you the usage message if something is mounted, as /proc and /sys should be. Try adding the -v option to all the mount commands in the initrd to see if you can find out what's happening; maybe those earlier mounts are failing silently.

I wonder if external commands in general aren't working for you; dmraid works because you compiled a static standalone version. You could try copying a version of "mount" from your filesystem into your initrd and running that one instead; rename it to "zmount" or something so that nash doesn't get confused, if you have to. Same with "ls": call it "zls".
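Something along these lines when you build the initrd tree (a sketch; adjust the paths, since mount may live in /bin or /sbin depending on the distro):

Code:
###on the installed system, copying into the unpacked initrd tree
cp /bin/mount <initrd-tree>/bin/zmount
cp /bin/ls <initrd-tree>/bin/zls
###then, inside the init script
zmount
zls -l /dev/mapper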
 
Old 08-28-2008, 05:29 PM   #44
pepsimachine15
Original Poster
I've got some things to do tonight so I'm just throwing out a quick note. I did already copy /bin/ls over to my init-tree before I tried to run it from my init script. I had also done that before with /bin/sh to try to get a shell prompt. Neither of those worked; only dmraid works. I have to check again, but maybe both 'sh' and 'ls' are symlinks to 'busybox' itself, which is why they aren't working.
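The check itself should be easy enough on the installed system, something like:

Code:
###see whether sh and ls are real binaries or busybox symlinks
ls -l /bin/sh /bin/ls
file /bin/ls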

I'll do some further investigating when I get home tonight and post another reply.
 
Old 08-29-2008, 12:51 AM   #45
pepsimachine15
Original Poster
ls seems to be an executable itself, not a symlink. Should I be trying to call my external commands by just invoking them in the script like I do with dmraid, and not via execve()? Why would the man page even bother referring to execve() if there is no need to use it in the script?

I'll try copying mount to init-tree/bin/zmount and ls to init-tree/bin/zls and calling those straight from the script to see what happens.
 
  

