I just upgraded a system running FC6 with kernel 2.6.18 and a mirrored pair of drives (NVIDIA motherboard RAID). All worked well under 2.6.18.
Once I upgraded to kernel 2.6.22, the system no longer boots (see the error below). Can anyone give me a clue on how to get the new kernel working? (Please don't say to reformat without the mobo RAID - I know that was a questionable choice to begin with.)
device-mapper: table: 253:0: mirror: Device lookup failure
device-mapper: reload ioctl failed: No such device or address
Unable to access resume device (LABEL=SWAP-nvidiajfe)
mount: could not find filesystem '/dev/root'
setuproot: moving /dev/ failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
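For what it's worth, the usual first step with this class of panic is to boot the old 2.6.18 kernel from the GRUB menu and rebuild the new kernel's initrd so the device-mapper mirror modules are available at boot. A rough sketch follows; the exact version string here is an assumption, so substitute the output of rpm -q kernel for your new kernel:

# After booting back into the working 2.6.18 kernel, rebuild the initrd
# for the new kernel with the dm mirror modules forced in.
# "2.6.22.9-91.fc7" is an assumed version string -- use your own.
mkinitrd -f --with=dm-mod --with=dm-mirror \
    /boot/initrd-2.6.22.9-91.fc7.img 2.6.22.9-91.fc7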
I have reproduced this behavior on an ASUS M2N32 mobo (with the NVIDIA nForce 590 SLI chipset's SATA controller). The CPU is a dual-core AMD Athlon 5900+. I constructed the RAID 1 array using the BIOS utility and then loaded Fedora 7 onto the discs. All seemed to be well, except for the nagging fact that /proc/mdstat was empty. So I powered down, disconnected the first SATA drive, and rebooted. This produced the kernel panic reported in the previous post. Next I powered down, reconnected the disc, and checked that the system booted normally. (Yes.) Then, for completeness, I powered down, disconnected the *second* disc in the RAID array, and booted the system again. This produced the identical panic report!
Once I restored the disc connections and got the system back up, I found this in /dev/mapper:
brw-rw---- 1 root disk 253, 0 2007-11-12 17:20 nvidia_fjdddief
brw-rw---- 1 root disk 253, 1 2007-11-12 17:20 nvidia_fjdddiefp1
brw-rw---- 1 root disk 253, 2 2007-11-12 17:20 nvidia_fjdddiefp2
brw-rw---- 1 root disk 253, 3 2007-11-12 17:20 nvidia_fjdddiefp3
brw-rw---- 1 root disk 253, 4 2007-11-12 17:20 nvidia_fjdddiefp4
brw-rw---- 1 root disk 253, 5 2007-11-12 17:20 nvidia_fjdddiefp5
I believe this confirms that the entry that could not be looked up was the main RAID device, which in my case has several partitions defined on it.
My working theory now is that, even though the BIOS thinks that a RAID array exists, the kernel disagrees. Any recommendations on how to fix this would be appreciated.
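For anyone wanting to check the same thing, a few standard commands will compare what the BIOS metadata claims against what the kernel actually assembled (the output will of course differ per system):

# Show the raw fakeRAID metadata dmraid found on each disc:
dmraid -r
# Show the RAID sets dmraid would assemble, and their status:
dmraid -s
# Show what device-mapper actually has loaded right now:
dmsetup ls
dmsetup table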
Update on my troubles with RAID: the NVIDIA controller is actually a fakeRAID controller (like a WinModem), and even though udev managed to create a bunch of device nodes for it, when I unplugged either side of the alleged RAID 1 array, an error occurred that the driver couldn't handle.
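For anyone following the same path: before switching to software RAID, it's worth clearing the old BIOS metadata, or dmraid may keep grabbing the discs at boot. Something like this should do it (the device names are examples - double-check with dmraid -r before erasing anything):

# Erase the NVIDIA fakeRAID metadata from both discs.
# /dev/sda and /dev/sdb are assumed names for this sketch.
dmraid -r -E /dev/sda
dmraid -r -E /dev/sdb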
I have gone to pure software RAID and am finally making some headway with that. But one caveat there as well: it appears that you really need to create a RAID device for each partition (a sketch of that layout follows below), rather than one device that spans the whole disc with partitions inside it. If anyone knows how to do RAID 1 the other way, I would love to hear about it.
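Here is roughly what the per-partition layout looks like; the device names are assumptions, and both discs are assumed to carry identical partition tables:

# RAID 1 per partition: one md device for each matching partition pair.
# /dev/sda* and /dev/sdb* are assumed names for this sketch.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Unlike with the fakeRAID setup, the arrays now actually show up here:
cat /proc/mdstat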
If I understood your last comment correctly, you want one single big RAID disk built directly from the two devices (no partitions). If that is correct, then using the whole-disk device names as the RAID members may help a bit. If instead you are looking for partitions under RAID (md0, then partitions created on it), you can try creating an LVM volume group out of the RAID device. That will allow you to create further "partitions", in the form of LVs, on the underlying RAID device; see the sketch below.
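A rough sketch of what I mean (all device, volume group, and LV names here are just examples):

# One big RAID 1 array across the whole discs, then LVM on top of it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# Turn the array into a physical volume and carve out "partitions" as LVs.
pvcreate /dev/md0
vgcreate vg_raid /dev/md0
lvcreate -L 10G -n lv_root vg_raid
lvcreate -L 2G  -n lv_swap vg_raid
mkfs.ext3 /dev/vg_raid/lv_root
mkswap /dev/vg_raid/lv_swap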
Actually, I wanted several partitions within a single RAID device, something that my further reading indicates is supported under software RAID only by using Logical Volume Management, even though some of the documentation might lead one to think otherwise. However, I have not taken the time or trouble to rebuild my systems in this way, since creating individual RAID devices (one per partition) seems to work just fine.
PS: This forum is not really like instant messaging; you might try using the spellcheck feature to improve the readability of your postings.