Old 06-09-2011, 02:23 PM   #1
briandickens
LQ Newbie
 
Registered: Jun 2011
Posts: 5

Rep: Reputation: Disabled
Cannot open root device?


Built my LFS system following the latest book (6.8). I followed along and everything compiled fine, but when I got to the part where you boot into the new system, it wouldn't boot. I get the error "Cannot open root device /dev/sdf4 or unknown-block(0,0)".

The system will, however, boot using the existing Ubuntu kernel that I had; that grub entry uses an initrd. When I took the initrd line out of grub.cfg, it gave me the same result as above. Does this mean that I need to create an initrd for my LFS kernel? I'm not even sure what I would be using it for.

Any thoughts?
 
Old 06-09-2011, 03:47 PM   #2
frieza
Senior Member
 
Registered: Feb 2002
Location: harvard, il
Distribution: Ubuntu 11.4,DD-WRT micro plus ssh,lfs-6.6,Fedora 15,Fedora 16
Posts: 3,233

Rep: Reputation: 406
are you SURE it's /dev/sdf ? how many hard drives DO you have?
/dev/sda = first hard drive
/dev/sdb = second hard drive
/dev/sdc = third hard drive
...
/dev/sdf = sixth hard drive

do you really have SIX hard drives in your unit?

post the output of 'fdisk -l' (must be run as root)
 
Old 06-10-2011, 07:29 AM   #3
briandickens
LQ Newbie
 
Registered: Jun 2011
Posts: 5

Original Poster
Rep: Reputation: Disabled
I am sure it's sdf. I do not have 6 disks in my system though.

This is the output of fdisk -l. Bear in mind I am typing it, not pasting it as I haven't installed any means to connect remotely.

Code:
Disk /dev/sdf: 146.0 GB, 145999527936 bytes
255 heads, 63 sectors/track, 17750 cylinders, total 285155328 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00023202

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1   *        2048      194559       96256   83  Linux
/dev/sdf2          196606   164061183    81932289    5  Extended
/dev/sdf3       164061184   169920511     2929664   82  Linux Swap / Solaris
/dev/sdf4       169920512   285153749    57616619   83  Linux
/dev/sdf5          196608    19726335     9764864   83  Linux
/dev/sdf6        19728384   164061183    72166400   83  Linux
 
Old 06-13-2011, 05:40 AM   #4
Andrew Benton
Senior Member
 
Registered: Aug 2003
Location: Birkenhead/Britain
Distribution: Linux From Scratch
Posts: 2,073

Rep: Reputation: 64
Just because your host system sees it as sdf doesn't mean that your LFS system will call it that. Try different letters (sda, sdb and so on). Did you compile your kernel to use libata or the older IDE driver? The old IDE driver used to call the devices hda rather than sda, so it could be that which is the problem. Are you sure that you compiled your kernel with support for your motherboard's chipset and the filesystem on the partition? Don't mess about with your Ubuntu kernel and initrd; concentrate on compiling a kernel that boots. Make it a monolithic blob with no modules to avoid problems.
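One quick way to check the kernel configuration (a sketch only; the CONFIG_ names below are just examples, substitute the ones that match your disk controller and your root filesystem):

Code:
# Run from the kernel source tree you configured for LFS.  The driver for the
# disk controller and the root filesystem both need to be built in (=y);
# with no initramfs, anything built as a module (=m) is not there at boot.
grep -E 'CONFIG_(SATA_AHCI|SATA_NV|ATA_PIIX|EXT3_FS|EXT4_FS)=' .config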
 
Old 06-14-2011, 02:32 PM   #5
briandickens
LQ Newbie
 
Registered: Jun 2011
Posts: 5

Original Poster
Rep: Reputation: Disabled
I've tried sda through sdf and hda through hdf with no luck. The errors before the VFS error seem to point to the kernel not seeing the RAID array:

Code:
md: Autodetecting RAID arrays.
md: Scanned 0 and added 0 devices.
md: autorun ...
md: ... autorun DONE
VFS: Cannot open root device "sdf4" or unknown-block(0,0)
So it seems like it's not seeing the array?

I compiled the kernel with every ATA driver I could check, and every SCSI and RAID driver too. I compiled in the correct filesystem.
 
Old 06-14-2011, 03:30 PM   #6
AndrewJM
LQ Newbie
 
Registered: Jun 2011
Distribution: LFS svn BLFS svn
Posts: 16

Rep: Reputation: Disabled
I had this when I used Ubuntu to create the partitions originally. The partitions still seemed OK when checked with debugfs -R feature /dev/<xxx>, but they wouldn't boot.

Copy the $LFS/* files to a backup directory on your host:

Code:
mkdir -v /somewhere/off/the/mnt/structure/LFSBKUP
cp -r $LFS/* /somewhere/off/the/mnt/structure/LFSBKUP
Then use the instructions in Chapter 2.3 of the LFS book to reformat your LFS partitions (I actually use a GParted live CD to do this) and copy everything back:

Code:
cp -r /somewhere/off/the/mnt/structure/LFSBKUP/* $LFS

Worked for me!
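One caveat on the copy step: plain cp -r does not preserve ownership, permissions or timestamps, which matter for a finished LFS tree. GNU cp's archive mode does, so a variant like this is safer (same example backup path as above):

Code:
# -a (archive) preserves ownership, modes, timestamps and symlinks
cp -a $LFS/* /somewhere/off/the/mnt/structure/LFSBKUP
cp -a /somewhere/off/the/mnt/structure/LFSBKUP/* $LFS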
 
Old 06-14-2011, 04:45 PM   #7
briandickens
LQ Newbie
 
Registered: Jun 2011
Posts: 5

Original Poster
Rep: Reputation: Disabled
Thanks! I'll give that a shot tomorrow! Should I completely wipe the drive? Should I use a live CD other than Ubuntu? Maybe it doesn't matter as long as I don't use the Ubuntu installer's partition editor?
 
Old 06-14-2011, 04:51 PM   #8
AndrewJM
LQ Newbie
 
Registered: Jun 2011
Distribution: LFS svn BLFS svn
Posts: 16

Rep: Reputation: Disabled
Just reformatting to ext3 worked for me. I have all the partitions mentioned in the book: /, /home, /opt, /boot, ...
So I needed to format each one and copy it all back. If you use the method in Chapter 2.3 (using e2fsprogs in a temporary directory), that should work, I think.
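For reference, the Chapter 2.3 reformat step boils down to something like this (a sketch only; /dev/sdf4 follows the partition table posted earlier, so adjust the device name to your own layout, and note that mke2fs wipes whatever is on the partition):

Code:
# Recreate an ext3 filesystem on the LFS root partition (destroys its contents):
mke2fs -jv /dev/sdf4

# Remount it before copying the backup back:
mount -v -t ext3 /dev/sdf4 $LFS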
 
Old 06-15-2011, 01:35 PM   #9
briandickens
LQ Newbie
 
Registered: Jun 2011
Posts: 5

Original Poster
Rep: Reputation: Disabled
Still not working, but I'm not done trying. I was using an Ubuntu live CD to try to rebuild, but it still called the drives sdf and sdf (I broke the mirror). Now I'll try building with the LFS live CD, which calls the drives sda and sdb, and see if it comes out any different.
 
Old 01-06-2012, 04:19 AM   #10
xyon
LQ Newbie
 
Registered: Jan 2012
Location: 127.0.0.1
Distribution: Debian, Arch, LFS
Posts: 5

Rep: Reputation: Disabled
Old thread, but I've had this error and managed to resolve it with the lfs-initramfs from the Root_FS_on_RAID+encryption+LVM hint (I can't link to it, but it's the only RAID hint I've found).

The original problem was that udev wasn't populating any device nodes for my hard drives (identified as sd{a,b,c,d}); my RAID is based on partitions from all three drives, sdX(0,1,2). Thanks to the initramfs' ability to drop to a shell, I managed to boot into sh and take a peek around, but very little was available to help: all my programs were in the LVM /usr partition, which is obviously on the RAID it can't find, so I was a little stuck.

Firing back into the host distro, I fiddled with the initramfs options and init script to pull in mdadm's /etc/mdadm.conf (which I copied across from the host Debian live distro; it called my devices the same thing), and also made sure mdadm's "HOMEHOST" value matched the one I built the RAID with (visible from mdadm -D /dev/mdX: it's the name of your RAID before the ":", e.g. Mainframe:0 for mine).
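Roughly, those two steps look like this (a sketch; /dev/md0 is just an example device, and the mdadm.conf path is wherever your initramfs init script expects to find it):

Code:
# Record the existing arrays so the initramfs can assemble them by name/UUID:
mdadm --detail --scan >> /etc/mdadm.conf

# Check the array's Name field; the part before the ":" is the homehost:
mdadm -D /dev/md0 | grep -i name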

I also figured out that my kernel didn't have the right driver. Probing the host system revealed that the SATA controller was using the "sata_nv" driver module (nVidia chipset), so I rebuilt with that driver built in rather than left out (derp moment).
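The probing can be done with lspci on the host (a sketch; the grep pattern is just an example):

Code:
# Show each PCI device plus the kernel driver/module that handles it; the
# storage controller's "Kernel driver in use:" / "Kernel modules:" lines
# tell you which driver to build into the LFS kernel.
lspci -k | grep -E -i -A 3 'sata|ide|raid'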

This finally allowed CLFS to boot up, but only as far as LVM. Because /usr and /var are mounted on LVM (as is /home, but that's less relevant), the system wanted to mount those partitions and then fsck them, which obviously doesn't work. I also tracked down a "Relocation error" in libdevicemapper.so that occurred when vgchange was run during boot to activate the volume group containing my logical volumes, so the system still won't boot.

I'm currently building a different version of device-mapper and the other userspace tools to fix this new problem, but the faffing mentioned above did sort out the original problem. Note as well that when booting from GRUB, a very helpful edit to the "kernel" line (hit 'e' before selecting an option) is to add "rw init=/bin/bash" and boot that. This gives you a root shell (no password needed) within your system to fix errors that turn up after the root device is found; lfs-initramfs will drop you to sh if the root device doesn't mount.
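For example, the edited boot entry ends up looking something like this (a sketch; the kernel filename and root device are only placeholders, and on GRUB 2 the line starts with "linux" rather than "kernel"):

Code:
# Edited at the GRUB prompt (press 'e'), or as a temporary grub.cfg entry:
linux /boot/vmlinuz-lfs-6.8 root=/dev/sda4 rw init=/bin/bash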
 
  

