Can any one help me in Understanding what is Root FS and why it is required?
1. What is root fs?
2. Why is it required?
3. When does root fs come alive?
4. What brings up root fs?
5. When we mount other filesystems, what happens to root fs?
6. What contents (directories) are part of root fs?
One of the basic design goals of the Multics operating system (from which UNIX, and later Linux, were derived) was a single-level store for data access, discarding the clear distinction between files (called segments in Multics) and process memory. The memory of a process consisted solely of segments which were mapped into its address space. To read or write to them, the process simply used normal CPU instructions, and the operating system took care of making sure that all the modifications were saved to disk. In POSIX terminology, it was as if every file were mmap()ed; however, in Multics there was no concept of process memory separate from the memory used to hold mapped-in files, as Unix has. All memory in the system was part of some segment which appeared in the file system; this included the temporary scratch memory of the process, its kernel stack, etc.
From that, the concept of the "root file system" should be clear: it's how you locate anything with which you wish to interact.
For example, if you want to look at one of your process's memory locations, you can read from /proc/self/mem.
Are you asking specifically about the rootfs (as you might see listed with df -T)?
If so, as far as I know it's a specialized ramfs built into kernels since 2.6. It's the filesystem that the initial Linux startup code runs from, which enables the startup code to initialize and mount other storage devices like RAID arrays, etc.
In a normal PC system the rootfs filesystem is hidden after the full system gets booted, but you may still see rootfs serving as the ramdisk if you're running a live Linux.
Not hidden. The initrd ramdisk used for root during boot is discarded. The way this happens is that the kernel loads the ramdisk from the initrd (a compressed cpio archive), uses that ramdisk as root to initialize the system, load required drivers, populate the /dev directory (itself a memory-resident filesystem), and mount the real root (on /mnt, specifically). Once the real root is mounted on /mnt, the init process uses the "pivot_root" system call (Linux only) to exchange the kernel's view of root with /mnt. After the system call, the ramdisk (the original root) is mounted on /mnt, and the real root is now root. The ramdisk is then unmounted, which releases the storage used back to the kernel for other uses. Once that is complete, the kernel's init process execs /sbin/init, which takes over the boot process.
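To make that sequence concrete, here is a rough sketch of what an initramfs /init script does. This is a simplified example, not any distribution's actual script: the device name /dev/sda1, the put_old directory name "initrd", and the ext4 module are all assumptions, and modern initramfs setups typically call switch_root instead of pivot_root directly.

```shell
#!/bin/sh
# Hypothetical initramfs /init sketch -- runs with the ramdisk as root.

# Pseudo-filesystems the early boot code needs.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev   # populate /dev from the kernel

# Load whatever driver the real root device needs (assumed module).
modprobe ext4 2>/dev/null

# Mount the real root (device name is an assumption for this sketch).
mount /dev/sda1 /mnt

# Exchange the kernel's view of root: the old ramdisk root ends up
# under /initrd of the new root, then control passes to the real init.
cd /mnt
mkdir -p initrd                    # put_old must exist on the new root
pivot_root . initrd
exec chroot . /sbin/init </dev/console >/dev/console 2>&1
```

After the exec, /sbin/init on the real root is PID 1 and the ramdisk can be unmounted and freed, exactly as described above.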
The only time you will see a ramdisk (more likely a tmpfs mount) will be when the system is running diskless.
It sounds like this info from kernel.org is outdated now?
Quote:
What is rootfs?
---------------
Rootfs is a special instance of ramfs (or tmpfs, if that's enabled), which is
always present in 2.6 systems. You can't unmount rootfs for approximately the
same reason you can't kill the init process; rather than having special code
to check for and handle an empty list, it's smaller and simpler for the kernel
to just make sure certain lists can't become empty.
Most systems just mount another filesystem over rootfs and ignore it. The
amount of space an empty instance of ramfs takes up is tiny.
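One way to see what is mounted over rootfs on a running system is to read the kernel's own mount table rather than df, since df tends to skip the hidden entry. Whether rootfs itself appears depends on how the distribution boots, so treat this as a sketch:

```shell
#!/bin/sh
# /proc/mounts lists mounts in the order the kernel created them;
# rootfs, when present, is the very first entry, with the real root
# filesystem mounted over it.
head -n 5 /proc/mounts
```

On a live USB you would typically see rootfs (or a tmpfs/overlay) near the top; on a normal installed system the real root simply covers it.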
And here's what I get with df -T on a Fedora 19 live usb:
The initrd can be quite a few MB in size now - and depending on the system, could even be multiple GB in size (though I haven't seen any). The GB sized ones would be able to contain the entire runtime system for a diskless node, and since such systems usually have multiple GB (even mine has 8) there is usually enough spare room for it.
I haven't checked, but I think the production Fedora releases (and any other distributions using systemd) will be similar.
Yes, and as has been pointed out to them several times, the way it is done is a security nightmare. All you have to do to create a denial of service is to fill the /run filesystem mount. Services will start failing (can't create logins, can't generate X authority files, can't restart services as PID files fail...). The system can be pushed into OOM without being able to recover...
A tmpfs filesystem cannot have quotas imposed - though you can limit the total size, that doesn't prevent a single user from using it all. Since system-sensitive files are also on the same filesystem, it becomes trivial to carry out a DoS attack.
Even making /tmp a tmpfs mount makes the system vulnerable.
/dev/shm or /sys/fs/cgroup is not a problem, as only the system itself can create entries.
This is only semi-reasonable for workstations as the damage is only to a single system.
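As a partial mitigation sketch (the 512M figure is an arbitrary example, and the mount command requires root), a tmpfs mount can at least be size-capped so that filling it cannot exhaust all of RAM. Note this does not address the quota problem above: within the cap, any single user can still fill it.

```shell
# Mount /tmp as a size-capped tmpfs; mode 1777 keeps the usual
# sticky, world-writable /tmp permissions.
mount -t tmpfs -o size=512M,mode=1777 tmpfs /tmp

# Equivalent /etc/fstab entry:
#   tmpfs  /tmp  tmpfs  size=512M,mode=1777  0  0

df -h /tmp   # verify the cap took effect
```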
I think in most cases, though, such diskless systems maintain one or more additional ramdisks outside the initrd (usually as a separate squashfs) rather than attempting to cram everything into the initrd. IIRC, this is how most "toram" setups work; whether it's more efficient or it's some holdover from LILO complaining about 15-16MB memory holes, I don't know.
Most diskless nodes I've worked with use an NFS root, as it is simpler to change configurations or add software without even requiring a reboot.